virtualirfan

Chairing a session at HotStorage 2010

I am honored to have been asked to chair a session at the HotStorage 2010 workshop in Boston. Take a look at the program. My session includes two very interesting papers:

Funnily enough, Jiri chose the session title to be “All Aboard HMS Beagle”. Here’s his explanation: “the session name refers to Charles Darwin’s ship named Beagle. I chose the name because there isn’t really much technical commonality other than the words Adaptive and Evolution (hence the reference)”.

If folks are in the area, please consider registering and popping in. USENIX workshops are always very exciting mixers for industry and academia.

Black-Box Performance Control for High-Volume Non-Interactive Systems

One of the interesting papers presented at USENIX 2009 was “Black-Box Performance Control for High-Volume Non-Interactive Systems” [pdf] [html] [slides]. Since this is right up my alley, I paid close attention and took some notes. The paper was authored by several IBM Research folks: Chunqiang Tang, Sunjit Tara, Rong N. Chang and Chun Zhang.

First of all, this is interesting and thought-provoking work. However, the paper deals with a very constrained environment: throughput-centric systems with only a single pool of threads. I have reservations about the general applicability of the system to, say, disk scheduling. Nevertheless, their black-box treatment of the system (multiple unknown bottlenecks) is quite interesting, and it made me wonder how else it could be extended. The main problem is that with multiple controls in the system (e.g., CPU, memory, disk), the effective online search they are performing gets really tricky. Still, good food for thought.
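
To make that concern concrete, here is a toy sketch (mine, not the paper's controller) of an online search over a single control knob, with the throughput model and all names purely hypothetical. With one knob, simple probing works; with several interacting knobs (CPU, memory, disk queue depth, ...) the same approach quickly becomes much trickier, which is the worry above.

```python
import random

def measure_throughput(concurrency):
    # Hypothetical stand-in for running the system at a given thread-pool
    # size for a while and sampling completed requests per second.
    return 1000 * concurrency / (1 + 0.02 * concurrency ** 2) + random.gauss(0, 10)

def online_search(start=4, step=2, rounds=20):
    """Toy hill-climbing over a single control knob (thread-pool size).

    With multiple interacting knobs this one-dimensional probing no longer
    suffices: the search space grows combinatorially and the knobs interact.
    """
    best, best_tput = start, measure_throughput(start)
    for _ in range(rounds):
        candidate = max(1, best + random.choice([-step, step]))
        tput = measure_throughput(candidate)
        if tput > best_tput:
            best, best_tput = candidate, tput
    return best, best_tput

if __name__ == "__main__":
    knob, tput = online_search()
    print(f"settled on concurrency={knob} with ~{tput:.0f} req/s")
```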

Program Committee Membership of VPACT 2009

There is an active Call for Papers for the Second International Workshop on Virtualization Performance: Analysis, Characterization, and Tools (VPACT’09). I feel very lucky to have been asked to serve on the program committee (PC) for this excellent workshop by Peter Varman, the general chair. Peter is a superb researcher and I really like the work that he and his students have been doing in the area of QoS for storage systems. PC membership means that I’ll be reviewing papers submitted to the workshop and selecting the best ones for presentation and for publication in the proceedings.

If you have interesting ideas that you’d like to run by the research community in the following areas, please do consider submitting your work.

The workshop is intended as a venue for researchers and practitioners in academia and industry to present their unpublished results in the area of virtualization research. Papers are solicited on topics including, but not limited to the following aspects of virtual machine (VM) execution:
• VM analytical performance modeling
• VM performance tools for tracing, profiling, and simulation
• VM benchmarking and performance metrics
• Workload characterization in a virtualized environment
• Evaluation of resource scheduling
• Models and metrics for new VM usages
• VM energy and power modeling

Take a look at the list of program committee members to see if you recognize any of those names.

Seat belts & air bags versus belts & suspenders

Just listening to Alyssa Henry’s keynote talk at FAST ’09. She is the General Manager of S3 at Amazon. She used a great analogy to explain the difficult choice of which thing to spend resources on to protect against failures in a highly distributed system. For some things we choose to have expensive redundancy, e.g. we use both seat belts as well as air bags. Protecting one’s life in a catastrophic situation is important enough to warrant the extra expense. But we tend not to use both waist belts as well as suspenders 🙂

Alyssa also talked about “retry” as an important part of building resilient systems. To handle failures in distributed systems where messages may be lost or nodes may go down, just retry. But what about a message to charge a customer some amount of money? Do you really want to resend that request? The point was that they needed to think about making some operations idempotent by design.
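
Alyssa didn't show code, but a minimal sketch of what "idempotent by design" can look like is a client-supplied idempotency key that makes a retried charge harmless. The service and field names below are hypothetical, not Amazon's API.

```python
class PaymentService:
    """Hypothetical charge API made safe to retry via client-supplied keys."""

    def __init__(self):
        self._processed = {}  # idempotency_key -> result of the first attempt

    def charge(self, idempotency_key, customer_id, amount_cents):
        # If this key was already seen, return the original result instead of
        # charging the customer a second time.
        if idempotency_key in self._processed:
            return self._processed[idempotency_key]
        result = {"customer": customer_id, "charged": amount_cents, "status": "ok"}
        self._processed[idempotency_key] = result
        return result

svc = PaymentService()
first = svc.charge("order-1234-attempt", "cust-42", 999)
retry = svc.charge("order-1234-attempt", "cust-42", 999)  # safe: same key, no double charge
assert first is retry
```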

According to Alyssa, the next failure mode to tackle after retry was surge/overload. Retries can overwhelm a system that is recovering from failure, so rate limiting might be used, e.g., exponential backoff. A related problem is cache time-to-live (TTL) leases expiring while the underlying system that is the source of the data is down; as that system is coming back up, it would get overwhelmed. Alyssa suggested extending the TTL to keep the underlying system from breaking down when it comes back up. For example, there is a service at Amazon that checks whether a customer’s account is live. In case that service is down, its client systems just continue to assume that the customer is still in good standing.
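
As a hedged illustration rather than anything Amazon-specific, capped exponential backoff with jitter is one common way to rate-limit retries so a recovering service isn't hammered by every client at once:

```python
import random
import time

def retry_with_backoff(call, max_attempts=5, base_delay=0.1, cap=5.0):
    """Retry a failing call with capped exponential backoff plus jitter,
    so a crowd of recovering clients doesn't retry in lockstep."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            delay = min(cap, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))  # full jitter

# Example: a flaky dependency that only succeeds on the third try.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("service still warming up")
    return "ok"

print(retry_with_backoff(flaky))
```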

She also talked about trading consistency for availability. When you write to S3, they send the data to multiple data centers, and they write pointers to more data centers than the data itself.

VMware BoF at USENIX FAST 2009

The USENIX Conference on File and Storage Technologies (FAST) is the premier place to send papers on all things storage. The program committee is usually the who’s who of the field. For the last few years, VMware has been holding a birds of a feather (BoF) session on the intersection of virtualization and storage/filesystem technologies. The BoF chair this year is a good friend of mine, Ajay Gulati.

Ajay has set up a really cool program that I think will attract a large crowd. Take a look at the following and be sure to drop by if you are lucky enough to be attending the conference (or even if you are not, but find yourself in the area, you are welcome to drop by our meeting room). I’m particularly excited about the demos!

Storage Technologies and Challenges in Virtualized Environments
VMware Vendor BoF
Thursday, February 26, 7:30 p.m.–8:30 p.m., San Francisco C

Do you wonder what VMware has to do with storage? Are you interested in learning about VMware technologies beyond core server virtualization? Do you want to get a glimpse of some of the future products and what storage applications they can enable?

Join engineers from VMware in a discussion about a number of novel storage-related technologies that VMware has been working on. We will also discuss some of the currently open problems and challenges related to better storage performance and management.

We will give two live demos:
1) Online storage migration (Storage VMotion)
2) Transparent and efficient workload characterization of VM workloads inside ESX Server

In addition, there will be a number of manned stations with posters and demos of technologies such as Distributed Storage IO Resource Management, VMware’s Cluster File System (VMFS), ESX’s Pluggable Storage Stack, VM aware storage (VMAS) and our dynamic Virtual Machine instrumentation tool called VProbes.

Bandwidth peaks but latency keeps increasing

As part of our PARDA research, we examined how IO latency varies with increases in overall load (queue length) at the array, using one to five hosts accessing the same storage array. The attached image (Figure 6 from the paper) shows the aggregate throughput and average latency observed in the system with increasing contention at the array. The generated workload consists of uniform 16 KB IOs, 67% reads and 70% random, with 32 IOs kept outstanding from each host. It can be clearly seen that, for this experiment, throughput peaked at three hosts, while overall latency continued to increase with load. In fact, in some cases, beyond a certain level of workload parallelism, throughput can even drop.

An important question for application performance is whether bandwidth or latency matters more. If the former, then pushing the outstanding IOs higher might make sense, up to a point. However, for latency-sensitive workloads, it is better to set a target latency and stop increasing the load (outstanding IOs) on the array beyond that point. The latter is the key observation that PARDA is built around. We use a control equation with an input target latency beyond which the array is considered overloaded. Using this equation, we adjust the outstanding IO count across VMware ESX hosts in a distributed fashion to stay close to the target IO latency. In the paper, we also detail how the equation incorporates proportional sharing and fairness. Our experimental results show the technique to be effective.
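
For intuition, here is a minimal sketch of the kind of per-host window (outstanding-IO) adjustment such a latency-driven control performs. The exact control equation, parameters, and how the share-based term is computed are in the paper; treat this as an approximation in the spirit of the technique, not the published formula.

```python
def adjust_window(w, latency, target_latency, beta, gamma=0.2, w_max=64, w_min=1):
    """One step of a latency-driven window (outstanding-IO) adjustment.

    Sketch of the FAST TCP-style idea behind PARDA as described above:
    shrink the host's issue window when observed latency exceeds the target,
    grow it (by an amount tied to the host's share, beta) when latency is
    below target. Constants and the exact equation here are illustrative.
    """
    w_new = (1 - gamma) * w + gamma * ((target_latency / latency) * w + beta)
    return max(w_min, min(w_max, w_new))

# Example: one host adapting its window as observed latency fluctuates
# around a 30 ms target.
w = 32.0
for observed_latency in [25, 35, 40, 38, 32, 30]:  # ms
    w = adjust_window(w, observed_latency, target_latency=30, beta=4)
    print(f"latency={observed_latency} ms -> next window ~{w:.1f} IOs")
```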

PARDA O’ PARDA

Ajay Gulati, Carl Waldspurger and I have just finished work on a distributed IO scheduling paper for the upcoming FAST 2009 conference, so I wanted to provide an update. PARDA is a research project to design a proportional-share resource scheduler that can provide service differentiation for IO, like VMware already provides for CPU and memory. In plain terms: how can we deliver better throughput and lower response times to the more important VMs, irrespective of which host in a cluster they run on?

This is a really interesting and challenging problem. A bunch of us first started brainstorming in this area two years ago, but despite several attempts, for over a year we couldn’t come up with a comprehensive solution. For one thing, IO scheduling is a very hard problem. Second, there weren’t existing research papers that tackled our particular flavor of the problem (cluster filesystem). To top it off, the problem sounds easy at first blush, encouraging a lot of well-intentioned but ultimately misleading attempts.

Ajay and I first published a paper on our idea to use flow control (think TCP-style) to solve this problem at the SPEED 2008 workshop in Feb ’08 and the feedback from the research community was encouraging (this later became the basis for an ACM SIGOPS Operating Systems Review article, October 2008). Since then Ajay, Carl and I have worked out the major issues with this new technique resulting in the FAST paper.

The paper is entitled “PARDA: Proportional Allocation of Resources for Distributed Storage Access”.