virtualirfan

HotStorage 2011 Program Committee Meeting

Folks: just a heads up … The HotStorage 2011 PC meeting is being held on Monday, April 11.

As program chair this year, I’m hosting the meeting at VMware, which will bring together some of the brightest minds in storage research from around the world. Building E, here we come. If there are brown-outs, it’s probably due to the sheer amount of brainpower assembled there 🙂 Here is the full list.

Several people are flying in, some are local, and still others are dialing in. When assembling this PC, I had several objectives. One, of course, was to gather the top brains in this fast-moving area of research. Another was diversity, of every type. I’m super impressed with the group of people who agreed to serve on the committee.

As for the program, we had a record number of submissions (60% more than last year), which just goes to show how active this area is. The review rounds are complete heading into Monday’s meeting, and I’ve been spending time organizing the papers to make the best use of our team. There are so many good ones that I’m sure the selection process will not be easy.

As is appropriate for all academic venues of good repute, HotStorage has a very strict conflicts policy. So, even as chair, I’ll sit out some paper discussions to avoid even the appearance of a conflict with papers from colleagues or ex-colleagues. The same applies to all PC members. Another thing I have done is require extra reviews for papers authored by PC members, which raises the quality bar for them.

I’ll post interesting tidbits from the meeting later.

Getting Paid to Run vscsiStats? :)

The most awesome thing I’ve heard in a while is effectively getting paid to run and share vscsiStats data. See Chad Sakac’s blog post on this topic.

I should ask for royalties 😉

More seriously, this is very interesting and a win-win. Getting real customer data is always difficult, and Chad has it figured out. Customers, on the other hand, are assured that their data is anonymized (besides, vscsiStats doesn’t capture any real customer data anyway, just the workload characteristics) and get a cool, super-useful tool in return.

I look forward to more vendors trying this … 🙂

vscsiStats gone viral?

Folks, is it just me or does vscsiStats seem to have gone viral? Here are a couple of the posts that are seeing a lot of retweets.

Irfan

P.S. I haven’t mentioned here that you can follow me on Twitter at @virtualirfan.

Program Committee Membership of VPACT 2009

There is an active Call for Papers for the Second International Workshop on Virtualization Performance: Analysis, Characterization, and Tools (VPACT’09). I feel very lucky to have been asked to serve on the program committee (PC) for this excellent workshop by Peter Varman, the general chair. Peter is a superb researcher, and I really like the work that he and his students have been doing in the area of QoS for storage systems. PC membership means that I’ll be reviewing papers submitted to the workshop and selecting the best ones for presentation and for publication in the proceedings.

If you have interesting ideas that you’d like to run by the research community in the following areas, please do consider submitting your work.

The workshop is intended as a venue for researchers and practitioners in academia and industry to present their unpublished results in the area of virtualization research. Papers are solicited on topics including, but not limited to, the following aspects of virtual machine (VM) execution:
• VM analytical performance modeling
• VM performance tools for tracing, profiling, and simulation
• VM benchmarking and performance metrics
• Workload characterization in a virtualized environment
• Evaluation of resource scheduling
• Models and metrics for new VM usages
• VM energy and power modeling

Take a look at the list of program committee members to see if you recognize any of those names.

Bandwidth peaks but latency keeps increasing

As part of our PARDA research, we examined how IO latency varies with increases in overall load (queue length) at the array, using one to five hosts accessing the same storage array. The attached image (Figure 6 from the paper) shows the aggregate throughput and average latency observed in the system with increasing contention at the array. The generated workload consists of uniform 16 KB IOs, 67% reads and 70% random, with 32 IOs kept outstanding from each host. It can be clearly seen that, for this experiment, throughput peaked at three hosts, while overall latency continued to increase with load. In fact, in some cases, beyond a certain level of workload parallelism, throughput can even drop.
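
Why must latency keep climbing once throughput has flattened out? A back-of-the-envelope way to see it (this is just Little’s Law, not an argument taken from the paper itself) is that outstanding IOs, aggregate throughput, and average latency are tied together:

```latex
% Little's Law: outstanding IOs = throughput x average latency
N = X \cdot R \quad\Rightarrow\quad R = \frac{N}{X}
```

With 32 outstanding IOs per host, N grows from 32 with one host to 160 with five; once X has saturated at the array, R has little choice but to grow roughly linearly with the number of hosts.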

An important question for application performance is whether bandwidth or latency matters more. If the former, then pushing the outstanding IOs higher might make sense up to a point. However, for latency-sensitive workloads, it is better to pick a target latency and stop increasing the load (outstanding IOs) on the array beyond that point. The latter is the key observation that PARDA is built around. We use a control equation that takes an input target latency, beyond which the array can be considered overloaded. Using this equation, we modify the outstanding IO count across VMware ESX hosts in a distributed fashion to stay close to the target IO latency. In the paper, we also detail how the equation incorporates proportional sharing and fairness. Our experimental results show the technique to be effective.
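
To make the flow-control idea concrete, here is a minimal sketch of that style of latency-driven window adjustment. It is illustrative only: the function, parameter names, and the exact update rule are mine, not the precise control equation from the PARDA paper, and the `share` term only gestures at how proportional shares could enter the picture.

```python
def adjust_window(window, avg_latency_ms, target_latency_ms,
                  gamma=0.2, share=1.0, w_min=4, w_max=256):
    """Illustrative latency-based window control (not the exact PARDA equation).

    Each host periodically measures the average IO latency it observed,
    compares it against a cluster-wide latency target, and smoothly scales
    its own outstanding-IO window.  'share' stands in for the host's
    proportional entitlement, so hosts with larger shares settle at
    larger windows.
    """
    # Ratio > 1: latency below target, the array has headroom -> grow.
    # Ratio < 1: latency above target, the array is overloaded -> shrink.
    ratio = target_latency_ms / max(avg_latency_ms, 1e-6)

    # Exponentially smoothed update so the window does not oscillate wildly.
    new_window = (1.0 - gamma) * window + gamma * (ratio * share * window)

    # Clamp to sane bounds (minimum progress, device queue-depth limits).
    return max(w_min, min(w_max, new_window))


# Example: with a 25 ms target and measured latencies trending toward it,
# the window shrinks and then levels off near its equilibrium.
w = 32.0
for measured in (40.0, 35.0, 30.0, 26.0, 24.0):
    w = adjust_window(w, measured, target_latency_ms=25.0)
    print(round(w, 1))
```

Each host runs this adjustment independently on its own measurements, which is what makes the scheme distributed: no central coordinator needs to see every IO.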

PARDA O’ PARDA

Ajay Gulati, Carl Waldspurger and I have just finished work on a distributed IO scheduling paper for the upcoming FAST 2009 conference, so I wanted to provide an update. PARDA is a research project to design a proportional-share resource scheduler that can provide service differentiation for IO, just as VMware already does for CPU and memory. In plain terms: how can we deliver better throughput and lower response times to the more important VMs, irrespective of which host in a cluster they run on?

This is a really interesting and challenging problem. A bunch of us first started brainstorming in this area two years ago, but despite several attempts over more than a year, we couldn’t come up with a comprehensive solution. For one thing, IO scheduling is a very hard problem. Second, there aren’t existing research papers that tackle our particular flavor of the problem (a cluster filesystem). To top it off, the problem sounds easy at first blush, encouraging a lot of well-intentioned but ultimately misleading attempts.

Ajay and I first published a paper on our idea to use flow control (think TCP-style) to solve this problem at the SPEED 2008 workshop in February 2008, and the feedback from the research community was encouraging (this later became the basis for an ACM SIGOPS Operating Systems Review article, October 2008). Since then, Ajay, Carl, and I have worked out the major issues with this new technique, resulting in the FAST paper.

The paper is entitled “PARDA: Proportional Allocation of Resources for Distributed Storage Access”.

Easy and Efficient Disk I/O Workload Characterization in VMware ESX Server

I published an academic paper at the IEEE International Symposium on Workload Characterization (IISWC 2007) in September that I want to spend some time talking about. The paper was entitled “Easy and Efficient Disk I/O Workload Characterization in VMware ESX Server”. Here’s the abstract:

Collection of detailed characteristics of disk I/O for workloads is the first step in tuning disk subsystem performance. This paper presents an efficient implementation of disk I/O workload characterization using online histograms in a virtual machine hypervisor, VMware ESX Server. This technique allows transparent and online collection of essential workload characteristics for arbitrary, unmodified operating system instances running in virtual machines. For analysis that cannot be done efficiently online, we provide a virtual SCSI command tracing framework. Our online histograms encompass essential disk I/O performance metrics including I/O block size, latency, spatial locality, I/O interarrival period and active queue depth. We demonstrate our technique on workloads of Filebench, DBT-2 and large file copy running in virtual machines and provide an analysis of the differences between ZFS and UFS filesystems on Solaris. We show that our implementation introduces negligible overheads in CPU, memory and latency and yet is able to capture essential workload characteristics.
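
The online-histogram mechanism in the abstract is simple enough to sketch outside the hypervisor. Below is a toy Python illustration of the idea: fixed bucket boundaries per metric, one counter per bucket, and a cheap update on each IO completion, so the per-IO cost stays negligible. The bin edges, metric names, and the `on_io_complete` hook are invented for illustration; vscsiStats has its own bucket boundaries and runs inside ESX.

```python
from bisect import bisect_left
from collections import defaultdict

# Hypothetical bucket upper edges; vscsiStats defines its own fixed buckets.
IO_SIZE_BINS_KB = [4, 8, 16, 32, 64, 128, 256]          # last bucket is "> 256"
LATENCY_BINS_US = [100, 500, 1000, 5000, 15000, 50000]  # last bucket is "> 50000"

class OnlineHistogram:
    """Fixed-bucket histogram: constant memory, O(log buckets) per update."""
    def __init__(self, edges):
        self.edges = edges
        self.counts = [0] * (len(edges) + 1)  # +1 for the overflow bucket

    def record(self, value):
        self.counts[bisect_left(self.edges, value)] += 1

# One set of histograms per virtual disk, updated on every IO completion.
histograms = defaultdict(lambda: {
    "io_size_kb": OnlineHistogram(IO_SIZE_BINS_KB),
    "latency_us": OnlineHistogram(LATENCY_BINS_US),
})

def on_io_complete(vdisk, size_kb, latency_us):
    h = histograms[vdisk]
    h["io_size_kb"].record(size_kb)
    h["latency_us"].record(latency_us)

# Example: two completed IOs against one virtual disk.
on_io_complete("vm1:scsi0:0", size_kb=16, latency_us=740)
on_io_complete("vm1:scsi0:0", size_kb=64, latency_us=2300)
print(histograms["vm1:scsi0:0"]["io_size_kb"].counts)
```

Because only per-bucket counters are kept, the guest remains unmodified and no per-IO trace has to be stored unless the separate SCSI command tracing path is enabled.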

Security myth laid to rest (correction issued)

It seems a lot of people agreed with my previous post on the security of virtual switches, including the originator of the information that prompted that post. Chris Wolf himself posted comments acknowledging the misunderstanding. I think Chris did a great job of quickly following up after my blog post and getting in touch with us to resolve the misunderstanding. Kudos to him. Read his comments for yourself.