Caches are pervasive in modern storage systems. Designed to accelerate data access by exploiting locality, a cache provides an essential service in modern computing. Operating systems and databases maintain in-memory buffer caches containing hot blocks; here, a hot block is one considered likely to be reused. Storage caches using flash memory are popular…
Folks: just a heads up … The HotStorage 2011 PC meeting is being held on Monday April 11.
As program chair this year, I’m hosting the meeting at VMware, which will include some of the brightest minds in storage research in the world. Building E, here we come. If there are brown-outs, it’s probably due to the sheer amount of brainpower assembled there 🙂 Here is the full list.
Several people are flying in, some are local, and yet others are dialing in. When assembling this PC, I had several objectives. One, of course, was to gather the top minds in this fast-moving area of research. Another was diversity, of every type. I’m super impressed with the group of people who agreed to serve on the committee.
As for the program, we had a record number of submissions (60% more than last year), which just goes to show you how active this area is. The review rounds leading up to Monday’s meeting are done. I’ve been spending time organizing the papers so that we make the best use of the committee’s time and expertise. There are so many good ones that I’m sure the selection process will not be easy.
As is appropriate for all academic venues of good repute, HotStorage has a very strict conflicts policy. So, even as chair, I’ll sit out some paper discussions to avoid even the potential appearance of conflicts involving papers from colleagues or ex-colleagues. The same applies to all PC members. I have also required extra reviewing for PC member papers, which raises the quality bar for them.
I’ll post interesting tidbits from the meeting later.
I’m deeply honored to have been asked by USENIX to serve as the Program Chair for the 3rd Workshop on Hot Topics in Storage and File Systems (HotStorage ’11).
The workshop CfP is about to come out any day. I just finished assembling the program committee and writing the workshop overview last week. HotStorage is an awesome place to send your cool ideas. The program committee is absolutely top notch. How top-notch, you ask? Well, you can deal with a little suspense … I don’t want to jump the gun on the CfP yet.
So, start working on those papers … 🙂
The program for the 30th International Conference on Distributed Computing Systems was recently put up. I had the honor of being a member of the program committee for this prestigious venue. The program itself looks amazing and I’d encourage folks to take a look.
Here are a couple of papers I think are worth reading:
- A New Buffer Cache Design Exploiting both Temporal and Content Localities by Jin Ren and Qing Yang
- Mistral: Dynamically Managing Power, Performance, and Adaptation Cost in Cloud Infrastructures by Gueyoung Jung, Matti Hiltunen, Kaustubh Joshi, Richard Schlichting and Calton Pu
I am honored to have been asked to chair a session at the HotStorage 2010 workshop in Boston. Take a look at the program. My session includes two very interesting papers:
Funnily enough, Jiri chose the session title to be “All Aboard HMS Beagle”. Here’s his explanation: “the session name refers to Charles Darwin’s ship named Beagle. I chose the name because there isn’t really much technical commonality other than the words Adaptive and Evolution (hence the reference)”.
If folks are in the area, please consider registering and popping in. USENIX workshops are always very exciting mixers for industry and academia.
One of the interesting papers presented at USENIX 2009 was “Black-Box Performance Control for High-Volume Non-Interactive Systems” [pdf] [html] [slides]. Since this is right up my alley, I paid close attention and took some notes. The paper was authored by several IBM Research folks: Chunqiang Tang, Sunjit Tara, Rong N. Chang and Chun Zhang.
First of all, this is interesting and thought-provoking work. However, the paper deals with a very constrained environment of throughput-centric systems and with only a single pool of threads. I have reservations about the general applicability of the system to, say, disk scheduling. Nevertheless, their black-box treatment of the system (multiple unknown bottlenecks) is quite interesting and it really made me wonder how else it could be extended. The main problem is that if you have multiple controls in the system (e.g., CPU, memory, disk), the effective online search they are performing gets really tricky. Still, good food for thought.
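To make that last point concrete, here’s a toy sketch (my own illustration, not the paper’s algorithm) of a coordinate-wise online hill climb over three knobs whose effects on a black-box throughput measurement interact. The `measure_throughput` function is a made-up stand-in for a real measurement; with interacting knobs, adjusting one at a time can wander for a long time.

```python
import random

# Hypothetical black-box system with three interacting control knobs.
# The controller never sees this formula; it only observes measurements.
def measure_throughput(knobs):
    cpu, mem, disk = knobs["cpu"], knobs["mem"], knobs["disk"]
    # Cross term makes the knobs interact, so the response is not separable.
    return (100 - (cpu - 4) ** 2 - (mem - 8) ** 2
            - 0.5 * (cpu - 4) * (disk - 2) - (disk - 2) ** 2)

def hill_climb(knobs, steps=200):
    """Coordinate-wise online search: perturb one knob at a time,
    keep the change only if measured throughput improves."""
    best = measure_throughput(knobs)
    for _ in range(steps):
        name = random.choice(list(knobs))
        delta = random.choice([-1, 1])
        knobs[name] += delta
        now = measure_throughput(knobs)
        if now > best:
            best = now          # improvement: keep the new setting
        else:
            knobs[name] -= delta  # no improvement: revert
    return knobs, best

settings, throughput = hill_climb({"cpu": 0, "mem": 0, "disk": 0})
```

With one knob this kind of search converges quickly; with several interacting knobs the search space blows up and the measurements become harder to attribute to any single control.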
Several influential bloggers have now picked up on the PARDA research paper and its implications for the future of storage resource management. Here are a few of note:
- Gartner’s Cameron Haight: PARDA the Plan?
- Virtualization Review’s Rick Vanover: Next Storage Frontier
- VMware’s Duncan Epping: Project PARDA
Many thanks to these individuals for their favorable coverage.
There is an active Call for Papers for the Second International Workshop on Virtualization Performance: Analysis, Characterization, and Tools (VPACT’09). I feel very lucky to have been asked to serve on the program committee (PC) for this excellent workshop by Peter Varman, the general chair. Peter is a superb researcher and I really like the work that he and his students have been doing in the area of QoS for storage systems. PC membership means that I’ll be reviewing papers submitted to the workshop and selecting the best ones for presentation and for publication in the proceedings.
If you have interesting ideas that you’d like to run by the research community in the following areas, please do consider submitting your work.
The workshop is intended as a venue for researchers and practitioners in academia and industry to present their unpublished results in the area of virtualization research. Papers are solicited on topics including, but not limited to the following aspects of virtual machine (VM) execution:
• VM analytical performance modeling
• VM performance tools for tracing, profiling, and simulation
• VM benchmarking and performance metrics
• Workload characterization in a virtualized environment
• Evaluation of resource scheduling
• Models and metrics for new VM usages
• VM energy and power modeling
Take a look at the list of program committee members to see if you recognize any of those names.
Just listening to Alyssa Henry’s keynote talk at FAST ’09. She is the General Manager of S3 at Amazon. She used a great analogy to explain the difficult choice of which thing to spend resources on to protect against failures in a highly distributed system. For some things we choose to have expensive redundancy, e.g. we use both seat belts as well as air bags. Protecting one’s life in a catastrophic situation is important enough to warrant the extra expense. But we tend not to use both waist belts as well as suspenders 🙂
Alyssa also talked about “retry” as an important part of building resilient systems. To handle failures in distributed systems where messages may be lost or nodes may go down, just retry. But what about a message to charge a customer some amount of money? Do you really want to resend that request? The point was that they needed to think about making some operations idempotent by design.
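Here’s a minimal sketch of that idempotency-by-design idea, assuming a client-generated request ID. All the names (`BillingService`, `charge`, etc.) are purely illustrative, not Amazon’s actual design:

```python
import uuid

class BillingService:
    """Toy billing service whose charge operation is idempotent by design."""

    def __init__(self):
        self._seen = {}     # request_id -> result of the first attempt
        self.balances = {}  # customer -> total amount charged

    def charge(self, request_id, customer, amount):
        # A retried request with the same ID returns the original result
        # instead of charging the customer a second time.
        if request_id in self._seen:
            return self._seen[request_id]
        self.balances[customer] = self.balances.get(customer, 0) + amount
        result = {"status": "charged", "amount": amount}
        self._seen[request_id] = result
        return result

svc = BillingService()
rid = str(uuid.uuid4())
svc.charge(rid, "alice", 30)
svc.charge(rid, "alice", 30)  # retry after a lost response: no double charge
```

Because the client picks the request ID before the first attempt, it can safely retry after a timeout without knowing whether the original request was applied.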
According to Alyssa, the next failure mode tackled after retry was surge/overload. Retries can overwhelm a system recovering from failure, so rate limiting might be used, e.g., exponential backoff. A related problem is cache time-to-live (TTL) leases expiring while the underlying system that is the source of the data is down. As that system comes back up, it would get overwhelmed. Alyssa suggested extending the TTL to keep the underlying system from breaking down when it comes back up. For example, there is a service at Amazon that checks if a customer’s account is live. In case that service is down, its client systems just continue to assume that the customer is still in good standing.
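The rate-limiting idea can be sketched as capped exponential backoff with jitter. The function below just computes the delay schedule (my own illustration of the general technique, not S3’s actual policy):

```python
import random

def backoff_schedule(attempts, base=0.1, cap=10.0):
    """Return the delay (in seconds) to wait before each retry attempt,
    using capped exponential growth with full jitter."""
    delays = []
    for attempt in range(attempts):
        # Exponential growth, capped so delays don't grow without bound.
        exp = min(cap, base * (2 ** attempt))
        # Full jitter: pick uniformly in [0, exp] so a crowd of clients
        # retrying after the same failure doesn't hammer the server in sync.
        delays.append(random.uniform(0, exp))
    return delays
```

The jitter is the part that matters for the surge problem: without it, every client that failed at the same moment retries at the same moment, re-creating the overload the backoff was meant to avoid.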
She also talked about trading consistency for availability. When you write to S3, they will send data to multiple data centers. They write pointers to more data centers than the data itself.