
Storage Cache Analysis Reinvented with SHARDS

Caches are pervasive in modern storage systems. Designed to accelerate data access by exploiting locality, a cache provides an essential service in modern computing. Operating systems and databases maintain in-memory buffer caches containing hot blocks, where a hot block is one considered likely to be reused. Storage caches built from flash memory are popular as a cost-effective way to reduce application latency and offload work from rotating disks, a recent trend in both networked and server-side storage. Today, virtually all storage devices, from individual disk drives to large storage arrays, include significant caches composed of RAM or flash memory.

Since storage caches typically consist of relatively fast, expensive storage, they are inherently a scarce resource and are commonly shared among multiple clients. As a result, optimizing cache allocations is important, and approaches for estimating workload performance as a function of cache size are particularly valuable.

There is a wrinkle here though. Yes, we do want to optimize storage cache allocation. But two thorny problems get in the way:

  • storage cache performance is highly non-linear in cache size, and
  • the benefit of storage caches varies widely by workload.

Example of an ideal storage cache utility curve

The ideal situation is illustrated in the graphic above. All we really want is a curve that tells us, for each workload, how much cache is needed. Simple to state. If we had such a curve (called a hit ratio curve), we could read off the current hit ratio and instantly see how much more cache is needed to reach the desired performance level.
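As a toy illustration (my own example, not from the paper), suppose we already had such a curve as a handful of (cache size, hit ratio) samples. Allocation questions then become simple lookups:

```python
# Hypothetical sketch: given a hit ratio curve as (cache_size_GB, hit_ratio)
# samples, find the smallest cache size that reaches a desired hit ratio.
def cache_needed(curve, target):
    for size, hit_ratio in sorted(curve):  # sizes in ascending order
        if hit_ratio >= target:
            return size
    return None  # target unreachable for this workload

curve = [(1, 0.20), (2, 0.55), (4, 0.80), (8, 0.88), (16, 0.90)]
print(cache_needed(curve, 0.85))  # -> 8 (GB)
```

The hard part, of course, is obtaining the curve in the first place.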

So, couldn’t we use working-set estimation to perform cache allocation? Unfortunately not. While a working-set size estimate provides valuable information, it doesn’t measure data reuse, nor does it predict how performance changes as cache allocations are varied. A miss ratio curve (MRC), which gives the miss ratio (one minus the hit ratio) at every possible cache size, fills this gap. However, MRC computation has traditionally been too resource-intensive to be broadly practical, a problem only exacerbated by growing storage cache sizes in commodity systems with nonvolatile memories.
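To see why full MRC construction is costly, here is a minimal sketch of the classic one-pass approach (Mattson's stack algorithm) for an LRU cache; the trace format and names are my own illustration. The naive list-based stack makes the cost visible: every reference scans the stack, giving O(N * M) work for N references and M unique blocks. Balanced-tree variants reduce this to O(N log M), but still track every distinct block.

```python
# Sketch of classic one-pass LRU MRC construction (Mattson's stack algorithm).
from collections import Counter

def mrc_lru(trace):
    stack = []         # LRU stack, most recently used block at index 0
    dists = Counter()  # histogram of reuse (stack) distances
    cold = 0           # first-time references (infinite distance)
    for block in trace:
        if block in stack:
            d = stack.index(block)   # reuse distance = depth in the stack
            stack.remove(block)
            dists[d] += 1
        else:
            cold += 1
        stack.insert(0, block)       # move/add to MRU position
    # Convert the histogram into miss ratio as a function of cache size:
    # a reference at distance d hits in any cache larger than d blocks.
    total = sum(dists.values()) + cold
    miss_ratios, misses = [], total
    for size in range(1, len(stack) + 1):
        misses -= dists[size - 1]
        miss_ratios.append(misses / total)
    return miss_ratios

trace = [1, 2, 3, 1, 2, 4, 1, 2, 3, 4]
print(mrc_lru(trace))  # miss ratio at cache sizes 1..number of unique blocks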

A resurgence in research interest has yielded new algorithmic improvements that make online MRC computation feasible for the first time. I’m excited to announce that our team at CloudPhysics has developed revolutionary technology advancing the state of the art in storage cache management. Our technique makes MRC computation so cheap that it can run online, even in commodity and embedded systems. Please reach out if you would like to use this in your storage or database system.
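For the curious, here is a hedged sketch (my own reconstruction, based on the sampling idea the paper describes) of the core SHARDS trick: spatially hashed sampling. A block is sampled if and only if a hash of its location falls below a threshold, so the decision is consistent across all references to that block, and reuse distances computed over the sampled subset can simply be rescaled by the sampling rate. The constants and helper names below are illustrative.

```python
import hashlib

P = 1 << 24    # hash modulus (illustrative)
T = P // 1000  # threshold: sampling rate R = T/P = 0.1%
R = T / P

def sampled(block_id):
    # Spatial filter: sample a block iff hash(location) mod P < T.
    # Every reference to a sampled block is kept, so reuse behavior
    # within the sample mirrors the full trace.
    h = int.from_bytes(hashlib.md5(str(block_id).encode()).digest()[:8], "big")
    return h % P < T

def shards_reuse_distances(trace):
    stack, scaled_dists, cold = [], [], 0
    for block in trace:
        if not sampled(block):
            continue                           # ~99.9% of references skipped
        if block in stack:
            d = stack.index(block)             # distance within the sample
            stack.remove(block)
            scaled_dists.append(round(d / R))  # rescale to full-trace units
        else:
            cold += 1
        stack.insert(0, block)
    return scaled_dists, cold
```

The scaled distances feed the same histogram-to-miss-ratio conversion as the full-trace sketch above, at a small fraction of the time and memory. The paper also describes a constant-memory variant that lowers the threshold dynamically as samples accumulate; I have not reproduced that here.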

Depiction of storage cache architectures where MRCs can enable cache analytics.

The graphic above gives a visual representation of the breakthrough idea and its applicability across many types of caches.

Today, we are releasing our paper for presentation at the prestigious USENIX FAST ’15 conference. Please download “Efficient MRC Construction with SHARDS” and leave comments below.

 

7 comments for “Storage Cache Analysis Reinvented with SHARDS”

  1. Hao Luo
    April 30, 2015 at 1:12 pm

Dear Mr. Ahmad, it is nice to meet you and a delight to read your post and the FAST paper, which I found very impressive and enlightening. I am a PhD student at the University of Rochester, and my own research is closely related to the work presented here: instead of modeling storage caches, we model the MRC for on-chip memory such as hardware caches. I have been working on efficient MRC construction algorithms for three years as my PhD research and have developed tools and models for the problem in different scenarios. My work focuses on the impact of multiple workloads sharing a cache: workloads can share cache space with each other, or they can compete for it. This problem arises naturally in CPU caches, and I wonder whether the same approach could be applied to storage caches.
