Archive for the ‘Value’ Category

NetApp at Cisco Live : Another look into the future

March 5, 2014 3 comments

It’s no secret that Cisco has been investing a lot of resources into cloud, and from those investments we’ve recently seen a wave of fascinating new technology that I think will change the IT landscape in some really big ways. For that reason, I think this year’s Cisco Live in Melbourne will be relevant to a broad cross-section of the IT community, from application developers all the way down to storage guys like us.

With that in mind, I’m really pleased to be presenting a Cisco partner case study along with Anuj Aggarwal, who is the technical account manager on the Cisco alliance team for NetApp. Most of you won’t have met Anuj, but you should take the time to get to know him while he’s down here. Anuj knows more about network security, and about the joint work NetApp and Cisco have been doing on hybrid cloud, than anyone else I know, which makes him a very interesting person to spend some time with, especially if you need to know more about the future of networking and storage in an increasingly cloudy IT environment.

I’m excited by this year’s presentation, because it expands on, and in many ways completes, a lot of the work Cisco and NetApp have been doing since NetApp’s first appearance at Cisco Live in Melbourne in 2010 as a platinum sponsor, where we presented Secure Multi-Tenancy (SMT) to a select few.

Read more…

Too many ideas, not enough time.

July 2, 2012 Leave a comment

A couple of weeks ago I went to an internal NetApp event called Foresight, where our field technology experts from around the world meet with our technical directors, product managers and senior executives. A lot of the time is spent talking about recent developments the field is seeing turn into new trends, and how that intersects with current technology roadmaps. We get to see new stuff that’s in development, and generally spend about a week thinking and talking about futures. The cool thing about this is that while the day-to-day work of helping clients solve their pressing problems often means we can’t see the forest for the trees, this kind of event lifts us high above the forest canopy to see a much broader picture.

Read more…

Big Data: Who Cares?

June 26, 2012 Leave a comment

Big data: Who cares?

Despite the hype being generated by big data, many are questioning the actual implications for business. With data expected to grow at 40% a year and the digital universe recently breaking the zettabyte barrier, much of the buzz is based just on the absolute wow factor of how big is big.

Putting aside the hype about the amount of data, it’s also important to note that in addition to the increased volume, the nature of the data is also changing. Most of this growth is unstructured machine- and consumer-generated data. Digital technologies are moving to denser media, photos have all gone digital, videos are using higher resolution, and advanced analytics requires more storage.

In short, the amount of data is increasing and the data objects themselves are getting bigger. But so what? As an IT organization why should I care? There are two big reasons.

Business Advantage

In all of this unstructured data there is real business value. What businesses need to do is identify the specific opportunities they have to use very large and unstructured data sets and generate real world results.

An example of the benefits that can be derived from big data analytics is monitoring customer buying habits, integrated with real-time alerts. If a regular customer does something unexpected – like order 300 times more than they usually do – the company can immediately re-route inventory to satisfy their need. For businesses in retail environments, being able to store and analyse hours, days, months and years of surveillance video allows optimised product placement and service locations.

As the business advantage of big data analysis becomes more apparent, the new consumers of business intelligence will come from places like marketing and product operations, rather than the traditional consumer of business intelligence, the finance department. Because these departments expect immediate results, it may be impossible to satisfy them cost-effectively with existing solutions. This is driving increasing interest in leveraging open source solutions like Hadoop for specific data marts, at a price point that fits within these departments’ ability to provide project funding. From an IT perspective, it’s vital to ensure your infrastructure is managed in a way that allows it to adapt to these changing needs.

Cost of Compliance

Based on current trends, at some point the cost of compliance will break your budget. Conformance requirements mean you need to keep an ever-growing amount of increasingly valuable data. Compliance with new laws may require that data be kept forever and retrieved immediately when required. Additionally, as existing infrastructure scales, the complexity of managing and protecting the data becomes impractical. Eventually you will have to think differently about your data and keep more while spending less.

As an IT organisation you may be thinking that your own data growth will soon be stretching the limits of your infrastructure. A way to define big data is to look at your existing infrastructure, the amount of data you have now, and the amount of growth you’re experiencing. Is it starting to break your existing processes? If so, where?

At NetApp we’ve taken a practical approach to helping our customers with their big data challenges. It’s not about some future unknown state that requires retraining your staff with new competencies or changing the way you do business. It’s about what you can do today that can make a real difference. If you can find the value in your universe of data and help your company turn it into real business advantage, then you will be instrumental to its success and fulfil the promise that IT has held out to the business. If not, IT risks being seen as just another cost centre with an increasingly cloudy future.

Categories: Big Data, Marketing, Value

More Records ??

November 14, 2011 8 comments

–This post has been revised based on some comments I’ve received since the original posting; check the comment thread if you’re interested in the what and why–

I came in this morning with an unusually clear diary, and took the liberty of checking the newsfeeds for NetApp and EMC, which is when I came across an EMC press release entitled “EMC VNX SETS PERFORMANCE DENSITY RECORD WITH LUSTRE —SHOWCASES “NO COMPROMISE” HPC STORAGE“.

I’ve been doing some research on Lustre and HPC recently, and that claim surprised me more than a little, so I checked it out; maybe there’s a VNX sweet spot for HPC that I wasn’t expecting. The one thing that stood out straight away was this: “EMC® is announcing that the EMC® VNX7500 has set a performance density record with Lustre—delivering read performance density of 2GB/sec per rack” (highlight mine).

In the first revision of this post I had some fun pointing out the lameness of that particular figure (e.g. “From my perspective, measured on a GB/sec per rack, 2GB/sec/rack is pretty lackluster”), but EMC aren’t stupid (or at least their engineers aren’t, though I’m not so sure about their PR agency at this point). It turns out this was one of those cases where EMC’s PR people didn’t actually listen to what the engineers were saying, and they’ve left out a small but important word, and that word is “unit”. This becomes apparent if you take a look at the other figures in that press release: “8 GB/s read and 5.3 GB/s write sustained performance, as measured by XDD benchmark performed on a 4U dual storage processor”. That gives us 2GB/sec per rack unit, which actually sounds kind of impressive. So let’s dig a little deeper. What we’ve got is a 4U dual storage processor that gets some very good raw throughput numbers, about 1.5x, or 150% faster in fact, on a “per controller” basis than the figures used in the E5400 press release I referenced earlier, so on that basis I think EMC has done a good job. But this is where the PR department starts stretching the truth again, by leaving out some fairly crucial information. Notably, that 4U controller behind the 2GB/sec/rack unit figure is a 2U VNX7500 SPE plus a 2U standby power supply, which is required when the 60-drive dense shelves are used exclusively (which is the case for the VNX Lustre proof-of-concept information shown in their brochures), and this configuration doesn’t include any of the rack units required for the actual storage. Either that, or it’s a 2U VNX7500 SPE with a 2U shelf and no standby power supply, which seems to be a mandatory component of a VNX solution, and I can’t quite believe that EMC would do that.

If we compare the VNX to the E5400, you’ll notice that the controllers and standby power supplies alone consume 4U of rack space without adding any capacity, whereas the E5400 controllers are much, much smaller, and they fit directly into a 2U or 4U disk shelf (or DAE in EMC terminology), which means a 4U E5400 based solution is something you can actually use, as the required disk capacity is already there in the 4U enclosure.

Let’s go through some worked calculations to show how this plays out. In order to add capacity in the densest possible EMC configuration, you’d need to add an additional 4RU shelf with 60 drives in it. Net result: 8RU, 60 drives, and up to 8 GB/s read and 5.3 GB/s write (the press release doesn’t make it clear whether a VNX7500 can actually drive that much performance from only 60 drives; my suspicion is that it cannot, otherwise we would have seen something like that in the benchmark). Measured on a GB/s per RU basis, this ends up as only 1 GB/sec per rack unit, not the 2 GB/sec per rack unit which I believe was the point of the “record setting” configuration. And just for kicks, as you add more storage to the solution that number goes down, as shown for the “dual VNX7500/single rack solution that can deliver up to 16GB/s sustained read performance”, to about 0.4 GB/sec per rack unit. Using the configurations mentioned in EMC’s proof-of-concept brochure you end up with around 0.666 GB/sec per rack unit, all of which are a lot less than the 2 GB/sec/RU claimed in the press release.

If you wanted the highest performing configuration within those 8RU using a “DenseStak” E5400 based Lustre solution, you’d put in another E5400 unit with an additional 60 drives. Net result: 8RU, 120 drives, and 10 GB/sec read and 7 GB/sec write (and yes, we can prove that we can get this kind of performance from 120 drives). Measured on a GB/s per RU basis, this ends up as 1.25 GB/sec per rack unit. That’s good, but it’s still not the magic number mentioned in the EMC press release. However, if you were to use a “FastStak” solution, those numbers would pretty much double (thanks to using 2RU disk shelves instead of 4RU disk shelves), which would give you a controller performance density of around 2.5 GB/sec per rack unit.
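The density arithmetic above can be sketched as a quick back-of-the-envelope calculation. The throughput and rack-unit figures are the ones quoted in this post; treat this as a rough sketch, not a vendor-verified model.

```python
def density_gb_per_ru(read_gb_s: float, rack_units: int) -> float:
    """Sequential read throughput divided by the rack space consumed."""
    return read_gb_s / rack_units

# VNX7500: 4U of controllers + standby power, plus a 4U/60-drive shelf = 8 RU
vnx = density_gb_per_ru(8.0, 8)         # 1.0 GB/s per RU

# E5400 DenseStak: two 4U controller-in-shelf building blocks = 8 RU
densestak = density_gb_per_ru(10.0, 8)  # 1.25 GB/s per RU

# FastStak roughly doubles density by using 2RU shelves instead of 4RU
faststak = densestak * 2                # 2.5 GB/s per RU

print(vnx, densestak, faststak)  # 1.0 1.25 2.5
```

The same function shows why the press release's headline number only works if you exclude the rack units holding the actual disks.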

Bottom line: for actual usable configurations, a NetApp solution has much better performance density using the same measurements EMC used for their so-called “record setting” benchmark result.

In case you think I’m making these numbers up, they are confirmed in the NetApp whitepaper wp-7142 which says

The FastStak reference configuration uses the NetApp E5400 scalable
storage system as a building block. The NetApp E5400 system is designed
to support up to 24 2.5-inch SAS drives, in a 2U form factor.
Up to 20 of these building blocks can be contained in an industry-standard
40U rack. A fully loaded rack delivers performance of up to 100GB/sec
sustained disk read throughput, 70GB/sec sustained disk write throughput,
and 1,500,000 sustained IOPS.
According to IDC, the average supercomputer produces 44GB/sec,
so a single FastStak rack is more than fast enough to meet the I/O
throughput needs of many installations.

While I’ll grant that this result is achieved with more hardware, it should be remembered that the key to good HPC performance is in part the ability to efficiently throw hardware at a problem. From a storage point of view, this means having the ability to scale performance with capacity. In this area the DenseStak and FastStak solutions are brilliantly matched to the requirements of, and the prevailing technology used in, High Performance Computing. Rather than measuring on a GB/sec per rack unit basis, I think a better measure would be “additional sequential performance per additional gigabyte”. Measured on a full rack basis, the NetApp E5400 based solution ends up at around 27MB/sec/GB for the DenseStak, or 54MB/sec/GB for the FastStak. In comparison, the fastest EMC solution as referenced in the “record setting” press release comes in at about 10MB/sec of performance for every GB of provisioned capacity, or about 22MB/sec/GB for the configuration in the proof-of-concept brochure. Any way you slice this, the VNX just doesn’t end up looking like a particularly viable or competitive option.
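The “additional sequential performance per additional gigabyte” metric I’m proposing above is easy to express as a function. The example figures below are illustrative placeholders, not numbers from either vendor’s spec sheets:

```python
def mb_s_per_gb(throughput_gb_s: float, provisioned_gb: float) -> float:
    """Sequential throughput (converted to MB/s) per GB of provisioned capacity."""
    return (throughput_gb_s * 1000.0) / provisioned_gb

# e.g. 1 GB/sec of sustained read spread over 1,000 GB of provisioned
# capacity works out to 1.0 MB/sec per GB
print(mb_s_per_gb(1.0, 1000.0))  # 1.0
```

The useful property of this metric is that it rewards architectures where adding capacity also adds controllers (scale-out), and penalises ones where a fixed controller head sits in front of an ever-growing pool of disks.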

The key here is that Lustre is designed as a scale-out architecture. The E5400 solution is built as a scale-out solution by using Lustre to aggregate the performance of multiple carefully matched E5400 controllers, whereas the VNX7500 used in the press release is a relatively poorly matched scale-up configuration being shoe-horned into a use case it wasn’t designed for.

In terms of performance per rack unit, or performance per GB, there simply isn’t much out there that comes close to an E5400 based Lustre solution, certainly not from EMC, as even Isilon, their best big data offering, falls way behind. The only significant questions that remain are how much they cost to buy, and how much power they consume.

I’ve seen the pricing for EMC’s top-of-the-range VNX7500, and it’s not cheap; it’s not even a little bit cheap, and the ultra-dense stuff shown in the proof-of-concept documents is even less cheap than their normal stuff. Now, I’m not at liberty to discuss our pricing strategy in any detail on this blog, but I can say that in terms of “bang per buck” the E5400 solutions are very, very competitive, and the power impact of the E5400 controller inside a 60-drive dense shelf is pretty much negligible. I don’t have the specs for the power draw of a VNX7500 and its associated external power units, but I’m guessing it adds around as much as a shelf of disks, the power costs of which add up over the three-year lifecycle typically seen in these kinds of environments.

From my perspective the VNX7500 is a good general purpose box, and EMC’s engineers have every right to be proud of the work they’ve done on it, but positioning it as a “record setting” controller for performance-dense HPC workloads on Lustre is stretching the truth just a little too far for my liking. While the 10GB/sec/rack mentioned in the press release might sound like a lot to those of us who’ve spent our lives around transaction processing systems, for HPC, 10GB/sec/rack simply doesn’t cut it. I know this, the HPC community knows this, and I suspect most of the reputable HPC focussed engineers at EMC know this too.

It’s a pity though that EMC’s PR department is spinning this for all they’re worth; I struggle with how they can possibly assert that they’ve set any kind of performance density record for any kind of realistic Lustre implementation, when the truth is that they are so very, very far behind. Maybe their PR department has been reading 1984, because claiming record setting performance in this context requires some of the most bizarre Orwellian doublespeak I’ve seen in some time.

Breaking Records … Revisited

November 3, 2011 10 comments

So today I found out that we’d broken a few records of our own a few days ago, with, at least from my perspective, surprisingly little fanfare; the associated press release came out late last night. I’d like to say that the results speak for themselves, and to an extent they do. NetApp now holds the top two spots, and four out of the top five results on the ranking ladder. If this were the Olympics, most people would agree that this represents a position of PURE DOMINATION. High fives all round, and much chest beating and downing of well deserved delicious amber beverages.

So, apart from having the biggest number (which is nice), what did we prove?

Benchmarks are interesting to me because they are the almost perfect intersection of my interests in both technical storage performance  and marketing and messaging. From a technical viewpoint, a benchmark can be really useful, but it only provides a relatively small number of proof points, and extrapolating beyond those, or making generalised conclusions is rarely a good idea.

For example, when NetApp released their SPC-1 benchmarks a few years ago, it proved a number of things

1. That under heavy load involving a large number of random writes, a NetApp array’s performance remained steady over time

2. That this could be done while taking multiple snapshots, and more importantly while deleting and retiring them while under heavy load

3. That this could be done with RAID-6, and with greater capacity efficiency as measured by RAW vs USED than any other submission

4. That this could be done at better levels of performance than an equivalently configured, commonly used “traditional array”, as exemplified by EMC’s CX3-40

5. That the copy-on-write performance of the snapshots on an EMC array sucked under heavy load (and by implication, similar copy-on-write snapshot implementations on other vendors’ arrays)

That’s a pretty good list of things to prove, especially in the face of the considerable unfounded misinformation being put out at the time, which, surprisingly, is still bandied about despite the independently audited proof to the contrary. Having said that, this was not a “my number is the biggest” exercise, which generally proves nothing more than how much hardware you had available in your testing lab at the time.

A few months later we published another SPC-1 result which showed that we could pretty much double the numbers we’d achieved in the previous generation, at a lower price per IOP, with what was at the time a very competitive submission.

About two years after that we published yet another SPC-1 result with the direct replacement for the controller used in the previous test (3270 vs 3170). What this test didn’t do was show how much more load could be placed on the system; what it did do was show that we could give our customers more IOPS at a lower latency with half the number of spindles. This was the first time we’d submitted an SPC-1e result, which focussed on energy efficiency. It showed, quite dramatically, how effective our FlashCache technology was under a heavy random write workload. It’s interesting to compare that submission with the previous one for a number of reasons, but for the most part, this benchmark was about FlashCache effectiveness.

We did a number of other benchmarks, including SPEC SFS benchmarks, that also proved the remarkable effectiveness of the FlashCache technology, showing how it could make SATA drives perform better than Fibre Channel drives, or dramatically reduce the number of Fibre Channel drives required to service a given workload. There were a couple of other benchmarks which I’ll grant were “hey, look at how fast our shiny new boxes can run”, but for the most part these were all done with configurations we’d reasonably expect a decent number of our customers to actually buy (no all-SSD configurations).

In the meantime EMC released some “Lab Queen” benchmarks. At first I thought EMC were just trying to prove how fast their new X-Blades were at processing CIFS and NFS traffic. They did this by configuring the back end storage in such a ridiculously overengineered way as to remove any possibility that it could cause a bottleneck; either that, or EMC’s block storage devices are way slower than most people would assume. From an engineering perspective I think the guys in Hopkinton who created those X-Blades did a truly excellent job: almost 125,000 IOPS per X-Blade using 6 CPU cores is genuinely impressive to me, even if all they were doing was processing NFS/CIFS calls. You see, unlike the storage processors in a FAS or Isilon array, the X-Blade, much like the network processor in a SONAS system or an Oceanspace N8500, relies on a back end block processing device to handle RAID, block checksums, write cache coherency and physical data movement to and from the disks, all of which is non-trivial work. What I find particularly interesting is that in all the benchmarks I looked at for these kinds of systems, the number of back end block storage systems was usually double that of the front end, which suggests to me either that the load placed on the back end systems by these benchmarks is higher than the load on the front end, or, more likely, that the front end / back end architecture is very sensitive to any latency on the back end, which means the back end systems get overengineered for benchmarks. My guess, after seeing the “all Flash DMX” configuration, is that Celerra’s performance is very adversely affected by even slight increases in back end latency, and that we start seeing some nasty manifestations of Little’s Law in these architectures under heavy load.

A little while later, after being present at a couple of EMC presentations (one at Cisco Live, the other at a SNIA event, where EMC staff were fully aware of my presence), it became clear to me exactly why EMC did these “my number is bigger than yours” benchmarks. The marketing staff at corporate created a slide that compared all of the current SPC benchmarks in a way that was accurate, compelling and completely misleading all at the same time, at least as far as the VNX portion goes. Part of this goes back to the way that vendors, including, I might say, NetApp, use an availability group as a point of aggregation when reporting performance numbers. This is reasonably fair, as adding active/active or active/passive availability generally slows things down due to the two-phase-commit nature of write caching in modular storage environments. However, the configuration of the EMC VNX VG8 Gateway/EMC VNX5700 actually involves 5 separate availability groups (1x VG8 Gateway system with 4+1 redundancy, and 4x VNX5700 with 1+1 redundancy). Presenting this as one aggregated performance number without any valid point of aggregation smacks of downright dishonesty to me. If NetApp had done the same thing then, using only 4 availability groups, we could have claimed over 760,000 IOPS by combining 4 of our existing 6240 configurations, but we didn’t, because frankly doing that is, in my opinion, on the other side of the fine line where marketing finesse falls off the precipice into the shadowy realm of deceptive practice.

Which brings me back to my original question: what did we prove with our most recent submissions? Well, three things come to mind.

1. That NetApp’s ONTAP 8.1 Cluster-Mode solution is real, and it performs brilliantly

2. It scales linearly as you add nodes (more so than the leading competitors)

3. That scaling with 24 big nodes gives you better performance and better efficiency than scaling with hundreds of smaller nodes (at least for the SPEC benchmark)

This is a valid configuration using a single vserver as a point of aggregation across the cluster, and trust me, this is only the beginning.

As always, comments and criticism are welcome.



How does capacity utilisation affect performance?

September 10, 2011 7 comments

A couple of days ago I saw an email asking “what is the recommendation for maximum capacity utilization that will not cause performance degradation”. On the one hand this kind of question annoys me, because for the most part it’s borne out of some of the usual FUD which gets thrown at NetApp on a regular basis; but on the other, even though correctly engineering storage for consistent performance rarely, if ever, boils down to any single metric, understanding capacity utilisation and its impact on performance is an important aspect of storage design.

Firstly, for the record, I’d like to reiterate that the performance characteristics of every storage technology I’m aware of that is based on spinning disks decreases in proportion to the amount of capacity consumed.

With that out of the way, I have to say that as usual, the answer to the question of how does capacity utilisation affect performance is, “it depends”, but for the most part, when this question is asked, it’s usually asked about high performance write intensive applications like VDI, and some kinds of online transaction processing, and email systems.

If you’re looking at that kind of workload, then you can always check out good old TR-3647 which talks specifically about a write intensive high performance workloads where it says

The Data ONTAP data layout engine, WAFL®, optimizes writes to disk to improve system performance and disk bandwidth utilization. WAFL optimization uses a small amount of free or reserve space within the aggregate. For write-intensive, high-performance workloads we recommend leaving available approximately 10% of the usable space for this optimization process. This space not only ensures high-performance writes but also functions as a buffer against unexpected demands of free space for applications that burst writes to disk

I’ve seen other benchmarks using synthetic workloads where a knee in the performance curve begins to appear at between 98% and 100% of the usable capacity (after the WAFL reserve is taken away), and I’ve also seen performance issues when people completely fill all the available space and then hit it with lots of small random overwrites (especially misaligned small random overwrites). This is not unique to WAFL, which is why it’s generally a bad idea to fill up all the space in any data structure that is subjected to heavy random write workloads.

Having said that, for the vast majority of workloads you’ll get more IOPS per spindle out of a NetApp array at all capacity points than you will out of any similarly priced/configured box from another vendor.

Leaving the FUD aside (the complete rebuttal of which requires a fairly deep understanding of ONTAP’s performance architecture), when considering capacity and its effect on performance on a NetApp FAS array, it’s worth keeping the following points in mind.

  1. For any given workload and array type, you’re only ever going to get a fairly limited number of transactions per 15K RPM disk, usually less than 250
  2. Array performance is usually determined by how many disks you can throw at the workload
  3. Most array vendors bring more spindles to the workload by using RAID-10, which uses twice the number of disks for the same capacity; NetApp uses RAID-DP, which does not automatically double the spindle count
  4. In most benchmarks (check out SPC-1), NetApp uses all but 10% of the available space (in line with TR-3647), which allows the user to use approximately 60% of the RAW capacity while still achieving the same kinds of IOPS/drive that other vendors are only able to achieve using 30% of the RAW capacity, i.e. at the same performance per drive we offer 10% more usable capacity than the other vendors could theoretically attain using RAID-10.

The bottom line is that, even without dedupe or thin provisioning or anything else, you can store twice as much information in a FAS array for the same level of performance as most competing solutions using RAID-10.

While that is true, it’s worth mentioning that it does have one drawback. While the IOPS/spindle is more or less the same, the IOPS density measured in IOPS/GB on the NetApp SPC-1 results is about half that of the competing solutions (same IOPS, 2x as much data = half the density). While that is actually harder to do, because you have a lower cache:data ratio, if you have an application that requires very dense IOPS/GB (like some VDI deployments, for example), then you might not be able to allocate all of that extra capacity to that workload. This, in my view, gives you three choices.

  1. Don’t use the extra capacity, just leave it as unused freespace in the aggregate which will make it easier to optimise writes
  2. Use that extra capacity for lower tier workloads such as storing snapshots or a mirror destination, or archives etc, and set those workloads to a low priority using FlexShare
  3. Put in a FlashCache card which will double the effective number of IOPS (depending on workload of course) per spindle, which is less expensive and resource consuming than doubling the number of disks
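The IOPS-density trade-off described above can be sketched with the rough figures from this post (~250 IOPS per 15K spindle, ~60% of RAW usable with RAID-DP per TR-3647 vs ~30% with RAID-10 style sizing); the 600GB drive size is my own placeholder assumption, and the ratio is independent of it anyway:

```python
def iops_per_usable_gb(iops_per_drive: float, drives: int,
                       usable_fraction: float, raw_gb_per_drive: float) -> float:
    """IOPS density: total spindle IOPS divided by usable capacity."""
    usable_gb = drives * raw_gb_per_drive * usable_fraction
    return (iops_per_drive * drives) / usable_gb

raid_dp = iops_per_usable_gb(250, 100, 0.60, 600)  # RAID-DP, TR-3647 sizing
raid_10 = iops_per_usable_gb(250, 100, 0.30, 600)  # RAID-10 style sizing

# Same spindle count and total IOPS, but twice the usable capacity,
# so the RAID-10 configuration shows twice the IOPS/GB density
print(raid_10 / raid_dp)  # 2.0
```

This is exactly why the extra capacity needs to go somewhere other than more hot workloads: the spindles behind it are already fully subscribed.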

If you don’t do this, then you may run into a situation I’ve heard of in a few cases, where our storage efficiencies allowed the user to put too many hot workloads on not enough spindles, and unfortunately this is probably the basis for the “anecdotal evidence” that allows the NetApp capacity/performance FUD to be perpetuated. This is inaccurate because it has less to do with the intricacies of ONTAP and WAFL, and far more to do with systems that were originally sized for a workload of X having a workload of 3X placed on them, because there was still capacity available on Tier-1 disk long after all the performance had been squeezed out of the spindles by other workloads.

Keeping your storage users happy means not only managing the available capacity, but also managing the available performance. More often than not, you will run out of one before you run out of the other, and running an efficient IT infrastructure means balancing workloads between these two resources. Firstly, this means you have to spend at least some time measuring and monitoring both the capacity and performance of your environment. Furthermore, you should set your system up so that it’s easy to migrate and rebalance workloads across other resource pools, or to easily add performance to your existing workloads non-disruptively, which can be done via technologies such as Storage DRS in vSphere 5, or ONTAP’s Data Motion and Virtual Storage Tiering features.

When it comes to measuring your environment so you can take action before the problems arise, NetApp has a number of excellent tools to monitor the performance of your storage environment. Performance Advisor gives you visualization and customised alerts and thresholds for the detailed inbuilt performance metrics available on every FAS Array, and OnCommand Insight Balance provides deeper reporting and predictive analysis of your entire virtualised infrastructure including non-NetApp hardware.

Whether you use NetApp’s tools or someone else’s, the important thing is that you use them, and take a little time out of your day to find out which metrics are important and what you should do when thresholds or high watermarks are breached. If you’re not sure about this for your NetApp environment, feel free to ask me here, or better still, open up a question in the NetApp Communities, which has a broader constituency than this blog.

While I appreciate that it’s tempting to just fall back on old practices and overengineer Tier-1 storage so that there is little or no possibility of running out of IOPS before you run out of capacity, this is almost always incredibly wasteful; in my experience it has resulted in storage utilisation rates of less than 20%, and it drives the cost/GB for “Tier-1” storage to unsustainable and economically unjustifiable heights. The time may come when storage administrators are given the luxury of doing this again, or you may be in one of those rare industries where cost is no object, but unless you’ve got that luxury, it’s time to brush up on your monitoring, workload rebalancing and optimisation skills. Doing more with less is what it’s all about.

As always, comments and contrary views are welcomed.

Categories: Performance, Value

Why Archive?

November 22, 2010 3 comments


During a discussion I had at the SNIA blogfest, I mentioned that I’d written a whitepaper on archiving, and I promised that I’d send it on. It took me a while to get around to this, but I finally dug it out from my archive, which is implemented in a similar way to the example policy at the bottom of this post. It only took me about a minute to search and retrieve it, once I’d started looking, from a FAS array far enough away from me to incur 60ms of RTT latency. Overall I was really happy with the result.

The document was in my archive because I wrote it almost two years ago. A number of things have changed since then, but the fundamental principles have not; I’ll work on updating it when things are less busy, probably sometime around January ’11. On a final note, because I wrote this a couple of years ago when my job role was different than it is today, the document is considerably more “salesy” than my usual blog posts; it shouldn’t be construed as a change in direction for this blog in general.


There are a number of approaches that can be broadly classified as some form of archiving, including Hierarchical Storage Management (HSM) and Information Lifecycle Management (ILM). All of these approaches aim to improve the IT environment by:

  • Lowering Overall Storage Costs
  • Reducing the backup load and improving restore times
  • Improving Application Performance
  • Making it easier to find and classify data

The following kinds of claims are common in the marketing material promoted by vendors of archiving software and hardware:

“By using Single Instance Storage, data compression and an ATA-based archival storage system (as opposed to a high performance, Fibre Channel device), the customer was able to reduce storage costs by $160,000 per terabyte of messages during the first three years that the joint EMC / Symantec solution was deployed. These cost savings were just the beginning, as the customers were also able to maintain their current management headcount despite a 20% data growth and the time it took to restore messages was drastically reduced. By archiving the messages and files, the customer was also able to improve electronic discovery retrieval times as all content is searchable by keywords.”

These kinds of results, while impressive, assume a number of things that are not true in NetApp implementations:

  • A price difference between primary storage and archive storage of over $US160,000 per TB.
  • Backups and restores are performed from tape using bulk data movement methods
  • Modest increases in storage capacities require additional headcount

In many NetApp environments, the price difference between the most expensive tier of storage and the least expensive simply does not justify the expense and complexity of implementing an archiving system based on a cost per TB alone.

For file serving environments, many file shares can be stored effectively on what would traditionally be thought of as “Tier-3” storage, with high-density SATA drives, RAID-6 and compression/deduplication. This is because unique NetApp technologies such as WAFL and RAID-DP provide the performance and reliability required for many file serving environments. In addition, the use of NetApp SnapVault replication-based data protection for backup and long-term retention means that full backups are no longer necessary. The presence or absence of the kinds of static data typically moved into archives has little or no impact on the time it takes to perform backups, or to make data available in the case of disaster.

Finally, the price per GB and per IOPS of NetApp storage has fallen consistently, in line with the trend in the industry as a whole. Customers can lower their storage costs by purchasing and implementing storage only as required. A NetApp FAS array’s ability to non-disruptively add new storage, or to move excess storage capacity and I/O from one volume to another within an aggregate, makes this approach both easy and practical.

While the benefits of archiving for NetApp based file serving environments may be marginal, archiving still has significant advantages for email environments, particularly Microsoft Exchange. The reasons for this are as follows:

  1. Email is cache-“unfriendly” and generally needs many dedicated disk spindles for adequate performance.
  2. Email messages are not modified after they have been sent/received.
  3. There is a considerable amount of “noise” in email traffic (spam, jokes, social banter, etc.).
  4. Smaller email stores are easier to cache, which can significantly improve performance and reduce the hardware requirements for both the email servers and the underlying storage.
  5. Email is more likely to be requested during legal discovery.
  6. Enterprises now consider email to be a mission-critical application, and some companies still mandate a tape backup of their email environments for compliance purposes.

Choosing the right Archive Storage

It’s about the application

EMC and NetApp take very different approaches to archive storage, each of which works well in a large number of environments. An excellent discussion of the details can be found in the NetApp whitepaper WP-7055-1008, Architectural Considerations for Archive and Compliance Solutions. For most people, however, the entire archive process is driven not at the storage layer but by the archive applications. These applications do an excellent job of making the underlying functionality of the storage system transparent to the end user, but the user is still exposed to the performance and reliability of the storage underlying the archives.

Speed makes a difference

Centera was designed to be “Faster than Optical”, and while it has surpassed that relatively low bar, its performance doesn’t come close to even the slowest NetApp array. This matters because the amount of data that can be pushed onto the archive layer is determined not just by IT policy, but also by user acceptance of, and satisfaction with, the overall solution. The greater the user acceptance, the more aggressive the archiving can be, which results in a lower TCO and a faster ROI.

Protecting the Archive

The archive storage layer needs to be reliable, but it should be noted that without the archive application and its associated indexes, the data is completely inaccessible, and may as well be lost. While it might be possible to rebuild the indexes and application data from the information in the archive alone, this process is often unacceptably long. Protecting the archive therefore involves protecting the archive data store, the full-text indexes, and the associated databases in a consistent manner at a single point in time.

Migrating from an Existing Solution

Many companies already have archiving solutions in place, but would like to change their underlying storage system to something faster and more reliable. Fortunately, archiving applications generally build the capability to migrate data from one kind of back-end storage to another into their software. The following diagrams show how this can be achieved with EmailXtender and DiskXtender to move data from Centera to NetApp.

Some organisations would prefer to completely replace their existing archiving solutions, including hardware and software. For these customers, NetApp collaborates with organisations such as Procedo to make this process fast and painless.


As mentioned previously, the cost and complexity of traditional archiving infrastructure may not add sufficient value to a NetApp file-serving environment, as many of the problems it solves are already addressed by core NetApp features. This does not mean that some form of storage tiering could not, or should not, be implemented on FAS to reduce the amount of primary NetApp capacity consumed.

One easy way of doing this is to take advantage of the flexibility of the built-in backup technology. This is an extension of the “archiving” policy used by many customers, where the backup system doubles as the archive. While the practice of mixing backup and archive is rightly discouraged by most storage management professionals, the reasons for discouraging it in traditional tape-based backup environments don’t apply here.

The reasons for this are:

  • Snapshot and replication based backups are not affected by capacity as only changed blocks are ever moved or stored
  • The backups are immediately available, and can be used for multiple purposes
  • Backups are stored on high-reliability disk in a space-efficient manner, using both non-duplication (unchanged blocks are shared between recovery points) and de-duplication techniques
  • Files can be easily found via existing user interfaces such as Windows Explorer or external search engines

In general, SnapVault destinations use the highest-density SATA drives with the most aggressive space-saving policies applied to them. These policies and techniques, which may not be suitable for high-performance file sharing environments, provide the lowest cost per TB of any NetApp offering. This, combined with the ability to place the SnapVault destination in a remote datacenter, may relieve the power, space and cooling pressures of increasingly crowded datacenters.

An example policy

Many companies’ file-archiving requirements are straightforward, and do not justify the detailed capabilities provided by archiving applications. For example, a company might implement the following backup and archive policy:

  • All files are backed up on a daily basis with daily recovery points kept for 14 days, weekly recovery points will be kept for two months and monthly recovery points kept for seven years.
  • Any file that has not been accessed in the last sixty days will be removed from primary storage and will need to be accessed from the archive
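
The retention side of that policy can be sketched as a simple lookup (the ages and recovery-point names here are illustrative, not ONTAP syntax):

```python
# Retention windows from the example policy above, expressed in days.
# "Two months" is approximated as 60 days for the sketch.
RETENTION_DAYS = {
    "daily": 14,         # daily recovery points kept for 14 days
    "weekly": 60,        # weekly recovery points kept for two months
    "monthly": 7 * 365,  # monthly recovery points kept for seven years
}

def is_retained(kind, age_days):
    """True if a recovery point of this kind and age should still exist."""
    return age_days <= RETENTION_DAYS[kind]

print(is_retained("daily", 10))      # inside the 14-day window
print(is_retained("daily", 20))      # expired daily
print(is_retained("monthly", 2000))  # roughly five and a half years old
```

A loop over the existing snapshots against a table like this is all the “policy engine” a simple environment really needs.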

This is easily addressed in a SnapVault environment through the use of the following:

  • Daily backups are transferred from the primary system to the SnapVault repository
  • Daily recovery points (snapshots) are kept on both the primary storage system and the SnapVault repository for 14 days
  • Weekly recovery points (snapshots) are kept only on the SnapVault repository
  • Monthly recovery points (snapshots) are kept only on the SnapVault repository
  • A simple shell script/batch file is executed after each successful daily backup, deleting any file from the primary volume that has not been accessed in sixty days
  • Users are allocated a drive mapping to their replicated directories on the SnapVault destination.
  • Optionally the Primary systems and SnapVault repository may be indexed by an application such as the Kazeon IS1200, or Google enterprise search.
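
The “simple shell script/batch file” step above could be sketched in Python (rather than shell) along these lines. The mount point is hypothetical, the sixty-day threshold mirrors the example policy, access-time tracking must be enabled on the volume for this to work, and you should run it in dry-run mode and only after a successful SnapVault transfer:

```python
# Post-backup cleanup sketch: find (and optionally delete) files on the
# primary volume that have not been accessed in sixty days. Hypothetical
# path; dry-run by default so nothing is removed until you're satisfied.
import os
import time

AGE_LIMIT_SECONDS = 60 * 24 * 60 * 60  # sixty days
PRIMARY_VOLUME = "/mnt/primary/share"  # hypothetical mount point

def delete_stale_files(root, age_limit=AGE_LIMIT_SECONDS, dry_run=True):
    """Remove (or, when dry_run, just report) files not accessed recently."""
    cutoff = time.time() - age_limit
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    stale.append(path)
                    if not dry_run:
                        os.remove(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
    return stale

if __name__ == "__main__":
    for path in delete_stale_files(PRIMARY_VOLUME):
        print("would delete:", path)
```

Because every file has already been transferred to the SnapVault repository by the time this runs, “deleting” here is really just demoting the file to the archive tier described in the next step.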

Users then need to be informed that old files will be deleted after sixty days, and that they can access backups of their data, including files deleted from primary storage, by looking through the drive mapped to the SnapVault repository, or optionally via the enterprise search engine’s user access tools.

By removing files from primary storage entirely, instead of using the traditional “stub” approach favoured by many archive vendors, the overall performance of the system is improved through a reduced metadata load, and users can find active files more easily because there are fewer files and directories on the primary systems.


Many organisations’ archiving requirements can be met by simply adding additional SATA disk to the current production system, replicated via SnapMirror to the current DR system, rather than by managing separate archive platforms.

This architecture provides flexibility and scalability over time and reduces management overhead. Tape can also be used for additional backup and longer-term storage if required. SnapLock, a software-licensable feature, provides the non-modifiable WORM-like capability required of an archive without additional hardware.

Categories: Archive, Data Protection, Value