Archive

Archive for the ‘Marketing’ Category

NetApp to talk business critical applications at vForum2012

October 31, 2012

Cloud technology is transforming the IT landscape and represents a huge opportunity for Australian businesses. Private Cloud technology, with server virtualisation at its core, helps speed up business processes and helps IT departments overcome challenges such as increasing infrastructure costs and the exponential growth of data.

An agile Private Cloud infrastructure provides businesses with many benefits, including the flexibility to scale up or down to meet the needs of any situation while simultaneously reducing infrastructure costs. Many of NetApp’s customers have seen significant improvements in the efficiency of their IT departments since adopting an agile data infrastructure based on cloud technology. Server virtualisation has increased productivity, enabling them to move away from inefficient applications and silo-based infrastructure.

Like any new technology, adopting a cloud-based infrastructure is not without its challenges. The complexity of virtual infrastructure, along with concerns about performance, availability, security and cost, has traditionally prevented many organisations from virtualising business critical applications.

vForum 2012 provides a fantastic opportunity for technology decision makers to meet with NetApp representatives and learn more about how adopting a Private Cloud infrastructure and server virtualisation can increase efficiency in their business. It will also provide a stage for NetApp to address common storage challenges around virtualising business critical applications, including:

  • The need to maintain optimal application availability for the business
  • Virtualization and sharing of storage resources
  • Security and containerization of business critical applications
  • Prioritization of business critical applications on a shared storage infrastructure
  • Agility to bring applications online quicker
  • Not meeting recovery point objectives (RPO) because backups do not complete within the defined window
  • Not meeting recovery time objectives (RTO) because the recovery process is too slow
  • Needing large amounts of storage to create database copies for environments such as dev/test and needing vast amounts of time to do so
  • Long time to provision storage or add capacity and deploy applications
  • Low overall storage utilisation

Make sure you don’t miss out on hearing from Vaughn Stewart, NetApp Director of Cloud Computing and Virtualization Evangelist, who will present on how storage is a key enabler of business critical application transformation and the rise of the software-defined data centre.

For those keen to engage in a deep-dive session with one of our virtualisation experts, one-on-one sessions are available and can be booked on the day at the NetApp platinum stand.

What else?

  • See NetApp’s VMworld award-winning demo
  • See how clustered ONTAP can simplify data mobility by moving workloads non-disruptively between storage nodes
  • Learn about NetApp’s planned integration with VMware VVOLS (coming in 2013)
  • See Virtual Storage Console with vSphere 5.1 and clustered ONTAP in action
  • Learn why FlexPod™ is the best-of-breed data centre platform to accelerate your transition to the Cloud
  • Explore how NetApp’s big data solutions can turn your big data threats into opportunities

And for all the Mo Bros (and Mo Sistas) out there, come by our NetApp platinum stand at vForum 2012 and support NetApp’s Movember campaign. Help us raise funds and awareness for men’s health issues, including prostate cancer and male mental health.

See you at vForum 2012 (14-15 November) at the Sydney Convention and Exhibition Centre. Register here: https://isa.infosalons.biz/reg/vforum12s/login.asp?src=vmware_eng_eventpage_web&elq=&ossrc=&xyz=

Categories: Marketing, Virtualisation

Big Data: Who Cares?

June 26, 2012

Despite the hype being generated by big data, many are questioning the actual implications for business. With data expected to grow at 40% a year and the digital universe recently breaking the zettabyte barrier, much of the buzz is based purely on the wow factor of just how big “big” really is.

Putting aside the hype about the amount of data, it’s also important to note that in addition to the increased volume, the nature of the data is also changing. Most of this growth is unstructured machine- and consumer-generated data. Digital technologies are moving to denser media, photos have all gone digital, videos are using higher resolution, and advanced analytics requires more storage.

In short, the amount of data is increasing and the data objects themselves are getting bigger. But so what? As an IT organisation, why should I care? There are two big reasons.

Business Advantage

In all of this unstructured data there is real business value. What businesses need to do is identify the specific opportunities they have to use these very large, unstructured data sets to generate real-world results.

An example of the benefits that can be derived from big data analytics is monitoring customer buying habits, integrated with real-time alerts. If a regular customer does something unexpected – like ordering 300 times more than they usually do – the company can immediately re-route inventory to satisfy the demand. For businesses in retail environments, being able to store and analyse hours, days, months and years of surveillance video allows them to optimise product placement and service locations.
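
As a rough illustration of the kind of rule that sits behind such an alert, the short Python sketch below (with entirely made-up customer IDs, quantities and threshold, and not tied to any particular analytics product) simply compares each incoming order against that customer’s historical average:

from collections import defaultdict

history = defaultdict(list)   # customer_id -> past order quantities
ALERT_MULTIPLIER = 300        # flag orders roughly 300x the customer's usual volume

def record_order(customer_id, quantity):
    # Compare the incoming order with the customer's historical average and
    # raise an alert (here, just a print) if it is wildly out of line.
    past = history[customer_id]
    if past:
        average = sum(past) / len(past)
        if quantity >= ALERT_MULTIPLIER * average:
            print(f"ALERT: customer {customer_id} ordered {quantity} units "
                  f"(~{quantity / average:.0f}x usual) - re-route inventory")
    past.append(quantity)

# A customer who normally orders around 10 units suddenly orders 4,000.
for qty in (10, 12, 9, 11):
    record_order("C-42", qty)
record_order("C-42", 4000)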

As the business advantage of big data analysis becomes more apparent, the new consumers of business intelligence will come from places like marketing and product operations rather than the traditional consumer of business intelligence, the finance department. These departments expect immediate results, which may be impossible to satisfy cost-effectively with existing solutions. This is driving increasing interest in leveraging open source solutions like Hadoop for specific data marts, at a price point that fits within these departments’ ability to provide project funding. From an IT perspective, it’s vital to ensure your infrastructure is managed in a way that will allow it to adapt to these changing needs.

Cost of Compliance

Based on current trends, at some point the cost of compliance will break your budget. Conformance requirements mean you need to keep an ever-growing amount of increasingly valuable data. Compliance with new laws may require data to be kept forever and retrieved immediately when required. Additionally, as existing infrastructure scales, the complexity of managing and protecting the data becomes impractical. Eventually you will have to think differently about your data and keep more while spending less.

As an IT organisation you may be thinking that your own data growth will soon be stretching the limits of your infrastructure. A way to define big data is to look at your existing infrastructure, the amount of data you have now, and the amount of growth you’re experiencing. Is it starting to break your existing processes? If so, where?

At NetApp we’ve taken a practical approach to helping our customers with their big data challenges. It’s not about some future unknown state that requires retraining your staff with new competencies or changing the way you do business. It’s about what you can do today that can make a real difference. If you can find the value in your universe of data and help your company turn it into real business advantage, then you will be instrumental to its success and fulfil the promise that IT has held out to the business. If not, IT risks being seen as just another cost centre with an increasingly cloudy future.

Categories: Big Data, Marketing, Value

Breaking Records … Revisited

November 3, 2011

So today I found out that we’d broken a few records of our own a few days ago, with, at least from my perspective, surprisingly little fanfare; the associated press release only came out late last night. I’d like to say that the results speak for themselves, and to an extent they do. NetApp now holds the top two spots, and four of the top five results, on the ranking ladder. If this were the Olympics, most people would agree that this represents a position of PURE DOMINATION. High fives all round, and much chest beating and downing of well-deserved delicious amber beverages.

So, apart from having the biggest number (which is nice), what did we prove?

Benchmarks are interesting to me because they sit at the almost perfect intersection of my interests in technical storage performance and in marketing and messaging. From a technical viewpoint, a benchmark can be really useful, but it only provides a relatively small number of proof points, and extrapolating beyond those, or drawing generalised conclusions, is rarely a good idea.

For example, when NetApp released its SPC-1 benchmarks a few years ago, it proved a number of things:

1. That under heavy load involving a large number of random writes, a NetApp array’s performance remained steady over time

2. That this could be done while taking multiple snapshots and, more importantly, while deleting and retiring them under heavy load

3. That this could be done with RAID-6 and with greater capacity efficiency, as measured by raw versus used capacity, than any other submission

4. That this could be done at better levels of performance than an equivalently configured, commonly used “traditional array”, as exemplified by EMC’s CX3-40

5. That the copy-on-write performance of the snapshots on an EMC array sucked under heavy load (and, by implication, so would similar copy-on-write snapshot implementations on other vendors’ arrays)

That’s a pretty good list of things to prove, especially in the face of the considerable unfounded misinformation being put out at the time, which, surprisingly, is still bandied about despite the independently audited proof to the contrary. Having said that, this was not a “my number is the biggest” exercise, which generally proves nothing more than how much hardware you had available in your testing lab at the time.

A few months later we published another SPC-1 result which showed that we could pretty much double the numbers we’d achieved on the previous generation, at a lower price per IOP, with what was at the time a very competitive submission.

About two years after that we published yet another SPC-1 result with the direct replacement for the controller used in the previous test (3270 vs 3170). What this test didn’t do was show how much more load could be placed on the system; what it did do was show that we could give our customers more IOPS at a lower latency with half the number of spindles. This was also the first time we’d submitted an SPC-1/E result, which focussed on energy efficiency. It showed, quite dramatically, how effective our Flash Cache technology was under a heavy random write workload. It’s interesting to compare that submission with the previous one for a number of reasons, but for the most part this benchmark was about Flash Cache effectiveness.

We did a number of other benchmarks, including SPEC SFS benchmarks, that also proved the remarkable effectiveness of the Flash Cache technology, showing how it could make SATA drives perform as well as, or better than, Fibre Channel drives, or dramatically reduce the number of Fibre Channel drives required to service a given workload. There were a couple of other benchmarks which I’ll grant were “hey, look at how fast our shiny new boxes can run”, but for the most part these were all done with configurations we’d reasonably expect a decent number of our customers to actually buy (no all-SSD configurations).

In the meantime EMC released some “Lab Queen” benchmarks. At first I thought EMC were trying to prove just how fast their new X-Blades were at processing CIFS and NFS traffic. They did this by configuring the back-end storage in such a ridiculously overengineered way as to remove any possibility that it could become a bottleneck; either that, or EMC’s block storage devices are way slower than most people would assume. From an engineering perspective I think the guys in Hopkinton who created those X-Blades did a truly excellent job: almost 125,000 IOPS per X-Blade using 6 CPU cores is genuinely impressive to me, even if all they were doing was processing NFS/CIFS calls.

You see, unlike the storage processors in a FAS or Isilon array, the X-Blade, much like the network processor in a SONAS system or an Oceanspace N8500, relies on a back-end block processing device to handle RAID, block checksums, write cache coherency and physical data movement to and from the disks, all of which is non-trivial work. What I find particularly interesting is that in all the benchmarks I looked at for these kinds of systems, the number of back-end block storage systems was usually double that of the front end. That suggests to me either that these benchmarks place a higher load on the back-end systems than on the front end or, more likely, that the front-end/back-end architecture is very sensitive to any latency in the back-end systems, which means the back end gets overengineered for benchmarks. My guess, after seeing the “All Flash DMX” configuration, is that the Celerra’s performance is very adversely affected by even slight increases in back-end latency, and that under heavy load we start seeing some nasty manifestations of Little’s Law in these architectures.
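
For anyone who hasn’t come across Little’s Law, it simply says that the average number of requests in flight equals throughput multiplied by response time (L = λ × W). The quick Python sketch below, using made-up numbers rather than anything from a published benchmark, shows why a front-end/back-end design suffers as back-end latency creeps up: either the number of outstanding I/Os has to grow, or throughput falls.

# Little's Law: in-flight requests (L) = throughput (lambda) x response time (W).
# Illustrative numbers only - not taken from any published benchmark result.

def outstanding_ios(iops, latency_ms):
    # Concurrent I/Os that must be kept in flight to sustain `iops` at `latency_ms`.
    return iops * (latency_ms / 1000.0)

target_iops = 100_000  # front-end throughput we want to sustain
for backend_latency_ms in (1.0, 2.0, 5.0, 10.0):
    in_flight = outstanding_ios(target_iops, backend_latency_ms)
    print(f"{backend_latency_ms:>4.1f} ms back-end latency -> {in_flight:,.0f} I/Os in flight")

# At 1 ms the front end only needs 100 outstanding I/Os to hold 100,000 IOPS;
# at 10 ms it needs 1,000. If its queue depth is capped, throughput drops instead.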

A little while later, after being present at a couple of EMC presentations (one at Cisco Live, the other at a SNIA event, where EMC staff were fully aware of my presence), it became clear to me exactly why EMC did these “my number is bigger than yours” benchmarks. The marketing staff at corporate had created a slide that compared all of the current SPC benchmarks in a way that was accurate, compelling and completely misleading all at the same time, at least as far as the VNX portion goes. Part of this goes back to the way that vendors, including, I might say, NetApp, use an availability group as the point of aggregation when reporting performance numbers. This is reasonably fair, as adding Active/Active or Active/Passive availability generally slows things down due to the two-phase-commit nature of write caching in modular storage environments. However, the EMC VNX VG8 Gateway/EMC VNX5700 configuration actually involves five separate availability groups (1x VG8 Gateway system with 4+1 redundancy, and 4x VNX5700 with 1+1 redundancy). Presenting this as one aggregated performance number without any valid point of aggregation smacks of downright dishonesty to me. If NetApp had done the same thing then, using only four availability groups, we could have claimed over 760,000 IOPS by combining four of our existing 6240 configurations. But we didn’t, because frankly, in my opinion, doing that is on the other side of the fine line where marketing finesse falls off the precipice into the shadowy realm of deceptive practice.
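
To make the aggregation arithmetic concrete, here is a deliberately trivial sketch; the per-group figure is nothing more than the 760,000 IOPS mentioned above divided across four availability groups, so it is purely illustrative rather than a published per-system result.

# Summing independently published results across availability groups is trivial,
# which is exactly why it proves nothing about any single system.
per_group_ops = 760_000 // 4   # assumed ~190,000 ops per 6240 availability group
groups = 4
print(f"{groups * per_group_ops:,} 'aggregate' ops")  # 760,000 - with no single point of aggregation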

Which brings me back to my original question: what did we prove with our most recent submissions? Well, three things come to mind:

1. That NetApp’s ONTAP 8.1 Cluster-Mode solution is real, and it performs brilliantly

2. That it scales linearly as you add nodes (more so than the leading competitors)

3. That scaling with 24 big nodes gives you better performance and better efficiency than scaling with hundreds of smaller nodes (at least for the SPEC benchmark)

This is a valid configuration using a single vserver as a point of aggregation across the cluster, and trust me, this is only the beginning.

As always, comments and criticism are welcome.

Regards

John

Breaking Records?

January 19, 2011

I’m in the middle of digesting what was actually released in EMC’s recent launch. For the most part there isn’t anything really new: lots of unsupported hype like “3 times simpler, 3 times faster.” Faster than what, exactly? From a technical perspective the only thing that’s really interesting or surprising is the VNXe, and that was less interesting than I expected because I thought they were going to refresh their entire range using that technology. So it looks like they’ve given up trying to make that scale for the moment.

So much of what they’ve done copies or validates what we’ve already done at NetApp:

  • Simplified software packaging
  • Launching a lot of stuff at the same time
  • New denser shelves with small form-factor drives
  • An emphasis on storage efficiency
  • An emphasis on flash as a caching layer
  • The ideal match between unified storage and virtualized environments

The biggest change I see is that they now appear to be shipping all their controllers with unified capability from the start, enabled via a software upgrade, which is something EMC has criticised us for in the past. Now they acknowledge that the only way to compete with NetApp effectively is to be as much like us as they possibly can. This might explain why EMC in Australia isn’t going to sell the “Block only” VNX 5100. SearchStorage.com.au had this report:

EMC’s new VNX 5100, a block-only storage device, won’t go on sale in Australia because “We did not see great enough demand to see that particular system,” according to Mark Oakey, the company’s Marketing Manager for Storage Platforms in Australia and New Zealand. “We’ll continue with the Clariion CX4 120,” he told SearchStorage ANZ. “It has more or less the same capabilities.”

Most of the interesting capabilities they’re touting came last year with FLARE 30 and DART 6.0 (two of their operating systems). Even the VMAX stuff they’re pushing during the launch came out via a software upgrade, without a lot of fanfare, in December, so as far as I can see their “record breaking announcement” consists of announcing a whole bunch of things they’d already done, along with some new tin.

Things they didn’t announce:

  1. Multistore equivalency
  2. V-Series equivalency
  3. Unified replication capabilities
  4. A commercial-grade, VMware-based "Virtual Storage Array" (the new low-end box is based on Linux)
  5. A scale out roadmap for their “Unified” platform
  6. Any significant change in their management software strategy or offering
  7. Block level deduplication for their unified arrays
  8. Clarification on where their newly acquired scale-out Isilon systems fit within their new "Unified" ecosystem

Overall, EMC did a catch-up release to try to maintain pace with NetApp innovation, and nothing they’ve done or released represents a significant new threat. If this is

“the most significant midrange announcement in EMC’s 30-year history”

according to Rich Napolitano, President, Unified Storage Division at EMC, then EMC will continue to play catch-up as NetApp redefines Unified Storage and its role in shared infrastructure.

Categories: EMC, Hyperbole, Marketing