Software Defined Networking – The Next Frontier

April 16, 2013 1 comment

The following part of the post came from content I wrote for Evolve, a newsletter we publish out of ANZ. It’s a little long and technical for an executive-focused newsletter, which is partly why it gets a little rushed at the end. What I’d like to do is expand on what I believe are the choices that can be made when separating the control and data planes in a software defined storage architecture, where the industry, and in particular NetApp, is today, where things are likely to go, and most importantly how to get value from this architectural shift.

Introduction

CIOs face the constant challenge of turning rapid technological developments into business advantage. If this was not difficult enough, there are often times when multiple technologies are simultaneously released into the market, changing the IT landscape. The datacentre is currently on the cusp of such a revolution.

As it was for workforce mobility and cloud computing, it is the network that will be at the centre of these transformations. A network connects resources to intelligence and allows us to redefine what a datacenter is, and how we consume its properties.

It isn’t just incredible speed and massive bandwidth that is causing this transformation, but the fruition of an idea that’s been in development for the last decade, and that idea is Software-Defined Networking or SDN.

Software Defined Networking

This disruptive trend in the networking industry rediscovers the old idea of separating the control and data planes in network equipment. In other words, SDN liberates the higher-level network management functions from their ties to individual boxes and instead offers the vision of a “network operating system”. This allows networked applications to provision and control their networking needs using high-level open programming interfaces provided by an SDN “network controller”. The promise of this approach has meant that in a few short years, Software-Defined Networking has turned from a simple idea meant to enable new academic networking research into a potentially industry-changing technology trend.
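To make the idea of those high-level, open programming interfaces concrete, here is a minimal sketch (in Python) of how an application might ask an SDN controller’s northbound API to provision a path on its behalf. The controller URL, endpoint and payload fields below are hypothetical illustrations, not any particular product’s API.

```python
# A minimal sketch of the "network operating system" idea: an application asks an
# SDN controller to provision connectivity, rather than configuring switches box by box.
# The controller address, endpoint and payload fields are hypothetical.
import json
import requests

CONTROLLER = "http://sdn-controller.example.com:8080"  # hypothetical controller address

def request_path(src_host, dst_host, min_bandwidth_mbps):
    """Ask the controller (northbound API) for a path with a bandwidth guarantee."""
    payload = {
        "source": src_host,
        "destination": dst_host,
        "min_bandwidth_mbps": min_bandwidth_mbps,
    }
    resp = requests.post(
        CONTROLLER + "/api/v1/paths",           # hypothetical endpoint
        data=json.dumps(payload),
        headers={"Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()                           # e.g. {"path_id": "...", "hops": [...]}

if __name__ == "__main__":
    path = request_path("app-server-01", "storage-node-07", 1000)
    print("Provisioned path:", path)
```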

The reason for this is that the network virtualization technology that is part of SDN is the missing piece that completes the vision of a software-defined datacenter, where compute, network and storage resources are elastic and dynamically adaptable. This network virtualization not only completes this vision, it raises the bar on how the different virtualized components integrate and interact in new, direct and more dynamic ways. This changes what IT will expect from their storage infrastructure.

SDN and the implications for Storage

Infrastructure managers who see the promise of a software defined datacentre are beginning to see storage as an important part of the infrastructure they desire to manage within the context of an SDN. However, this is only possible if the storage infrastructure itself can be separated between software that controls and manages data, and the infrastructure that stores, copies and retrieves that data. In short, storage needs to have its own control and data planes, working seamlessly as an extension of the SDN infrastructure that will be the core of the next generation datacentre.

Part of the reason for wanting to separate the control plane and liberate the storage control software from the hardware is that software defined storage allows the computationally heavy aspects of storage management, functions like RDMA protocol handling, advanced data lifecycle management, caching and compression, to be offloaded. The availability of large amounts of CPU power within private and public clouds opens all kinds of possibilities for both network and storage management that were simply not feasible before.

With more intelligence built into the Control-Plane, storage architects are now able to take full advantage of the other two major changes in the Data-Plane. The first, and perhaps the most interesting, is the increasing affordability of solid state memory such as Flash and post-Flash technology such as PCM and STT-RAM.

Optimising Performance

Phase Change Memory (PCM) and Spin-Transfer Torque RAM (STT-RAM) have the access speeds and byte-addressable characteristics of the Dynamic RAM (DRAM) used in servers today, with the added and transformational benefit of the solid state persistence of Flash. These technologies are significantly more expensive than Flash is today, but the predictions are that they will surpass even the cheapest forms of Flash memory within five or six years. Regardless of which technology wins, the trends are clear: within a few years the majority of a server’s storage performance requirements will be served from some form of solid state storage within the server itself. When this is combined with new network technology and software like SAP HANA, it has major implications for storage design and implementation. Imagine how your infrastructure would change if every server had terabytes of super-fast solid state memory connected together via ultra-low latency intelligent networking. The fact is that many of the reasons we implement shared storage for mission critical applications today would simply disappear.

Optimising Capacity

The second major change is the demand to store and process massive amounts of data, a demand that increases as we are able to extract more value from that data through Big Data analysis. This coincides with a dramatic reduction in the cost of storing that data. Very high density SATA drives with capacities in excess of 10TB per disk are coming, but in order to get past some hard, quantum-physics-level limitations they will use new storage techniques such as shingled writes and will be built optimally to store, but never overwrite or erase, data. This means the storage characteristics at the Data-Plane will be fundamentally different from those we are familiar with today. Furthermore, even with these improvements in the cost and density of magnetic disk, the economics of power consumption and datacentre real estate mean that tape is becoming attractive again for long term archival storage. Finally, the economies of scale that large cloud providers have, and the massive computing power they are able to place in close proximity to that data, mean that those cloud providers will have a compelling value proposition for storing a large proportion of an organisation’s cold data.

Regardless of where and how this data is stored, the challenges of securing and finding that data, and managing the lifecycles of this massive amount of information means traditional methods of using files, folders and directories simply won’t be viable in the long term. New access and management techniques built on-top of object based access to data such as Amazon’s S3 and the open standards based CDMI interfaces will be the management and data access protocols of choice.
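As a rough illustration of what object-based access looks like in practice, here is a small sketch using the boto3 SDK as one example of an S3-style interface. The bucket, key and metadata values are purely illustrative, and the point is simply that data is addressed by key plus metadata rather than by files, folders and directories.

```python
# A minimal sketch of object-based access: data is addressed by bucket/key plus
# metadata rather than by file paths and directories. Bucket, key and metadata
# values below are illustrative only.
import boto3

s3 = boto3.client("s3")  # credentials and region come from the environment

def archive_object(bucket, key, data, retention_class):
    # Metadata travels with the object, which is what makes lifecycle and
    # search policies practical at scale compared with folder hierarchies.
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        Metadata={"retention-class": retention_class},
    )

def retrieve_object(bucket, key):
    resp = s3.get_object(Bucket=bucket, Key=key)
    return resp["Body"].read(), resp.get("Metadata", {})

if __name__ == "__main__":
    archive_object("cold-archive", "2013/q1/report.pdf", b"...", "7-year")
    body, meta = retrieve_object("cold-archive", "2013/q1/report.pdf")
    print(meta)
```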

Conclusion

In the end the only way to effectively combine the speed and performance of solid-state storage with the scale and price advantages of capacity optimised storage is by using a software defined storage infrastructure. It is the intelligence of the separated Control-Plane powered by commodity CPU that allows infrastructure managers and datacentre architects to take advantage of these two massive trends.

While all of this describes what will happen in the future, unlike other vendors who are only just beginning to talk about building a software defined storage infrastructure, NetApp has been planning for this future for many years.

  • Clustered Data ONTAP was built on the principle of separating the Data-Plane and the Control-Plane, and is ready to take advantage of the trends in software defined networking as they evolve and are deployed into datacentres over the next few years
  • NetApp’s fully supported ONTAP Edge software runs in a virtual machine, bringing the full power of ONTAP’s advanced data management functionality to commodity DAS, and NetApp’s V-Series controllers perform the same function at extreme scale for the largest and most mission critical environments
  • NetApp has released, at no cost to the customer, Flash Accel technology that allows commodity SSDs and 3rd-party PCI-based Flash cards to act as an extension of our storage cache for virtualized environments. This extends our work on separating control and data planes to existing customers who have not yet moved to Clustered Data ONTAP
  • NetApp has partnered with Amazon to provide private storage for AWS which allows the massive on-demand compute power to be coupled with NetApp’s storage in Amazon’s datacenters
  • NetApp already provides open, standards-based programming and automation interfaces through offerings such as NetApp Workflow Automation, the Cloud Data Management Interface (CDMI) and SMI-S, and continues to lead the industry in providing programmable software defined storage. These aren’t just technology tick-box items, but technology that drives significant competitive advantage for companies, such as ING DIRECT’s “Bank in a Box”.

These are just a few of the things we’ve already done. The foundations have been set, and what NetApp will be building and bringing to market over the next few years will truly redefine what storage is inside the datacenter, and the value it can bring to IT and the organisations it serves.

A very long reply

April 11, 2013 5 comments

This blog post is essentially a very long comment reply to Darius Zanganeh at Oracle on his blog:

https://blogs.oracle.com/si/entry/7420_spec_sfs_torches_netapp#comments

I suspect it is so long that it probably needs more formatting to be readable, so I’m posting it here too.

—–BEGIN REPLY—-

Thanks for posting the reply; again, I think you’re missing my point.

 DZ …  “Oracle has 2.5x Performance for 1/2 the cost of a Netapp”

 A more accurate statement is that “The Oracle 7420 attained a benchmark result that was 2.5x better for 1/2 the _List Price_ of a NetApp 3270 array benchmarked in 2011 that had significantly less hardware.”

 What this says is that Oracle’s list price is significantly lower than NetApp’s list price. You could say the same thing about the difference between the price of a Hyundai i30 and an Audi A3.

 Secondly, as I pointed out in previous comments, the rest of the results also show that the Oracle solution makes relatively inefficient use of CPU and memory when compared to a NetApp system that achieves similar performance.

 Yes, the list price of the NetApp system is significantly higher than an equivalently performing 7420, but this is a marketing and pricing issue, not a technical one. In general I like to stick to technical merits, because pricing is a fickle thing that can be adjusted at the stroke of a pen; technology requires a lot more work to get right.

In the end, how this list price differentiation translates into what these solutions will actually cost a customer is highly debatable. I do a LOT of research into street prices as part of my job, and in general storage is increasingly purchased as part of an overall upgrade. This is where the issues get murky very quickly, as margins are moved around various components within the infrastructure to subsidise discounting in other areas. Having said that, I will let you in on something: based on the data I have, for many quarters, in terms of average $/RAW TB paid by customers in my market, Oracle customers paid about 25% MORE for V7000 storage than NetApp customers paid for storage on FAS32xx, and only recently did Oracle begin to reach pricing parity with NetApp. We could argue about the way the analyst arrived at those figures, but from my analysis the trend is clear across almost all vendors and array families: customer $/TB is strongly correlated with the implied manufacturing costs, and very poorly correlated with vendor list prices. The main exceptions are new product introductions where there is a compelling new and unique value proposition (e.g. DataDomain), or when vendors buy business at very low or even negative margin in order to seed the market (e.g. XIV in the early days).

Now personally, I disagree with NetApp’s list pricing policy; however, there are reasons why that list price is so much higher than the actual street price most people pay. Many of those reasons have to do with boring things like long term pricing contracts. If you’d like to turn this into a marketing discussion around pricing strategies, I’m cool with that, but I don’t think the people who read either of our blogs are overly interested. However, I will say this again: the price people pay in the end has more to do with the costs of manufacture, and a solution that gets more performance out of less hardware will generally cost the customer less, especially if the operational expenses are lower.

DZ .. “Why wouldn’t a customer want more CPU and Cache?”

Why would someone want less CPU or cache? Because it costs them less, either in street pricing terms or in the cost of powering and cooling it. And yes, I believe that a 7420 controller with more than eleven times as many CPU cores, and more than one hundred and sixty times as much DRAM, will chew a lot more power and cooling than a 3270 controller.

It’s not just the cost of the electricity (carbon footprint and green ethics aside); it’s also the opportunity cost of using that power for something else. Data centers have finite resources for power, and many (most) are very close to the point where you can’t add more systems. In those environments, power-hungry systems that aren’t running business-generating applications are not viewed kindly.

JM interpretation of DZ … “Happy to do a power consumption comparison, where is the NetApp information?”

I’ve answered that in response to a similar question put to me on my blog at storagewithoutborders.com – see the blog-post URL in a previous comment regarding getting access to power consumption figures.

DZ .. You say the Netapp cache is SO efficient and you talk about an old non relevant 3160 SPEC SFS post

I referenced the “non relevant 3160 SPEC SFS post” because it is relevant: it is where NetApp tested the same controller with and without flash acceleration, on both SATA and FC/SAS spindles. The specific result I referenced was the most comparable configuration, one that includes flash and 300GB 15K disks, which as I pointed out achieved 1080 IOPS per 15K spindle with a cache that was 7.6% of the fileset size.

If you prefer, I could have used the more recent (though still old) 6240 dual node config, which used 450GB 15K disks and achieved 662 IOPS per drive with a cache that was a mere 4.5% of the fileset size, or the 24-node 6240 config, which achieved 875 IOPS per drive with a cache that was 7.6% of the fileset size. As you can see, a modest amount of flash improves the IOPS/disk enormously, and there is a good correlation between more flash as a percentage of the working set and better results in terms of IOPS/disk. Before you ask, as far as I can tell, the main reason for the difference in IOPS/spindle between the 24-node 6240 and the old 3160 with a similar cache size as a percentage of the fileset is that NetApp’s scale-out benchmark used worst-case paths from the client to the data to provide a squeaky clean implementation of SPEC’s uniform access rule.
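(For readers following the arithmetic at home, the per-drive and cache-fraction figures above fall straight out of the published numbers, as the little sketch below shows. The inputs here are illustrative placeholders, not the actual SPEC SFS disclosure values.)

```python
# How the per-drive and cache-fraction figures quoted above are derived: divide the
# benchmark's delivered ops/sec by the data-drive count, and the flash cache size by
# the exported fileset size. Inputs are illustrative placeholders, not published
# SPEC SFS disclosure values.
def iops_per_drive(total_ops_per_sec, data_drive_count):
    return total_ops_per_sec / data_drive_count

def cache_fraction_of_fileset(flash_cache_gb, fileset_gb):
    return flash_cache_gb / fileset_gb

if __name__ == "__main__":
    ops, drives = 190_000, 176            # hypothetical benchmark result
    cache_gb, fileset_gb = 1_700, 22_400  # hypothetical cache and fileset sizes
    print(f"{iops_per_drive(ops, drives):.0f} ops/sec per drive")
    print(f"cache = {cache_fraction_of_fileset(cache_gb, fileset_gb):.1%} of fileset")
```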

DZ .. “You fail to mention that the 3270 gets a MEASLY 281 IOPS per drive and that the 3250 gets a whopping 300 IOPS per drive. So your point is that the 3250 was done to compare with the 3270? What was the 3270 done for?”

Neither the 3270 nor the 3250 benchmark used Flash Cache, so the IOPS/spindle are going to be good, but not stellar. I don’t know exactly why we didn’t use flash in the old 3270 benchmark; maybe it’s because SPEC SFS is a better indication of CPU and metadata handling than it is of reads and writes to disk, and, like I said, we’d already proved the effectiveness of our flash based caching with the series of 3160 benchmarks.

Going forward, I doubt NetApp will do another primary benchmark without flash, but it’s worth saying again that the 3250 benchmark was done to show performance equivalency with the 3270, so the configuration was kept as close to identical as NetApp could make it, and that meant neither the 3270 nor the 3250 benchmark used flash to improve the IOPS/disk. If NetApp had done so, I have every reason to believe that the results would have been in line with the 3160 and 6240 benchmarks referenced above.

DZ .. “I thought the purpose of a benchmark was to compare many vendors systems against each other with the workload remaining consistent?”

NetApp tends to use benchmarks as a way of demonstrating how much their technology has improved against a previous NetApp baseline, to help their customers make good purchasing decisions. Proving they’re better than someone else is not a primary consideration, though it is often a secondary effect. Oracle is free to use their benchmarks in any way they choose. Personally, I’d love to see a range of configurations from each technology benchmarked rather than just sweet spots; maybe OpenSFS and Netmist will bring this about, but the fact is that running open, verifiable, and fairly comparable benchmarks is expensive and time consuming, and I will probably never see enough good engineering data published. If you’ve got some ideas to simplify this, I’d love to work with you on it (seriously, we might compete against each other, but we both clearly care about this stuff, and not many do).

DZ .. With that in mind the Oracle 7420 still crushes the netapp in price, efficiency and performance. I am guessing we are also still better or comparable in power usage as well.

You’ll see from the above that I respectfully disagree with pretty much everything in that last statement, and I’m looking forward to that controller power usage comparison 🙂

—END REPLY—

Categories: Uncategorized

Has NetApp Changed its Position re Flash?

February 22, 2013 1 comment

In a Register article I read today (http://www.theregister.co.uk/2013/02/19/netapp_flashray/), Chris Mellor stated:

NetApp in marketing terms has made a U-turn. It may well deny it and probably will, merely acknowledging a slight change in direction. I’ve had a NetApp office of the CTO guy try to convince me that there is only a need for flash as a cache and not a tier and have seen the strenuous efforts NetApp people have made in the past to deny the validity of cache as a storage tier. Huh! If it quacks like a duck, paddles like a duck and flies like a duck then it is a duck. And this is a U-turn.

I posted a response on the column, and in it I said, “There is no U-turn, there is simply a logical progression to a future state.” The rest of this post is pretty much what’s in the comment response, but I thought I’d post it here because I’ll probably want to refer back to it some other time (that, and the fact that I can fix spelling mistakes more easily here).

Using flash to temporarily store hot data still makes sense for the vast majority of workloads. Whether you do this as a tier or as a cache (or, from my perspective, whether it’s a write-behind or write-through cache), the economics of flash and disk today make this the best way of applying solid state storage to business problems, especially in shared and virtualised environments where IT efficiency and reliability are the primary concerns. That’s why the vast majority of flash storage sold to enterprises has been in hybrid arrays; NetApp alone has sold more than 35PB of flash in hybrid arrays, which simply dwarfs the shipments of pure flash arrays.

Within the context of most shared virtualised infrastructure, flash as a cache or temporary storage tier is still the best possible solution. However, there is a major architectural change happening in which massive amounts of solid state storage will increasingly be built directly into the server infrastructure, like a MacBook Air on steroids. The performance benefits of having that cache very close to the CPU can be impressive, and for the right workload, dedicating some flash in the server to that application can have amazing results, just ask Fusion-io. That is what Flash Accel is all about, as it lets you easily dedicate a few hundred GB of flash to just one part of your infrastructure.

There are however some applications like high frequency trading where a few hundred GB just isn’t enough. These applications need large amounts of dedicated high speed kit, and when millisecond time differences result in million dollar profit differences, efficiency gives way to no-compromise performance. It is these kinds of applications that the EF540 is perfect for, just the same way as the E5400 is perfect for other kinds of HPC and Big Data environments.

 There are a whole stack of new applications being built today that will be able to generate this kind of business value in the future, and many of them will work better with a combination of the raw power of the EF540 and the advanced data management of ONTAP. They will be able to take advantage of the latent unused power of an adjacent cloud infrastructure, and will be part of the next generation of hybrid IT infrastructure that encompasses dedicated infrastructure, internal/private and external/public cloud. By that time the economics and technology of solid state or storage class memory will be significantly different than they are today. The future of IT infrastructure will be very interesting, and the future of storage will be even more so.

FlashRay is built for that future.

Categories: Uncategorized

Measuring Array Reliability

February 18, 2013 2 comments

In a Register article (http://www.theregister.co.uk/2013/02/11/storagebod_8feb13/), @storagebod asked vendors to disclose all their juicy reliability figures. This post is in response to that, though most of it comes from a preamble I wrote almost two years ago to an RFP response around system reliability, so it highlights a number of NetApp specific technologies. It’s kind of dense, and some of the supporting information is getting a little old now; even so, I still think it’s accurate, and it helps to explain why vendors are careful about giving out single reliability metrics for disk arrays.

There have been few formal studies published analyzing the reliability of storage system components. Early work done in 1989 presented a reliability model based on formulae and the datasheet-specified MTTF of each component, assuming component failures follow exponential distributions and that failures are independent. Models based on these assumptions, which treat systems as homogeneous Poisson processes, remain in common use today; however, research sponsored by NetApp shows that these models may severely underestimate the annual failure rates of important subsystems such as RAID, disk shelves/disk access enclosures, and their associated interconnects.

Two NetApp sponsored studies, “A Comprehensive Study of Storage Subsystem Failure Characteristics” by Weihang Jiang, Chongfeng Hu, Yuanyuan Zhou and Arkady Kanevsky, April 2008 (http://media.netapp.com/documents/dont-blame-disks-for-every-storage-subsystem-failure.pdf), and “A Highly Accurate Method for Assessing Reliability of Redundant Arrays of Inexpensive Disks (RAID)” by Jon G. Elerath and Michael Pecht, IEEE Transactions on Computers, Vol. 58, No. 3, March 2009 (http://media.netapp.com/documents/rp-0046.pdf), contain sophisticated models supported by field data for evaluating the reliability of various storage array configurations. These findings, and their impact on how NetApp designs its systems, are summarized below.

Physical interconnect failures make up the largest part (27-68%) of storage subsystem failures, and disk failures make up the second largest part (20-55%). This is addressed via redundant shelf interconnects and dual-parity RAID techniques.

  • Storage subsystems configured with redundant interconnects experience 30-40% lower failure rates than those with a single interconnect. This is the underlying reason for including redundant interconnects.
  • Spanning the disks of a RAID group across multiple shelves provides a more resilient solution than keeping them within a single shelf. Data ONTAP’s default RAID creation policies follow this model; in addition, SyncMirror provides a further level of redundancy and protection for the most critical data.
  • State of the art disk reliability models yield estimates of dual drive failure rates that are as much as 4,000 times greater than the commonly used Mean Time To Data Loss (MTTDL) based estimates (a sketch of that naive MTTDL calculation follows this list)
  • Latent defects are inevitable, and scrubbing latent defects is imperative to RAID N + 1 reliability. As HDD capacity increases, the number of latent defects will also increase and render the MTTDL method less accurate.
  • Although scrubbing is a viable method to eliminate latent defects, there is a trade-off between serving data and scrubbing. As the demand on the HDD increases, less time will be available to scrub; if scrubbing is given priority, then system response to demands for data will be reduced. The alternative is to accept latent defects and increase system reliability by increasing redundancy to N + 2, i.e. RAID 6.
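For context, the “commonly used” MTTDL estimate those findings call into question is the textbook calculation sketched below. It assumes independent, exponentially distributed drive failures and ignores latent defects, which is exactly why it can be so far off; the drive MTTF and repair time used here are illustrative datasheet-style numbers, not NetApp figures.

```python
# The naive, textbook MTTDL for an N-drive single-parity (N+1) RAID group, under
# the exponential/independence assumptions criticised above. Latent defects and
# non-exponential (Weibull) failure behaviour are deliberately ignored here, which
# is why field-data models put the real double-failure rate far higher.
HOURS_PER_YEAR = 24 * 365

def mttdl_single_parity(n_drives, drive_mttf_hours, repair_hours):
    """Mean time to data loss for RAID N+1 under the exponential model."""
    return drive_mttf_hours ** 2 / (n_drives * (n_drives - 1) * repair_hours)

if __name__ == "__main__":
    # Illustrative datasheet-style inputs: 14 drives, 1M-hour MTTF, 12-hour rebuild
    mttdl = mttdl_single_parity(n_drives=14, drive_mttf_hours=1_000_000, repair_hours=12)
    print(f"Naive MTTDL: {mttdl / HOURS_PER_YEAR:,.0f} years")
```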

Because of the difficulty of creating a readily understood model that accurately reflects the complex interrelations of component reliability for systems with a mixture of exponential and Weibull component failure distributions, NetApp publishes independently audited reliability metrics based on a rolling 6-month audit.

Run hours and downtime are collected via AutoSupport reports over a rolling 6-month period, from customer systems with active NetApp support agreements:

–       Availability data is automatically reported for >15,000 FAS systems (FAS6000, FAS3000, FAS2000, FAS900 & FAS200)

System downtime is counted when caused by NetApp system:

–       Hardware failures (e.g., controller, expansion cards, shelves, disks)

–       Software failures

–       Planned outages associated with replacing a failed component (FRU)

System downtime is not counted as a result of:

–       Power and other environmental failures (e.g., excessive ambient temp)

–       Operator-initiated downtime

System Availability = 1 − (sum of all downtime / sum of total run time)
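Written out as a short sketch with hypothetical fleet totals, the calculation looks like this (the run-hour and downtime figures below are made up purely for illustration, they are not the audited results):

```python
# Worked form of the availability formula above, using hypothetical fleet totals to
# show how raw downtime translates into "nines".
import math

def availability(total_downtime_hours, total_run_hours):
    return 1 - (total_downtime_hours / total_run_hours)

def nines(avail):
    """Rough count of leading nines in an availability figure."""
    return math.floor(-math.log10(1 - avail)) if avail < 1 else float("inf")

if __name__ == "__main__":
    run_hours = 15_000 * 24 * 182   # hypothetical: ~15,000 systems over ~6 months
    downtime_hours = 60             # hypothetical: total qualifying downtime, fleet-wide
    a = availability(downtime_hours, run_hours)
    print(f"Availability: {a:.7f} ({nines(a)} nines)")
```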

The graph at the top of this post shows the availability range of all FAS models. The rising black line at the bottom represents the introduction of a new FAS array, which started out at over “five nines”; over time, as a greater population of machines was deployed, the average reliability increased, trending towards the “six nines” of availability achieved by our most commonly deployed array models, shown in the blue line at the top.

The other interesting thing about the way we measure downtime is that it discounts operator-initiated downtime. Given that most hardware systems from reputable vendors are very reliable, this may be the largest cause of overall system downtime. Clustered ONTAP was designed specifically to eliminate, or at the very least substantially mitigate, the requirement for planned downtime for storage operations, leaving data center outages as the only major cause of system downtime, and with SnapMirror we can help mitigate that one too.

As always comments and criticisms are welcome.

Regards

John


Categories: Uncategorized

Unfinished Business

February 15, 2013 5 comments

Update 11/4/13 – Darius approved the comment in question and said sorry, so I’m all happy now. Given how long and verbose my comments are, I suspect he simply didn’t have time to read through it and figure out whether it was safe to publish on an Oracle-hosted website. While I disagree with Darius on a lot of points, I also like a lot of what he writes, and I suspect he’d be an interesting guy to have a beer with 🙂

Forgive me social media, for I have sinned: it’s been four months since my last blog post. There are a bunch of reasons for this, mostly that I’ve had some real-world stuff that’s been way more important than blogging, and I’ve limited my technical writing to posting comments on other blogs or answering questions on LinkedIn. Before I start writing again, there’s something I’d like to get off my chest. It _really_ bugs me when people edit out relevant comments. As a case in point, I was having what I believed to be a reasonably constructive conversation with Darius Zanganeh of Oracle on his blog, but for some reason he never approved my final comment, which I submitted to his blog on December 7th 2012 and the text of which follows. If you’re interested, head over to his blog and read the entire post; I think it’s pretty good, and it showcases some impressive benchmark numbers Oracle has been able to achieve with scale-up commodity hardware. From my perspective this is a great example of how a deeper analysis of a good benchmark demonstrates far more than the top-line numbers and $/IOPS … and if you know me, then you know I just LOVE a good debate over benchmark results, so I couldn’t resist commenting even though I really had better/more important things to do at the time.

Thanks Darius, it’s nice to know exactly what we’re comparing this to. I didn’t read the press releases, nor was I replying to that release; I was replying to your post, which was primarily a comparison to the FAS6240.

 If you do want to compare the 7420 to the 3270, then I’ll amend the figures once again … to get a 240% better result you used a box with:

  1.  More than eleven times as many CPU cores
  2. More than one hundred and sixty times as much memory

 I really wish you’d also published Power Consumption figures too 🙂

 Regarding disk efficiency, we’ve already demonstrated our cache effectiveness. On a previous 3160 benchmark, we used a modest amount of extended cache and reduced the number of drives required by 75%. By way of comparison, to get about 1080 IOPS per 15K spindle we implemented a cache that was 7.6% of the capacity of the fileset. The Oracle benchmark got about 956 IOPS per drive with a cache size of about 22% of the fileset size.

 The 3250 benchmark, on the other hand, wasn’t done to demonstrate cache efficiency; it was done to allow a comparison to the old 3270. It’s also worth noting that the 3250 is not a replacement for the 3270, it’s a replacement for the 3240, with around 70% more performance. Every benchmark we do is generally done to create a fairly specific proof point; in the case of the 3250 benchmark, it shows almost identical performance to the 3270 from a controller that sells at a much lower price point.

 We might pick one of our controllers and do a “here’s a set config and here’s the performance across every known benchmark” exercise the way Oracle seems to have done with the 7420. It might be kind of interesting, but I’m not sure what it would prove. Personally I’d like to see all the vendors, including NetApp, do way more benchmarking of all their models, but it’s a time-consuming and expensive thing to do, and as you’ve already demonstrated, it’s easy to draw some pretty odd conclusions from the results. We’ll do more benchmarking in the future, you’ll just have to wait to see the results 🙂

 Going forward, I think non-scale-out benchmark configs will still be valid to demonstrate things like model replacement equivalency and cache efficiency, but I’ll say it again: if you’re after “my number is the biggest” hero number bragging rights, scale-out is the only game in town. But scale-out isn’t just about hero numbers; for customers to rapidly scale without disruption as needs change, scale-out is an elegant and efficient solution, and they need to know they can do that predictably and reliably. That’s why you see benchmark series like the ones done by NetApp and Isilon. Even though scale-out NFS is a relatively small market, and Clustered ONTAP has a good presence in that market, scale-out unified storage has much broader appeal and is doing really well for us. I can’t disclose numbers, but based on the information I have, I wouldn’t be surprised if the number of new clusters sold since March exceeds the number of Oracle 7420s sold in the same period; either way I’m happy with the sales of Clustered ONTAP.

 As a technology blogger, it’s probably worth pointing out that stock charts are a REALLY poor proxy for technology comparisons, but if you want to go there, you should also look at things like P/E multiples (an indication of how fast the market expects you to grow) and market share numbers. If you’ve got Oracle’s storage revenue and profitability figures at hand to do a side-by-side comparison with NetApp’s published financial reports, post them up; personally I would LOVE to see that comparison. Then again, maybe your readers would prefer us to stick to talking about the merits of our technology and how it can help them solve the problems they’ve got.

 In closing, while this has been fun, I don’t have a lot more time to spend on this. I have expressed my concerns at the amount of hardware you had to throw at the solution to achieve your benchmark results, and the leeway that gives you to be competitive with street pricing, but as I said initially, your benchmark shows you can get a great scale-up number, and you’re willing to do that at a keen list price. Nobody can take that away from you; kudos to you and your team.

Other than giving me the opportunity to have my final say, my comments also underline some major shifts in the industry that I’ll be blogging about over the next few months.

1. If you’re after “my number is the biggest” hero number bragging rights, scale out is the only game in town

2. Scale-out unified storage and Clustered ONTAP are going really well. I can’t publish numbers, but the uptake has surprised me, and the level of interest I’ve seen from the briefings I’ve been doing has been really good.

3. Efficiency matters. Getting good results by throwing boatloads of commodity hardware at a problem is one way of solving it, but it usually causes problems and shifts costs elsewhere in the infrastructure (power, cooling, rackspace, labour, software, compliance, etc.)

I’ll also be writing a fair amount about Flash and Storage Class Memory, and why some of the Flash benchmarks and claimed performance figures are, in my opinion, silly enough to border on deceptive. Until then, be prepared to dig deeper when people start to claim IOPS measured in the millions, and have fun 🙂

John Martin (life_no_borders)

Categories: Hyperbole, Performance

NetApp to talk business critical applications at vForum2012

October 31, 2012 Leave a comment

Cloud technology is transforming the IT landscape, and represents huge opportunities for Australian businesses. Private Cloud technology, with server virtualisation at its core, helps speed up business processes as well as help IT departments overcome challenges such as increasing infrastructure costs and the exponential growth of data.

An agile Private Cloud infrastructure provides business with many benefits including the flexibility to scale up or down to meet the needs of any situation, while simultaneously reducing infrastructure costs. Many of NetApp’s customers have seen significant improvements to the efficiency of their IT departments since adopting an agile data infrastructure based on cloud technology. Server virtualization has increased productivity, enabling them to transition away from inefficient applications and silo based infrastructure.

Like any new technology, adopting a cloud based infrastructure is not without its challenges. Virtual infrastructure complexities and concerns about performance, availability, security and cost have traditionally prevented many organisations from virtualizing business critical applications.

vForum 2012 provides a fantastic opportunity for technology decision makers to meet with NetApp representatives to learn more about how adopting a Private cloud infrastructure and server virtualization can increase efficiency in their business. It will also provide a stage for NetApp to address common storage challenges about virtualizing business critical applications, including:

  • The need to maintain optimal application availability for the business
  • Virtualization and sharing of storage resources
  • Security and containerization of business critical applications
  • Prioritization of business critical applications on a shared storage infrastructure
  • Agility to bring applications online quicker
  • Not meeting recovery point objectives (RPO) due to backups not completing in the defined window
  • Not meeting recovery time objective (RTO) as the process is too slow
  • Needing large amounts of storage to create database copies for environments such as dev/test and needing vast amounts of time to do so
  • Long time to provision storage or add capacity and deploy applications
  • Low overall storage utilisation

Make sure you don’t miss out on hearing from Vaughn Stewart, NetApp director of cloud computing and virtualization evangelist, who will be presenting on how storage is a key enabler for business critical application transformation and the rise of the software defined data centre.

For those keen to engage in a deep-dive session with one of our virtualization experts, 1:1 sessions are available and can be booked, on the day, at the NetApp platinum stand.

What else?

  • See NetApp’s VMworld award-winning demo
  • See how clustered ONTAP can simplify data mobility by moving workloads non-disruptively between storage nodes
  • Learn about NetApp’s planned integration with VMware VVOLS (coming in 2013)
  • See Virtual Storage Console with vSphere 5.1 and clustered ONTAP in action
  • Learn about why FlexPod (TM) is the best of breed data centre platform to accelerate your transition to the Cloud
  • Explore how NetApp’s big data solutions can turn your big data threat into opportunities

And for all the Mo Bros (and Mo Sistas) out there, come by our NetApp platinum stand at vForum2012 and support NetApp’s Movember campaign. Help us to raise funds and awareness for men’s health issues, including prostate cancer and male mental health.

See you at vForum 2012 (14-15 November) at the Sydney Convention and Exhibition Centre. Register here: https://isa.infosalons.biz/reg/vforum12s/login.asp?src=vmware_eng_eventpage_web&elq=&ossrc=&xyz=

Categories: Marketing, Virtualisation

Flash Accel

August 22, 2012 Leave a comment

Storage Class Memory Disruption

Storage Class Memory Management with Flash Accel

About two years ago I wrote a blog post on megacaches. What I didn’t write about at the time was something I knew had been brewing in our Advanced Technology Group (ATG) lab around storage class memory. These guys look at developments five to ten years in advance, and when I had the chance to study their findings it became clear to me why NetApp never saw Flash or SSDs as just another storage tier. While I can’t be specific about the contents of those reports, I think it’s safe to say that if you take a conservative set of predictions, you’ll see that a very large percentage of not only Flash memory, but also post-Flash technology such as Phase Change Memory and MRAM, is destined to reside in, or very close to, the server layer. This is already beginning to happen, and it is going to cause a wave of disruption that will fundamentally change the way we think about, design, and manage mass data storage.

Even though there have been a number of early implementations of this technology, most of them, in order to get to market quickly, are little better than fast direct attached storage (DAS). While cost effective performance is probably one of the most important considerations in a data storage architecture, there is a whole range of other data management and governance issues that need to be addressed, including backup and recovery, security, and increasingly, data mobility. There is a very good reason why the vast majority of companies implement virtualised infrastructure on networked storage, and DAS, even very fast DAS, introduces more management problems than it solves.


Categories: Uncategorized