Archive

Archive for the ‘Virtualisation’ Category

NetApp at Cisco Live: Another look into the future

March 5, 2014 3 comments

It’s no secret that Cisco has been investing a LOT of resources into cloud, and from those investments we’ve recently seen them release a corresponding amount of fascinating new technology that I think will change the IT landscape in some really big ways. For that reason, I think this year’s Cisco Live in Melbourne is going to have a lot of relevance to a broad cross section of the IT community, from application developers all the way down to storage guys like us.

That’s why I’m really pleased to be presenting a Cisco partner case study along with Anuj Aggarwal, who is the technical account manager on the Cisco alliance team for NetApp. Most of you won’t have met Anuj, but if you get the chance, you should take the time to get to know him while he’s down here. Anuj knows more about network security, and about the joint work NetApp and Cisco have been doing on hybrid cloud, than anyone else I know, which makes him a very interesting person to spend some time with, especially if you need to know more about the future of networking and storage in an increasingly cloudy IT environment.

I’m excited by this year’s presentation because it expands on, and in many ways completes, a lot of the work Cisco and NetApp have been doing since NetApp’s first appearance at Cisco Live in Melbourne in 2010 as a platinum sponsor, where we presented to a select few on SMT, or more formally, Secure Multi-Tenancy.

Read more…

Storage – Software Defined Since 1982


If you take that three-step process for creating a “Software Defined” infrastructure that I outlined in my previous post, you could reasonably say that storage has been “software defined” since about 1982 (arguably as early as 1958, when the first disk drive made its appearance).

e.g.

  • Step 1 – identify and then formally define a set of common functions or primitives performed by existing infrastructure that are optimally run in purpose-built devices (e.g. hardware filled with interfaces and ASICs) – this becomes the "Data Plane".
    From a data storage perspective I have broken down what I see as the common storage primitives into four main categories. I’ll probably use these categories as a tool for functional comparisons of various Software Defined Storage implementations going into the future.

Placement management – e.g. given a logical address and some data by a requestor, write that data to an underlying storage medium so that it can subsequently be retrieved using that address, without the requestor needing to be aware of the physical characteristics of that underlying medium.

Access management – e.g. given an address by a requestor, read data from an underlying storage medium and make it available to the requestor. Additionally, where multiple requestors may make simultaneous requests to place or access the same data, provide a mechanism to arbitrate that access.

Copy management – e.g. given a set of source addresses and a range of target addresses, copy the data from the source to the target on behalf of the requestor.

Persistence management – in most storage systems this is an implied function, though increasingly, with the rise of protocols such as CDMI and XAM, data persistence SLOs are being explicitly defined at placement time. In most cases, however, data must be stored until the device itself fails, and the device is generally expected to have a lifetime of multiple years.

  • Step 2 – Create a protocol that manages those functions
    The great thing about standards is that there are so many of them … and the storage industry just LOVES forming standards bodies to create new protocols to manage the functions I described above. Many of them have been around for a while: SCSI was standardised in 1982, NFS in 1989, SMB in 1992 (kind of), OSD in 2004, and in more recent times we have seen implementations like XAM in 2010 and, most recently, CDMI, which became an ISO standard in 2012.

    Some of us get religious about these standards and which one should be used for what purpose. What I find interesting is that they all seem to be converging on a common set of functionality, so it’s possible that we will eventually see one storage protocol to rule them all, though I doubt it will happen any time soon. In the near term, whether we need to create yet another new protocol is debatable, but as of this moment I’m pretty impressed with the work being done at SNIA on CDMI, not as a “new replacement” but as something which leverages the work that’s already been done with the other protocols and fills in their gaps. But I’m getting WAY ahead of myself here.

 

  • Step 3 – Create a standards compliant controller that runs on general purpose hardware (e.g. an Intel server, virtualised or otherwise) that takes higher order service requests from applications and translates those into the primitives codified in step 1, over the protocol devised in step 2 – this becomes the "Control Plane".
    Well, if you accept that the existing storage protocol standards are functionally equivalent to the OpenFlow protocol in software defined networking, then pretty much any modern operating system could function as a controller. So could any modern hypervisor, and so could any storage array which uses SCSI protocols to talk to the disks at the back end, and in my view this is an accurate description.
    Each of these constructs acts as a standards-compliant controller in a software defined storage infrastructure, with multiple levels of encapsulation and the consequent challenge that there is significant overlap in functional control between these controllers. Over the next few posts I’ll go through what this encapsulation looks like, where the challenges and opportunities are at each level, and the design choices we face, and build that up so we can see how close we are to achieving something that matches some of the hype around software defined storage. There’s a rough code sketch of all three layers just below.
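To make those three layers a little more concrete, here’s a deliberately minimal Python sketch of the idea. All the class and method names are mine, purely for illustration, and bear no resemblance to any real array’s internals: the four primitives from step 1 become a toy data plane, and a thin controller stands in for the control plane from step 3, decomposing a higher-order “provision a volume and write to it” request into those primitives.

```python
# Illustrative only: a toy "software defined storage" stack.
# The four primitives from step 1 form the data plane; the controller
# stands in for the control plane described in step 3.

class DataPlane:
    """Toy block device exposing the four storage primitives."""

    def __init__(self):
        self._blocks = {}          # address -> bytes (stands in for the medium)

    # placement management
    def place(self, address, data):
        self._blocks[address] = data

    # access management
    def access(self, address):
        return self._blocks.get(address)

    # copy management
    def copy(self, source_addresses, target_addresses):
        for src, dst in zip(source_addresses, target_addresses):
            self._blocks[dst] = self._blocks[src]

    # persistence management (here just a report; real devices make guarantees)
    def persistence_slo(self):
        return {"durability_years": 5}


class Controller:
    """Toy control plane: translates higher-order requests into primitives."""

    def __init__(self, data_plane):
        self.data_plane = data_plane
        self._volumes = {}         # volume name -> list of addresses

    def provision_volume(self, name, size_blocks):
        # A higher-order service request ("give me a volume") is decomposed
        # into a set of addresses the data plane understands.
        start = sum(len(addrs) for addrs in self._volumes.values())
        self._volumes[name] = list(range(start, start + size_blocks))
        return self._volumes[name]

    def write(self, name, offset, data):
        self.data_plane.place(self._volumes[name][offset], data)

    def read(self, name, offset):
        return self.data_plane.access(self._volumes[name][offset])


if __name__ == "__main__":
    controller = Controller(DataPlane())
    controller.provision_volume("vol0", size_blocks=16)
    controller.write("vol0", 0, b"hello")
    print(controller.read("vol0", 0))   # b'hello'
```

The point of the sketch is simply that the controller never touches the medium directly; everything it does is expressed in terms of the primitives, which is exactly the encapsulation I’ll be pulling apart over the next few posts.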

It’s also worth noting that until I’ve reached my conclusion, much of what I’ve written and will write will not neatly match up with the analyst definitions of software defined storage. If you bear with me we’ll get there, and probably then some. My hope is that if you follow this journey you’ll be in a better position to take advantage of something that I’ll be referring to as “SLO Defined” storage (simply because I really don’t think that “Software Defined” is particularly useful as a label).

If you want to jump ahead now and get the analysts’ views, check out what IDC and Gartner have to say. For example, IDC’s definition of software defined storage from http://www.idc.com/getdoc.jsp?containerId=prUS24068713 says in part:

software-based storage stacks should offer a full suite of storage services and federation between the underlying persistent data placement resources to enable data mobility of its tenants between these resources

The Gartner definition, which isn’t public, takes a slightly different approach and can be found in their document “2013 Planning Guide: Data Center, Infrastructure, Operations, Private Cloud and Desktop Transformation”, where it talks about higher level functionality, including the ability for upper level applications to define what storage objects they need with pre-defined SLOs and then have that storage automatically provisioned to them (or at least that is my take after a quick read of the document).

IMHO, both of these definitions have merit, and both go way beyond merely running array software in a VSA, or bundling software management functions into a hypervisor, or pretty much anything else that seems to pass for Software Defined Storage today, which is why I think it’s worth writing about … in … painful … detail. :)

As always, Comments and Criticisms are welcome.

John

NetApp to talk business critical applications at vForum2012

October 31, 2012 Leave a comment

Cloud technology is transforming the IT landscape, and represents huge opportunities for Australian businesses. Private Cloud technology, with server virtualisation at its core, helps speed up business processes as well as helping IT departments overcome challenges such as increasing infrastructure costs and the exponential growth of data.

An agile Private Cloud infrastructure provides business with many benefits including the flexibility to scale up or down to meet the needs of any situation, while simultaneously reducing infrastructure costs. Many of NetApp’s customers have seen significant improvements to the efficiency of their IT departments since adopting an agile data infrastructure based on cloud technology. Server virtualization has increased productivity, enabling them to transition away from inefficient applications and silo based infrastructure.

Like any new technology, adopting a cloud based infrastructure is not without its challenges. Virtual infrastructure complexity, and concerns about performance, availability, security and cost, have traditionally prevented many organisations from virtualizing business critical applications.

vForum 2012 provides a fantastic opportunity for technology decision makers to meet with NetApp representatives to learn more about how adopting a Private cloud infrastructure and server virtualization can increase efficiency in their business. It will also provide a stage for NetApp to address common storage challenges about virtualizing business critical applications, including:

  • The need to maintain optimal application availability for the business
  • Virtualization and sharing of storage resources
  • Security and containerization of business critical applications
  • Prioritization of business critical applications on a shared storage infrastructure
  • Agility to bring applications online quicker
  • Not meeting recovery point objectives (RPO) due to backups not completing in the defined window
  • Not meeting recovery time objective (RTO) as the process is too slow
  • Needing large amounts of storage to create database copies for environments such as dev/test and needing vast amounts of time to do so
  • Long time to provision storage or add capacity and deploy applications
  • Low overall storage utilisation

Make sure you don’t miss out on hearing from Vaughn Stewart, NetApp director of cloud computing and virtualization evangelist, who will be presenting on how storage is a key enabler for business critical application transformation and the rise of the software defined data centre.

For those keen to engage in a deep-dive session with one of our virtualization experts, 1:1 sessions are available and can be booked, on the day, at the NetApp platinum stand.

What else?

  • See NetApp’s VMworld award-winning demo
  • See how clustered ONTAP can simplify data mobility by moving workloads non-disruptively between storage nodes
  • Learn about NetApp’s planned integration with VMware VVOLS (coming in 2013)
  • See Virtual Storage Console with vSphere 5.1 and clustered ONTAP in action
  • Learn why FlexPod™ is the best-of-breed data centre platform to accelerate your transition to the Cloud
  • Explore how NetApp’s big data solutions can turn your big data threats into opportunities

And for all the Mo Bros (and Mo Sistas) out there, come by our NetApp platinum stand at vForum 2012 and support NetApp’s Movember campaign. Help us raise funds and awareness for men’s health issues, including prostate cancer and male mental health.

See you at vForum 2012 (14-15 November) at the Sydney Convention and Exhibition Centre. Register here: https://isa.infosalons.biz/reg/vforum12s/login.asp?src=vmware_eng_eventpage_web&elq=&ossrc=&xyz=

Categories: Marketing, Virtualisation

BYOD – A Practical Approach

July 27, 2012 Leave a comment

The major challenges with Bring Your Own Device (BYOD) are perhaps less about technology than they are about changing the mindset of IT departments in the face of the ongoing consumerisation of technology. Since the advent of personal computing in the ’80s, there has been a gradual shift towards IT being a set of tools surrounding an empowered user. The fact of the matter, however, is that most of the IT that has been available simply wasn’t consumer ready. It was complex, expensive to own, insecure and impossible to manage. In the end, centralised IT was asked to take on the burden of fixing the mess.

The introduction of truly consumer ready devices and services has seen the realisation of the idea of the empowered user. Where the likes of IBM, Oracle and Microsoft were the biggest players in the first wave of IT, it is companies like Apple, Amazon, and Google who have been the dominant players in this second wave by creating an ecosystem of personal technology that is easy enough for a child to use at a price point that almost everyone can afford. The excitement generated by these new technologies means people want to use these new tools not only at home, but everywhere.

The fact that people are now prepared to pay for their own technology in exchange for the ability to choose what they use hasn’t been lost on those who are asked to fund upgrades for ageing corporate infrastructure. The result is that BYOD is now firmly on most IT departments’ radar. However, the people who want to BYOD don’t want it to be simply a way of lowering IT’s overheads; they want it to continue being a tool that allows them to consume services that enable them to do their jobs better. They want their internal IT to give them the service levels they’ve grown used to, they want immediacy, and they want it now.

For IT departments to take advantage of the benefits of consumer IT and BYOD, they must begin to shift their focus away from being custodians of technology towards being a provider of a service. The delivery of this service-centric IT may in fact provide the necessary political and budget justification for creating a new IT foundation built on an agile data infrastructure. To ease the transition to BYOD, IT departments should take a four-phase approach.

  • Define your policy
    Publish a service catalogue of what applications and services can be consumed from tablets and smartphones from within the corporate environment, while simultaneously setting expectations around the use of public cloud technologies for sensitive data
  • Focus on quick wins
    Deploy virtual desktop infrastructure (VDI) for laptop users. This is something that can deliver a reasonably quick win, allows a broad range of end user devices to be brought into the corporate environment, and elegantly solves a number of security and supportability problems
  • Continue the dialogue
    Establish an effective two way communication mechanism that allows users and IT to work together to prioritise which services need to be developed and deployed in-house, and which may need to be blocked or uninstalled
  • Develop the total solution
    Expand on, or integrate the initial VDI deployment into a larger Infrastructure as a Service offering that will form the foundation for Software and Storage as a Service offerings. These will fulfil much broader demands and allow the BYOD strategy to be completely successful

By focusing on quick wins, effective two way communication, and building an agile foundation for the future, BYOD efforts can be the catalyst for building a truly service driven IT department.

The four principles of Private Cloud – Part 2 – Service Analytics

July 17, 2012 Leave a comment

While defining and publishing a set of IT services that meets or even exceeds the expectations of your end users, and helps them improve the agility of the business, is the first step in delivering IT as a Service, we also have to make sure that we don’t over-promise and under-deliver.

In order to keep our promises, the IT governance disciplines we have developed over the years around capacity planning, infrastructure design and deployment remain important, but they are transformed as we focus more on service optimisation. The luxury we had in the past of assuring SLA conformance by over-engineering and over-provisioning within an application silo is no longer affordable. More to the point, this siloed mentality often leads to a lack of standardisation across the different silos, which makes subsequent automation incredibly difficult. On the other hand, in our drive for lower costs and greater agility, we can’t sacrifice the security, reliability and failure isolation that siloed infrastructure gave us, and which IT infrastructure professionals care so deeply about. The kinds of shared virtualised infrastructures that cloud computing is built on must not only give us greater efficiency and flexibility, they must at the same time give us the ability to increase our level of management and oversight, and improve our ability to take corrective action when it is required.

The old adage that “you can’t manage what you can’t measure” still holds true, but the trouble is that measurement in highly virtualised IT infrastructures presents a number of challenges. In larger IT organisations, no one group holds all the information or expertise to troubleshoot performance problems, or to identify which resources are being consumed by a given user or application. We can’t assume that because, for example, the network, compute and storage resources are all meeting their SLA targets, the end user experience is also acceptable or meeting the SLA offered in the service catalogue. Having team leaders sit in a “come to Jesus” meeting with their arms folded, saying things like “It can’t be the storage team’s fault, we bought the most expensive frame array in existence and over-engineered the hell out of it, it must be a problem in the network”, simply doesn’t cut it in a world where service levels are king and nobody cares whose fault it is, or which widget is currently fubar.

The best way to address this is to make appropriate investments in tools and processes that go beyond simple monitoring. These tools need to enable IT staff to identify service paths and confirm the redundancy of those paths, set policies on service paths for accessibility, performance and availability, intelligently analyse logs and other data to ensure and report on whether those policies are being adhered to, and provide capacity planning forecast intelligence that allows optimal use of resources and enables just-in-time purchasing processes.
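As a trivial illustration of the kind of policy-driven check such tooling performs, here’s a short sketch in Python. The service names, metric names and thresholds are all invented for the example; real products obviously do far more, but the principle of evaluating end-to-end measurements against per-service SLO policies, rather than per-silo component metrics, is the point.

```python
# Illustrative only: evaluate end-to-end service measurements against
# per-service SLO policies, rather than per-silo component metrics.

SLO_POLICIES = {
    # service name -> targets (values are made up for the example)
    "payroll-app": {"max_latency_ms": 20, "min_availability_pct": 99.9},
    "vdi-desktops": {"max_latency_ms": 30, "min_availability_pct": 99.5},
}

def check_slo(service, measured_latency_ms, measured_availability_pct):
    """Return a list of human-readable SLO breaches for one service."""
    policy = SLO_POLICIES[service]
    breaches = []
    if measured_latency_ms > policy["max_latency_ms"]:
        breaches.append(
            f"{service}: latency {measured_latency_ms}ms exceeds "
            f"{policy['max_latency_ms']}ms target"
        )
    if measured_availability_pct < policy["min_availability_pct"]:
        breaches.append(
            f"{service}: availability {measured_availability_pct}% below "
            f"{policy['min_availability_pct']}% target"
        )
    return breaches

# Example: the storage, network and compute teams may each be green,
# but the end-to-end measurement still breaches the service's SLO.
print(check_slo("payroll-app",
                measured_latency_ms=35,
                measured_availability_pct=99.95))
```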

By building a solid Service Analytics capability, you can immediately improve overall governance and lay the foundations for successfully managing and optimising private cloud services via

  • Consumption-based metering
  • Dynamic capacity optimisation
  • Reduced management and resource costs
  • Provenance and auditability of SLA conformance
Categories: Cloud, Virtualisation

Too many ideas, not enough time.

July 2, 2012 Leave a comment

A couple of weeks ago I went to an internal NetApp event called Foresight, where our field technology experts from around the world meet with our technical directors, product managers and senior executives. A lot of the time is spent talking about recent developments the field is seeing, how those are turning into new trends, and how that intersects with current technology roadmaps. We get to see new stuff that’s in development and generally spend about a week thinking and talking about futures. The cool thing about this is that while the day-to-day work of helping clients solve their pressing problems means we often don’t see the forest for the trees, this kind of event lifts us high above the forest canopy to see a much broader picture.

Read more…

Data Storage for VDI – Part 9 – Capex and SAN vs DAS

July 22, 2010 4 comments

I’d intended to write about Megacaches in both the previous post and this one, but interesting things keep popping up that need to be dealt with first. This time it’s an article at InformationWeek, “With VDI, Local Disk Is A Thing Of The Past”. In it, Elias Khnaser outlines the same argument that I was going to make after I’d dealt with the technical details of how NetApp optimises VDI deployments.

I still plan to expand on this with posts on Megacaches, single instancing technologies, and broker integration, but Elias’ post was so well done that I thought it deserved some immediate attention.

If you haven’t done so already, check out the article before you read more here, because the only point I want to make in this uncharacteristically small post is the following:

The capital expenditure for storage in a VDI deployment based on NetApp is lower than one based on direct attached storage.

This is based on the following figures:

Solution 1: VDI Using Local Storage – Cost

$614,400

Solution 2: VDI Using HDS Midrange SAN – Cost

$860,800, with array costs of approx $400,000

Solution 3: VDI Using FAS 2040 – Cost

$860,800 – $400,000 + (2,000 users × $50/user) ≈ $560,000

You save roughly $54,000 (around 9% overall) compared to DAS and still get the benefits of shared storage. That’s money you can spend on more advanced broker software or possibly a trip to the Bahamas.
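For anyone who wants to check the arithmetic, here it is spelled out in a few lines of Python, using the figures quoted above (I’ve rounded the results to about $560,000 and $54,000 in the prose):

```python
# The capex comparison above, spelled out. All inputs are the figures
# quoted in this post; the prose rounds the results to roughly
# $560,000 and $54,000.

users = 2000
das_cost = 614_400            # Solution 1: VDI on local (direct attached) disk
san_cost = 860_800            # Solution 2: VDI on an HDS midrange SAN
san_array_portion = 400_000   # the array component of Solution 2
netapp_per_user = 50          # approx. street price per desktop for FAS 2040 / NFS

netapp_cost = san_cost - san_array_portion + users * netapp_per_user
savings_vs_das = das_cost - netapp_cost

print(netapp_cost)            # 560800
print(savings_vs_das)         # 53600, i.e. roughly 9% of the DAS figure
```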

Now, if you’re wondering where I got my figures from, I did the same sizing exercise I did in Part 7 of this series, but using 12 IOPS per user and a 33:63 R:W ratio. I then came up with a configuration and asked one of my colleagues for a street price. The figure came out to around US$50 per desktop user for an NFS deployment, which is in line with what NetApp has been saying about our costs for VDI deployments for some time now.

Even factoring in things like professional services, additional network infrastructure, training and so on, you’d still be better off from an up-front expenditure point of view using NetApp than you would be with internal disks.

Given the additional OpEx benefits, I wonder why anyone would even consider using DAS, or for that matter another vendor’s SAN.
