Too many ideas, not enough time.
A couple of weeks ago I went to an internal NetApp event called Foresight, where our field technology experts from around the world meet with our technical directors, product managers and senior executives. Much of the time is spent talking about the emerging trends the field is seeing and how they intersect with current technology roadmaps. We get to see new stuff that’s in development and generally spend about a week thinking and talking about the future. The cool thing about this is that while the day-to-day work of helping clients solve their pressing problems often means we can’t see the forest for the trees, this kind of event lifts us high above the forest canopy to see a much broader picture.
While I can’t talk directly about the majority of what was presented at the event, I can say that I came away with a sense of excitement about what’s happening over the next three years, and also a little daunted by the implications of accelerated change and the opportunity we have to make a contribution not just to the art of IT, but more broadly to society as a whole.
Then while I was digesting that and researching a whole stack of new interesting stuff that I’d learned about during the event, I was asked at short notice to host an interactive discussion table at the NSW Futuregov event.
This event has a fairly unique format: imagine a kind of group speed-dating exercise with CIOs, where you act as joint facilitator for a 20-minute group discussion on a particular topic alongside a CIO. Our subject was Government Cloud. With an average of six people at the table, and a five-minute introductory brief from the facilitators, that left about two to three minutes of talk time for each of the CIOs to express their concerns, seek feedback from the other table members, and ask clarifying questions of the facilitators. With those kinds of time constraints you really live the maxim that good questions are far more important than good answers.
I came away from that set of conversations with over thirty CIOs with a much greater appreciation of the challenges faced by decision makers, which made me view the discussions I’d had at the Foresight event in an entirely new light.
Then I had the opportunity to spend an afternoon listening to Paul Strong, CTO, Global Field & Customer Initiatives at VMware, who explained VMware’s view of the market and where the future is taking them. That presentation, and more especially the subsequent Q&A session with Paul, Peter Lees and Pat Breen from NetApp, and Roger Lawrence, CTO for cloud services in the South Pacific for HP Enterprise Services, changed my perceptions yet again.
To be honest, I’m still processing a lot of this information. I’m juggling a lot of concepts around and waiting for the “aha!” moment when something crystallises out of the jumble of assorted ideas. In the meantime I thought I’d share some of the questions that are rattling around in my head.
- There are already a number of industry disruptions caused by changes in consumption models built on cloud infrastructures, e.g. the disintermediation of retail by online business, the blurring lines between publishers and retailers in the music and book industries, and the rise of blogging alongside the impending demise of broadsheet print journalism. What other disruptions are likely to occur, and does traditional IT have a role in facilitating them for their businesses, or should IT stay out of the way, let that kind of innovation happen organically, and hope it doesn’t come back to bite them?
- If cloud is more about new consumption models than it is about technology, how do you make that work internally when the majority of IT funding is still allocated in annual chunks?
- If the majority of IT funding is spent managing hundreds or even thousands of discrete applications, does Software as a Service (SaaS) actually make the problem worse?
- Is SaaS sprawl a tougher problem to solve than VM sprawl? Or, given the “one app per VM” rule that still seems to prevail, does VM sprawl just make the problem of application sprawl easier to enumerate?
- If public ITaaS offerings cherry-pick the twenty or so most commonly used software/service offerings, such as e-mail, will IT and/or outsourcers be expected to continue managing the hundreds or thousands of remaining applications that can’t be operationally scaled in an efficient way?
- If so, will IT be expected to do it at a price equivalent to the cloud providers’ without being able to benefit from their economies of scale, or will that support be pushed out to the consumer in the same way iPhone app administration is?
- In what ways are these problems really new? What are the parallels with the ’80s, when people plonked PCs on their desks with a copy of VisiCalc because the IT department of the day couldn’t deliver new application functionality on the mainframe quickly enough?
- If data has mass (i.e. it’s hard to move around, and the bigger it gets the harder it is to move), does it make sense for corporates to house all their own data but outsource the compute capability into the cloud at the cheapest spot price via high-speed, low(ish)-latency pipes, accelerating performance with advanced data caching techniques?
- If companies remain the primary custodians of their own data, does it make sense for them to provide it via storage-as-a-service methods, with automation and web-services interfaces to make it easier to cloud-burst compute capabilities, or are good old-fashioned methods of storage management still up to the job?
Those are just a few top-of-mind issues; there are a bunch of others too, but as I said before, good questions are a lot more important to me than good answers, so if you’ve got any insights, or better questions, feel free to drop them into the comments.