Wednesday, August 5, 2009

Why Cloud Computing Matters In Spite of the Hype

The hyperbole-to-reality ratio surrounding cloud computing is higher than anything I have seen since ATM (Asynchronous Transfer Mode). If you remember ATM, industry pundits told us that it would be our next and last networking technology. It wasn't.

One of the big differences between cloud computing and ATM, however, is that ATM was defined by well-agreed-upon specifications (e.g., constant bit rate, variable bit rate). Unfortunately, there is relatively little agreement in the industry, particularly on the part of IT organizations, as to what is meant by cloud computing.

A lot of my interest in cloud computing was driven by a very important article in Network World (http://www.networkworld.com/news/2008/102908-bechtel.html?page=2). In that article Carolyn Duffy Marsan interviewed Geir Ramleth, the CIO of Bechtel. Marsan described how Ramleth had benchmarked Bechtel's IT operation against leading Internet companies such as Amazon.com, Google, Salesforce.com and YouTube. I believe that the results of that benchmarking laid down a gauntlet for other IT organizations. Relative to WAN bandwidth, Bechtel estimated that YouTube spends between $10 and $15 per megabit/second/month for bandwidth, while Bechtel spends $500 per megabit/second/month for its Internet-based VPN. Relative to storage, Bechtel identified the fact that Amazon.com was offering storage for 10 cents per gigabyte per month while Bechtel's internal U.S. rate was $3.75 per gigabyte per month. In round numbers, Bechtel was paying roughly forty times more for a unit of WAN bandwidth or a unit of storage than the Internet companies were paying.
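
The arithmetic behind that "roughly forty times" figure is simple enough to check. Here is a minimal sketch of the calculation, assuming the midpoint of the $10 to $15 bandwidth range that YouTube reportedly pays; the figures are the ones quoted above, the breakdown is mine.

    # Figures quoted in the Network World article (all per month)
    youtube_bandwidth = 12.5    # $/Mbps/month, midpoint of the quoted $10-$15 range (assumption)
    bechtel_bandwidth = 500.0   # $/Mbps/month for Bechtel's Internet-based VPN
    amazon_storage = 0.10       # $/GB/month for Amazon's storage offering
    bechtel_storage = 3.75      # $/GB/month, Bechtel's internal U.S. rate

    # The ratios behind the "roughly forty times" claim
    print(bechtel_bandwidth / youtube_bandwidth)  # 40.0
    print(bechtel_storage / amazon_storage)       # 37.5, i.e., roughly forty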

I have been involved in a number of benchmarking projects. As such, I realize that the results can sometimes lack precision. However, a factor of forty in terms of cost savings is indeed compelling. It says to me that there is something here that is important for IT organizations to understand and apply judiciously. We just have to cut through the myriad layers of hype to find out exactly what the reality is.

Thursday, July 16, 2009

Is Five 9s the Right Goal in a Cloud Computing World?

I used to be involved in running the network for Digital Equipment Corporation (DEC). Before its demise, DEC had a wonderful network and a great network organization. We prided ourselves on keeping an expansive international network up and running back in the 1980s, when networks broke a whole lot more than they do currently.

As part of DEC culture, the network organization went to Total Quality Management (TQM) training. I remember developing a six sigma plan for the network. The goal of the plan was to define what a network defect was and then to eliminate virtually all instances of those defects. IT professionals don't use the phrase six sigma today as much as we once did. However, the phrase five 9s (99.999% availability) is extremely common, and at one level the two phrases reflect the same concept: that IT is to be as available as possible. When I worked at DEC, nobody ever questioned that concept.

Earlier this week I was at Network World's IT Roadmap conference in Philadelphia. The keynote speaker was Peter Whatnell, the Chief Information Officer at Sunoco. Whatnell stated that, like most IT organizations, Sunoco is under great pressure to reduce costs. One of the steps they are taking to save money is to actually reduce the availability of some of their services. The example Whatnell gave was that in order to provide 99.99% server availability they had to deploy clustering and other technologies that drove up the cost. While they still do that for the servers that support certain applications, they have cut back on this approach, and a lot more of their servers are now designed to run at something closer to 99% availability.
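
To put those two targets in perspective, here is a quick back-of-the-envelope calculation of how much downtime per year each level of availability actually allows. This is my own illustration, not something Whatnell presented.

    # Annual downtime allowed by a given availability target
    MINUTES_PER_YEAR = 365 * 24 * 60

    def downtime_minutes_per_year(availability):
        return (1 - availability) * MINUTES_PER_YEAR

    for target in (0.99999, 0.9999, 0.99):
        minutes = downtime_minutes_per_year(target)
        print(f"{target * 100:.3f}% availability allows about {minutes:,.0f} minutes of downtime per year")

    # Five 9s allows roughly 5 minutes a year, 99.99% roughly 53 minutes,
    # and 99% roughly 5,256 minutes (about 3.7 days).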

As we enter the world of cloud computing, we need to acknowledge that we are not going to have the same levels of availability and performance that we have in the current environment. For example, one of my clients showed me the SLA that they have with the Software as a Service (SaaS) vendor salesforce.com. It read “We will take all reasonable actions to ensure that the service is available 7 x 24.” When I first read the SLA I was amazed at how vacuous it was. My amazement has since lessened. Clearly the Fortune 500 are not going to run certain critical business processes using SaaS, nor are they going to store their most critical data at Amazon. However, it will be interesting to see how many IT organizations go down the path suggested by Whatnell: that it is OK to accept lower availability and performance if the cost savings are great enough.

Wednesday, July 1, 2009

The Need for an Effective IT Architecture

Last week I moderated two tracks at Network World’s IT Roadmap conference in Atlanta. One of the speakers at the conference was Kevin Fuller who is a global network architect at Coca-Cola. Kevin gave a great presentation and caused me to muse about effective IT architecture – how important it is and how rare it is to find one.  

To put my musings into context, about two years ago I was hired by the IT organization of a Fortune 200 company. The goal of the project was to have me review their network architecture. I requested that the IT organization send me a copy of their architecture documents, and I was only somewhat surprised to find out that they did not have any. After spending a day with the organization it became quite clear that not only did they not have any network architecture documents, they did not have a well-understood architecture for any part of their network.

More recently I was hired by a Fortune 100 company for a project to help make their architecture more impactful. As it turns out, the company had developed a very sophisticated IT architecture. There was little that I could do to add to the architecture. The problem, however, had little to do with the architecture itself. The basic problem was that nobody in the IT organization had to follow the architecture, and as a result, few did. If that sounds a bit odd to you, it did not sound that odd to me. I had experienced that phenomenon before. A number of years ago I was responsible for transmission, switching and routing for Digital Equipment Corporation’s (DEC’s) network. Every year, DEC’s global IT organization would create an architecture that focused on many aspects of DEC’s IT Infrastructure. Unfortunately, there was no pressure on any of the various IT groups within DEC to follow the architecture.  

Whether you think about virtualization or cloud computing, IT organizations are making some major changes, and these changes cut across technology domains. To be successful, IT organizations need an effective architecture. By effective I mean that the architecture drives decisions around technologies, designs, and vendors.

Friday, June 5, 2009

UCS – Brilliant Move or Blunder?

I spent last week in the San Jose area. Outside of the discussion of whether Avaya or Siemens would acquire Nortel, the next hottest topic was Cisco's UCS (Unified Computing System). Some of the conversations were around the technology, but more of the conversations were about how the announcement of UCS will dramatically alter the marketplace. In particular, there was a lot of discussion about what Cisco's movement into servers means for Cisco's relationship with HP and IBM.

To put all of this into context, roughly two years ago I was among a small group of analysts having lunch with John Chambers. As ever, Chambers was peppering the room with questions. One of the questions he asked us was whether we thought that, three years into the future, Cisco would still be a close partner with HP, IBM and EMC. His question clearly portended Cisco's movement into servers. Based on what I now know, I feel quite confident that a year from now Cisco will not be a close partner with HP and IBM.

There is a line of thought that says that the only way the elastic provisioning of IT resources (a.k.a. cloud computing) will ever work is if the environment is homogeneous. This line of thought argues that even minor differences in the IT infrastructure greatly increase the difficulty of achieving the goals of cloud computing. If Cisco truly buys into this line of thought, then they could argue that they had to move into the server market, just as some of the major players in the server market will have to move into the networking market.

However, there is another line of thought that says that Cisco is a big company and the only way a big company can grow substantially is to enter other big markets. That is a reasonable business strategy, but like all business strategies it comes with risk. In this case, part of the risk is how the major players in the server market will respond. The story that I heard is that when Cisco told HP of their plans to enter the server market, they were walked to the door. I don't know if that story is true, but I doubt that HP's reaction was to embrace Cisco, give them the secret handshake and welcome them to the club. I also doubt that IBM was terribly amused. So what is Cisco's upside as it enters the server market? The good news is that the server market is indeed sizable. The bad news is that it is characterized by a number of large, established players and relatively small margins.

Cisco’s switch and router business brings in over twenty-five billion dollars a year in revenue and is characterized by extremely high margins. Cisco does have some competitors in the enterprise router market, but none of them have found a way to gain double-digit market share. There is a line of thought that says that Cisco is putting this cash cow at significant risk in order to enter a low margin market.  

Probably nothing dramatic will happen in the market in the near term. The rumors that IBM was going to buy Juniper Networks have calmed down, at least for now. HP already has a networking business, but I doubt that Cisco takes it very seriously. That could certainly change if HP started to gain market share. One of the key issues that will get played out over the next year or two is this: is it easier for a network company to do servers than it is for a server company to do networking? Part of that issue is technical. A bigger part, however, is account-centric. For example, who has more control over the customer, Cisco or IBM?

Friday, May 29, 2009

Can We Talk About Cloud Computing as Rational Adults?

I participate in a lot of seminars. A year or two ago I was doing a seminar on wide area networking, and part of my presentation included a discussion of some emerging trends that would impact the WAN. One of the trends that I mentioned was Service-Oriented Architecture (SOA). One of the other panelists was the VP of marketing for a mid-sized WAN service provider. He loved the fact that I talked about SOA and its impact on the WAN and encouraged me to spend a lot more time on that topic in order to “really hype the impact of SOA”. I tried to politely decline, saying that I was not sure that SOA would have that much of an impact in the short term and that I did not want to over-hype it. This thoroughly confused the VP of marketing, who in a loud voice repeatedly tried to convince me that “it is impossible to over-hype a technology”.

My feelings are just the opposite. I strongly believe that not only is it possible to over-hype a technology, but that over-hyping a technology is the normal mode of operation in our industry. The problem, as I see it, is that some marketers really believe that IT organizations make decisions based on PowerPoint slides, analyst reports, and general hysteria. Having run networking groups in two Fortune 500 companies, I can say that in my experience IT organizations make decisions based on facts.

That brings me to cloud computing. Before I go on, I want to emphasize that I am somewhat bullish on the potential of cloud computing. I am not going to use this blog to bash cloud computing. I am, however, going to use this blog to bash the zealous over-hyping of cloud computing. I just finished a phone call with a VP at a company that offers cloud computing services. I was hoping to discuss with him what IT organizations need in their own environment, as well as from their service providers, in order to realize the potential benefits of cloud computing. Instead of an intelligent discussion, all that I got was hype. According to the person I was talking with, there are no fundamental impediments to cloud computing, and IT organizations are really anxious to use cloud computing services because of their supposed revulsion to ever buying another server.

As I stated, I am somewhat bullish on the potential of cloud computing. However, I think that IT organizations will realize that potential a lot sooner if we can talk about cloud computing as rational adults. In particular, we need to have an intelligent discussion about what has to be in place for IT organizations to make a very fundamental shift in terms of how they offer services. I tried to explain to the gentleman I was talking to today that IT organizations do not make fundamental shifts in a matter of months. He didn't understand the concept.

OK, it is 5:00 somewhere. I am going to get a glass of wine and go into the pool. Yes, I will look up at the clouds as I sip (gulp?) my chardonnay.

Thursday, May 21, 2009

Last Comments from Interop

I always love coming to Interop in Vegas. This year’s show is over and it definitely was a success. Ok, it was not the Interop of ten years ago. The show did, however, perform a critical task. For three days it brought together thousands of IT professionals and provided them with a platform by which they could learn about technology, ask questions and in general expand their understanding of technology and its myriad uses.  

One of my panels today explored the need for IT organizations to rethink their LAN strategy. The four panelists were Manfred Arndt of HP, Jeff Prince of Consentry, Barry Cioe of Enterasys and Kumar Srikantan of Cisco.  These are four leaders in our industry and I was very pleased to have them on the panel.

It should not come as a surprise to any of you that all four panelists were of the opinion that IT organizations need to deploy LAN switching functionality that is different from what was deployed just a few years ago. For example, Prince stated his belief that LAN access switches need to be able to natively understand context and use it for myriad purposes, including providing more flexible security. Cioe suggested that the movement to SaaS and cloud computing drives the need for visibility and control beyond Layer 4 in order to understand transactions and prevent the leakage of intellectual property or confidential content. Arndt discussed how the growing movement to implement unified communications drives the need for technology such as Power over Ethernet (PoE), PoE Plus with intelligent power management, and multi-user network access protection (NAP) based on 802.1X. Srikantan talked about how the next generation of LAN switching is characterized by base hardware (i.e., Gigabit access with PoE Plus), base services (i.e., L2 and routed access), enhanced services (i.e., MPLS and IP SLA), service modules (i.e., server load balancing and firewalls) and investment protection, i.e., a 7 to 10 year lifecycle and incremental upgrades.

I buy into one of Srikantan's key points: that the LAN switches IT organizations deploy need to have a 7 to 10 year lifecycle and be able to support incremental upgrades. I also believe that access switches need to be intelligent enough to support applications such as unified communications as well as evolving security requirements. One last point that I buy into is that the data center LAN needs to evolve in order to support the highly consolidated, highly virtualized data centers that many large companies are on the road to implementing. At this point in time, however, I don't have a good handle on what I think the new data center LAN needs to look like. That is still a work in progress.

While moderating eleven panels at Interop was fun, I am not all that sad that the show is over with. As much as I love coming to Interop in Vegas, I really love going home to Sanibel.

Jim Metzler

Day 2 at Interop in Vegas

On Wednesday I talked with a number of the exhibitors at Interop. Uniformly, they stated that they were getting less booth traffic than they did last year, but that the people who were coming to the booths were more interested in talking about technology than in getting a t-shirt or a Nerf ball. The net result was that all of the exhibitors I talked to said they were pleased with the show. The attendance at the panels yesterday was a bit lighter than it was on Tuesday. There also appeared to be a bit of a drop in the energy of the attendees on Wednesday. Is it possible that some of the attendees stayed out late on Tuesday night?

One of the panels that I moderated on Wednesday was entitled “How Networks Can Assist Application Delivery”. One of the panelists was Gary Hemminger of Brocade. The focus of Gary's presentation was the role that Application Delivery Controllers (ADCs) play in application delivery. One of the issues that Gary discussed was the fact that many application vendors, including SAP, VMware, Microsoft and Oracle, are now defining detailed APIs for interfacing their applications with network devices such as ADCs, switches and routers. One of the benefits of these APIs is that they enable ADCs to dynamically respond to the requirements of the application. However, as Gary pointed out, each application has its own interface specification, which greatly increases the amount of effort required on the part of networking equipment vendors in order to take advantage of this capability.
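
To illustrate why per-application interface specifications multiply the vendor's work, here is a minimal sketch of the adapter pattern involved. The application names, field names and sizing rules are hypothetical, invented purely for illustration; this is not any vendor's actual API.

    # Hypothetical sketch: each application reports its load through a differently
    # shaped API response, so the ADC vendor must write one adapter per application
    # to translate it into a single internal value.

    def adapt_hypothetical_erp(response: dict) -> int:
        # Assume this application's API reports a raw "activeSessions" count
        return response["activeSessions"] // 100

    def adapt_hypothetical_crm(response: dict) -> int:
        # Assume this one reports "load" as a percentage of a 500-session ceiling
        return int(response["load"] / 100 * 500) // 100

    ADAPTERS = {
        "erp": adapt_hypothetical_erp,
        "crm": adapt_hypothetical_crm,
        # Every additional application API means another adapter to write and maintain.
    }

    def desired_pool_size(app: str, api_response: dict) -> int:
        # The ADC-side logic is written once against the normalized value
        return max(2, ADAPTERS[app](api_response))

    print(desired_pool_size("erp", {"activeSessions": 800}))  # 8
    print(desired_pool_size("crm", {"load": 60}))             # 3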

Gary also discussed the advantages of implementing virtualized ADCs. Although it is possible to virtualize ADCs such that multiple ADCs appear as one, Gary was referring to the opposite approach: having one ADC appear as multiple ADCs. As he pointed out, there are two alternative approaches that a vendor can take to implement this form of virtualization. One approach is based on software. Since each virtual ADC needs to be resource-constrained to prevent resource hogging, ADC vendors could use VMware along with vCenter/vSphere to manage virtual ADC instances. One of the disadvantages of this approach is that it can introduce significant overhead.

An alternative approach is to virtualize ADCs based on hardware. In particular, Gary described how ADCs can be virtualized on a per-core basis, with the possibility of assigning multiple cores to a particular virtualized ADC. One of the advantages of this approach is that it avoids the overhead associated with the software approach. One of the disadvantages of this approach is scale: are there enough cores available to support the requirements?
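
The scale question comes down to the fact that dedicated cores cannot be oversubscribed. The sketch below is purely illustrative, my own model rather than Brocade's implementation, of what happens when virtual ADC instances have to be carved out of a fixed pool of cores.

    # Illustrative model of hardware-based ADC virtualization: a fixed pool of
    # cores, each dedicated to exactly one virtual ADC instance.

    class CoreBasedADC:
        def __init__(self, total_cores: int):
            self.total_cores = total_cores
            self.assignments = {}  # virtual ADC name -> number of dedicated cores

        def create_virtual_adc(self, name: str, cores: int) -> bool:
            used = sum(self.assignments.values())
            if used + cores > self.total_cores:
                # This is the scale limit: unlike the software approach,
                # dedicated cores cannot be oversubscribed.
                return False
            self.assignments[name] = cores
            return True

    adc = CoreBasedADC(total_cores=16)
    print(adc.create_virtual_adc("tenant-a", 4))  # True
    print(adc.create_virtual_adc("tenant-b", 8))  # True
    print(adc.create_virtual_adc("tenant-c", 8))  # False: only 4 cores remain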

Jim Metzler