Wednesday, September 16, 2009

Which is More Difficult: A Marriage or a Corporate Acquisition?

My nephew recently attended a wedding and later commented that the bride and groom were totally incompatible and that he expected that the marriage would soon dissolve. His comments don’t seem entirely unwarranted. Given that half of the marriages in the US end in divorce, if you attend two weddings you can expect that one of them will not last.

Corporate acquisitions are about as likely as a marriage to be successful. With that in mind, it is interesting to look at the two acquisitions that were announced earlier this week and speculate as to whether or not they will be successful. One of those acquisitions, Avaya’s acquisition of Nortel’s Enterprise Solutions business unit, was long expected. Avaya spent nine hundred million dollars for the Nortel unit and set aside another fifteen million dollars for employee retention. Avaya is owned by the private equity firm Silver Lake. Silver Lake executive Charlie Giancarlo, formerly of Cisco, is responsible for Avaya. While it is difficult to know what Giancarlo and Kevin Kennedy, CEO of Avaya, have in mind, it is possible to make some intelligent guesses. Giancarlo knows the LAN switching business intimately. He could be interested in resurrecting Nortel’s LAN switching product line and trying to take market share away from Cisco. However, I doubt that is what he has in mind. This acquisition looks to me like Avaya is buying the Nortel customer base, and if that is the case, Avaya will not face the tough challenge of product integration. Avaya will try to find a way to get at least some value out of the Nortel product set that it acquired while it focuses primarily on the task of migrating the Nortel customer base over to Avaya. Given that Avaya has a seasoned management team, it is likely that Avaya will transition most of the Nortel customers and end up getting a good return on its investment. As such, this marriage (make that this acquisition) should be successful.

The CA acquisition of NetQoS came together relatively quickly. CA paid two hundred million dollars for NetQoS – roughly four to five times earnings. In 2009 that is a big multiple and indicates that CA clearly values NetQoS. There is reason to hope that this marriage can work. CA is a very different company than it was just a few years ago when it was known as Computer Associates. The CA infrastructure management group has brought in some very skilled executives (e.g., Roger Pilc, Bill Ahlstrom) and CA has made some key acquisitions, such as Wily. When the new CA acquires a company, it tends to give it a fair degree of autonomy – at least for a while. That being said, Pilc will definitely want to integrate the NetQoS products into the rest of his portfolio. That process always tends to take some of the momentum away from the acquired company. Perhaps the strongest threat to this marriage is that CA loses too many key NetQoS personnel by making the mistake of forcing a big-company culture down the throat of a fast-moving small company. Still, this marriage should work out.

Wednesday, August 5, 2009

Why Cloud Computing Matters In Spite of the Hype

The hyperbole to reality ratio that surrounds cloud computing is higher than anything I have seen since ATM. If you remember ATM, industry pundits told us that ATM would be our next and last networking technology. It wasn't.

One of the big differences, however, between cloud computing and ATM is that there were well-agreed-upon specifications that defined ATM; e.g., constant bit rate, variable bit rate, etc. Unfortunately, there is relatively little agreement in the industry, particularly on the part of IT organizations, as to what is meant by cloud computing.

A lot of my interest in cloud computing was driven by a very important article in Network World (http://www.networkworld.com/news/2008/102908-bechtel.html?page=2). In that article Carolyn Duffy Marsan interviewed Geir Ramleth, the CIO of Bechtel. Marsan described how Ramleth had benchmarked Bechtel's IT operation against leading Internet companies such as Amazon.com, Google, Salesforce.com and YouTube. I believe that the results of that benchmarking laid down a gauntlet for other IT organizations. Relative to WAN bandwidth, Bechtel estimated that YouTube spends between $10 and $15 per megabit/second/month for bandwidth, while Bechtel spends $500 per megabit/second/month for its Internet-based VPN. Relative to storage, Bechtel identified the fact that Amazon.com was offering storage for 10 cents per gigabyte per month while Bechtel's internal U.S. rate was $3.75 per gigabyte per month. In round numbers, Bechtel was paying roughly forty times more for a unit of WAN bandwidth or a unit of storage than the Internet companies were paying.
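For readers who want to check the arithmetic, the rough factor of forty falls directly out of the figures above. The dollar amounts come from the Network World article; the code below is purely a back-of-the-envelope illustration.

```python
# Back-of-the-envelope check of the Bechtel benchmark figures cited above.
# All dollar figures come from the Network World article; nothing here is measured.

# WAN bandwidth, dollars per megabit/second/month
youtube_bw_low, youtube_bw_high = 10, 15
bechtel_bw = 500

# Storage, dollars per gigabyte/month
amazon_storage = 0.10
bechtel_storage = 3.75

bw_ratio_low = bechtel_bw / youtube_bw_high    # most favorable comparison for Bechtel
bw_ratio_high = bechtel_bw / youtube_bw_low    # least favorable comparison
storage_ratio = bechtel_storage / amazon_storage

print(f"WAN bandwidth: Bechtel pays {bw_ratio_low:.0f}x to {bw_ratio_high:.0f}x more")
print(f"Storage: Bechtel pays {storage_ratio:.1f}x more")
```

The bandwidth gap works out to roughly 33x to 50x and the storage gap to about 37x, which is why "roughly forty times" is a fair round number.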

I have been involved in a number of benchmarking projects. As such, I realize that the results can sometimes lack precision. However, a factor of forty in terms of cost savings is indeed compelling. It says to me that there is something here that is important for IT organizations to understand and apply judiciously in their organization. We just have to cut through the myriad layers of hype to find exactly what the reality is.

Thursday, July 16, 2009

Is Five 9s the Right Goal in a Cloud Computing World?

I used to be involved in running the network for Digital Equipment Corporation (DEC). Before its demise, DEC had a wonderful network and a great network organization. We prided ourselves on keeping an expansive international network up and running back in the 1980s, when networks broke a whole lot more than they do currently.

As part of DEC culture, the network organization went to Total Quality Management (TQM) training. I remember developing a six sigma plan for the network. The goal of the plan was to define what a network defect was and then to eliminate virtually all instances of those defects. IT professionals don’t use the phrase six sigma today as much as we once did. However, the phrase five 9s is extremely common and at one level the two phrases reflect the same concept. That concept is that IT is to be as available as possible. When I worked at DEC, nobody ever questioned that concept.

Earlier this week I was at Network World’s IT Roadmap conference in Philadelphia. The keynote speaker was Peter Whatnell, the Chief Information Officer at Sunoco. Whatnell stated that, like most IT organizations, Sunoco is under great pressure to reduce cost. One of the steps that it is taking to save money is to actually reduce the availability of some of its services. The example that Whatnell gave was that in order to provide 99.99% server availability, Sunoco had to deploy clustering and other technologies that drove up the cost. While it still does that for the servers that support certain applications, it has cut back on this approach and now a lot more of its servers are designed to run at something closer to 99% availability.
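It is worth making the arithmetic behind Whatnell's trade-off explicit. Each additional "9" of availability cuts the allowable downtime by a factor of ten, which is exactly why the jump from 99% to 99.99% forces you into clustering and other expensive technologies. A minimal illustration:

```python
# Annual downtime implied by common availability targets.
# Each extra "9" cuts allowable downtime by a factor of ten, which is why
# higher availability targets get disproportionately expensive to meet.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (ignoring leap years)

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime_min = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability * 100:g}% availability -> "
          f"{downtime_min:,.1f} minutes of downtime per year")
```

At 99% availability a server can be down for roughly three and a half days a year; at 99.99% the budget is under an hour, and at five 9s it is only about five minutes.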

As we enter the world of cloud computing, we need to acknowledge that we are not going to have the same levels of availability and performance that we have in the current environment. For example, one of my clients showed me the SLA that they have with the Software as a Service (SaaS) vendor salesforce.com. It read, “We will take all reasonable actions to ensure that the service is available 7 x 24.” When I first read the SLA I was amazed at how vacuous it was. My amazement has since lessened. Clearly the Fortune 500 are not going to run certain critical business processes using SaaS, nor are they going to store their most critical data at Amazon. However, it will be interesting to see how many IT organizations go down the path suggested by Whatnell – the path that says it is OK to accept lower availability and performance if the cost savings are great enough.

Wednesday, July 1, 2009

The Need for an Effective IT Architecture

Last week I moderated two tracks at Network World’s IT Roadmap conference in Atlanta. One of the speakers at the conference was Kevin Fuller who is a global network architect at Coca-Cola. Kevin gave a great presentation and caused me to muse about effective IT architecture – how important it is and how rare it is to find one.  

To put my musings into context, about two years ago I was hired by the IT organization of a Fortune 200 company. The goal of the project was to have me review their network architecture. I requested that the IT organization send me a copy of their architecture documents and was only somewhat surprised to find out that they did not have any. After spending a day with the organization, it became quite clear that not only did they not have any network architecture documents, they did not have a well-understood architecture for any part of their network.

More recently I was hired by a Fortune 100 company for a project to help make their architecture more impactful. As it turns out, the company had developed a very sophisticated IT architecture. There was little that I could do to add to the architecture. The problem, however, had little to do with the architecture itself. The basic problem was that nobody in the IT organization had to follow the architecture, and as a result, few did. If that sounds a bit odd to you, it did not sound that odd to me. I had experienced that phenomenon before. A number of years ago I was responsible for transmission, switching and routing for Digital Equipment Corporation’s (DEC’s) network. Every year, DEC’s global IT organization would create an architecture that focused on many aspects of DEC’s IT Infrastructure. Unfortunately, there was no pressure on any of the various IT groups within DEC to follow the architecture.  

Whether you think about virtualization or cloud computing, IT organizations are making some major changes and these changes cut across technology domains. To be successful, IT organizations need an effective architecture. By effective I mean that the architecture drives decisions around technologies, designs, and vendors.

Friday, June 5, 2009

UCS – Brilliant Move or Blunder?

I spent last week in the San Jose area. Outside of the discussion of whether Avaya or Siemens would acquire Nortel, the next hottest topic was Cisco’s UCS. Some of the conversations were around the technology, but more of the conversations were about how the announcement of UCS will dramatically alter the marketplace. In particular, there was a lot of discussion about what Cisco’s movement into servers means for Cisco’s relationship with HP and IBM.

To put all of this into context, roughly two years ago I was among a small group of analysts having lunch with John Chambers. As ever, Chambers was peppering the room with questions. One of the questions he asked was whether we thought that, three years into the future, Cisco would still be a close partner with HP, IBM and EMC. His question clearly portended Cisco’s movement into servers. Based on what I now know, I feel quite confident that a year from now Cisco will not be a close partner with HP and IBM.

There is a line of thought that says that the only way that the elastic provisioning of IT resources (a.k.a., cloud computing) will ever work is if the environment is homogeneous. This line of thought argues that even minor differences in the IT infrastructure greatly increase the difficulty of achieving the goals of cloud computing. If Cisco truly buys into this line of thought, then it could argue that it had to move into the server market just as some of the major players in the server market will have to move into the networking market.

However, there is another line of thought that says that Cisco is a big company and the only way that a big company can grow substantially is to enter other big markets. That is a reasonable business strategy, but like all business strategies it comes with risk. In this case, part of the risk is how the major players in the server market will respond. The story that I heard is that when Cisco told HP of their plans to enter the server market, they were walked to the door. I don’t know if that story is true, but I doubt if HP’s reaction was to embrace Cisco, give them the secret handshake and welcome them to the club. I also doubt if IBM was terribly amused. So what is Cisco’s upside as it enters the server market? The good news is that the server market is indeed sizable. The bad news is that it is characterized by a number of large, established players and relatively small margins.

Cisco’s switch and router business brings in over twenty-five billion dollars a year in revenue and is characterized by extremely high margins. Cisco does have some competitors in the enterprise router market, but none of them have found a way to gain double-digit market share. There is a line of thought that says that Cisco is putting this cash cow at significant risk in order to enter a low margin market.  

Probably nothing dramatic will happen in the market in the near term. The rumors that IBM was going to buy Juniper Networks have calmed down, at least for now. HP already has a networking business, but I doubt if Cisco takes it very seriously. That could certainly change if HP started to gain market share. One of the key issues that will get played out over the next year or two is “Is it easier for a network company to do servers than it is for a server company to do networking?” Part of that issue is technical. A bigger part, however, is account centric. For example, who has more control over the customer – Cisco or IBM?  

Friday, May 29, 2009

Can We Talk About Cloud Computing as Rational Adults?

I participate in a lot of seminars. A year or two ago I was doing a seminar on wide area networking and part of my presentation included a discussion of some emerging trends that would impact the WAN. One of the trends that I mentioned in my presentation was Service Oriented Architecture (SOA). One of the other panelists was the VP of marketing for a mid-sized WAN service provider. He loved the fact that I talked about SOA and its impact on the WAN and encouraged me to spend a lot more time on that topic in order to “really hype the impact of SOA”. I tried to politely decline, saying that I was not sure that SOA would have that much of an impact in the short term and I did not want to over-hype it. This thoroughly confused the VP of marketing, who in a loud voice repeatedly tried to convince me that “it is impossible to over-hype a technology”.

My feelings are just the opposite. I strongly believe that not only is it possible to over-hype a technology but that over-hyping a technology is the normal mode of operation in our industry. The problem as I see it is that some marketers really believe that IT organizations make decisions based on PowerPoint slides, analyst reports, and general hysteria. Having run networking groups in two Fortune 500 companies, I can say that in my experience IT organizations make decisions based on facts.

That brings me to cloud computing. Before I go on, I want to emphasize that I am somewhat bullish on the potential of cloud computing. I am not going to use this blog to bash cloud computing. I am, however, going to use this blog to bash the zealous over-hyping of cloud computing. I just finished a phone call with a VP at a company that offers cloud computing services. I was hoping to discuss with him what IT organizations need in their own environment as well as from their service providers in order to realize the potential benefits of cloud computing. Instead of an intelligent discussion, all that I got was hype. According to the person that I was talking with, there are no fundamental impediments to cloud computing and IT organizations are really anxious to use cloud computing services because of their supposed revulsion to ever buying another server.

As I stated, I am somewhat bullish on the potential of cloud computing. However, I think that IT organizations will realize that potential a lot sooner if we can talk about cloud computing as rational adults. In particular, we need to have an intelligent discussion about what has to be in place for IT organizations to make a very fundamental shift in terms of how they offer services. I tried to explain to the gentleman I was talking to today that IT organizations do not make fundamental shifts in a matter of months. He didn’t understand the concept.

OK, it is 5:00 somewhere. I am going to get a glass of wine and go into the pool. Yes, I will look up at the clouds as I sip (gulp?) my chardonnay.

Thursday, May 21, 2009

Last Comments from Interop

I always love coming to Interop in Vegas. This year’s show is over and it definitely was a success. Ok, it was not the Interop of ten years ago. The show did, however, perform a critical task. For three days it brought together thousands of IT professionals and provided them with a platform by which they could learn about technology, ask questions and in general expand their understanding of technology and its myriad uses.  

One of my panels today explored the need for IT organizations to rethink their LAN strategy. The four panelists were Manfred Arndt of HP, Jeff Prince of Consentry, Barry Cioe of Enterasys and Kumar Srikantan of Cisco.  These are four leaders in our industry and I was very pleased to have them on the panel.

It should not come as a surprise to any of you that all four panelists were of the opinion that IT organizations need to deploy LAN switching functionality that is different from what was deployed just a few years ago. For example, Prince stated his belief that LAN access switches need to be able to natively understand context and use that for myriad purposes, including providing more flexible security. Cioe suggested that the movement to SaaS and cloud computing drives the need for visibility and control beyond Layer 4 in order to understand transactions and prevent the leakage of intellectual property or confidential content. Arndt discussed how the growing movement to implement unified communications drives the need for technology such as Power over Ethernet (POE), POE Plus with intelligent power management and multi-user network access protection (NAP) based on 802.1X. Srikantan talked about how the next generation of LAN switching is characterized by base hardware (e.g., Gig access with POE Plus), base services (e.g., L2 and routed access), enhanced services (e.g., MPLS and IP SLA), service modules (e.g., server load balancing and firewalls) and investment protection, i.e., a 7 to 10 year lifecycle and incremental upgrades.

I buy off on one of Srikantan’s key points – that the LAN switches that IT organizations deploy need to have a 7 to 10 year lifecycle and be able to support incremental upgrades. I also believe that access switches need to be intelligent enough to support applications such as unified communications and also support evolving security requirements. One last point that I buy off on is that the data center LAN needs to evolve in order to support the highly consolidated, highly virtualized data centers that many large companies are on the road to implementing. At this point in time, however, I don’t have a good handle on what I think the new data center LAN needs to look like. That is still a work in progress.

While moderating eleven panels at Interop was fun, I am not all that sad that the show is over. As much as I love coming to Interop in Vegas, I really love going home to Sanibel.

Jim Metzler

Day 2 at Interop in Vegas

On Wednesday I talked with a number of the exhibitors at Interop. Uniformly they stated that they were getting less booth traffic than they did last year, but that the people who were coming to the booths were more interested in talking about technology than in getting a t-shirt or a Nerf ball. The net result was that all of the exhibitors I talked to said they were pleased with the show. The attendance at the panels yesterday was a bit lighter than it was on Tuesday. There also appeared to be a bit of a drop in the energy of the attendees on Wednesday. Is it possible that some of the attendees stayed out late on Tuesday night?

One of the panels that I moderated on Wednesday was entitled “How Networks Can Assist Application Delivery”. One of the panelists was Gary Hemminger of Brocade. The focus of Gary’s presentation was on the role that Application Delivery Controllers (ADC) play in application delivery. One of the issues that Gary discussed was the fact that many application vendors including SAP, VMware, Microsoft and Oracle are now defining detailed APIs for interfacing their applications with network devices such as ADCs, switches and routers. One of the benefits of these APIs is that they enable ADCs to dynamically respond to the requirements of the application. However, as Gary pointed out, each application has its own interface specification. The fact that each application has its own interface specification greatly increases the amount of effort that is required on the part of networking equipment vendors in order to take advantage of this capability.

Gary also discussed the advantages of implementing virtualized ADCs. Although it is possible to virtualize ADCs whereby multiple ADCs appear as one, Gary was referring to the opposite approach – of having one ADC appear as multiple ADCs. As he pointed out, there are two alternative approaches that a vendor can take to implement this form of virtualization. One approach is based on software. Since each virtual ADC needs to be resource constrained to prevent resource hogging, ADC vendors could use VMware along with vCenter/vSphere to manage virtual ADC instances. One of the disadvantages of this approach is that it can introduce significant overhead.

An alternative approach is to virtualize ADCs based on hardware. In particular, Gary described how ADCs can be virtualized on a per core basis and allowed for the fact that multiple cores could be assigned to a particular virtualized ADC. One of the advantages of this approach is that it avoids the overhead associated with the software approach. One of the disadvantages of this approach is scale – are there enough cores available to support the requirements?

Jim Metzler  

Wednesday, May 20, 2009

First Impressions of Interop

I landed in Vegas Monday afternoon (5/18) around 4:00. When I stepped out of the hotel I saw something that I have never seen before in Vegas – there was absolutely no line for a taxi. Every other time I have come to Vegas there has been a long line, often lasting a half hour or more. My fear was that the Interop show would be as empty as the taxi line. It is not. It appears to be down some from last year, but there still is a lot of energy here.

The first session I moderated on Tuesday morning was on Application Performance Management (APM). The panelists were from NetQoS, CA and Fluke. I find this to be a very important topic because I strongly believe that all that a company’s business managers really care about is the performance of a handful of applications that they use to run their business unit. All of the infrastructure components (e.g., LAN, WAN, SAN, servers, OSs, firewalls, WOCs – you get the idea) are just a means towards an end.

The attendance at the session was ok, but less than I expected for this topic. The three panelists did a good job of describing APM and their company’s approach. Paul Ellis of CA drove home the fact that CA believes that IT organizations need to focus on the transaction and the quality of the user’s experience with that transaction. Matt Sherrod of NetQoS and Doug Roberts of Fluke Networks both did an admirable job of creating a framework for how IT organizations should approach APM.

The bottom line is that I was quite pleased with all three presentations. Then we got to the Q&A and the gap between what is being promoted by vendors and analysts and what is being practiced by IT organizations became painfully clear. For example, vendors and analysts have been talking for years about what IT organizations need to do to meet their internal SLAs. When asked, hardly any of the participants stated that they offer internal SLAs. That did not surprise me. Even more interesting is that vendors and analysts have also been talking for years about the need for visibility into applications. When asked, relatively few of the participants stated that they had that kind of view even though most of them had some kind of APM tool. That did surprise me. The feedback from the participants was that the main reason they didn’t have that kind of visibility was the overall complexity of the IT environment. Given that I believe that things are only going to get more complex, the gap between theory and practice may well get larger over the next few years.

Jim Metzler

Monday, May 18, 2009

A Comparison of Application Performance Management (APM) Vendors

Management used to be focused primarily on the availability of network devices such as switches and routers. However, in the last few years the focus of management has evolved to where it now typically includes the performance of both networks and applications. While the shift has been relatively recent, the industry is flooded with vendors who claim to offer application performance management (APM) products. Viewed from a hundred-thousand-foot level, the majority of APM tool vendors make very similar promises. Most if not all APM tool vendors promise that their products can help to identify when the performance of an application is degrading and can help to identify the component of IT that is causing the degradation; e.g., whether it is the WAN or the servers. Some APM tool vendors claim that their tools also enable an IT organization to identify the particular sub-element (e.g., the particular WAN link or server) that is causing the degradation.
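To make those promises concrete, the core logic they imply can be sketched in a few lines: baseline the latency of each IT component, then flag any component whose current latency is far above its baseline. This is purely a hypothetical illustration of the concept, not any vendor's actual algorithm, and the component names and numbers are made up.

```python
# Hypothetical sketch of component-level degradation detection, the kind of
# capability APM vendors promise. Not any vendor's actual method; all names
# and latency figures below are illustrative.

def flag_degraded(baselines, current, tolerance=1.5):
    """Return components whose current latency exceeds tolerance x baseline.

    baselines / current: dicts mapping component name -> latency in ms.
    The returned dict maps each flagged component to its ratio over baseline.
    """
    return {
        name: current[name] / baseline
        for name, baseline in baselines.items()
        if current.get(name, 0) > tolerance * baseline
    }

# The WAN link is running at 4x its baseline latency, so it is flagged as the
# likely cause of the end-to-end degradation; the server and database are not.
baselines = {"WAN": 40.0, "server": 12.0, "database": 25.0}
current = {"WAN": 160.0, "server": 13.0, "database": 26.0}
print(flag_degraded(baselines, current))  # {'WAN': 4.0}
```

Real products layer a great deal on top of this, of course – statistical baselining, transaction tracing, drill-down to sub-elements – but the promise reduces to this kind of comparison.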

The first panel that I will be moderating at Interop is entitled “Application Performance Management”. The primary goal of this panel is to help IT organizations get better at APM. A secondary goal is to help IT organizations understand some of the primary similarities and differences amongst APM vendors. To achieve those goals I have invited three APM vendors to the panel. Those vendors are Fluke Networks, NetQoS and CA. I have asked each of the panelists to spend about 15 minutes discussing what it takes for IT organizations to be successful with APM. At the conclusion of the formal presentations we will have a Q&A. I will start the Q&A by asking each of the panelists to discuss how their company is differentiated in the marketplace. After that, I will turn it over to the audience for further questions.

The panel will be held Tuesday, May the 19th from 10:15 to 11:15 in Breakers E. If you are going to be at Interop, I invite you to attend.

Jim Metzler

Friday, May 15, 2009

Technologies that Enterprise CTOs Like

At the Interop conference next week in Las Vegas, most of the educational sessions will feature vendors who will try to convince the attendees that they should acquire the vendor's products or services. There is nothing wrong with that approach as long as the speakers abstain from making too flagrant of a sales pitch.

One of my eleven sessions, however, does not have any vendor speakers. The session is entitled "CTO Roundtable - Which Emerging Technologies Will Make an Impact?" The session will be held on Tuesday from 2:45 to 3:45. The room for the session is Breakers E.

The exciting aspect of this session is that I have brought together CTOs from three companies that are in different industries and which vary widely in size. I have asked the panelists to discuss which technologies they are bullish about and why. I am particularly interested to see if any of the CTOs are investing early in the life cycle of a technology because of the strong promise it offers.

I have also asked the three CTOs to identify which technologies they think are either over-hyped or just have little applicability for their organization. I am very interested to see which technologies make their lists for being over-hyped. To my way of thinking possibilities include SOA, SaaS, Web 2.0, desktop virtualization and public cloud computing. This session will be particularly interesting if one CTO identifies a technology that they find to be very impactful and another CTO discusses how they find that technology to be over-hyped.

If you will be at Interop I hope that you find the time to attend this session.

Jim Metzler

Tuesday, May 12, 2009

Is There a Need to Rethink the LAN?

OK, I have been in the industry long enough that I can remember the era of slow-speed, shared LANs. I also remember an infamous article that appeared in a trade magazine in the early 1990s that argued that it was impossible to ever exhaust the capacity of a shared 10 Mbps Ethernet LAN. The authors of that article were not dumb. They were, however, very naïve. They assumed that the world that they knew would not change. In particular, they assumed that the primary use of the enterprise LAN would remain what it was – supporting simple applications such as word-processing and email. And of course, in their vision of the future email did not have attachments such as a 30 MB PowerPoint file or a video.

In the mid to late 1990s IT organizations made the transition from shared to switched LANs. However, for most of the last decade LAN design has been pretty staid. Now a number of vendors are talking about the need for a new, highly functional LAN switch. Some vendors are even talking about the need for a new LAN architecture. It would be easy to write this off as just vendor hype. However, we all want to avoid the previously mentioned situation. In particular, we want to avoid being surprised and unprepared for the fact that the LAN needs to undergo fundamental changes in order to support changing demands.

With this in mind, I invite those of you who are attending the Vegas Interop conference to attend my panel that is entitled ‘Is there a need for a next generation LAN switch?’ On the panel I have Manfred Arndt, Distinguished Technologist at HP; Jeff Prince, CEO at Consentry Networks; Barry Cioe, VP of Product Management & Marketing for Enterasys; and Kumar Srikantan, VP of Product Management at Cisco.

When it comes to the LAN, these speakers are some of the industry heavyweights. This should be a very interesting panel.

Jim Metzler

Wednesday, April 29, 2009

Cloud Computing is Like Sex

Last night, while discussing with some senior IT executives the hyperbole that currently surrounds public cloud computing, the thought struck me that public clouds are like sex. If you think about it, Oprah makes millions of dollars a year talking about sex. A practitioner such as Heidi Fleiss (a.k.a., the “Hollywood Madam”), however, does not fare as well. Ms. Fleiss, as you may or may not remember, ended up in jail.

The relevance to cloud computing is that industry analysts can make a lot of money talking about public cloud computing. In contrast to Ms. Fleiss, IT organizations that entrust their data and critical applications to public cloud vendors based largely on the hype in the marketplace will probably not end up in jail. They will, however, expose both their company and their careers to unnecessary risk.

There are aspects of the cloud computing vision that are real today and which offer value today. However, as an industry we need to get past all of the hype and fluff and talk about what is real and what it takes for IT organizations to take advantage of cloud computing.

Monday, April 27, 2009

Nicholas Carr's Simplistic View of Cloud Computing

Nicholas Carr is at it again. After the dot com implosion, Carr wrote an article in the Harvard Business Review entitled "IT Doesn't Matter". In the article, Carr argues that since information technology is generally available to all organizations, it does not provide a permanent strategic advantage to any company. One of the reasons that I find Carr's argument to be simplistic is that it assumes that all companies are equally adept at utilizing IT to their advantage. This is clearly not the case. Another reason is that he seems to dismiss the idea of using IT to get a strategic advantage that, while not permanent, will be in effect for years. The IT organizations that I deal with are quite pleased if they can help their company get a two year advantage over their competitors.

Carr recently authored "The Big Switch" and again his arguments are simplistic. The book begins with a thorough description of how the electric utilities developed in the US. He then argues by analogy that Cloud Computing is the future of IT – the analogy being that the provision of IT services will evolve exactly the same way as the provision of electricity did.

I have two primary concerns with Carr's argument. The first is the fact that any argument by analogy is necessarily weak. The generation of electricity and the provision of IT services may well have some similarities, but they are not the same thing. My second concern is that, the way the book reads, Carr has already determined that the future of IT is Cloud Computing and is out to convince the reader of that. There is no real discussion in the book of the pros and cons of Cloud Computing, merely the repeated assertion that the future is Cloud Computing. Perhaps the closest that Carr comes to discussing the pros of Cloud Computing is when he quotes some anonymous industry analyst as saying that Amazon's cost of providing Cloud Computing services is one tenth of what it would cost a traditional IT organization. There is, however, no citation or backup of any kind to allow us to better understand that assertion.

More important, there is no discussion in the book of what has to happen from a technology perspective to make Cloud Computing viable. Cloud Computing might well be a dominant force in the provision of IT services some time in the future. Cloud Computing, however, involves the sophisticated interaction of numerous complex technologies. Carr would have better served the industry if he had devoted some attention to identifying the impediments that inhibit Cloud Computing and provided his insight into when those impediments will be overcome.

Jim Metzler