Thursday, April 28, 2011

Cloud Management

One of my concerns about the use of public cloud computing services is the tradeoff that IT organizations typically have had to make. On the positive side of the tradeoff, by using a public cloud service companies lower their costs and gain access to functionality that they normally would not have. Those are both compelling reasons to use public cloud services. However, on the negative side of the tradeoff, IT organizations typically lose all visibility into the availability and performance of the service that they get from a public cloud provider. I shiver when industry pundits show graphs and charts of some of the major cloud providers to demonstrate that they usually have high availability and totally gloss over the issue of performance. This line of reasoning reminds me of the refrain in that old song that went “don’t worry, be happy”.

I don’t dispute the fact that a best effort approach is totally appropriate for many applications and workloads. That said, I strongly believe that there is a large and growing set of applications and workloads for which a best effort level of support is not appropriate. The successful support of these applications and workloads requires that IT organizations have detailed management insight into their availability and performance.

IT organizations are beginning to demand better management insight from cloud computing service providers (CCSPs). Fortunately, many CCSPs are scrambling to find ways to differentiate themselves in the market. One clear way that CCSPs can differentiate themselves is by offering a performance-based SLA and by making it easier for their customers to monitor the end-to-end availability and performance of the services that they provide. Given that, there is reason to hope that CCSPs will begin to provide more management insight.

One company that is addressing this challenge is AppNeta. AppNeta was recently launched by the team and technology of Apparent Networks. AppNeta’s PathView Cloud Network Performance Management solution performs non-invasive, end-to-end performance monitoring of bandwidth utilization, delay, jitter, QoS and packet loss on a hop-by-hop basis. AppNeta is supplementing that active network monitoring capability with the capability to perform active performance analysis of applications such as VoIP, video, IP storage and VDI. AppNeta also offers the capability to perform packet and traffic capture at remote sites using PathView appliances, which are zero-administration devices.

One of the things that I find most appealing about the AppNeta solutions is that they are cloud-based and hence bring to the market all of the benefits of cloud-based services. This means that whether you want to manage the performance of a cloud-based service or of a more traditional service, you can turn on that functionality virtually instantaneously, start getting detailed management data immediately and do all of that without a capital expenditure. A big step forward.

Tuesday, September 28, 2010

Get Ready for Virtual Wireless LANs

I was recently in Boston to participate in a seminar that was produced by Blue Socket and IBM. The focus of the seminar was virtual wireless LANs. At first blush, the thought of a virtual wireless LAN seems a bit strange. One obvious question is “how do you virtualize an access point?” The quick answer is that you don’t. The thrust of the seminar was the need to separate the control and data planes of a wireless LAN switch, in a fashion similar to the Cisco Nexus 1000V. In addition, there is distinct value in virtualizing the controller software and hence creating a virtual wireless LAN. In particular, virtualizing the controller has a number of benefits, including reducing the acquisition cost and making it easier to add capacity as needed.

We are going to follow up the seminar with a Webinar on Sept 30th, at 12pm (EST).  Feel free to join us to enjoy an engaging discussion about the benefits of cloud networking, virtualization and virtual wireless LANs.

To sign up go to: http://web06.echomail.com/web02/l.docid=15&mid=948&e=bbe21~ubgznvy.pbz&t=1073

Below is a more formal description of the Webinar:

Whether you're a large enterprise or a small to medium business, you'll soon be benefiting from virtualizing your IT organization. Join this discussion to learn how you can consolidate your virtualization efforts across the IT organization to build a smarter network that is cost-efficient and future-proof. This Webinar, moderated by Jim Metzler, will feature Patrick Foy, who will talk to us about virtualizing your WLAN. Expect to hear everything you need to know about virtual wireless LANs.

* Recognize what can be virtualized and the advantages of doing so

* Get exposed to the challenges of server virtualization

* Listen to and interact with the folks who dare to ask the right questions and develop best practices

* Understand what is meant by a virtual wireless LAN and how it can become part of your virtualization strategy today

Wednesday, September 16, 2009

Which is More Difficult: A Marriage or a Corporate Acquisition?

My nephew recently attended a wedding and later commented that the bride and groom were totally incompatible and that he expected that the marriage would soon dissolve. His comments don’t seem entirely unwarranted. Given that half of the marriages in the US end in divorce, if you attend two weddings you can expect that one of them will not last.

Corporate acquisitions are about as likely as a marriage to be successful. With that in mind, it is interesting to look at the two acquisitions that were announced earlier this week and speculate as to whether or not they will be successful. One of those acquisitions, Avaya’s acquisition of Nortel’s Enterprise Solutions business unit, was long expected. Avaya spent nine hundred million dollars for the Nortel unit and set aside another fifteen million dollars for employee retention. Avaya is owned by the private equity firm Silver Lake, and Silver Lake executive Charlie Giancarlo, formerly of Cisco, is responsible for Avaya.

While it is difficult to know what Giancarlo and Kevin Kennedy, the CEO of Avaya, have in mind, it is possible to make some intelligent guesses. Giancarlo knows the LAN switching business intimately. He could be interested in resurrecting Nortel’s LAN switching product line and trying to take market share away from Cisco. However, I doubt that is what he has in mind. This acquisition looks to me like Avaya is buying the Nortel customer base, and if that is the case, Avaya will not face the tough challenge of product integration. Avaya will try to find a way to get at least some value out of the Nortel product set that it acquired while it focuses primarily on the task of migrating the Nortel customer base over to Avaya. Given that Avaya has a seasoned management team, it is likely that Avaya will transition most of the Nortel customers and end up getting a good return on its investment. As such, this marriage (make that this acquisition) should be successful.

The CA acquisition of NetQoS came together relatively quickly. CA paid two hundred million dollars for NetQoS – roughly four to five times earnings. In 2009 that is a big multiple, and it indicates that CA clearly values NetQoS. There is reason to hope that this marriage can work. CA is a very different company than it was just a few years ago when it was known as Computer Associates. The CA infrastructure management group has brought in some very skilled executives (e.g., Roger Pilc and Bill Ahlstrom) and CA has made some key acquisitions, such as Wily. When the new CA acquires a company, it tends to give it a fair degree of autonomy – at least for a while. That being said, Pilc will definitely want to integrate the NetQoS products into the rest of his portfolio. That process always tends to take some of the momentum away from the acquired company. Perhaps the strongest threat to this marriage is that CA loses too many key NetQoS personnel if it makes the mistake of forcing a big-company culture down the throats of a fast-moving small company. Still, this marriage should work out.

Wednesday, August 5, 2009

Why Cloud Computing Matters In Spite of the Hype

The hyperbole-to-reality ratio that surrounds cloud computing is higher than anything I have seen since ATM. If you remember ATM, industry pundits told us that ATM would be our next and last networking technology. It wasn't.

One of the big differences, however, between cloud computing and ATM is that there were well-agreed-upon specifications that defined ATM; e.g., constant bit rate, variable bit rate, etc. Unfortunately, there is relatively little agreement in the industry, particularly on the part of IT organizations, as to what is meant by cloud computing.

A lot of my interest in cloud computing was driven by a very important article in Network World (http://www.networkworld.com/news/2008/102908-bechtel.html?page=2). In that article Carolyn Duffy Marsan interviewed Geir Ramleth, the CIO of Bechtel. Marsan described how Ramleth had benchmarked Bechtel's IT operation against leading Internet companies such as Amazon.com, Google, Salesforce.com and YouTube. I believe that the results of that benchmarking laid down a gauntlet for other IT organizations. Relative to WAN bandwidth, Bechtel estimated that YouTube spends between $10 and $15 per megabit/second/month for bandwidth, while Bechtel spends $500 per megabit/second/month for its Internet-based VPN. Relative to storage, Bechtel pointed out that Amazon.com was offering storage for 10 cents per gigabyte per month while Bechtel's internal U.S. rate was $3.75 per gigabyte per month. In round numbers, Bechtel was paying roughly forty times more for a unit of WAN bandwidth or a unit of storage than the Internet companies were paying.
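The factor-of-forty figure can be sanity-checked with quick arithmetic, using the dollar figures cited above and taking the midpoint of YouTube's estimated $10–$15 range:

```python
# Sanity-check the cost ratios from the Bechtel benchmarking article.
bechtel_wan = 500.0           # $/Mbps/month, Bechtel's Internet-based VPN
youtube_wan = (10 + 15) / 2   # $/Mbps/month, midpoint of YouTube's estimate

bechtel_storage = 3.75        # $/GB/month, Bechtel's internal U.S. rate
amazon_storage = 0.10         # $/GB/month, Amazon.com's offering at the time

wan_ratio = bechtel_wan / youtube_wan          # 40.0
storage_ratio = bechtel_storage / amazon_storage  # 37.5

print(f"WAN bandwidth: Bechtel pays {wan_ratio:.1f}x more")
print(f"Storage:       Bechtel pays {storage_ratio:.1f}x more")
```

Both ratios land within rounding distance of forty, so "roughly forty times" is a fair summary of the benchmark.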

I have been involved in a number of benchmarking projects. As such, I realize that the results can sometimes lack precision. However, a factor of forty in terms of cost savings is indeed compelling. It says to me that there is something here that is important for IT organizations to understand and apply judiciously in their organization. We just have to cut through the myriad layers of hype to find exactly what the reality is.

Thursday, July 16, 2009

Is Five 9s the Right Goal in a Cloud Computing World?

I used to be involved in running the network for Digital Equipment Corporation (DEC). Before its demise, DEC had a wonderful network and a great network organization. We prided ourselves on keeping an expansive international network up and running back in the 1980s, when networks broke a whole lot more often than they do currently.

As part of DEC culture, the network organization went through Total Quality Management (TQM) training. I remember developing a six sigma plan for the network. The goal of the plan was to define what a network defect was and then to eliminate virtually all instances of those defects. IT professionals don’t use the phrase six sigma today as much as we once did. However, the phrase five 9s is extremely common, and at one level the two phrases reflect the same concept: that IT is to be as available as possible. When I worked at DEC, nobody ever questioned that concept.

Earlier this week I was at Network World’s IT Roadmap conference in Philadelphia. The keynote speaker was Peter Whatnell, the Chief Information Officer at Sunoco. Whatnell stated that, like most IT organizations, Sunoco is under great pressure to reduce cost. One of the steps it is taking to save money is to actually reduce the availability of some of its services. The example that Whatnell gave was that in order to provide 99.99% server availability they had to deploy clustering and other technologies that drove up the cost. While Sunoco still does that for the servers that support certain applications, it has cut back this approach, and now a lot more of its servers are designed to run at something closer to 99% availability.
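The gap between 99.99% and 99% availability is easier to appreciate when expressed as allowed downtime per year. A quick calculation, using the availability levels from Whatnell's example plus five 9s for comparison:

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes (ignoring leap years)

def downtime_minutes_per_year(availability_pct: float) -> float:
    """Minutes of allowed downtime per year at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# 99% allows ~3.65 days of downtime a year; five 9s allows ~5.3 minutes.
for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct:>7}% availability -> {downtime_minutes_per_year(pct):8.1f} min/year")
```

Going from 99% to 99.99% buys back roughly 87 hours of uptime a year, which is exactly the kind of improvement that clustering makes expensive to deliver.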

As we enter the world of cloud computing, we need to acknowledge that we are not going to have the same levels of availability and performance that we have in the current environment. For example, one of my clients showed me the SLA that they have with the Software as a Service (SaaS) vendor salesforce.com. It read “We will take all reasonable actions to ensure that the service is available 7 x 24.” When I first read the SLA I was amazed at how vacuous it was. My amazement has since lessened. Clearly the Fortune 500 are not going to run certain critical business processes using SaaS, nor are they going to store their most critical data at Amazon. However, it will be interesting to see how many IT organizations go down the path suggested by Whatnell: that it is OK to accept lower availability and performance if the cost savings are great enough.

Wednesday, July 1, 2009

The Need for an Effective IT Architecture

Last week I moderated two tracks at Network World’s IT Roadmap conference in Atlanta. One of the speakers at the conference was Kevin Fuller who is a global network architect at Coca-Cola. Kevin gave a great presentation and caused me to muse about effective IT architecture – how important it is and how rare it is to find one.  

To put my musings into context, about two years ago I was hired by the IT organization of a Fortune 200 company. The goal of the project was to have me review their network architecture. I requested that the IT organization send me a copy of their architecture documents, and I was only somewhat surprised to find out that they did not have any. After spending a day with the organization it became quite clear that not only did they not have any network architecture documents, they did not have a well-understood architecture for any part of their network.

More recently I was hired by a Fortune 100 company for a project to help make their architecture more impactful. As it turns out, the company had developed a very sophisticated IT architecture. There was little that I could do to add to the architecture. The problem, however, had little to do with the architecture itself. The basic problem was that nobody in the IT organization had to follow the architecture, and as a result, few did. If that sounds a bit odd to you, it did not sound that odd to me. I had experienced that phenomenon before. A number of years ago I was responsible for transmission, switching and routing for Digital Equipment Corporation’s (DEC’s) network. Every year, DEC’s global IT organization would create an architecture that focused on many aspects of DEC’s IT Infrastructure. Unfortunately, there was no pressure on any of the various IT groups within DEC to follow the architecture.  

Whether you think about virtualization or cloud computing, IT organizations are making some major changes, and these changes cut across technology domains. To be successful, IT organizations need an effective architecture. By effective I mean that the architecture drives decisions around technologies, designs and vendors.

Friday, June 5, 2009

UCS – Brilliant Move or Blunder?

I spent last week in the San Jose area. Outside of the discussion of whether Avaya or Siemens would acquire Nortel, the next hottest topic was Cisco’s UCS. Some of the conversations were around the technology, but more of the conversations were about how the announcement of UCS will dramatically alter the marketplace. In particular, there was a lot of discussion about what Cisco’s movement into servers means for Cisco’s relationship with HP and IBM.

To put all of this into context, roughly two years ago I was among a small group of analysts who were having lunch with John Chambers. As ever, Chambers was peppering the room with questions. One of the questions he asked was whether we thought that, three years into the future, Cisco would still be a close partner with HP, IBM and EMC. His question clearly portended Cisco’s movement into servers. Based on what I now know, I feel quite confident that a year from now Cisco will not be a close partner with HP and IBM.

There is a line of thought that says that the only way that the elastic provisioning of IT resources (a.k.a., cloud computing) will ever work is if the environment is homogeneous. This line of thought argues that even minor differences in the IT infrastructure greatly increase the difficulty of achieving the goals of cloud computing. If Cisco truly buys into this line of thought, then it could argue that it had to move into the server market, just as some of the major players in the server market will have to move into the networking market.

However, there is another line of thought that says that Cisco is a big company and the only way that a big company can grow substantially is to enter other big markets. That is a reasonable business strategy, but like all business strategies it comes with risk. In this case, part of the risk is how the major players in the server market will respond. The story that I heard is that when Cisco told HP of its plans to enter the server market, the Cisco representatives were walked to the door. I don’t know if that story is true, but I doubt that HP’s reaction was to embrace Cisco, give it the secret handshake and welcome it to the club. I also doubt that IBM was terribly amused. So what is Cisco’s upside as it enters the server market? The good news is that the server market is indeed sizable. The bad news is that it is characterized by a number of large, established players and relatively small margins.

Cisco’s switch and router business brings in over twenty-five billion dollars a year in revenue and is characterized by extremely high margins. Cisco does have some competitors in the enterprise router market, but none of them have found a way to gain double-digit market share. There is a line of thought that says that Cisco is putting this cash cow at significant risk in order to enter a low margin market.  

Probably nothing dramatic will happen in the market in the near term. The rumors that IBM was going to buy Juniper Networks have calmed down, at least for now. HP already has a networking business, but I doubt if Cisco takes it very seriously. That could certainly change if HP started to gain market share. One of the key issues that will get played out over the next year or two is “Is it easier for a network company to do servers than it is for a server company to do networking?” Part of that issue is technical. A bigger part, however, is account centric. For example, who has more control over the customer – Cisco or IBM?