BIoT Canada

The Cloud Means Business

The rapid growth in 10GbE deployments in the data centre has made the cloud more viable than ever, with providers able to offer cost-effective performance and better compliance for uptime and disaster recovery.

May 1, 2011  


The cloud is getting plenty of buzz these days, but all that talking may only confirm that it means different things to different people. In its most basic sense, a public cloud is offered as a service by a third party, whereas a private cloud is managed by the user organization behind its own firewall.

Beyond that distinction, cloud services fall into three main categories: infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS). From here the data centre can support many ancillary technologies, most notably virtualization, as well as services that can fit with a range of business strategies.

“There are multiple business models for the cloud; it can offer a lot more flexibility, power and utilization,” says Henry Franc, a senior account manager for Belden Inc. “But with great power comes great responsibility, and that means putting more thought into design to minimize the potential for human error. It is also important to look at the data centre holistically: the danger comes when things are considered in isolation.”

That means considering how IaaS, which refers to basic infrastructure such as storage and computing power, can support SaaS-based provisioning of applications over the Internet, as well as PaaS, which offers complete application development environments. Within these categories, of course, there can be more mixing-and-matching.

Hardware infrastructure can be part of a co-location offering, where customers bring in their own servers, or it can be completely run by the provider. And SaaS can support essentially any application, from e-mail, billing, and customer relationship management (CRM) to human resources, electronic medical records, and even graphics processing. Given the range of options, it is perhaps not surprising that many enterprises are taking a phased approach.

“Our customers will often begin with traditional hosting,” says James Beer, director, data centres, for Bell Business Markets. “From there we might see managed services, with some of their infrastructure co-located in a Bell data centre.”

Once customers are in the data centre they can see the range of cloud-based offerings, from security to virtualization. A customer will tend to seek out the safest environment in which to test the cloud: something that can put the technology through a provable, user-driven trial, but that isn't mission critical.

“Most people will start with application development and testing,” says Joe Mardini, vice president of data centres for Bell Business Markets. “It is the most logical use case to move out and monitor. Some people say that it doesn’t have high availability requirements, but I would argue that it does. Developers need services — it could get costly very quickly.”

Any discussion of the appropriate cloud-based technologies should be driven by business requirements, with the provider able to back up their offerings with solid service level agreements (SLAs) that support business continuity and disaster recovery.

Given the significance of the technological shifts now occurring, offering flexibility means having a provider that can walk the talk. Of course, Bell isn’t the only company that is finding its customers want to dip their toes into the cloud before diving in.

“At RackForce we have an array of hybrid cloud services, including compute and storage, co-location, network and managed services,” says Brian Fry, co-founder and vice president, sales and marketing, at RackForce, a privately held ICT service provider based in Kelowna B.C. “We often see customers co-locate a private cloud in our data centre, and then when they see what our team can deliver they want to migrate to a full cloud offering, because they can see how, with the cloud, things that formerly took weeks now take hours.”

These advantages are apparent for large organizations with big ICT budgets, but much of the cloud's innovation and competitive advantage is aimed at small and medium-sized businesses (SMBs) as well.

The reasons are clear enough: the cloud can deliver enterprise-class capabilities to customers that, historically, would never have had the IT budget to achieve such high levels of redundancy, security, disaster recovery, and overall performance.

“We serve customers of all sizes, but have a big footprint in the SMB space,” says Matt Stein, vice president, network and services, at Primus Telecommunications Canada Inc. “Our customers purchase a bulk amount of storage, RAM, CPU, and a network, and then from a Web portal they can carve off individual parts of that and build a machine with the point and click of a mouse.”

The result is a very high level of flexibility: CPUs and RAM can be reassigned to machines, and software licensing can be updated as needs change, such as scaling HR and finance applications up or down with the business.
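The "carve off" model Stein describes can be sketched in a few lines of code. This is purely an illustrative assumption of how such a pool might be modelled; the class and method names are hypothetical and do not reflect Primus's actual portal or API.

```python
# Illustrative sketch: a customer buys a bulk pool of storage, RAM, and CPU,
# then provisions individual virtual machines against it from a portal.
# All names here are hypothetical, not an actual provider API.

from dataclasses import dataclass, field


@dataclass
class ResourcePool:
    """Bulk allocation of storage (GB), RAM (GB), and CPU cores."""
    storage_gb: int
    ram_gb: int
    cpus: int
    machines: list = field(default_factory=list)

    def carve_machine(self, name: str, storage_gb: int, ram_gb: int, cpus: int) -> dict:
        """Carve a VM out of the remaining pool, failing if it is exhausted."""
        if storage_gb > self.storage_gb or ram_gb > self.ram_gb or cpus > self.cpus:
            raise ValueError(f"pool cannot satisfy request for {name}")
        self.storage_gb -= storage_gb
        self.ram_gb -= ram_gb
        self.cpus -= cpus
        vm = {"name": name, "storage_gb": storage_gb, "ram_gb": ram_gb, "cpus": cpus}
        self.machines.append(vm)
        return vm


# A customer carves two machines from a purchased pool, point-and-click style.
pool = ResourcePool(storage_gb=2000, ram_gb=64, cpus=16)
pool.carve_machine("web-01", storage_gb=200, ram_gb=8, cpus=2)
pool.carve_machine("db-01", storage_gb=500, ram_gb=16, cpus=4)
print(len(pool.machines), pool.cpus)  # 2 machines provisioned, 10 cores left
```

The key design point is that capacity decisions move from hardware purchasing to a software allocation step, which is what makes reassigning CPUs and RAM a matter of minutes rather than procurement cycles.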

“Cloud adoption has picked up dramatically over the past year,” says Stein. “Sometimes we support a specific cloud server request, and sometimes the customer wants to take a hybrid approach. We have our Virtual Managed Machine (VMM) infrastructure service; this is a key distinction for us, as it gives us the ability to scale up from only one or two machines to very many.”

The power and flexibility of virtualization also means that Primus allows customers to bring in their server requirements, extending control right back to the client.

Technology counts:

Part of the appeal of the cloud is that local processing is pushed to a compute resource located elsewhere, with the underlying infrastructure potentially being of little to no interest to the user.

The technology client is reduced to a standardized browser, allowing for application delivery to any device, including handhelds. Technology companies and service providers like to use the term “seamless” in support of this vision, but for the cloud to deliver without disruption the data centre technology has to be top notch.

“We need to rethink how we design the network, because we have an ever-growing diversity of client types and increased traffic in our data centres,” says Michael Phung, backbone network manager for PEER 1 Hosting, a global hosting company with headquarters in Vancouver. “We need to reduce the network layers, with higher uplink capacity and higher port capacity at the access layer.”

This means moving to 10 gigabit (Gb) for server access, which greatly reduces the cable clutter that comes with 1 Gb links. With the higher-capacity links, PEER 1 Hosting can drop the number of redundant links, radically simplifying the network.

“Previously, we had a web of switches with multiple links,” says Phung. “Someone might make a mistake plugging in a server, and it could tear down the network.”

The rapid growth in 10GbE deployments in the data centre has made the cloud more viable than ever, with providers able to offer cost-effective performance and better compliance for uptime and disaster recovery. Overall, the advantages of shifting capital expenses to operating expenses, and of reducing, or even removing, the burden of hardware and application management, are driving businesses to consider the cloud. But that means that providers have to invest if they are going to come through with flexible offerings and industry-leading SLAs.

“The technology is changing every single day, and in all areas,” says Tim Varma, PEER 1 Hosting’s vice president of product development, speaking from Atlanta, Ga. “Virtualization in cloud services plays a huge part, whether it is Citrix’s open-source Xen, or Microsoft’s Hyper-V, or VMware.”

Varma says that PEER 1 is improving its data centres on an ongoing basis, not only from a facilities perspective, but also with regard to converging technologies. He echoes Franc in his emphasis on the importance of design.

“We need to simplify design; we can get more efficient from a manageability perspective,” says Varma. “The storage area network (SAN) fabric is getting closer to the network fabric, and we are looking to future-proof our network with single mode fiber.”

As a result, the company is not focusing on moving to CAT6A in terms of copper, but is absorbing the higher cost of moving to fiber, for the simple reason that, until someone beats the speed of light, fiber will be the standard of the future. At present there is still copper at the access layer — there isn’t much demand yet for 10 Gb to the server — although fiber can easily be run there, too. For others, like RackForce, the answer is to embrace a partner vendor’s all-encompassing vision for the data centre.

“We have adopted the Nexus platform as part of Cisco’s Data Center 3.0,” says James Bothe, director of IT at RackForce. “It is a fully converged network at 10 Gb, with the performance scalability really only limited by the number of physical ports. We’ve had it in place since July 2009; we were the first service provider in Canada to put in place Data Center 3.0.”

The idea behind Data Center 3.0 is to converge Fibre Channel and Ethernet onto one network connection. Multiple links can now be carried on a single cable, rather than separating out Ethernet and fiber. Instead of having Ethernet cards and cable bundles, RackForce can use a converged network adapter with two twinaxial cables, each at 10 Gb. Suddenly, 10 to 15 cable drops are reduced to two, saving time, improving airflow, and reducing risk.
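The cabling consolidation above is easy to check with back-of-the-envelope arithmetic. The counts below are illustrative assumptions drawn from the figures in the text: a mid-range of the "10 to 15" legacy drops cited, versus two converged 10 Gb twinax links.

```python
# Back-of-the-envelope check on converged-network cabling consolidation.
# Counts are illustrative assumptions based on the figures in the article,
# not measured values from any specific RackForce deployment.

traditional_cables = 12   # mid-range of the "10 to 15" drops cited
converged_cables = 2      # two twinax links off one converged adapter
link_speed_gb = 10        # each converged link runs at 10 Gb

reduction = 1 - converged_cables / traditional_cables
aggregate_gb = converged_cables * link_speed_gb

print(f"cable drops cut by {reduction:.0%}")       # roughly 83%
print(f"aggregate bandwidth: {aggregate_gb} Gb")   # 20 Gb per server
```

Even with conservative assumptions, the per-server cable count falls by over 80%, which is where the time, airflow, and risk savings come from.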

“Some data centres use InfiniBand for Fibre Channel/Ethernet at 40 Gb,” says Bothe. “RackForce has chosen 10Gb Ethernet in the Cisco Data Center 3.0 model. The next step is 40 Gb Ethernet, followed by 100 Gb Ethernet within five to eight years.”

The scalability, built-in quality of service, and failover capabilities will bring such a dramatic shift in service delivery that within a decade the cloud could become the de facto means of supplying infrastructure and applications.

“The change will be enormous,” says Bothe. “As it stands, the data centre is pushing the federation of 10, 40, and 100 Gb within its walls, and potentially extending these services beyond its confines, effectively moving workloads from data centre to data centre at the blink of an eye.”

The future is clear:

The shift to cloud computing is increasing the greenfield build-outs of external data centres. It is giving competitive advantage to small and medium-sized businesses and, to some extent, causing consternation in the IT departments of large enterprises, many of which are uncertain of how to proceed.

“We are seeing demand for cloud computing in all markets and geographies, but a lot of organizations are on the sidelines watching and waiting to see,” says Beer. “Some are more security conscious than others, such as financial services.”

The irony is that the cloud can address even rigorous security concerns, and that highly sensitive applications such as medical records are already being delivered as a service in Canada. In fact, Bell, assisted by its acquisition of IT services firm xwave, sees healthcare’s huge storage requirements, limited capex resources, and pressure on staffing as well-suited to the cloud.

“Healthcare is an obvious area for us to push into, because they have increasing demands and also want to drive down costs,” says Beer.

As the technology continues to evolve, the dominant concerns — control, availability and security — will diminish.

And as more large enterprises move select technologies and services to the cloud, the role of an organization’s IT department inevitably will be affected. However, there is no reason to believe that the cloud will suddenly make an internal IT department irrelevant.

“Giving up physical server customization to fit into a virtual environment can be a hurdle for some, but once IT managers get past it they never look back,” says Stein.

“You can guarantee pricing, you can guarantee CPUs. All of these intricate, complex decisions tend not to be respected throughout different levels of an organization, and now IT can take hardware out of the equation and focus on running the business, with the IT manager able to add more value.”

In the future, very large organizations will likely still have their IT staff develop internal clouds. These will most certainly provide fast service delivery and business agility to business units.

But with federated data centres supported by fiber, for many enterprises it will be hard to argue against letting someone else manage the nuts and bolts, especially when economies of scale end up delivering greater business agility in a secure environment, and at a lower cost.


Tim Wilson is a freelance writer based in Peterborough, Ont. He can be reached via e-mail at