Delving Into The DNA Of AWS
February 27, 2017
By Paul Barker
Las Vegas — It may have been well past the cocktail hour, but that didn’t stop thousands of AWS re:Invent 2016 delegates from streaming into a cavernous hall to listen to James Hamilton, the man responsible for infrastructure efficiency, reliability and scaling at a company that would not be in business without all three.
Judging by the flurry of tweets posted as Hamilton spoke, along with the applause he received, support for Amazon Web Services, the Amazon.com subsidiary that has morphed into a multi-billion-dollar enterprise IT vendor, is not only strong but escalating rapidly.
A report released by 451 Research on the eve of the show estimated that the managed infrastructure and cloud market will not only remain robust, but will “encompass more than US$129 billion in spending,” with public cloud and private cloud as the predominant sources of growth.
“AWS is still the biggest player in the cloud space,” said William Fellows, 451 Research vice president. “(It) has achieved revenues of US$12 billion and an enterprise value of some US$150 billion, or half the market cap of Amazon.com. Looking at it another way, it took IBM a century to reach the same valuation.”
Headquartered in Seattle, the company offers more than 70 services spanning compute, storage, analytics, applications and IoT tools, and its infrastructure spans 16 geographic regions, each made up of what it calls Availability Zones. In early December, the AWS Canada (Central) Region was launched, allowing customers to run their applications and store their data on infrastructure in Canada.
For Hamilton, a Canadian and former auto mechanic who graduated from the University of Waterloo in 1997 with a master’s degree in math and computer science, the secret to AWS’ success lies with the network that allows all these services to run simultaneously. He describes it as 100 per cent AWS-controlled resources, and one example he pointed to is AWS’ latest “project,” the Hawaiki Submarine Cable initiative, scheduled to go live in June 2018.
The company has purchased capacity on the cable to improve performance and reduce latency for AWS cloud customers operating between Australia/New Zealand and North America. According to Hawaiki Submarine Cable LP, headquartered in Auckland and the owner and developer of the 14,000-kilometre transpacific system, it will “deliver more than 30 Tbps of capacity via TE SubCom’s C100U+ submarine line terminating equipment.”
“It’s kind of a big deal,” Hamilton told conference delegates. As of mid-January of this year, a major hurdle was overcome with the completion of the route survey for Hawaiki.
“The start of 2017 finds the cable system closer and closer to ready for service,” said Remi Galasso, the firm’s CEO. “The information garnered from the recently completed deep water route survey will be instrumental in ensuring the long-term viability of the system.”
Hamilton, meanwhile, described the intricacies of such a massive undertaking during his keynote speech. “At its deepest point, it’s 6,000 metres below the sea. Every time you get involved with technology, you learn it’s always harder than it looks. You know, how hard should it be to string fiber between Australia and the U.S.?”
As it turns out, quite difficult. Signal-to-noise ratios being what they are, Hamilton said, electrically powered repeaters are needed every 60 to 80 kilometres. Power reaches them through the cable itself, because the fiber is wrapped in copper sheathing. “If you look at it closely,” he said, “you’ll see there is a bundle of fiber, some insulation and then a couple of layers of copper.
“The problem is there are a lot of repeaters, which means you have to have a lot of copper because it’s carrying a lot of current. It takes a lot of power to run them all, but you can’t do that because it’s not cost-effective. So what’s the trick?
“The same trick that gets played on long-haul transmission and terrestrial power lines, and that is: if you need a lot of power, you can either deliver a lot of amperage, which means you need a lot of conductors, or a lot of voltage. Hawaiki opted for a lot of voltage.”
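Hamilton’s point is the power-law tradeoff: for a fixed power budget, P = V × I, so raising the voltage lowers the current and, with it, the resistive losses and the amount of copper needed. A rough sketch of the arithmetic, where the 14,000 km length and 60–80 km repeater spacing come from the article but the per-repeater wattage is purely an illustrative assumption:

```python
# Back-of-the-envelope: why submarine cables feed repeaters with high
# voltage rather than high current. Only the cable length and repeater
# spacing come from the article; the wattage figure is an assumption.

CABLE_KM = 14_000          # Hawaiki route length
SPACING_KM = 70            # midpoint of the 60-80 km repeater spacing
WATTS_PER_REPEATER = 50    # assumed draw per optical repeater

repeaters = CABLE_KM // SPACING_KM
total_power = repeaters * WATTS_PER_REPEATER  # watts carried by the copper

# P = V * I: for a fixed power budget, the line current -- and with it
# the resistive loss I**2 * R in the copper sheath -- falls as voltage rises.
for volts in (1_000, 10_000):
    amps = total_power / volts
    print(f"{repeaters} repeaters, {total_power / 1000:.0f} kW "
          f"at {volts} V -> {amps:.1f} A of line current")
```

Under these assumptions, a tenfold increase in voltage cuts the current tenfold, which is why the single copper conductor in the sheath can stay thin.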
Hamilton went on to speak in detail about the AWS Global Infrastructure. There are currently 40 AZs in existence with each consisting of one or more discrete data centres containing redundant power, networking and connectivity.
In his keynote the following day, AWS CEO Andy Jassy said the AWS network has produced an organization that is nearly a “US$13 billion revenue run rate business, growing at 55% year over year.
“If you look at how people end up moving to the cloud, almost always the conversation starter ends up being cost,” he said.
“Most companies like turning capital expense into variable expense. Most companies like having a lower variable expense in the cloud than they can do on their own.
“However, almost always when you talk to companies, the number one reason they choose to move to the cloud is the agility and the speed they get in the cloud. And when they talk about speed, it’s two things. The first is the ability to spin up thousands of servers in minutes as opposed to the 10 to 18 weeks it takes for most on-premises companies. But more importantly, what allows them to move fast is having a plethora of infrastructure services at their fingertips to get from idea to implementation several orders of magnitude faster than they could before.”
There was no shortage of product announcements from Jassy during the almost three-hour keynote, but it was the arrival of an 18-wheeler that took centre stage, almost literally.
The transport truck, which pulled up very near the stage at the Sands Convention Centre where he spoke, was no mere prop: it heralded the arrival of a new AWS service called Snowmobile.
According to the company, “Snowmobile is an exabyte-scale data transfer service used to move extremely large amounts of data to AWS. You can transfer up to 100PB per Snowmobile, a 45-foot long ruggedized shipping container, pulled by a semi-trailer truck. Snowmobile makes it easy to move massive volumes of data to the cloud, including video libraries, image repositories, or even a complete data centre migration.”
After an initial assessment, a Snowmobile will be transported to a customer’s data centre and AWS personnel will configure it so it can be accessed as a network storage target.
In a blog posted on the day of the announcement, Jeff Barr, director of AWS evangelism, wrote that “moving large amounts of on-premises data to the cloud as part of a migration effort is still more challenging than it should be. Even with high-end connections, moving petabytes or exabytes of film vaults, financial records, satellite imagery, or scientific data across the Internet can take years or decades. On the business side, adding new networking or better connectivity to data centers that are scheduled to be decommissioned after a migration is expensive and hard to justify.
“Each Snowmobile includes a network cable connected to a high-speed switch capable of supporting 1 Tb/second of data transfer spread across multiple 40 Gb/second connections. Assuming that your existing network can transfer data at that rate, you can fill a Snowmobile in about 10 days.”
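Barr’s 10-day figure is easy to sanity-check. Assuming the decimal units cloud vendors typically use (1 PB = 10^15 bytes, 1 Tb = 10^12 bits), the arithmetic works out like this:

```python
# Sanity-check the "about 10 days" fill time: 100 PB pushed through a
# sustained 1 Tb/s link (both figures from the announcement; decimal
# units assumed, i.e. 1 PB = 10**15 bytes and 1 Tb = 10**12 bits).

capacity_bits = 100 * 10**15 * 8   # 100 PB expressed in bits
link_bps = 10**12                  # 1 Tb/s aggregate across the 40 Gb/s links

seconds = capacity_bits / link_bps
days = seconds / 86_400            # 86,400 seconds in a day
print(f"Fill time at full line rate: {days:.1f} days")
```

That comes to a little over nine days of sustained transfer, so “about 10 days” assumes the customer’s network can actually feed the switch at close to its full 1 Tb/s.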
A raft of new AWS services was announced on Day 2 of the conference during a keynote speech by Dr. Werner Vogels, Amazon.com’s chief technology officer. Among them:
AWS OpsWorks for Chef, an environment available through AWS OpsWorks that reduces the “heavy lifting associated with continuous deployment” and is intended to “fuel even more automation.”
Amazon EC2 Systems Manager, a collection of tools for package installation, patching, resource configuration and task automation on Amazon’s service that provides scalable computing capacity in the cloud.
AWS Shield, protection against distributed denial of service (DDoS) attacks, available as Shield Standard and Shield Advanced.
Blox, a collection of open source projects for container management and orchestration.
AWS Glue, a fully-managed data catalog and ETL service that makes it “easy to move data between data stores, while also simplifying and automating time-consuming data discovery, conversion, mapping and job scheduling tasks.”
Amazon Pinpoint, a data-driven engagement service for mobile applications.
Vogels wrote in a blog that the announced services will accelerate the transformation across development, testing and operations, data and analytics, and computation itself.
He expanded on that theme in his speech: “We’re strong believers in giving you choice. This means that we are not so arrogant to think that we know how you should develop, how you should build your new applications.
“We’ve had some earlier customers that grew up on our platform totally disrupting traditional verticals, whether it’s healthcare or life sciences or telecommunications.”
Vogels was joined on stage by one of those disruptors — Jeff Lawson, the CEO and co-founder of Twilio, a San Francisco-based, cloud communications platform as a service company.
“I’m a software developer,” he said. “But more than that, I consider myself to be a software person. It’s not that I write code. It’s that I like using software to build competitive advantage in the businesses that I’m involved in because I believe that software is a mindset, it is not a skill set.
“(But) when we turned to the legacy communications industry to find out how we should build the things we had in mind, we got the same answer every time: ‘Sure, wire up all these copper wires to your data centre. Rack up a bunch of telco gear. And then bring in professional services to come and integrate that whole thing for you. And that’ll take millions of dollars and take 24 months before we can launch anything.’ I said, ‘Wow, that is the complete opposite of that software mindset.’”
Twilio was launched in 2008 to solve this problem, Lawson said: to bring communications into the era of software and out of its legacy base in hardware and physical networks.
Fast forward to 2017 and much has changed. “The on-demand economy has changed the nature of what it means to be cloud scale,” he said. “You don’t have 8,000 agents, you have half a million drivers talking to their riders and you don’t have 100 million interactions annually, you have billions of interactions annually. This is cloud scale communications.
“That old way where you’d go and rack up PBXs, you can’t get to cloud scale by racking up boxes. It used to be you’d put this thing in a closet and you’d wait five years to amortize it before you could do anything new. That doesn’t help you when you’re scaling rapidly. That doesn’t help you when you’re trying to reach customers all around the world in a very rapid fashion. That doesn’t help with your velocity of innovation.”