Hybrid IT has quickly become the de facto standard as we embrace a combination of on-premises, cloud and colocation to serve our diverse workloads.
This proliferation of hybrid IT has caused one consistent problem – network connectivity. According to 451 Research, “With end customers and employees ever more dispersed geographically, the use of mobile devices growing and large amounts of data coming in from internet-connected devices, the network is under even more pressure.”
One Size Need Not Fit All
Most organizations operate a wide range of applications with diverse bandwidth, performance, and cloud connectivity requirements, and each application tolerates latency differently. Enterprises may accept higher latency for SaaS applications like Office 365 or Salesforce but require better performance for customer-facing applications and for data-intensive, latency-sensitive workloads such as those that serve financial exchanges. Ultra-low latency, a critical capability for capital markets and high-frequency trading systems, can generate significant revenue and demands the lowest physical latency design possible.
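The idea that each application class carries its own latency budget can be sketched in a few lines. The budgets below are assumed purely for illustration; the article gives only a relative ordering (SaaS tolerates more latency than customer-facing applications, which tolerate more than exchange-facing trading workloads), not actual numbers.

```python
# Illustrative latency budgets per workload class. The numeric values are
# assumptions for the sake of example -- the article only establishes the
# relative ordering, not the figures.
LATENCY_BUDGET_MS = {
    "saas":            50.0,   # e.g. Office 365, Salesforce
    "customer_facing": 10.0,   # customer-facing applications
    "trading":          0.1,   # ultra-low-latency capital markets workloads
}

def sort_by_sensitivity(workloads):
    """Order workloads from most to least latency-sensitive."""
    return sorted(workloads, key=LATENCY_BUDGET_MS.get)

print(sort_by_sensitivity(["saas", "trading", "customer_facing"]))
# ['trading', 'customer_facing', 'saas']
```

A mapping like this is the starting point for matching each workload to the cheapest connectivity option that still fits its budget.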
Assessing cloud connectivity is important but solving the cost-performance equation is not one-size-fits-all. What may have worked to connect your headquarters to your cloud provider won’t scale to meet your growing business demands. If a growing user community or customer base requires expansion and local presence in new markets, you’ll be looking to availability zones and edge computing strategies to shorten the route (and the time) users take to access the applications and data they need. This expanding reach creates a preference for colocation and interconnection providers with a strong interconnection portfolio and diverse connectivity ecosystem. To ensure connectivity provisioning doesn’t constrain business agility, software-defined interconnection has emerged as another key enabler for consideration in the selection of your data center provider.
The Growing List of Cloud Connectivity Options
Fortunately, the number of colocation cloud connectivity options has grown so you can choose the one best suited to an application’s need.
Option 1: Direct connect from your data center services to a cloud service provider via a cloud exchange
This option provides a significant technical advantage – the ability to reduce latency to nearly zero, because traffic stays within the data center and traverses no additional path. The disadvantage is that your data center operator will charge a premium for power and space. And because cloud service providers have had multiple on-ramps fail in specific data centers, this option also does not provide the multiple availability zones needed to mitigate a single point of failure.
Option 2: Cloud on-ramp providers native in data centers
Using a cloud on-ramp provider such as Megaport, Zayo, CenturyLink or PacketFabric offers dual entrances into the facility and multiple network options, enabling availability zones that are local, regional, and national in scope. The disadvantage is an additional latency hop between the data center operator and the cloud on-ramp, typically 1 to 1.3 milliseconds round trip. Even so, the vast majority of customers are willing to accept this trade-off to save 20 to 50% on their power and space bill.
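The trade-off described above can be made concrete with a small sketch. The 1–1.3 ms added round trip and the 20–50% savings range come from the text; the dollar figure and latency budgets are hypothetical placeholders.

```python
# Sketch of the on-ramp trade-off: added latency vs. power/space savings.
# Latency and savings ranges come from the article; the baseline bill and
# budget values are hypothetical.

def monthly_cost_with_onramp(baseline_bill: float, savings_rate: float) -> float:
    """Estimated power/space bill using a cloud on-ramp provider,
    given a savings rate (the article cites 20-50%)."""
    return baseline_bill * (1 - savings_rate)

def onramp_acceptable(added_rtt_ms: float, latency_budget_ms: float) -> bool:
    """Whether the on-ramp's added round-trip latency fits the
    application's latency budget."""
    return added_rtt_ms <= latency_budget_ms

baseline = 10_000.0             # hypothetical monthly power/space bill ($)
for savings in (0.20, 0.50):    # the article's 20-50% savings range
    print(f"{savings:.0%} savings -> ${monthly_cost_with_onramp(baseline, savings):,.0f}/mo")

# A SaaS workload with a 5 ms budget tolerates the 1.3 ms on-ramp hop;
# an ultra-low-latency trading workload with a 0.1 ms budget does not.
print(onramp_acceptable(1.3, 5.0))   # True
print(onramp_acceptable(1.3, 0.1))  # False
```

The point of the exercise: for most workloads the added hop is invisible to users, so the savings dominate; only the latency-critical tail of the portfolio justifies the in-facility premium of option 1.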
Option 3: Software-programmable Interconnection
Option three’s strength is its ability to deliver point-and-click connection provisioning on demand. It requires a data center services provider that delivers an elastic, software-defined network fabric, such as Cyxtera’s Extensible Data Center platform, CXD. This lets you provision and consume data center, network, and even edge compute resources through a GUI with on-demand provisioning. The other advantage of this approach is a consumption-based, pay-as-you-go model that keeps costs down further by avoiding over-provisioning.
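Behind the point-and-click GUI, software-defined interconnection boils down to describing a connection as data and submitting it to a provisioning API. The sketch below shows what such a request might look like; the field names and request shape are entirely hypothetical, as Cyxtera's actual CXD API is not documented here.

```python
# Hypothetical sketch of an on-demand interconnection request. None of the
# field names or values reflect a real provider API; they illustrate the
# general shape of software-defined provisioning.
import json
from dataclasses import dataclass, asdict

@dataclass
class ConnectionRequest:
    source_cage: str              # your colocation footprint
    target_provider: str          # a cloud on-ramp or cloud service provider
    bandwidth_mbps: int           # provisioned on demand
    billing: str = "pay-as-you-go"  # consumption-based terms

def to_api_payload(req: ConnectionRequest) -> str:
    """Serialize the request for a hypothetical provisioning endpoint."""
    return json.dumps(asdict(req))

payload = to_api_payload(ConnectionRequest(
    source_cage="CAGE-101",
    target_provider="example-cloud-onramp",
    bandwidth_mbps=1000,
))
print(payload)
```

The design point is that bandwidth becomes a parameter you dial up or down per request rather than a fixed circuit you commit to for a contract term, which is what makes the pay-as-you-go model possible.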