Enhance cloud services with high-capacity interconnection
Why are cloud services changing?
The requirements for cloud services are becoming more and more demanding. Driven by a myriad of forces including 5G-enabled consumer and enterprise services, advanced AI applications, and the promise of the Metaverse, cloud services must now be delivered with a new set of capacity and performance characteristics.
To accommodate this deluge of requirements, the cloud is broadening its footprint as it moves from a largely centralized model to a growing edge and regional presence. This not only affects the geographical distribution of data centers but also puts new demands on both IP and optical data center interconnect infrastructures to deliver low latency and scalable bandwidth for these emerging, highly dynamic interconnection needs.
How have data centers evolved?
A data center is a facility that centralizes an organization’s IT and business operations and associated equipment for the purposes of storing, processing, and disseminating internal and external services. Data centers have evolved dramatically over the last several years to keep pace with the ever-changing requirements of cloud services. Modern data centers are often also called clouds or cloud data centers. The two main categories are on-prem or private data centers, and public clouds.
The on-prem or private data center is located and operated within the company’s own facilities. These data centers offer complete control and autonomy but require constant operational and maintenance support. For businesses that do not want to expand their own private facilities and are willing to relinquish some control to a third party to simplify operations and trade capex for opex, public clouds are the answer.
Public clouds work by migrating on-prem resources and workloads into their shared, centralized facilities. This spares businesses the cost and effort of operating and maintaining these assets within their own facilities. Many choose to spread workloads across a variety of public and private clouds for resiliency, performance optimization, and security purposes, which introduces the need for additional multi-cloud or hybrid-cloud data center interconnection capabilities in the network. Public clouds are hosted and operated by the large public cloud service providers (e.g., AWS, Microsoft Azure, GCP) to deliver public cloud services, but another multi-vendor service, hosted by colocation data centers, has emerged to complement these larger players.
What is a colocation data center?
The newest data center variant is the carrier-neutral, multi-tenant colocation facility, which provides space, power, cooling, and physical security for the server, storage, and networking equipment of cloud service providers. Like public clouds, colocation facilities are often used by businesses that do not want to expand within their own premises due to space or capex constraints, or that do not want to incur the operational overhead of large data center facilities. Colocation facilities are often leveraged by businesses or service providers that are expanding to the network edge to accommodate stringent performance requirements. Many colocation facilities have also evolved to provide Internet Exchange Points (IXPs): common local or regional meeting points where a variety of telecommunications and communications service providers (CSPs) can connect locally with high performance at minimal cost and complexity.
What is data center interconnect (DCI)?
The cloud era demands a new network architecture that interconnects edge, regional, and core cloud data centers. Data center interconnect ecosystems are being built on agile, flexible, and automated IP/optical infrastructure that can support current and future cloud service requirements. The two main types are optical data center interconnect (DCI) and IP data center interconnect.
Optical data center interconnect (DCI) connects data centers and clouds using optical networking or optical transport. This transport can be thought of as the freeway that higher-level routing networks use to deliver information. Optical DCI networks can be designed as point-to-point, mesh, or ring topologies, depending on the number of data centers and the resiliency requirements. The goal of this layer is to maximize fiber speed and capacity at the lowest cost while extending fiber reach as far as possible.
IP data center interconnect (DCI) connects two or more data centers through IP technology. It is used primarily to connect data centers within the same organization and is often deployed by CSPs and enterprises. One protocol often used for this solution is Ethernet VPN (EVPN).
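As a rough illustration of the EVPN idea, the sketch below models how a MAC address learned at one data center site is advertised to another site as an EVPN Type-2 (MAC/IP) route carried by BGP, making the remote endpoint reachable across the IP interconnect. The class names, addresses, and values are hypothetical and not tied to any vendor implementation.

```python
# Conceptual sketch (not a vendor implementation): EVPN stretches a Layer 2
# domain between data centers by advertising locally learned MAC addresses
# as BGP EVPN Type-2 routes. All names and values are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class MacIpRoute:            # EVPN route type 2: MAC/IP advertisement
    evi: int                 # EVPN instance (the stretched L2 service)
    mac: str                 # MAC address learned at the source site
    next_hop: str            # tunnel endpoint of the advertising data center

class DataCenterPE:
    """A provider-edge/leaf device at one data center site."""
    def __init__(self, name: str, loopback: str):
        self.name, self.loopback = name, loopback
        self.local_macs = {}   # (evi, mac) -> local port
        self.remote_macs = {}  # (evi, mac) -> remote next hop

    def learn_local(self, evi: int, mac: str, port: str) -> MacIpRoute:
        """Learn a MAC in the data plane and originate an EVPN route for it."""
        self.local_macs[(evi, mac)] = port
        return MacIpRoute(evi=evi, mac=mac, next_hop=self.loopback)

    def receive_route(self, route: MacIpRoute) -> None:
        """Install a remote MAC behind the advertising site's tunnel endpoint."""
        self.remote_macs[(route.evi, route.mac)] = route.next_hop

# Two sites of the same organization, interconnected over an IP underlay.
dc_a = DataCenterPE("dc-a-leaf1", loopback="192.0.2.1")
dc_b = DataCenterPE("dc-b-leaf1", loopback="192.0.2.2")

route = dc_a.learn_local(evi=100, mac="00:11:22:33:44:55", port="eth1/1")
dc_b.receive_route(route)  # in reality, carried by BGP between the sites

# DC-B now forwards traffic for this MAC across the DCI toward DC-A's endpoint.
print(dc_b.remote_macs[(100, "00:11:22:33:44:55")])  # -> 192.0.2.1
```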
What is IP interconnection?
IP interconnection enables different networks to connect and exchange data through a scalable, secure, and reliable cloud interconnection platform, typically offered through a regional Internet Exchange Point (IXP). These networks can belong to CSPs, cloud service providers, enterprise businesses, hyperscalers, and others. IXPs meet these interconnection needs by implementing a comprehensive routing solution that often leverages secure IP peering and transit.
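To make the peering-versus-transit distinction concrete, here is a deliberately simplified sketch of the choice an IXP member’s routing policy makes. Real networks express this with BGP attributes such as local preference on routers; the prefixes, AS numbers, and preference values below are assumptions for illustration only.

```python
# Illustrative sketch only: a toy model of the peering-versus-transit choice
# an IXP member makes for a given destination prefix.
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    neighbor_as: int
    relationship: str        # "peer" (settlement-free at the IXP) or "transit" (paid)

# Typical economics: prefer routes learned from peers over paid transit.
LOCAL_PREF = {"peer": 200, "transit": 100}

def best_route(candidates: list) -> Route:
    """Pick the preferred route for a prefix, peer routes winning over transit."""
    return max(candidates, key=lambda r: LOCAL_PREF[r.relationship])

routes_to_cloud = [
    Route("203.0.113.0/24", neighbor_as=64500, relationship="transit"),
    Route("203.0.113.0/24", neighbor_as=64501, relationship="peer"),
]

chosen = best_route(routes_to_cloud)
# The direct IXP peering is used when it is available.
print(f"Send traffic via AS{chosen.neighbor_as} ({chosen.relationship})")
```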
How does IP interconnection enhance cloud services?
Interconnection can provide attractive business benefits to customers including:
- Improved security - Applications and services can be accessed via direct private connections to the networks of cloud providers colocated in the same facility, without traversing the internet.
- Reduced transport costs - Colocated service providers, alternative network providers and carrier neutral network operators offer a wide choice of connections to remote destinations at a lower price.
- Higher performance and lower latency - Because connections are direct and often located closer to the user or device they serve, latency is reduced and reliability improves as traffic bypasses multiple hops across the public internet.
- More control - Through network automation and via customer portals, cloud service providers can gain more control of their cloud connectivity.
- Greater flexibility - With a wider range of connectivity options, enterprises can distribute application workloads and access cloud applications and services globally to meet business demands and to gain access to new markets.
How do you choose your data center interconnection solutions?
Capacity
The most obvious consideration when selecting an interconnect solution is its ability to provide leading capacity and speed to meet growing traffic demands. Nokia’s family of FP5-based routers offers industry-leading capacity and speed. Nokia was the first to introduce 800GE routing to the industry, and deploying these interfaces can triple IP network capacity in the same space and energy footprint. Watch how NL-ix has used this technology to solve its capacity and power needs.
Nokia has recently launched its sixth generation of super-coherent Photonic Service Engines (PSE-6s), ushering in a new frontier in optical transport. The PSE-6s delivers the industry’s first 2.4Tb/s coherent transport solution, enabling network operators to scale transport capacity to unprecedented levels across metro, long-haul, and subsea networks.
Performance
The new, emerging blend of traffic demands measurable performance, including low latency.
Nokia’s FP-based routers have a long history of being built with performance in mind. With FP5, you aren’t forced to choose between services or settle for a reduced feature set and unpredictable performance. All features and services can be turned on and run at full capacity, so you can do more with fewer routers and line cards. FP5 uses line-rate memories and full buffering throughout, so performance is never impacted, regardless of traffic conditions.
Nokia’s PSE-6s sets new benchmarks in capacity and reach, enabling the transport of 800GE services on a single wavelength over three times the distance of previous coherent solutions. This enables 800G transport over any distance, including long-haul networks of over 2,000 km, as well as 800G transport across trans-oceanic cables.
Sustainability
Nokia understands that operators need better efficiency to meet their sustainability initiatives and reduce their environmental footprint.
Using faster 800GE optics is more energy efficient than transporting an equivalent volume of traffic over multiple 400GE or 100GE links: 800GE optics consume between 20% and 40% less energy per gigabit of traffic.
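As a back-of-the-envelope illustration of how such an energy-per-gigabit comparison is calculated (the per-module wattages below are assumed figures for the example, not Nokia datasheet values):

```python
# Rough check of the energy-per-gigabit comparison. The module power figures
# are assumptions for illustration; substitute measured numbers for a real study.
def watts_per_gbps(module_watts: float, module_gbps: float) -> float:
    """Power drawn per Gb/s of capacity for a given pluggable optic."""
    return module_watts / module_gbps

w_800ge = watts_per_gbps(module_watts=16.0, module_gbps=800)       # one 800GE optic (assumed 16 W)
w_2x400 = watts_per_gbps(module_watts=11.0 * 2, module_gbps=800)   # two 400GE optics (assumed 11 W each)

saving = 1 - w_800ge / w_2x400
print(f"800GE: {w_800ge:.3f} W per Gb/s, 2x400GE: {w_2x400:.3f} W per Gb/s, saving ~{saving:.0%}")
# With these assumed wattages the single 800GE optic uses roughly a quarter less
# energy per gigabit, consistent with the 20-40% range cited above.
```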
The PSE-6s enables greener, more sustainable networks with better power efficiency, reducing power per bit by 40% versus today’s coherent solutions. Combined with the performance of the PSE-6s, this allows more capacity to be deployed using fewer coherent optics. Network operators can reduce network power consumption by 60% or more in typical network applications.
Experience
What makes Nokia different is its industry leadership across both IP routing and coherent optics. Nokia has been building IP and optical solutions for the largest networks in the world for decades, and it has a presence in all parts of the world, with international sales and support organizations ready and able to support its customers.