
Key principles for designing a 5G packet core – part 2


In our last article, we looked at how the disaggregated and cloud-native design of the packet core is advantageous for scalability and efficiency. We also discussed how integrated IP services help maximize throughput and minimize latency, why deployment flexibility matters for highly distributed deployments at the edge of the network, and why the packet core must be able to support multi-generational technologies all the way back to 2G.

The critical shift in moving to virtualization and cloud-native is that your competitive advantage will come not from engineering a better proprietary system, but from moving faster than your competitor to meet the needs of your customers — especially in the case of enterprises that are using wireless networks as a key enabler of Industry 4.0 use cases.

The key to faster and more responsive development of features is the inherent serviceability and upgradeability of micro-services, which can leverage deployment automation, auto-scaling and in-service software updates.

Deployment automation within the packet core

In a K8s environment, Helm charts are used for automated day one installation, provisioning and basic configuration management of packet core CNFs. Day two management also needs to be automated and simplified, covering tasks that range from scaling based on application-specific metrics and in-service software upgrades to resilience and failure testing. In addition, to reuse CNFs in different environments (e.g., development, staging and production; private or public clouds), you can leverage Helm's support for parameterized manifests and configuration maps to generate environment-specific deployment specifications and configurations. This is advantageous when a large number of packet core CNF instances are needed, for example, across multiple edge or multi-access edge computing (MEC) locations.
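As a minimal sketch of what this reuse can look like in practice, the Python snippet below drives the Helm CLI to install the same (hypothetical) packet core chart into several edge sites, each with its own values file. The chart path, release names, namespaces and values files are illustrative placeholders, not an actual product layout.

```python
# Minimal sketch: roll out the same packet core CNF chart to many edge sites,
# reusing one parameterized Helm chart with per-environment value overrides.
# Chart path, release names, namespaces and values files are hypothetical.
import subprocess

SITES = {
    "edge-east-01": {"values": "values/edge-east-01.yaml", "namespace": "core-east"},
    "edge-west-02": {"values": "values/edge-west-02.yaml", "namespace": "core-west"},
}

def deploy_site(release: str, chart: str, values_file: str, namespace: str) -> None:
    """Install or upgrade one CNF instance with site-specific parameters."""
    subprocess.run(
        [
            "helm", "upgrade", "--install", release, chart,
            "--namespace", namespace, "--create-namespace",
            "-f", values_file,   # site-specific overrides (IP pools, replica counts, ...)
            "--wait",            # block until the workload reports ready
        ],
        check=True,
    )

if __name__ == "__main__":
    for site, cfg in SITES.items():
        deploy_site(release=f"upf-{site}", chart="./charts/packet-core-upf",
                    values_file=cfg["values"], namespace=cfg["namespace"])
```

The same chart and script can then be pointed at a development or staging cluster simply by swapping the values files, which is the essence of the reusability argument above.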

Helm charts are also useful for upgradeability tasks such as continuous integration and continuous delivery (CI/CD). They can be integrated into a Jenkins pipeline with a webhook configured to trigger a pipeline that updates a CNF when, for instance, a new or updated image version becomes available, taking advantage of the K8s rolling-update capability. The update procedure can be flexibly managed through the settings K8s exposes, which control a number of pod-related aspects: the number of new pods to spin up at once, the number of pods that can be unavailable during the update, health checks for new pods and the delay before old pods are deleted.
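As an illustration of those settings, the sketch below uses the Python kubernetes client to patch a hypothetical packet core Deployment's rolling-update behaviour; in a real pipeline, a later step would also patch the container image to the newly built version, which is what actually triggers the rollout. The Deployment and namespace names are placeholders.

```python
# Minimal sketch: tune K8s rolling-update behaviour for a packet core CNF
# Deployment from a CI/CD job. Deployment and namespace names are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster
apps = client.AppsV1Api()

patch = {
    "spec": {
        "minReadySeconds": 30,          # new pods must pass health checks and stay ready this long
        "strategy": {
            "type": "RollingUpdate",
            "rollingUpdate": {
                "maxSurge": 1,          # number of new pods to spin up at once
                "maxUnavailable": 0,    # keep full capacity during the upgrade
            },
        },
        "template": {
            "spec": {
                "terminationGracePeriodSeconds": 120,  # delay before old pods are deleted
            },
        },
    }
}

apps.patch_namespaced_deployment(name="smf", namespace="packet-core", body=patch)
```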

Custom-built K8s operators can also be configured to automate everything from monitoring KPI and KCI alarms to scaling in or out based on CPU and memory utilization via the K8s horizontal pod autoscaler (HPA), or on custom application-specific metrics.
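The sketch below shows what such an HPA could look like, scaling a hypothetical UPF worker Deployment on CPU utilization plus an application-specific metric. It assumes a recent kubernetes Python client (autoscaling/v2 support) and a metrics adapter, such as the Prometheus adapter, exposing the custom metric; the metric and resource names are illustrative only.

```python
# Minimal sketch: an HPA that scales a packet core CNF on CPU plus a custom,
# application-specific metric. Assumes a metrics adapter exposes the hypothetical
# "active_sessions_per_pod" metric; all names and thresholds are placeholders.
from kubernetes import client, config, utils

config.load_kube_config()
api = client.ApiClient()

hpa = {
    "apiVersion": "autoscaling/v2",
    "kind": "HorizontalPodAutoscaler",
    "metadata": {"name": "upf-worker-hpa", "namespace": "packet-core"},
    "spec": {
        "scaleTargetRef": {"apiVersion": "apps/v1", "kind": "Deployment", "name": "upf-worker"},
        "minReplicas": 2,
        "maxReplicas": 20,
        "metrics": [
            {   # built-in resource metric
                "type": "Resource",
                "resource": {"name": "cpu",
                             "target": {"type": "Utilization", "averageUtilization": 70}},
            },
            {   # custom application-specific metric served by a metrics adapter
                "type": "Pods",
                "pods": {"metric": {"name": "active_sessions_per_pod"},
                         "target": {"type": "AverageValue", "averageValue": "50000"}},
            },
        ],
    },
}

utils.create_from_dict(api, hpa, namespace="packet-core")
```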

Overcoming packet core networking challenges

Using K8s does create some architectural challenges for a packet core, which we examined in the last article. It can also create networking challenges such as the need to support multiple IP interfaces in different routing contexts, which is critical within a packet core for isolation.

Maintaining IP addresses
The first issue is that the ephemeral IP address assigned by K8s lives and dies with the pod instance. One of the key features of a cloud-native approach is its ability to scale resources near-instantly to match demand, which means pods are constantly being spun up and torn down. This is a problem because the packet core often relies on a peer's IP address to identify it. When a pod is reinstated, it is assigned a new address from K8s' pool of ephemeral IP addresses, which breaks the established connection to the packet core CNF.

There are two approaches to overcoming this inherent issue with K8s. The first is to bypass K8s networking altogether by plumbing the packet core application pods directly to the node's interfaces, with load balancing handled at the application level.
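One way (of several) to realize this bypass is host networking, as in the minimal sketch below, which starts a hypothetical load-balancer pod directly in the node's network namespace; dedicated per-pod interfaces via SR-IOV or macvlan CNI plug-ins are common alternatives. The image, labels and names are placeholders.

```python
# Minimal sketch: launch a packet core LB pod with host networking so it uses the
# node's own interfaces and IP addresses, bypassing K8s pod networking.
# Image, labels and names are hypothetical; SR-IOV or macvlan are alternatives.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="pc-lb-0", namespace="packet-core"),
    spec=client.V1PodSpec(
        host_network=True,                  # use the node's network namespace and interfaces
        dns_policy="ClusterFirstWithHostNet",
        node_selector={"pc/role": "lb"},    # pin to nodes wired to the right networks
        containers=[
            client.V1Container(
                name="lb",
                image="registry.example.com/packet-core/lb:1.0",
            )
        ],
    ),
)

core.create_namespaced_pod(namespace="packet-core", body=pod)
```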

The other approach is to tunnel through K8s, preserving the original IP address, using IP Virtual Server (IPVS) as a tunneling load balancer. Kube-router uses IPVS in tunneling mode for private clusters and, when deployed at the edge of the cluster, exposes the service endpoint as a K8s service to external peers. Acting as a proxy, kube-router then load-balances and tunnels traffic to the packet core application's load-balancer (LB) pods.
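As a rough illustration of the cluster-edge piece, the sketch below creates a K8s service that gives external peers a stable endpoint in front of hypothetical LB pods; kube-router, running in IPVS tunneling mode, would then proxy and tunnel the traffic arriving on that endpoint to the selected LB pods. The external IP, SCTP signaling port and labels are placeholders, and SCTP services require the cluster to support SCTP.

```python
# Minimal sketch: expose the packet core application's LB pods as a K8s Service so
# external peers see a stable endpoint; kube-router in IPVS tunneling mode then
# forwards and tunnels the traffic to the LB pods. All names/IPs are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="pc-signaling", namespace="packet-core"),
    spec=client.V1ServiceSpec(
        selector={"app": "pc-lb"},                     # the application's LB pods
        ports=[client.V1ServicePort(name="sctp-sig", port=38412,
                                    protocol="SCTP", target_port=38412)],
        external_i_ps=["192.0.2.10"],                  # stable address advertised to peers
    ),
)

core.create_namespaced_service(namespace="packet-core", body=svc)
```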

Multiple IP addresses
In the packet core, each pod requires a set of IP addresses depending on the routing context. For example, separate IP addresses and routing contexts can be used for the management interface, signaling interfaces to other network functions such as the policy control and rules function (PCRF) and online charging system (OCS), and network interfaces to the access network and service networks such as the Internet and public/private voice.

To solve this, a container network interface (CNI) meta-plug-in that can call multiple CNI plug-ins is used. This enables multiple network interfaces on a pod beyond the default interface used for pod-to-pod communication. A pod can then be associated with different virtual routing and forwarding (VRF) instances to keep routing separated, for example when advertising the reachability of UE address pools to routing peers on the network side.
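Multus is one commonly used meta-plug-in for this. The sketch below registers a hypothetical macvlan-based NetworkAttachmentDefinition for an access-network routing context and shows the pod annotation used to request the extra interfaces; the attachment names, master interface and addresses are illustrative only.

```python
# Minimal sketch: request extra interfaces on a packet core pod via a CNI meta-plug-in
# such as Multus. Each NetworkAttachmentDefinition maps to one routing context;
# attachment names, master interface and addresses are hypothetical.
from kubernetes import client, config

config.load_kube_config()

# A macvlan attachment for the access-network (e.g. GTP-U) routing context.
n3_attachment = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "n3-net", "namespace": "packet-core"},
    "spec": {
        "config": """{
            "cniVersion": "0.3.1",
            "type": "macvlan",
            "master": "eth1",
            "ipam": {"type": "static", "addresses": [{"address": "10.10.3.5/24"}]}
        }"""
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="k8s.cni.cncf.io", version="v1", namespace="packet-core",
    plural="network-attachment-definitions", body=n3_attachment,
)

# A pod requests the additional interfaces with the Multus annotation; the default
# K8s interface is still used for ordinary pod-to-pod communication.
extra_networks_annotation = {"k8s.v1.cni.cncf.io/networks": "n3-net, n6-net, oam-net"}
```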

Conclusion

With a cloud-native, state-efficient design and the right level of software disaggregation, a micro-services architecture can implement packet core CNFs that deliver maximum transaction rates and throughput. It can also satisfy key operational networking requirements such as multiple network interfaces with routing isolation and the preservation of IP addresses. Automating deployments for enhanced day one and day two serviceability is well supported by K8s with tools such as Helm charts.

We strongly recommend embracing a cloud-native approach to architecting your packet core for its increased flexibility and faster integration and deployment cycles. In the long run, this will enable you to take the greatest advantage of 5G's cloud-native architecture, meet the specific needs of enterprise customers and speed your time to market with new services, all while ensuring service continuity and customer satisfaction. Service providers should therefore evaluate their packet core vendor carefully. Nokia is the most trusted vendor in the industry, giving service providers the utmost confidence as they evolve their packet core.

Rob McManus

About Rob McManus

Rob is a senior product marketing manager for Nokia’s Cloud Packet Core. If you need convincing about the exciting possibilities of a cloud-native core, talk to him or look out for him at the key industry forums where he’s a regular speaker.

