A Fortune 500 company began its cloud migration three months ago. The migration was fast, secure, and application-compatible. Yet the company is realizing only a fraction of the intended benefits. Why? Its current 100GbE network cannot move the required data fast enough, so processes that would typically take minutes to complete now take an hour.
Big Data is a challenge for organisations across many industries worldwide: the volume of data being generated increasingly outpaces the bandwidth available on today's high-speed networks.
The Performance Imperative
Today's applications produce ever-increasing amounts of data. Virtual machines must be provisioned in minutes, containers scaled in seconds, and real-time analytics pipelines process a firehose of data. Network capacity that was sufficient at peak load just a few months ago is now woefully inadequate.
400GbE Network Interface Cards (NICs) support transmission rates four times those of current 100GbE NICs, handling the enormous volumes of data that must be moved across the network. Raw bandwidth alone is not enough, however: to be effective, the faster NICs must also deliver lower latency, since applications such as next-generation trading platforms, real-time fraud detection systems, and autonomous vehicles require latency in the microsecond range.
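To make the 4x figure concrete, a back-of-the-envelope calculation (the 10 TB dataset size is hypothetical, and the model ignores protocol overhead, assuming the link is the only bottleneck) shows how link speed changes bulk-transfer time:

```python
def transfer_seconds(data_bytes: float, link_gbps: float) -> float:
    """Ideal time to move data_bytes over a link of link_gbps (decimal Gb/s),
    ignoring protocol overhead and assuming the link is the bottleneck."""
    return data_bytes * 8 / (link_gbps * 1e9)

dataset = 10e12  # hypothetical 10 TB migration batch

t_100 = transfer_seconds(dataset, 100)  # 800 s, roughly 13 minutes
t_400 = transfer_seconds(dataset, 400)  # 200 s, under 4 minutes
```

Real transfers add framing, TCP, and storage overheads on top of this ideal figure, but the 4:1 ratio between the two links holds.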
Unlike higher-speed solutions that aggregate four 100GbE connections, this connectivity translates directly into what users will feel: a single 400GbE port carries what would otherwise require four aggregated 100GbE links, simplifying the workflow while increasing reliability.
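The difference between one 400GbE port and four aggregated 100GbE ports matters most for individual flows. Under link aggregation, a given flow is typically hashed onto a single member link, so no single flow can exceed that member's speed. A simplified model (real LAG/ECMP hashing behaviour varies by implementation, so treat this as an assumption):

```python
def max_single_flow_gbps(total_gbps: float, member_links: int) -> float:
    # Simplified model: LAG/ECMP hashes each flow onto one member link,
    # capping any single flow at the member-link speed.
    return total_gbps / member_links

lag_cap = max_single_flow_gbps(400, 4)     # 4x100GbE LAG: one flow tops out at 100
native_cap = max_single_flow_gbps(400, 1)  # native 400GbE: one flow can use it all
```

Aggregate throughput is similar in both designs; it is large single flows, such as storage replication or VM migration, that benefit from the native 400GbE port.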
Cloud Computing Acceleration
Cloud Service Providers' (CSPs') networks have different requirements than typical data center and enterprise networks. Serving thousands of VMs, each demanding high bandwidth, while delivering a consistent experience to end users across varying workloads poses a unique set of challenges. For instance, the ratio of east-west traffic (server to server) to north-south traffic (end user to server) typically skews heavily toward the former, placing increased pressure on internal bandwidth. Legacy networks were not architected to handle such high internal workloads, and upgrading them to support increased scalability is rarely cost-effective.
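A toy model (all numbers invented for illustration) shows why east-west traffic dominates: each external request can fan out into many internal service calls, multiplying traffic inside the data center.

```python
def east_west_ratio(fanout: int, internal_bytes: int, external_bytes: int) -> float:
    """Bytes of internal (east-west) traffic generated per byte of
    external (north-south) traffic, for a simple fan-out service."""
    return fanout * internal_bytes / external_bytes

# hypothetical microservice: a 1 KB external response triggers
# 8 internal calls averaging 4 KB each
ratio = east_west_ratio(8, 4096, 1024)
```

Under these made-up numbers, every north-south byte generates 32 east-west bytes, which is why internal links saturate long before the internet-facing edge does.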
400GbE NICs break all the old rules of hyperscale datacenter operation. For starters, server-to-server communication happens at far higher speeds, and the greater per-link bandwidth enables flatter topologies with fewer hops as data makes its way around the server cluster. Machine learning models that operate on distributed datasets are no longer slowed down by the network either.
Multi-tenant Architecture Benefits
To operate successfully within a multi-tenant cloud environment, organisations need the ability to control bandwidth within their cloud infrastructure. Each customer has unique bandwidth requirements, whether serving a static web application or a bursty big-data analytics workload.
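Per-tenant bandwidth control is commonly implemented with a token-bucket rate limiter. A minimal sketch (the tenant rate and burst size are hypothetical; production clouds enforce this in the NIC or virtual switch rather than in application code):

```python
class TokenBucket:
    """Minimal token-bucket limiter: rate_bps refills the bucket over time,
    burst_bits caps how much a tenant can send at once."""

    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps
        self.capacity = burst_bits
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, now: float, packet_bits: float) -> bool:
        # refill based on elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bits:
            self.tokens -= packet_bits
            return True
        return False

# hypothetical tenant capped at 10 Gb/s with a 1 Mb burst allowance
bucket = TokenBucket(rate_bps=10e9, burst_bits=1e6)
```

The burst parameter is what lets a bursty analytics tenant spike briefly above its average rate without being allowed to starve neighbours for long.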
By providing headroom for growth as NIC bandwidths increase, products like the AMD Pollara 400GbE NIC allow cloud providers to absorb highly variable traffic patterns without over-provisioning. The real value, however, is the increased performance density the new NIC offers next-generation cloud data centers seeking to guarantee consistent service levels to customers.
Enterprise Network Transformation
Enterprise networks are not cloud networks; they face different challenges and requirements. One thing is the same, though: a huge need for bandwidth. With the shift to remote work, traffic flow through the enterprise network has changed dramatically. Networks now handle video conferencing, file synchronization, and remote desktop protocols, loads that demand enormous bandwidth capacity without compromising user experience.
Organisations that migrated to 100GbE networks roughly five years ago are now fast reaching the limits of that infrastructure. The adoption of Software-Defined Networking (SDN) has pushed bandwidth requirements higher still, as the overhead of the network virtualisation layer places additional strain on existing hardware. In addition, distributed applications and microservices generate complex communication patterns that cannot be modelled as simple client-server traffic flows.
Storage area networks (SANs) also present an opportunity for near-term 400GbE deployment. All-flash storage arrays can quickly outrun lower-speed networks, so adopting 400GbE now allows storage infrastructure to reach its full potential rather than being constrained by slower networking.
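The arithmetic behind flash arrays outrunning the network is straightforward (the drive count and per-drive throughput below are hypothetical, and protocol overhead is ignored):

```python
def network_bound(array_gbytes_s: float, link_gbps: float) -> bool:
    """True if the storage array can out-run the network link
    (decimal units; ignores protocol overhead)."""
    return array_gbytes_s > link_gbps / 8  # convert Gb/s to GB/s

# hypothetical all-flash shelf: 12 NVMe drives at 7 GB/s sequential read each
array_throughput = 12 * 7.0  # 84 GB/s

over_100g = network_bound(array_throughput, 100)  # 100GbE moves only 12.5 GB/s
over_400g = network_bound(array_throughput, 400)  # even 400GbE (50 GB/s) can be filled
```

Under these assumptions even a single 400GbE port cannot drain the whole shelf, which is why arrays spread traffic across multiple ports; the point is how much more performance a 100GbE fabric leaves stranded.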
Future-proofing Network Investments
A company's network infrastructure is a large investment that depreciates much faster than most other capital assets. Most businesses cannot afford to upgrade their network hardware every year or two, and so need a realistic, strategic approach to planning bandwidth requirements.
400GbE infrastructure gives organisations roughly a three-year window in which to absorb growing capacity requirements before the next upgrade.
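How long a 4x bandwidth jump actually lasts depends on traffic growth. A simple compound-growth projection (the 60% annual growth rate is a placeholder; substitute your own measurements) sketches the calculation:

```python
def years_of_headroom(current_gbps: float, link_gbps: float,
                      annual_growth: float) -> int:
    """Whole years before projected peak demand exceeds the link speed,
    assuming simple compound annual growth."""
    years = 0
    demand = current_gbps
    while demand * (1 + annual_growth) <= link_gbps:
        demand *= 1 + annual_growth
        years += 1
    return years

# hypothetical: a saturated 100GbE network with traffic growing 60% per year
headroom = years_of_headroom(100, 400, 0.60)  # 2 full years; demand tops 400 during year 3
```

At a gentler 35% annual growth the same upgrade lasts four full years, which is why the planning exercise is worth running against measured rather than assumed growth.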
What will happen tomorrow? How will future applications consume more bandwidth and stress today's networks in as-yet-unknown ways? We already know that next-generation AI/ML workloads, edge computing, and virtual reality will demand unprecedented levels of network capacity. Moving from 100GbE to 400GbE follows the same trajectory as every previous jump in networking speed: early adopters gain a competitive edge at very high cost, and as more players join the fray, the new speed becomes the new baseline.
The network infrastructure deployed today will form the backbone of an organisation's requirements for the next five to seven years. The question every organisation must ask is whether current upgrades to higher-speed networking such as 400GbE will be sufficient to meet its needs going forward.
