
10 Gigabit Ethernet & The Future of Networking

The rise of cloud computing is a bit like that of a space shuttle taking off. When the rocket engine and its propellants fire up, the shuttle lifts slowly off the launch pad and then builds momentum until it ultimately streaks into space.

The Cloud is now in the momentum-building phase and on the verge of quickly soaring to new heights. There are lots of good reasons for the rapid rise of this new approach to computing. Cloud models are widely seen as one of the keys to increasing IT and business agility, making better use of existing infrastructure and cutting costs.

So how do you launch your cloud? An essential first step is to prepare your network for the unique requirements of services running on a multitenant shared infrastructure. These requirements include IT simplicity, scalability, interoperability and manageability. All of these requirements make the case for unified networking based on 10 gigabit Ethernet (10GbE).

Unified networking over 10GbE simplifies your network environment. It allows you to unite your network into one type of fabric so you don’t have to maintain and manage different technologies for different types of network traffic. You also gain the ability to run storage traffic over a dedicated SAN if that makes the most sense for your organization.

Either way, 10GbE gives you a great deal of scalability, enabling you to quickly scale up your networking bandwidth to keep pace with the dynamic demands of cloud applications. This rapid scalability helps you avoid I/O bottlenecks and meet your service-level agreements.

While that’s all part of the goodness of 10GbE, it’s important to keep this caveat in mind: Not all 10GbE is the same. You need a solution that scales and, with features like intelligent offloads of targeted processing functions, helps you realize best-in-class performance for your cloud network. Unified networking solutions can be enabled through a combination of standard Intel Ethernet products along with trusted network protocols integrated and enabled in a broad range of operating systems and hypervisors. This approach makes unified networking capabilities available on every server, enabling maximum reuse in heterogeneous environments. Ultimately, this approach to unified networking helps you solve today’s overarching cloud networking challenges and create a launch pad for your private, hybrid or public cloud.

The urge to purge: Have you had enough of “too many” and “too much”?

In today’s data center, networks are a story of “too many” and “too much.” That’s too many fabrics, too many cables, and too much complexity. Unified networking simplifies this story. “Too many” and “too much” become “just right.” Let’s start with the fabrics. It is not uncommon to find an organization running three distinctly different networks: a 1GbE management network, a multi-1GbE local area network (LAN), and an iSCSI or Fibre Channel storage area network (SAN).

Unified networking enables cost-effective connectivity to the LAN and the SAN on the same Ethernet fabric. Pick your protocols for your storage traffic. You can use iSCSI, NFS, or Fibre Channel over Ethernet (FCoE) to carry storage traffic over your converged Ethernet link.

You can still have a dedicated network for storage traffic if that works best for your needs. The only difference: That network runs your storage protocols over 10GbE — the same technology used in your LAN.

When you make this fundamental shift, you can reduce your equipment needs. Convergence of network fabrics allows you to standardize the equipment you use throughout your networking environment — the same cabling, the same NICs, the same switches. You now need just one set of everything, instead of two or three sets.

In a complementary gain, convergence over 10GbE helps you cut your cable numbers. In a 1GbE world, many virtualized servers have eight to ten ports, each of which has its own network cable. In a typical deployment, one 10GbE cable could handle all of that traffic. This isn’t a vision of things to come. This world of simplified networking is here today. Better still, this is a world based on open standards. This approach to unified networking increases interoperability with common APIs and open-standard technologies. A few examples of these technologies:

  • Data Center Bridging (DCB) allows multiple types of traffic to run over an Ethernet wire.
  • Fibre Channel over Ethernet (FCoE) enables the Fibre Channel protocol used in many SANs to run over the Ethernet standard common in LANs.
  • Management Component Transport Protocol (MCTP) and Network Controller Sideband Interface (NC-SI) enable server management via the network.
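To make the DCB idea concrete, here is a minimal Python sketch of how its Enhanced Transmission Selection mechanism divides link bandwidth among traffic classes. The class names and percentages are illustrative assumptions, not a vendor configuration:

```python
# Illustrative sketch of DCB-style bandwidth allocation (the Enhanced
# Transmission Selection mechanism, IEEE 802.1Qaz). The traffic-class
# names and weights below are hypothetical examples.

LINK_GBPS = 10

# Each traffic class is assigned a share of the link; a busy class may
# borrow bandwidth a quiet class is not using, but these are the minimums.
ets_weights = {"lan": 50, "fcoe_storage": 30, "management": 20}  # percent

def guaranteed_gbps(weights, link_gbps=LINK_GBPS):
    """Return the minimum bandwidth each traffic class is guaranteed."""
    total = sum(weights.values())
    return {tc: link_gbps * w / total for tc, w in weights.items()}

print(guaranteed_gbps(ets_weights))
```

With these example weights, LAN traffic is guaranteed 5 Gbps, FCoE storage 3 Gbps, and management 2 Gbps on a single 10GbE link, which is the sense in which one wire safely carries three kinds of traffic.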

These and other open-standard technologies enable the interoperability that allows network convergence and management simplification. And just like that, “too many” and “too much” become “just right.”

Know your limits — then push them with super-elastic 10GbE

Now let’s imagine for a moment a dream highway. In the middle of the night, when traffic is light, the highway is a four-lane road. When the morning rush hour begins and cars flood the road, the highway magically adds several lanes to accommodate the influx of traffic.

This commuter’s dream is the way cloud networks must work. The cloud network must be architected to quickly scale up and down to adapt itself to the dynamic and unpredictable demands of applications. This super-elasticity is a fundamental requirement for a successful cloud.

Of course, achieving this level of elasticity is easier said than done. In a cloud environment, virtualization turns a single physical server into multiple virtual machines, each with its own dynamic I/O bandwidth demands. These dynamic and unpredictable demands can overwhelm networks and lead to unacceptable I/O bottlenecks. The solution to this challenge lies in super-elastic 10GbE networks built for cloud traffic. So what does it take to get there? The right solutions help you build your 10GbE network with unique technologies designed to accelerate virtualization and remove I/O bottlenecks, while complementing solutions from leading cloud software providers.

Consider these examples:

  • Virtual Machine Device Queues (VMDq) improves network performance and CPU utilization for VMware and Windows Server 2008 Hyper-V by reducing the sorting overhead of networking traffic. VMDq offloads data-packet sorting from the virtual switch in the virtual machine monitor and instead does this on the network adapter. This innovation helps you avoid the I/O tax that comes with virtualization.
  • The latest Ethernet servers support Single Root I/O Virtualization (SR-IOV), a standard created by the PCI Special Interest Group. SR-IOV improves network performance for Citrix XenServer and Red Hat KVM by providing dedicated I/O and data isolation between VMs and the network controller. The technology allows you to partition a physical port into multiple virtual I/O ports, each dedicated to a particular virtual machine.
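The sorting work that VMDq moves into the adapter can be sketched in a few lines of Python. This is a conceptual model only; the MAC addresses, queue IDs, and frame layout are made-up examples of what the hardware does with real frames:

```python
from collections import defaultdict

# Conceptual sketch of VMDq-style sorting: the adapter places each
# incoming frame into the receive queue of the VM that owns its
# destination MAC, so the hypervisor's virtual switch no longer has to
# sort packets in software. All values below are illustrative.

vm_mac_table = {            # destination MAC -> per-VM queue id
    "00:1b:21:aa:00:01": 0,
    "00:1b:21:aa:00:02": 1,
}

def sort_frames(frames, mac_table):
    """Sort frames into per-VM queues; frames for unknown MACs fall
    back to a default queue (id -1)."""
    queues = defaultdict(list)
    for frame in frames:
        queues[mac_table.get(frame["dst_mac"], -1)].append(frame)
    return queues

frames = [
    {"dst_mac": "00:1b:21:aa:00:01", "payload": b"to-vm0"},
    {"dst_mac": "00:1b:21:aa:00:02", "payload": b"to-vm1"},
    {"dst_mac": "ff:ff:ff:ff:ff:ff", "payload": b"broadcast"},
]
queues = sort_frames(frames, vm_mac_table)
```

With VMDq this classification happens once, in hardware, instead of burning CPU cycles in the hypervisor for every packet; SR-IOV goes a step further and gives each VM its own virtual slice of the device.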

Technologies like these enable you to build a high-performing, elastic network that helps keep the bottlenecks out of your cloud. It’s like that dream highway that adds lanes whenever the traffic gets heavy.

Manage the ups, downs, and in-betweens of services in the cloud

In an apartment building, different tenants have different Internet requirements. Tenants who transfer a lot of large files or play online games want the fastest Internet connections they can get. Tenants who use the Internet only for email and occasional shopping are probably content to live with slower transfer speeds. To stay competitive, service providers need to tailor their offerings to meet these ever-diversifying needs.

This is the way it is in a cloud environment: Different tenants have different service requirements. Some need a lot of bandwidth and the fastest possible throughput times. Others can settle for something less.

If you are operating a cloud environment, either public or private, you will need to meet these differing requirements. That means being able to allocate the right level of bandwidth to each application and manage network quality of service (QoS) in a manner that meets your service-level agreements (SLAs) with different tenants. It calls for technologies that allow you to tailor service quality to the needs and SLAs of different applications and different cloud tenants.

Here are some of the more important technologies for a well-managed cloud network:

  • Data Center Bridging (DCB) provides a collection of standards-based end-to-end networking technologies that make Ethernet the unified fabric for multiple types of traffic in the data center. It allows for better traffic prioritization over a single interface, as well as an advanced means of shaping traffic on the network to decrease congestion.
  • Queue Rate Limiting (QRL) assigns a queue to each virtual machine (VM) or each tenant in the cloud environment and controls the amount of bandwidth delivered to that user. The Intel approach to QRL enables a VM or tenant to get a minimum amount of bandwidth, but it doesn’t limit the maximum bandwidth. If there is headroom on the wire, the VM or tenant can use it.
  • Traffic Steering sorts traffic per tenant to support rate limiting, QoS and other management approaches. Traffic Steering is made possible by on-chip flow classification that delineates one tenant from another. This is like the logic in the local Internet provider’s box in the apartment building. Everybody’s Internet traffic comes to the apartment in a single pipe, but then gets divided out to each apartment, so all the packets are delivered to the right addresses.
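The QRL behavior described above — a guaranteed minimum per tenant, with spare line rate shared among whoever wants more — can be sketched as a simple allocation loop. This is a minimal Python model of the policy, not Intel's implementation; tenant names and numbers are illustrative:

```python
# Sketch of min-guarantee-with-headroom rate limiting: each tenant is
# granted up to its guaranteed minimum, then leftover line rate
# ("headroom on the wire") is split among tenants that still want more.
# All figures are hypothetical examples.

LINE_RATE_GBPS = 10

def allocate(demands, minimums, line_rate=LINE_RATE_GBPS):
    """demands / minimums: {tenant: gbps}. Returns the bandwidth grant
    per tenant under the policy described above."""
    # Step 1: satisfy each tenant up to its guaranteed minimum.
    grant = {t: min(demands[t], minimums[t]) for t in demands}
    headroom = line_rate - sum(grant.values())
    # Step 2: share remaining headroom equally among still-hungry tenants.
    while headroom > 1e-9:
        hungry = [t for t in demands if grant[t] < demands[t]]
        if not hungry:
            break
        share = headroom / len(hungry)
        for t in hungry:
            extra = min(share, demands[t] - grant[t])
            grant[t] += extra
            headroom -= extra
    return grant

# A tenant asking for 8 Gbps can exceed its 4 Gbps minimum because a
# light tenant leaves headroom on the wire.
grants = allocate(demands={"heavy": 8, "light": 1},
                  minimums={"heavy": 4, "light": 2})
```

In this example the heavy tenant ends up with its full 8 Gbps even though its guaranteed minimum is only 4, which is exactly the "minimum but no maximum" behavior the QRL description calls out.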

Technologies like these enable your organization to manage the ups, downs, and in-betweens of services in the cloud. You can then tailor your cloud offerings to the needs of different internal or external customers — and deliver the right level of service at the right price.

On the road to the cloud

For years, people have talked about 10GbE being the future of networking and the foundation of cloud environments. Well, the future is now; 10GbE is here in a big way.

There are many reasons for this fundamental shift. Unified networking based on 10GbE helps you reduce the complexity of your network environment, increase I/O scalability and better manage network quality of service. 10GbE simplifies your network, allowing you to converge to one type of fabric. This is a story of simplification. One network card. One network connection. Optimum LAN and SAN performance. Put it all together and you have a solution for your journey to a unified, cloud-ready network.
