New technologies from Google make cloud services faster

Google, a subsidiary of Alphabet Inc., believes it has a new way to deal with network congestion at Internet scale, and it is now bringing the technology to the Google Cloud Platform (GCP), its infrastructure service for enterprises.

Google has introduced BBR, a new network congestion control algorithm it has used to speed up consumer services such as YouTube and Google.com, and which may be the next step in improving performance across the public Internet. The company says it has seen significant improvements in those services and is now offering the technology to users of Google Cloud Platform (GCP).

Google CEO Sundar Pichai at the Google I/O developer conference in California in May 2016

Google's BBR is a congestion control algorithm designed to handle problems common on the complex networks that make up the modern Internet: congested last-mile links and high-speed international connections. Each mobile device receives only a share of its base station's backhaul; home users share connections to DSL or cable head-ends; and enterprises funnel thousands of devices through a handful of routers. All of this produces a network that does not realize its full potential.

Eric Hanselman, principal analyst at research firm 451 Research, said: "Today's Internet is like a prehistoric monster, and Google's BBR is the latest attempt to fix the performance problems of the Internet's most troublesome legacy protocol."

While many workloads move data out of the data center without noticeable congestion, its impact is obvious when streaming video, transferring large files, or serving applications that need real-time responses. With its initial deployments of BBR, Google made significant gains for YouTube and Google.com. Now that it is deployed on Google Cloud Platform, customers can get the same benefit for their own applications and services.

So how does BBR work?

Packet loss has long served as the signal of network congestion, telling the sender to reduce its data rate. Recent changes in Internet architecture have made this technique inefficient: large buffers have been deployed in the last mile of broadband connections, while long-haul links use switches with shallow buffers. The combination means the Internet suffers both queuing delay in the oversized buffers and traffic instability on the backbone.

Given these buffers, how do you determine the best rate at which to send data? The answer is simple once you identify the slowest link in any TCP connection: that link defines the connection's maximum data-transfer rate, and it is where queues form. Knowing the round-trip time and the bandwidth of that bottleneck link, an algorithm can determine the best data rate to use, a problem long considered nearly intractable.
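The relationship described above is the bandwidth-delay product: multiply the bottleneck bandwidth by the round-trip propagation time and you get the amount of data the path can hold in flight without forming a queue. A minimal sketch, using hypothetical link numbers chosen only for illustration:

```python
def bdp_bytes(bottleneck_bw_bps: float, rtprop_s: float) -> float:
    """Bandwidth-delay product: the amount of data a path can hold
    in flight without building a queue at the bottleneck link."""
    return bottleneck_bw_bps / 8 * rtprop_s  # bits/s -> bytes, times RTT

# Hypothetical trans-Atlantic path: 100 Mbit/s bottleneck, 80 ms round trip.
bdp = bdp_bytes(100e6, 0.080)
print(f"{bdp:.0f} bytes in flight fills the pipe without queuing")
```

Sending more than this amount per round trip only lengthens the queue at the bottleneck; sending less leaves capacity unused.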

This is where the name BBR comes from: Bottleneck Bandwidth and Round-trip propagation time. Building on recent advances in measurement and control systems, Google's network engineers devised a way to dynamically manage the amount of data in flight on a connection so that it does not exceed the capacity of the bottleneck link, keeping queues to a minimum.

Although TCP does not explicitly track a connection's bottleneck bandwidth, it can be estimated from the timing of packet acknowledgements. BBR sends data at the maximum rate the connection allows, limited by how fast the application generates data and by network capacity, and it knows precisely which acknowledgements to sample to obtain these estimates. Internet paths are not static, so when a connection is running in a steady state, BBR occasionally raises its data rate to check whether the bottleneck has changed, which lets it react quickly to changes in the underlying network.
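The estimation idea can be sketched in a few lines. This is an illustrative toy, not Google's implementation: it keeps a windowed maximum of delivery-rate samples (queuing can only make samples slower, so the maximum approximates the bottleneck bandwidth) and a windowed minimum of round-trip samples (queuing can only make them longer, so the minimum approximates the propagation delay). The class and its sample values are hypothetical.

```python
from collections import deque

class PathModel:
    """Toy BBR-style path model built from acknowledgement samples."""
    def __init__(self, window: int = 10):
        self.bw_samples = deque(maxlen=window)   # delivery rate, bytes/sec
        self.rtt_samples = deque(maxlen=window)  # round-trip time, seconds

    def on_ack(self, delivered_bytes: float, interval_s: float, rtt_s: float):
        """Record one sample derived from an acknowledgement."""
        self.bw_samples.append(delivered_bytes / interval_s)
        self.rtt_samples.append(rtt_s)

    def bottleneck_bw(self) -> float:
        return max(self.bw_samples)   # queues depress samples, so take the max

    def min_rtt(self) -> float:
        return min(self.rtt_samples)  # queues inflate samples, so take the min

model = PathModel()
model.on_ack(125_000, 0.01, 0.050)  # fast sample: ~12.5 MB/s
model.on_ack(100_000, 0.01, 0.060)  # slower sample, likely queuing
print(model.bottleneck_bw(), model.min_rtt())
```

The pacing rate then follows from these two estimates, and the occasional probing described above refreshes the windowed maximum when the path changes.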

Thousands of times faster across the Atlantic

The improvement can be substantial; Google claims a typical trans-Atlantic connection can run up to 2,700 times faster. BBR may also pair well with newer protocols such as HTTP/2, which use a single TCP connection for multiple requests to a server instead of opening many connections.

Implementing BBR as a sender-side algorithm means Google can improve the end-user experience without upgrading every network device and service between the Google Cloud Platform (GCP) and users' devices. While this was a big win for YouTube, bringing the algorithm to GCP is an important step because it will handle the traffic of far more diverse applications.

How BBR Accelerates Google's Cloud Services

Google Cloud Platform (GCP) customers can take advantage of BBR in three ways: by connecting to Google services that use it, by putting Google's cloud networking services that support it in front of their applications, or by using it directly in their own IaaS applications.

Since Google's own services will use BBR, latency for users' cloud storage should drop, making applications built on services such as Spanner or Bigtable more responsive. End users will see further benefit from BBR support in Google's Cloud CDN, for better media delivery, and in Cloud Load Balancing, which routes packets across different application instances.

To use BBR in an IaaS application running on Google Compute Engine, you need a custom Linux kernel. Although BBR has been contributed to the Linux kernel, it has not yet appeared in mainstream releases, so users must pull it from the kernel's networking development tree, configure it for GCE, and compile the kernel.
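On a kernel that has been built with the tcp_bbr module, switching a host's TCP congestion control to BBR is typically a matter of two sysctl settings. A sketch, assuming module availability, which varies by kernel version and build configuration:

```shell
# Use the fq packet scheduler, which provides the pacing BBR relies on.
sysctl -w net.core.default_qdisc=fq
# Select BBR as the TCP congestion control algorithm.
sysctl -w net.ipv4.tcp_congestion_control=bbr
# Verify which algorithm is now active.
sysctl net.ipv4.tcp_congestion_control
```

To persist the settings across reboots, the same keys would go in /etc/sysctl.conf or a file under /etc/sysctl.d/.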

Because BBR can be compiled into the Linux kernel, users can also adopt it on their own networks, especially if they run Linux-powered network devices such as open-compute switches. Spreading BBR beyond the Google Cloud Platform is likely to draw interest from Google, the Linux community, and other network operators and vendors.

