Networking’s Kobayashi Maru Problem

When faced with the infamous “no-win” scenario in the Kobayashi Maru simulation at Starfleet Academy, James Kirk simply changed the rules — because he did not believe that there was a “no-win” scenario.

Attempting to play by the rules of how routed TCP/IP traffic behaves across congested links, network operators at enterprises and service providers alike wake up every day to their own “no-win” situation, as data loads from mobile, cloud and IoT flood networks around the world. (Arguably less life-threatening than Klingon battle cruisers, but equally overwhelming.) Links fill up, data queues fill up, random packet drops start flying and user sessions crash repeatedly – and that’s before you even start to consider the impact of DDoS attacks and the like.

For the better part of three decades, network operators have responded to this chaos by deploying dozens of expensive tools and technologies to try to contain the random, best-effort nature of IP packet delivery and, ultimately, by provisioning roughly double the bandwidth they can actually use in order to leave headroom for peak TCP/IP traffic loads. (The average large enterprise sports at least six pricey edge appliances in its network today.)

In fact, entire multi-billion-dollar industries, such as WAN Optimization, have sprung up simply to treat the behavioral tics of congested IP traffic, even though these appliances often add traffic of their own and can actually mask underlying network issues. And the cost? Staggering. Yet, in the end, they have not solved the underlying problem of routed IP data flows.

Saisei decided to take a long, hard look at the situation and, like James Kirk, found a way to change the rules, in this case by fixing the underlying problem of TCP/IP’s random, chaotic packet delivery behavior rather than creating a better bandage.

Without mucking about with the standard – routed TCP/IP data flows are standard when they enter the Saisei system and remain standard TCP/IP when they leave – Saisei has developed a patented software Network Performance Enforcement solution that completely eliminates the need to queue and schedule packets.

With Saisei’s high-performance flow engine technology, random, chaotic TCP/IP data flows become guided and predictable for the first time. In fact, Saisei eliminates random packet drops entirely, in contrast to today’s popular QoS queuing mechanisms, which constantly drop packets as their queues fill up.
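To make that contrast concrete, here is a minimal, generic sketch of the tail-drop behavior a conventional fixed-depth queue exhibits under congestion. It is not Saisei’s flow engine; the class and variable names are ours, purely for illustration.

```python
from collections import deque

class TailDropQueue:
    """Illustrative fixed-depth FIFO queue.

    Once the queue is full, every arriving packet is simply discarded
    ("tail drop"), which is what forces TCP senders into repeated
    back-off and retransmission. Teaching sketch only.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue: deque = deque()
        self.dropped = 0

    def enqueue(self, packet) -> bool:
        if len(self.queue) >= self.capacity:
            self.dropped += 1   # queue is full: the packet is lost
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None


# During congestion, arrivals outpace the drain rate and drops pile up.
q = TailDropQueue(capacity=100)
for i in range(250):            # burst of 250 packets into a 100-deep queue
    q.enqueue(f"pkt-{i}")
print(f"dropped {q.dropped} of 250 packets")   # -> dropped 150 of 250 packets
```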

The immediate effects show just what you can do when the rules are changed so that they favor users and applications over traditional network devices.

To start, Saisei in most cases doubles the amount of data that can flow through an existing routed IP network link. By “domesticating” TCP/IP while it is under the flow engine’s control, Saisei can comfortably run network link utilization above 95% around the clock.
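To put rough, illustrative numbers on that claim (the figures here are ours, not Saisei’s): a 1 Gbps link deliberately held near 50% average utilization to leave headroom for TCP bursts delivers roughly 500 Mbps of usable capacity, while the same link run at 95% utilization delivers roughly 950 Mbps, close to twice as much.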

But the corollary benefit is even more of a game changer: Saisei can run links at that level of utilization while simultaneously guaranteeing that no user session on the link will ever drop or stall out again. We call this our “No Flow Left Behind™” guarantee.
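Saisei has not published the internals of its flow engine, but the general idea of giving every flow an explicit, enforced rate rather than letting queues decide can be sketched with a classic max-min fair allocation. The sketch below is our own illustration under that assumption: the function name, the 95% utilization target and the sample flows are hypothetical, not part of Saisei’s product.

```python
def max_min_fair_rates(demands_mbps, link_mbps, utilization=0.95):
    """Allocate per-flow rates so the link runs at a target utilization
    and every active flow receives a non-zero share (max-min fairness).

    Illustrative sketch only: Saisei's actual enforcement algorithm is
    proprietary; this just shows how explicit per-flow rates can replace
    queue-driven packet drops.
    """
    budget = link_mbps * utilization
    rates = {}
    remaining = dict(demands_mbps)
    while remaining:
        fair_share = budget / len(remaining)
        # Flows demanding less than the fair share keep their full demand...
        satisfied = {f: d for f, d in remaining.items() if d <= fair_share}
        if not satisfied:
            # ...otherwise every remaining flow is capped at the fair share.
            for f in remaining:
                rates[f] = fair_share
            break
        for f, d in satisfied.items():
            rates[f] = d
            budget -= d
            del remaining[f]
    return rates


# Four flows contending for a 1000 Mbps link held at 95% utilization.
demands = {"voip": 2, "video": 400, "backup": 900, "web": 150}
print(max_min_fair_rates(demands, link_mbps=1000))
# -> {'voip': 2, 'web': 150, 'video': 399.0, 'backup': 399.0}
```

Because every flow receives a non-zero allocation and the aggregate never exceeds the utilization target, no flow is starved and no queue needs to overflow, which is the kind of behavior the “No Flow Left Behind” guarantee describes.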

All of this is arriving just in time for the data deluge demands coming from mobile, cloud and the IoT.

Network Performance Enforcement (NPE) from Saisei represents a brand-new technology suite and defines a new industry best practice for large enterprises and service providers worldwide, offering unprecedented control, visibility and scalability of users and applications across networks. These three pillars – control, visibility and scalability – comprise the essence of NPE.

Image: Kobayashi Maru (credit: NASA)
