If you take a high-performance racing class, one of the first things you will experience is a ride around the track in a vehicle that seems ill-equipped for racing. Some classes might take you around the course in an average car like a Kia, while others might be a bit more dramatic and get all the students into the back of an old van. The point of this first exercise is to show you that vehicles are far more capable than the average driver expects.

The punch line of this first lesson in racing is that if you think you are pushing your car to the limits when you approach 100mph or take a turn a little hard, you don’t really understand what your car can do. And if you want to be a performance racer, you need to get a lot more comfortable operating closer to the extremes.

The point here is less about the car and more about human psychology. Without knowing what the limits are, most of us behave conservatively. The fear of calamity is real, and it is enough to keep us operating well within the perceived constraints of the system. Whenever we experience something a little outside the edges of what we are used to, we recoil a bit.

Comfort levels and networking

Now imagine this in the context of SDN. One of the core aspects of SDN is the presence of some sort of central control. Whether that is a completely centralized controller or a distributed application providing some form of system control, the fact that it is at least logically central means that it has a global view of the network as a resource.

This is useful because a global view of the resource allows the controller to do intelligent things with network workloads. For example, workloads can be balanced to optimize overall network performance. Or maybe the controller fans traffic out over more available paths, which could drive up fabric utilization.
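To make the idea concrete, here is a minimal sketch of what a logically central controller with a global view can do that a single device cannot: pick, among equal-cost paths, the one whose bottleneck link is least loaded. All names and utilization numbers are hypothetical, not any real SDN controller API.

```python
# Toy sketch of centralized path selection, assuming the controller has a
# global view of per-link utilization (values are made up for illustration).

def pick_path(paths, link_util):
    """Choose the path whose most-utilized (bottleneck) link is least loaded."""
    return min(paths, key=lambda path: max(link_util[link] for link in path))

# Global view: utilization (0.0-1.0) of each link, as the controller sees it.
link_util = {"a-b": 0.80, "b-d": 0.30, "a-c": 0.40, "c-d": 0.35}

# Two equal-cost paths from node a to node d.
paths = [("a-b", "b-d"), ("a-c", "c-d")]

# The a-b/b-d path has a 0.80 bottleneck, so the controller steers the new
# workload onto a-c/c-d instead, driving up overall fabric utilization.
print(pick_path(paths, link_util))
```

A device making a purely local decision would see only its own links; the global view is what makes this kind of balancing possible at all.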

The challenge

This actually creates a challenge.

Architects and operators are used to running their networks within some constraints. Things like capacity planning take into account total resources. Processes are built around these limits. Even things like purchasing decisions consider the operating assumptions.

Within these constraints, networks are built. And then they are monitored. We look at things like queuing and buffering to get a feel for how traffic is moving across the network. We get accustomed to how counters ought to look. In essence, we familiarize ourselves with the operating parameters of our network.

But what if those limitations were not really the limitations?

If, for example, intelligent load balancing and more sophisticated workload management allowed you to get more out of your network than you were used to, would you feel comfortable extending the operating limits that confine you today?

Intellectually, the answer is likely yes, but there is an education process that has to happen here. Most people are consumers—not producers—of information. The reason best practices are so powerful is that they allow the majority of people to leverage the learnings of the far smaller set of people willing to experiment and figure things out.

And because networking is notoriously complex, the dependence on this information is even higher than in other disciplines. It actually keeps most of us from really knowing what our networks are capable of. Not unlike would-be performance drivers, we don’t fully understand what we can do with our network. We either operate well below the limits, or we occasionally do something reckless that ends in disaster.

Creating familiarity

Either case really stems from the same issue: unfamiliarity with where we ought to be.

Getting to familiarity requires a re-examination of how we think about monitoring. When you are driving a car, how do you know where the limit is? It’s not just feeling uncomfortable as the wheel shakes; you need to know the point at which the back end actually slides out from under you on a curve.

In networking parlance, this means we need to be looking at more than just counters and bits per second. We need to know the point at which the network slides out from under us. And in the case where we are making better use of more paths through SDN, we need to be looking at more than just hot links. We will eventually want to know how traffic gets balanced across all available links, and how that impacts application workloads.
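One way to look past individual hot links, sketched below with made-up utilization numbers: summarize how evenly traffic is spread across all available links, rather than watching each counter in isolation. The coefficient-of-variation metric here is an illustrative choice, not a prescribed one.

```python
# Toy sketch: a fabric-wide balance view instead of per-link counters.
# Utilization values (0.0-1.0) are invented for illustration.
from statistics import mean, stdev

link_util = [0.72, 0.68, 0.70, 0.15, 0.71]  # one cold link hides in the average

avg = mean(link_util)
imbalance = stdev(link_util) / avg  # coefficient of variation: 0 = perfectly balanced

print(f"average utilization: {avg:.2f}")
print(f"imbalance (CV): {imbalance:.2f}")
```

A single average (or a single hot-link alarm) would miss the underused path entirely; the imbalance number is what tells you the fabric still has headroom that better workload placement could reclaim.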

Essentially, we are fast approaching an era where monitoring, planning, and troubleshooting are going to rely on more than simple counters. SDN represents more than just a new architecture. It brings with it the ability to do some pretty clever things. But those clever things will push us beyond our comfort zone. For people for whom performance is not important, maybe it’s ok to stay trapped behind a veil of lowered expectations.

But if SDN is really going to breed a new kind of performance networker, it means that we will collectively have to become a lot more familiar with our cars. The results might be life-changing… or at least network-changing.

[Today’s fun fact: Running links between sites at 99% utilization is possible. Imagine if you didn’t have to be Google to do it.]