With the blurring of technology lines, the rise of competitive companies, and a shift in buying models all before us, it would appear we are on the cusp of the next era in IT—the Third Platform Era. But as with the other transitions, it is not the technology or the vendors that trigger a change in buying patterns. There must be fundamental shifts in buying behavior driven by business objectives.
The IT industry at large is in the midst of a massive rewrite of key business applications in response to two technology trends: the proliferation of data and the need for additional performance and scale. In many regards, the first begets the second. As data becomes more available—via traditional datacenters, and both public and private cloud environments—applications look to use that data, which means the applications themselves have to go through an evolution to account for the scale and performance required.
Scale up or scale out?
When the industry talks about scale, people typically trot out Moore’s Law to explain how capacity doubles every 18 months. Strictly speaking, Moore’s Law is more principle than law, and it was originally an observation about the number of transistors on an integrated circuit. It has, however, become a fairly loosely used way to think about performance over time.
Of course, as the need for compute resources has skyrocketed, the key to solving the compute scaling problem wasn’t in creating chips with faster and faster clock rates. Rather, to get big, the compute industry went small through the introduction of multicore processors.
The true path to scaling was found in distribution. By creating many smaller cores, workloads could be distributed and worked on in parallel, reducing overall compute times. Pushing workloads out to large numbers of CPU cores meant that compute power could be scaled by fanning work out rather than by speeding up any single core. Essentially, this is the premise behind scale-out architectures.
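To make that premise concrete, here is a minimal Python sketch of fanning a workload out across cores. The work function and the data set are invented placeholders; the point is that the application itself has to partition its work before extra cores do any good.

```python
from concurrent.futures import ProcessPoolExecutor
import os

def process_chunk(chunk):
    # Placeholder for the real per-chunk computation.
    return sum(x * x for x in chunk)

def scale_out(data, workers=os.cpu_count() or 4):
    # Split the data into roughly equal chunks and work on them in parallel,
    # one worker process per core.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    print(scale_out(list(range(1_000_000))))
```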
A point that gets lost in all of this is that simply putting a multicore processor in a server didn’t automatically deliver scale. If the application itself was not changed, it would happily run in a single core, no matter how many cores the CPU offered. To take advantage of this scaling out of compute, the applications themselves had to go through a transformation.
The same story has played itself out on the storage side. The premise behind Big Data applications is that volumes of data can be sharded across a number of nodes so that operations can be scaled out across a larger number of servers, each handling a fraction of the job and completing its part in less time. By spreading workloads out across multiple storage servers, the time it takes to fetch data and perform operations drops.
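As a rough illustration of sharding, the sketch below maps keys onto a fixed set of storage nodes by hash. The node names are invented, and real systems layer replication and rebalancing on top of this.

```python
import hashlib

NODES = ["storage-01", "storage-02", "storage-03", "storage-04"]

def shard_for(key: str) -> str:
    # Hash the key and map it onto one of the storage nodes.
    digest = hashlib.sha256(key.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Each node ends up holding roughly 1/len(NODES) of the keys, so a scan or
# aggregation can run on every node in parallel over its own slice of data.
print(shard_for("customer:4711"))
```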
Here again, the applications themselves need to change to take advantage of the new architecture.
What is happening now?
The application space as a whole is essentially going through its own transformation. At its most basic, this means that companies are in the process of rewriting business applications to take advantage of available data and to embrace a new architecture more capable of scaling than before.
Note that an additional property of scaled-out applications is that they tend to be more resilient to failures in the infrastructure. Applications written expressly for scale-out environments tend to be designed not with the goal of eliminating failures but rather with the goal of making failures transparent to the rest of the application. By replicating data in multiple places (across servers and across racks, for instance), Big Data applications are less reliant on any individual server or switch.
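A minimal sketch of that kind of rack-aware replication follows, loosely modeled on how Big Data stores place copies; the rack and node names are invented.

```python
import random

def place_replicas(nodes_by_rack, local_rack, replicas=3):
    """Keep one copy on the local rack and spread the rest across other racks."""
    placement = [random.choice(nodes_by_rack[local_rack])]
    other_racks = [r for r in nodes_by_rack if r != local_rack]
    for rack in random.sample(other_racks, min(replicas - 1, len(other_racks))):
        placement.append(random.choice(nodes_by_rack[rack]))
    return placement

racks = {
    "rack-a": ["node-a1", "node-a2"],
    "rack-b": ["node-b1", "node-b2"],
    "rack-c": ["node-c1", "node-c2"],
}
# Losing any single server or top-of-rack switch leaves at least one copy reachable.
print(place_replicas(racks, local_rack="rack-a"))
```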
But the application evolution isn’t just confined to companies with large enterprise applications. An entire industry has emerged to support a transition from multi-tiered applications to the current generation of flat, scaled-out applications pioneered by the likes of Facebook and Google. If initiatives like Mesos and Docker are any indication, the future of high-performance applications will exist only in distributed environments, with operating system and toolkit support.
Where is the network in all of this?
Overlooked in the transition is the network. For decades, the network has been built to be intentionally agnostic to what is running on top of it. While there have been intermittent periods where terms like network-aware and application-aware have been bandied about, the majority of networking since its inception has been an exercise in creating connectivity by providing bandwidth uniformly between any endpoints on the network.
The entire premise of scaling out is providing workload capacity in small chunks and then distributing applications across those chunks. In the case of compute, this is done by creating many small cores (or VMs) and then moving the applications to the available compute. In the case of storage, capacity and processing are spread across a number of nodes, and application workloads are distributed to free capacity. In each of these cases, small blocks of capacity are created and the application workloads are moved to them.
How should the model for networking evolve? If scalable solutions all have the property that application workloads are moved to where capacity is present, then networking needs to go through some fairly foundational changes.
Networking today is based on a set of pathing algorithms that date back more than 50 years. The whole of networking is built on Shortest Path First algorithms that essentially reduce the paths in the network to the set with the fewest hops (or lowest cost). There might be hundreds of ways to get from point A to point B, but the network will only use the subset that share the shortest path. Technologies like Equal-Cost Multi-Pathing (ECMP) then load balance traffic across paths of equal cost.
If the objective is to identify where there is capacity and push application flows across the least congested links, there will need to be fundamental changes in how networking functions to account for non-equal-cost multi-pathing (that is, fanning traffic across all available links).
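The sketch below contrasts the two behaviors using invented paths and utilization numbers: ECMP hashes a flow onto one of the equal-cost shortest paths, while a congestion-aware approach considers every available path and prefers the one with the most headroom.

```python
import zlib

PATHS = [
    {"name": "A-B",     "hops": 2, "utilization": 0.90},
    {"name": "A-C-B",   "hops": 3, "utilization": 0.10},
    {"name": "A-D-C-B", "hops": 4, "utilization": 0.05},
]

def ecmp_pick(flow_id: str) -> dict:
    # Only the shortest paths are candidates; a hash of the flow picks one,
    # regardless of how loaded that path already is.
    shortest = min(p["hops"] for p in PATHS)
    candidates = [p for p in PATHS if p["hops"] == shortest]
    return candidates[zlib.crc32(flow_id.encode()) % len(candidates)]

def least_congested_pick() -> dict:
    # Every path is a candidate; prefer the one with the most free capacity.
    return min(PATHS, key=lambda p: p["utilization"])

print(ecmp_pick("10.0.0.1->10.0.1.1:443")["name"])   # always A-B here
print(least_congested_pick()["name"])                # A-D-C-B despite more hops
```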
Next-generation application requirements
If the Third Platform era is characterized by a new generation of applications, understanding what those applications require will determine how infrastructure in support of those applications must evolve.
Third Era applications have the following properties:
- Horizontally scaled – Applications will tend to be based on scale-out architectures
- Agile – With an eye towards facilitating service management, interactions (from provisioning to troubleshooting) will be highly automated—across infrastructure silos
- Integrated – To achieve the performance and scale required, compute, storage, networking, and the application will all be integrated
- Resilient – Distributed applications will not be designed for infrastructure uptime but rather for overall application resiliency (fault tolerant, not fault free)
- Secure – With data underpinning many of these applications, security and compliance (along with auditability) will be key
These properties will determine how each of compute, storage, and networking must evolve.
Scale-out networking
The network that supports scale-out applications will itself be based on a scale-out architecture. The key property of scale-out is less about the ultimate scale and more about the path to that scale. If applications scale by adding instances (on servers, in VMs, or in containers), then the supporting infrastructure must keep pace by enabling additional capacity as needed.
There are two facets here that are important:
- Graceful addition of new capacity – Because application capacity will be turned up as needed, the requisite infrastructure capacity must be easy to add. Additional servers should be added without significant re-architecture efforts, storage servers must be added without re-designing the cluster, and network capacity must be added without going through a massive datacenter deployment exercise. For leaf-spine architectures, growth occurs through step-function-like scaling: when the number of access ports requires the addition of a new spine switch, for example, the entire architecture must be revisited and every device re-cabled (a back-of-the-envelope sketch of this follows the second facet below). This incurs a significantly longer delay than either the compute or storage equivalents. A next-generation network designed with the graceful addition of new capacity in mind would allow for non-disruptive capacity additions.
- Scale down capacity when it is not needed – While most scaling discussions focus on scaling up, it is equally important to scale down. For instance, if a specific application requires less capacity at certain times or under certain conditions, the supporting infrastructure must be capable of redeploying that capacity to other applications or tenants as it makes sense. Traditional networking using leaf-spine architectures uniformly distributes capacity regardless of application requirements. Next-generation network architectures should be able to dynamically reclaim capacity where it is not needed and reapply it where it is. This means leveraging technologies like WDM, which allows capacity to be treated as a fluid resource that can be applied using programmatic controls from a central management point.
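To make the step-function scaling in the first facet concrete, here is a back-of-the-envelope sketch. The port counts are illustrative, the model assumes access and uplink ports run at the same speed, and each leaf connects once to every spine.

```python
def fabric(leaves: int, spines: int, access_ports_per_leaf: int) -> dict:
    # Each leaf connects once to every spine, so uplinks per leaf == spines.
    return {
        "access_ports": leaves * access_ports_per_leaf,
        "oversubscription": access_ports_per_leaf / spines,
        "new_cables_if_spine_added": leaves,  # every leaf must be touched
    }

# Growing from 2 to 3 spines improves oversubscription, but only by cabling
# a new uplink on every leaf in the fabric at once.
print(fabric(leaves=8, spines=2, access_ports_per_leaf=48))
print(fabric(leaves=8, spines=3, access_ports_per_leaf=48))
```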
It is worth adding a note here about how compute and storage will scale out. If a new resource is added in a different physical location, the role of the network is not just to have the requisite capacity but also to make that resource look as if it is physically adjacent to the other resources. Scaling out is not just about capacity, then; it is also about providing high-bandwidth, low-latency connectivity so that data locality becomes a less stringent requirement than it would otherwise be. This means that resources can be across the datacenter, across a metro area, or across a continent.
Agility
Agility, put simply, is about making change faster. The notion that you can plan your architecture years in advance and then allow the system to simply run is just no longer true. When applications and the data they use are dynamic, change is a certainty. The question is: how do you deal with that change?
There are two ways to deal with change: automate what you can, and make everything else easier.
The currency of automation is data. For anything to be automated, data must be shared between systems in a programmatic way. Automation is, in many ways, a byproduct of effective integration, but integration is not by itself the entire story. To automate, you must understand workflows—how things are actually done. This is an exercise in understanding how information is layered based on frame of reference. When there is an issue with a web server, automation is less about taking some action and more about collecting all of the data related to that web server across all of the infrastructure. The challenge in automating things is knowing what action to take, not reducing the keystrokes it takes to execute the command.
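A minimal sketch of that data-gathering step is below. The three collector functions are placeholder stubs standing in for whatever compute, storage, and network management APIs exist in a given environment; none of them are real interfaces.

```python
def get_vm_status(host):       # placeholder: would query the hypervisor or CMDB
    return {"state": "running", "vcpus": 4}

def get_volume_latency(host):  # placeholder: would query the storage cluster
    return {"read_ms": 1.2, "write_ms": 2.4}

def get_port_counters(host):   # placeholder: would query the attached switch
    return {"in_errors": 0, "drops": 17}

def collect_context(host):
    """Gather related state from every silo before deciding what action to take."""
    return {
        "compute": get_vm_status(host),
        "storage": get_volume_latency(host),
        "network": get_port_counters(host),
    }

print(collect_context("web-01"))
```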
It is impossible to automate everything, so the remaining elements of the network must be more flexible and easier to wield than we have become accustomed to in networking. This means simplifying architectures so that we are deploying fewer devices and ports. It means reducing the number of control points in the network so we have fewer places to go to make changes. And it means implicitly handling behaviors that have traditionally been managed through pinpoint control over thousands of configuration knobs.
Integrated infrastructure
The future of IT infrastructure is based not on silos of compute, storage, and networking capacity but on the various subsystems working together to deliver application workloads. Accordingly, solutions that service the Third Platform era will either be tightly-integrated solutions from a single vendor, or collections of components that are explicitly designed to be integrated.
In the case of the former, the concern for customers is the pricing impact of an integrated solution. Vertical stacks are inherently more difficult to displace, which means that incumbency creates a strong barrier to adoption, which in turn tends to drive pricing higher (even as there is already a lot of pressure to push pricing down).
In the case of the latter, the integration will need to be more than testing things alongside each other. This is not about the coexistence of equipment but rather the intelligent interaction of devices from different classes of vendor. From a Plexxi perspective, this is why efforts like the Data Services Engine (DSE) are important. They provide a means of integrating, but more importantly, they provide a framework that is easily extensible to other infrastructure so that whatever comes next can also be integrated. Additionally, this integration layer is open source, so the likelihood of lock-in is significantly lower.
Resilience
The next-generation platform is resilient. Rather than designing for correctness and relying on a never-ending battery of tests to ensure efficacy, infrastructure is constructed using building blocks that are themselves resilient to failures and tolerant of issues in either the hardware or the software. From a compute perspective, this means having the ability to fail over to other application instances or containers. For storage, this means replicating data across multiple servers in multiple racks. For networking, this is all about having multiple paths between endpoints so that if one goes down, resources are not stranded.
With resiliency built in via Plexxi’s inherent optical path diversity, the emphasis shifts to failure detection and failover. Path pre-calculation plays a major role in keeping failover and convergence times low.
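As a rough sketch of what pre-calculation buys you, the example below computes a primary path and a backup path that avoids the primary’s links ahead of time, so failover becomes a lookup rather than a fresh computation. The topology is invented and deliberately tiny.

```python
from collections import deque

# A toy topology: four switches with two link-disjoint ways from A to D.
GRAPH = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

def shortest_path(src, dst, banned_links=frozenset()):
    """Breadth-first search that ignores any link listed in banned_links."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in GRAPH[path[-1]]:
            if (path[-1], nxt) in banned_links or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(path + [nxt])
    return None

# Pre-calculate both paths up front; on failure, swap to the backup immediately.
primary = shortest_path("A", "D")
primary_links = {(a, b) for a, b in zip(primary, primary[1:])}
primary_links |= {(b, a) for (a, b) in primary_links}
backup = shortest_path("A", "D", banned_links=primary_links)
print("primary:", primary, "backup:", backup)
```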
Over time, resilience will include pushes into DevOps-style deployment models where the infrastructure is treated as a single system image that is qualified before changes are deployed. This will require integration with DevOps tools—not just tools like Chef and Ansible but also tools like Jenkins and Git.
Security
Security is about keeping data secure, not just keeping equipment secure. This means that traffic must be isolated where necessary, auditable when required, and ultimately managed as a collection of tenant and application flows that can be treated individually as their payloads require. To get differentiated service, there will need to be a policy abstraction that captures the workload requirements of individual tenants and applications. For instance, if a workload requires special treatment, it can be flagged and redirected as needed.
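A minimal sketch of such a policy abstraction follows. The field names and example policies are invented; a real system would compile these declarations into forwarding and telemetry behavior.

```python
from dataclasses import dataclass

@dataclass
class FlowPolicy:
    tenant: str
    application: str
    isolate: bool = False        # keep traffic on dedicated paths/segments
    audit: bool = False          # export or mirror flow records for compliance
    priority: str = "normal"     # e.g. "normal" or "latency-sensitive"

POLICIES = [
    FlowPolicy("finance", "trading", isolate=True, audit=True, priority="latency-sensitive"),
    FlowPolicy("shared", "web"),
]

def policy_for(tenant: str, application: str) -> FlowPolicy:
    # Look up the declared policy; unknown workloads get default treatment.
    for p in POLICIES:
        if p.tenant == tenant and p.application == application:
            return p
    return FlowPolicy(tenant, application)

print(policy_for("finance", "trading"))
```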