Tagged: Virtual Network Functions

Lean NFV Ops: scaling basics.


“Just-in-Time means making only what is needed, when it is needed, and in the amount needed […] it is necessary to create a detailed production plan […] to eliminate waste, inconsistencies, and unreasonable requirements, resulting in improved productivity.” Just-in-Time, Philosophy of Complete Elimination of Waste, by Toyota.

“Each unpredictable feature demanded by customers is considered an opportunity […] this requires rapid adjustment of production capability. Dynamic and flexible network utilizations in functional modules can maximize the strength of each resource and the overall risk and costs are reduced.” Flexible Manufacturing System for Mass Customization Manufacturing, by Guixiu Qiao, Roberto Lu and Charles McLean.

“Providing capacity in a more expedient fashion allows us to deploy a functioning and consumable business service more quickly […] at the core of our self-service functionality is a hosting automation […] On-demand self-service is a critical aspect of our cloud environment; however, without underlying business logic, controls, and transparency, an unconstrained on-demand enterprise private cloud will quickly exceed its capacity by doling out allocations beyond its supply.” Implementing On-Demand Services, by Intel.

“Elasticity is commonly understood as the ability of a system to automatically provision and deprovision computing resources on demand as workloads change […] in a way that the end-user does not experience any performance variability.” Elasticity in Cloud Computing: What It Is, and What It Is Not, by Nikolas Roman Herbst, Samuel Kounev and Ralf Reussner.


image

These past few months I’ve followed a few discussions on virtualization and scalability.

There is such a thing as becoming a victim of success when pent-up demand strikes and a business fails to scale accordingly.

In the telecoms industry, capacity management has typically prompted over-engineering decisions and long lead times of a year or more. This can result in delayed breakeven points, underutilized resources and limited offerings due to the higher cost of oversubscribing.

Lean means staying nimble at any size, streamlining and keeping lead times as short as possible by design. Effective and efficient capacity management relies on understanding economies of scale and scope. The first relates to how larger scale drives more efficient utilization levels and, therefore, lower and more competitive average costs.

Scope means taking advantage of synergies and common infrastructure and platforms to deliver a variety of services, application multi-tenancy being an example in NFV’s (Network Functions Virtualization’s) context.

Active portfolio management follows: complementary application lifecycles can share resources and raise overall utilization levels in the process. Moreover, some applications can be deconstructed and modularized so that specific subsets become standalone services available to (or reused by) other applications. These can be decoupled to join a common pool and scale independently.

In some discussions we refer to growth models where “scale” follows a “vertical” approach while “scope” adds breadth with new functions and is, therefore, a horizontal expansion model. This breakdown allows for plotting and segmenting growth/de-growth scenarios in a simple matrix. I am experimenting with new ways of helping visualize these concepts. This is work in progress and the final result will look different from the early drafts posted here. Still, I think they can be used for the time being.


One other thought… elasticity relates to following demand curves: supply meets demand by dynamically adapting capacity. This entails provisioning, deprovisioning and a virtuous circle of gracefully tearing down resources, which are freed up and exposed for other applications to leverage. Elastic computing makes us think of unlimited just-in-time capacity, but there are upper and lower boundaries involving diminishing returns. It just so happens that virtualization has pushed the envelope by considerably widening and shifting these constraints.
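
To put rough numbers on that envelope, here is a minimal sketch of a capacity rule that follows the demand curve while clamping allocations between a floor and a ceiling. Every figure and name in it is hypothetical, chosen only to illustrate the point:

```python
import math

# Hypothetical bounds: elasticity is not unlimited, so allocations are clamped.
MIN_INSTANCES = 2       # floor, e.g. keep a mated pair for availability
MAX_INSTANCES = 40      # ceiling, where diminishing returns or budgets cap growth
RPS_PER_INSTANCE = 500  # assumed per-instance throughput

def target_capacity(demand_rps: float) -> int:
    """Follow the demand curve while staying inside the elastic envelope."""
    wanted = math.ceil(demand_rps / RPS_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, wanted))

print(target_capacity(7200))  # -> 15 instances, well within the envelope
```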

It is worth reflecting on Gordon Moore’s law in this context: many incremental and disruptive innovations yield exponential performance improvements in today’s cloud age. That can be coupled with NFV’s (Network Functions Virtualization’s) shift from lengthy lead times, cumbersome operations and costly dedicated hardware to automated systems working with a wide supply of more affordable COTS (Commercial Off-The-Shelf) hardware and open source solutions.


image

Let’s now focus on the notion of service decomposition and how that impacts scaling.

This exercise often starts with deconstructing monolithic systems typically relying on vertically integrated architectures, then looking at the actual services involved, dependencies, flows… and figuring out what is best to keep integrated vs. modularized, centralized vs. distributed.

This also entails looking at opportunities to streamline development time, such as code reuse and processes worth exposing by means of APIs (Application Programming Interfaces). Note that many applications do not need to duplicate assets and can become distributed systems consuming resources and processes running elsewhere.

In this section’s graphic, the application is a VNF (Virtual Network Function) which has been decomposed and right-sized to run in three VMs (Virtual Machines) of different sizes instead of procuring a single physical server for just this application.

Lighter gray blocks at the back end represent a pool of services available to that and other applications. As an example, when decoupling an application’s logic from the app’s data we get to leverage DaaS (Database as a Service) as one of the shared services.
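
As a toy sketch of that decoupling, and only under the assumption of a generic key-value interface (the class and method names below are made up for illustration), the application keeps its logic thin while the data lives in the shared service:

```python
# Hypothetical sketch: application logic is decoupled from the data,
# which lives in a shared DaaS consumed through a simple client interface.
class DaasClient:
    """Stand-in for a shared Database-as-a-Service; other applications
    could consume the same backing service independently."""
    def __init__(self):
        self._rows = {}                 # the data lives here, not in the app

    def get(self, key):
        return self._rows.get(key)

    def put(self, key, value):
        self._rows[key] = value

class SubscriberApp:
    """Application logic only; storage is provided by the shared service."""
    def __init__(self, daas: DaasClient):
        self.daas = daas

    def register(self, subscriber_id, profile):
        self.daas.put(subscriber_id, profile)

    def lookup(self, subscriber_id):
        return self.daas.get(subscriber_id)

shared_db = DaasClient()                # one pooled service...
app = SubscriberApp(shared_db)          # ...consumed by this and other apps
app.register("sub-001", {"plan": "unlimited"})
print(app.lookup("sub-001"))
```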

These are the “scaling” terms provided by ETSI (European Telecommunications Standards Institute) NFV reference documents:

  • Scaling up: adding resources (compute, memory, storage) to a given VM.
  • Scaling down: decreasing resource allocation.
  • Scaling out: creating a new instance, adding VMs.
  • Scaling in: removing VMs.

Circling back to service decomposition: there are scaling scenarios where there is no need to go through the trouble of scaling out an entire application, but only the specific service at stake, such as one of the VMs or the database in the previous example.
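
To make those four terms concrete, here is a minimal, hypothetical scaler acting on a single VNF component rather than the whole application. The thresholds and names are illustrative only and do not come from the ETSI documents:

```python
from dataclasses import dataclass

@dataclass
class VnfComponent:
    name: str
    vm_count: int       # scaling out/in changes this
    vcpus_per_vm: int   # scaling up/down changes this

def scale(component: VnfComponent, cpu_load: float) -> str:
    """Pick an ETSI-style scaling action for one component, not the whole app."""
    if cpu_load > 0.85:
        component.vm_count += 1         # scale out: add a VM instance
        return "scale out"
    if cpu_load < 0.25 and component.vm_count > 1:
        component.vm_count -= 1         # scale in: remove a VM instance
        return "scale in"
    if cpu_load > 0.70:
        component.vcpus_per_vm += 2     # scale up: grow an existing VM
        return "scale up"
    return "no action"

db_tier = VnfComponent(name="daas-frontend", vm_count=2, vcpus_per_vm=4)
print(scale(db_tier, cpu_load=0.9))     # only this service scales out
```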

In some other scenarios scaling can prompt application updates and/or upgrades to enable new functionality. Suitable “upgrade windows” can be hard to find when multiple services are in demand and expected to remain always on. A stateless architecture means that the session’s state is kept outside of the application, with the shared database in this example. Traffic can then be redirected to the application’s mated pair: a second instance kept in active standby mode until the maintenance event.
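
A minimal sketch of that stateless pattern, with hypothetical names: because the session state is fetched from and written back to shared storage, either instance of the mated pair can serve the next request during a maintenance window:

```python
# Hypothetical illustration: the handler owns no session state of its own.
shared_session_store = {}   # stands in for the shared database of the example

def handle_request(instance_id: str, session_id: str, event: str) -> str:
    """Any instance can process the request; state lives in shared storage."""
    session = shared_session_store.setdefault(session_id, {"history": []})
    session["history"].append(event)
    return f"{instance_id} served {session_id} ({len(session['history'])} events)"

print(handle_request("vnf-instance-A", "sess-42", "attach"))
# During the upgrade window traffic moves to the mated pair and nothing is lost:
print(handle_request("vnf-instance-B", "sess-42", "handover"))
```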

This also means going beyond 1+1 models where everything is duplicated (the mated pair concept) for failover’s sake. There often are more efficient n+k systems in HA (High Availability) environments. Note that, paradoxically enough, rolling out upgrades happens to be a primary source of maintenance issues, which adds to the need for sustaining service continuity at all times, coupled with zero touch and zero downtime.


Zero touch is delivered by automation, which relies on continuous system monitoring, engineering triggers and preceding work with recipes, templates and/or playbooks (these are alternative terms based on different technologies) detailing what needs to happen to execute a lifecycle event. Scaling is the subject of this post; onboarding, backup, healing and termination are other lifecycle events, just to name a few.
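
A toy sketch of that idea follows. The recipe contents are made up; real deployments would express them as templates or playbooks in whichever technology is in use (Heat, TOSCA, Ansible and the like), but the shape is the same: a named lifecycle event maps to pre-authored steps that the automation executes when monitoring raises a trigger.

```python
# Hypothetical recipes keyed by lifecycle event; steps are illustrative only.
RECIPES = {
    "scale_out": ["allocate VM", "attach networks", "configure app", "add to load balancer"],
    "heal":      ["detach faulty VM", "spawn replacement", "restore state", "verify service"],
    "terminate": ["drain traffic", "deallocate VMs", "release IPs"],
}

def execute(event: str) -> None:
    """Zero touch: a monitoring trigger names the event, the recipe does the rest."""
    for step in RECIPES[event]:
        print(f"[{event}] {step}")

execute("scale_out")
```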

Programmability drives flexible automation, which is data driven and based on analytics. Predictive analytics goes a step further to project and address trends so that actions can be taken in advance. In our Lean NFV Ops demonstration we purposely stimulate network traffic with a load generator to exemplify this. We run scenarios illustrating both (a) fully automated scaling and (b) autonomation by switching to manual controls that put the operations team in charge at every step.
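
The difference between (a) and (b) can be reduced to a single switch, everything else here being hypothetical: in the autonomation case the same analytics trigger fires, but the operations team approves the action before it runs.

```python
def scaling_trigger(load: float, threshold: float = 0.8) -> bool:
    """Analytics side: decide whether a scaling action is warranted."""
    return load > threshold

def act_on_trigger(load: float, fully_automated: bool) -> str:
    if not scaling_trigger(load):
        return "no action"
    if fully_automated:
        return "scale out executed automatically"
    # Autonomation: pause and hand control to the operations team.
    approved = input("Scale out now? [y/N] ").strip().lower() == "y"
    return "scale out executed by operator" if approved else "held for review"

print(act_on_trigger(load=0.92, fully_automated=True))
```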

Autonomic computing is powered by machine learning. Research on NFV autonomics points to the ability to self-configure, especially under unplanned conditions. Looking into automation and distribution modes helps define maturity levels for NFV, that being a topic for another article.


image

Let’s zoom out to discuss scaling in the context of the platform.

ETSI NFV defines MANO as the Management and Orchestration system. “Managing” refers to addressing the application’s lifecycle needs, scaling being one of them. The notion of “orchestrating” focuses on the underlying resources to be consumed.

The MANO layer is thought out as NFV’s Innovation Platform, which I show in purple: the thickness of that layer conveys the degree to which an application uses more (right) or less (left) of MANO’s capabilities. This is an application multi-tenant environment where VNF1 shows a monolithic app example in contrast with VNFn, which is meant to take full advantage of MANO’s automation.

This cross-section shows a horizontal architecture, as the platform supports multiple applications as well as back end systems. Horizontal and vertical solutions scale differently. A common platform presents à la carte features and starts small, growing and scaling to enable homogeneous end-to-end management across applications, while the monolithic approach moves forward with siloed operations on an application by application basis.

One more example: growing by adding interdependent services is a discouraging endeavor when reconfiguring multiple functions becomes overwhelming. SFC (Service Function Chaining) comes to the rescue in a virtual environment by providing network programmability and dynamic automation to create networks connecting new services. NFV’s scaling needs make a good case for SDN (Software Defined Networking), the technology behind SFC.
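
One way to picture a service function chain, with hypothetical function names: the chain is an ordered set of virtual functions that traffic is steered through, so inserting, removing or reordering a service becomes a data change rather than a physical re-cabling exercise.

```python
# Hypothetical service chain: each element is a virtual function applied in order.
def firewall(pkt):        return {**pkt, "inspected": True}
def nat(pkt):             return {**pkt, "src": "203.0.113.10"}
def video_optimizer(pkt): return {**pkt, "optimized": True}

chain = [firewall, nat, video_optimizer]   # re-chaining is just a list edit

def steer(packet, service_chain):
    for vnf in service_chain:
        packet = vnf(packet)
    return packet

print(steer({"src": "10.0.0.7", "dst": "198.51.100.2"}, chain))
```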


image

Now moving to what’s under the hood.

NFVI stands for Network Functions Virtualization Infrastructure. Most typically, what we can see and touch is a data center environment providing resources consumed by the applications such as compute, memory, storage and networking to begin with.

The visual in this section shows a conceptual server farm right under the platform. Blue nodes on the left and brown ones on the right are physically placed at different geographic locations, yet form part of the same NFVI orchestrated by MANO. The gray one is being added: scaling out of the existing infrastructure. The green node lies outside and can be leveraged when bursting:

  • Scaling out: adding more servers (gray cube).
  • Scaling up: leveraging clusters and/or distributed computing to share the load (blue and brown cubes).
  • Bursting: tapping into third party infrastructure to address capacity spikes (green cube).

Note that, in this context, scaling up can also mean upgrading servers to handle larger workloads. This can also be about using an existing chassis while replacing a server with a new node featuring more processing, data acceleration, lower energy needs, etc.


Early on we talked about COTS being easier to scale out when compared to proprietary dedicated hardware. That has partly to do with standardization, centralized management and consolidation, the existing supply chain for x86 systems and node automation.

We can also factor in consumption-based models where a given application’s business case is not impacted by up-front CAPEX (Capital Expenditures). Instead, the application’s business case accounts for resource usage levels which, once again, benefit from economies of scale and scope. The notion of elasticity makes infrastructure planning transparent to the application.

Capacity and performance management skills remain of the essence: the move to applications based on stateless architectures means that scaling distributed applications places a greater emphasis on API behavior, addressing capacity and speed in terms of RPS (Requests Per Second). Even so, the telecommunications industry is known to require high capacity, low latency SFC, which is driving data plane acceleration solutions.
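
As a back-of-the-envelope illustration (all numbers are hypothetical), sizing a stateless, API-fronted service then reduces to simple arithmetic over RPS and headroom:

```python
import math

peak_rps = 12_000         # hypothetical observed peak demand
rps_per_instance = 900    # hypothetical measured per-instance throughput
headroom = 1.3            # keep roughly 30% spare capacity for spikes and failover

instances_needed = math.ceil(peak_rps * headroom / rps_per_instance)
print(instances_needed)   # -> 18 instances in this illustrative case
```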


image

We can now zoom out.

Scaling is not a new thing or need. Conventional architectures can scale; they just don’t do it quickly or cost-effectively enough. Taking months and years to get the job done risks missing markets and taxing resources which would have been needed to create innovative services.

Admittedly, one of the objectives behind writing this was wrestling with jargon by outlining “scaling” terms in context, whether related to application, platform or infrastructure. Hopefully, that goal was accomplished. Otherwise, please let me know.

One other thought… NFV is a change agent. Hence, cool technical wizardry alone does not suffice. We are discussing emerging technologies that spark interest in connecting the dots across behavioral economics (and not just the business case), organizational cultures and decision making in the telecoms sector. Understanding the human factor matters.

As usual, I will be glad to continue the conversation by exchanging emails, over LinkedIn or in person if you happen to be around at IDF15, the Intel Developer Forum, at San Francisco’s Moscone Center on August 18-20.

Approaching NFV readiness and maturity levels.


Crossing the chasm between inventing and innovating has a lot to do with a technology’s diffusion level and depth of adoption.  Generally speaking, inventions talk to new forms, functions and applications while innovations have more to do with whether that novelty becomes a game changer.

Innovations qualify as such because they cause a significant industry impact.  This is beyond just filing for a patent or making something commercially available.  Otherwise, we would just be talking about inventions.

When an emerging technology is first conceived, those of us rallying behind it might do so because we sense and foresee potential.  We strive to work with all of the assumptions involved in how it will unfold, evolve and even transform and mutate.  But, accidental innovation happens.  Moreover, a majority of entrepreneurs would acknowledge that what made a business successful might not necessarily be the source concept they started with.

As Antonio Machado (Spanish poet, 1875-1939) stated in one of his most popular writings: “we make paths as we go.”  While NFV (Network Functions Virtualization) has crossed a point of no return and aims to shift from invention to innovation status, we cannot yet benefit from defining maturity levels in hindsight.  Moving forward in the midst of changes and uncertainty calls for exercising thought leadership.


image


NFV qualifies as an emerging technology of great interest in the telecommunications sector, jointly with SDN (Software Defined Networking).  When I worked on the above visuals, my goal was to convey a dynamic service delivery environment resembling neural activity with different but interconnected layers.  I see this as an application driven and constantly morphing system where new connections get instantiated, any needed assets surface just in time and resources get fired up without incurring self-defeating trade-offs, hence the above right chart.

Note that this is in contrast with today’s rigid network systems where service innovativeness can be either halted or negatively impacted by lengthy lead times and cumbersome operational constraints.  Couple that with performance and reliability concerns.  As a matter of fact, the way Clayton Christensen defines his “innovator’s dilemma” is worth reviewing and understanding in this context.

Having depicted a desirable vision within the scope of what’s eventually possible, when undertaking technology roadmapping my next batch of questions is more about [a] what fruitful immediate steps can be undertaken now, as well as [b] looking for a sense of direction by outlining the journey… while keeping things flexible and agile enough to pivot as needed.  The first question’s answer has to do with the notion of “incremental innovation” while the second question can be addressed in terms of “disruptive innovation.”

As an example, the journey (above left chart) starts with getting to leverage tools and technologies that currently exist, such as today’s many virtualization projects, which deliver early success stories.  In some cases, this just means achieving better asset utilization levels as new virtual machines can be easily created.  This circumvents lead times for new equipment orders and, hence, can also translate into faster time to market.  Early success stories are like taking baby steps that build confidence and momentum: learning to walk before we can run.

By looking at how to best address [a] and [b], and zooming in and out in the process, we can also make decisions on which projects should be future-proofed at each step, and which ones make sense to continue to exploit as trail-blazing, yet experimental, bets.


image


About a year later I delivered these other two charts that you see right above this paragraph.  We are talking about a turning point where pilot virtualization projects were expected to level the playing field already.  Time flies in the cloud age and table stakes prompt a need for moving up the value chain.  This means seeking the kind of advances and differentiation that convert into new competitive advantages… and asking whether first movers set the pace, can sustain or further their lead, and get a better overall deal.

This 2×2 matrix maps out capabilities (vertical axis) and readiness (horizontal axis) progressing from proof of concept prototyping all the way to live deployments.  It also helps discuss two other significant chasms based on what it really takes for initiatives in the lower left quadrant to move forward from the labs and pre-production to live environments, as well as whether they evolve upward toward a pure carrier cloud environment, that being the ultimate end goal.

The spiral on the right evolved from a radar chart.  Admittedly, I keep toggling between the above spiral and my source radar chart depending on what I need to communicate.  When showing the spiral version I discuss a fast evolving and complete delivery and operations lifecycle.  Then by switching to the radar version I can plot how far a given project is along any of those axes.  What usually follows in that discussion is drawing a polygon connecting the multivariate data and observing how the resulting shapes differ across projects due to product management decisions.


image


Last but not least, here is another multivariate view for a more recent talk with observations and insights linking how infrastructure, analytics, management systems and services evolve and, once again, where a given project might lie.  This time around, my goal was to lead the discussion with four hot and attention grabbing topics: [1] cloud fabrics, [2] big data, [3] automation and [4] software defined elements as a service.

This is an animated chart uncovering a column for each topic.  Once completed, it becomes easier to engage in a meaningful conversation on the bigger picture where these four pillars turn out to be interdependent.  The above display was the result of a whiteboarding exercise where a fifth column outlined ecosystem items and a sixth one was dedicated to human factors and organizational behaviors.


You can see these and other charts in context as part of Alcatel-Lucent presentations such as:

These materials are also available in the “content” section of this blog.