Category: CTO

Executive Forum on Digital Transformation (DX) – Chicago, September 12, 2017


“Argyle Executive Forum is bringing together senior digital & IT executives from a variety of industry verticals for our biannual CIO Chicago Forum. Throughout a full day of content and networking, we will focus on the most pressing issues facing IT executives with regards to leading the business through digital transformation, with an agenda geared specifically towards Chief Information Officers, Chief Data Officers, Chief Digital Officers, as well as Data/Analytics/MIS VPs, Directors, and Architects in a leading role.”

Leading the Business Through Digital Transformation – Argyle.



image

First, thanks to the team at Argyle for what turned out to be a timely and insightful conference on DX (Digital Transformation). Nokia was one of the Executive Forum’s sponsors as a Senior Supporter.

It is worth noting that this event featured partners we work with, such as HP Enterprise (Thought Leader Sponsor) and IBM (Breakout Session Sponsor).

That speaks to the criticality of collaborative undertakings as Digital Transformation becomes a pressing objective across industries, academia, public service and government sectors.

What follows are my notes and personal insights. While all the sessions and discussions were quite relevant, I would like to highlight the opening keynote, which set the tone and narrative of the event.


image

James P. MacLennan, SVP & CIO at IDEX, discussed “The Five Components of a Great Digital Strategy,” which addressed the fact that “Design Thinking”, “Human Factors” and a collaborative culture involving interdisciplinary workstyles and “Great Teams” have become of the essence.

Moreover, he stated that “a Digital Business will only succeed when it understands how to connect with people.” The “human element” and, therefore, “people centered” strategies turn out to be critical success factors.

I would like to add that this entails engineering a continuum of (a) stakeholders, who are all human personas by definition, and to do so across (b) UX (user experience) and CX (customer experience) domains.

This job takes (c) a holistic understanding of customer facing (front end) and resource facing (back end) elements forming a coherent end-to-end system. Otherwise, operational fragmentation will take a toll and will deny the intended DX benefits.


image

James’ presentation displayed the convoluted UI (user interface) shown in this picture to illustrate the paradox of well-intentioned yet counterproductive implementations that negate transformation initiatives.

Here is another valuable insight coming out of Argyle’s Executive Forum: information technologies (IT) and operational technologies and processes can no longer be worlds apart, which demands superb cross-functional teamwork.

Cognitive overload, deficient information architecture, and poor usability translate into human error, risk aversion, costly budget overruns, missed or abandoned goals, and so on.

Any and all of these issues, combined, can silently impact quality or simply lower the bar as a business works through noisy and cluttered operational environments. That is hardly the stuff that operational excellence calls for.


Obviously, in the context of CX, customer satisfaction becomes harder and harder to attain and, more specifically, to deliver consistently.

Predictability and consistency are key objectives for any Quality Management program. If that scenario alone wasn’t troublesome enough, Customer Delight (rather than just satisfying agreed upon requirements) is Design Thinking’s ultimate performance indicator, which commands a premium clearly beyond reach under those circumstances.

Quality management wise, “satisfaction” is the fulfilment of expected specifications while “delight” is about great pleasure, or great satisfaction if you will. “Satisfaction” can be rationalized and is the acceptance ticket to be in business. “Delight” accounts for human affects (emotions) and is a powerful source of differentiation. Those who think that’s just about splitting hairs should take a pause and think twice because DX is set to enable game changing experiences on all counts and fronts.


Throughout the forum, session after session, Jim’s “Design for Humans” principle gained more and more critical mass as presenters and panelists discussed why we should be mindful of the user journey and how to best improve all touch points along the way.

In one of the panel discussions this became even more evident when a question on aligning people, processes and technologies pointed to difficult prioritization exercises. Note that there was immediate consensus on the need to put people first and humanize technology and processes by applying Design Thinking, a human-centered methodology that is a cornerstone of the job of creative technologists.

That means projects driven by clear missions and specific experiential outcomes and lifecycles (Goal-Directed Design) rather than just an I/O approach. It also means rapid experience prototyping and A/B multivariate testing to explore possibilities, since Design Thinking is a serial innovation engine.
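
To make the A/B side of this concrete, here is a minimal sketch of how two experience variants might be compared with a simple two-proportion test. The conversion counts are made-up placeholders, not data from any project mentioned here.

```python
# Minimal sketch: comparing two experience prototypes (A/B test).
# The conversion counts below are hypothetical placeholders.
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Variant A: current flow; Variant B: redesigned flow (hypothetical numbers).
z = two_proportion_z(conv_a=120, n_a=2000, conv_b=155, n_b=2000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a significant difference at ~95% confidence
```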



imageLet’s connect some more dots.

Chicago’s NPR station aired a rerun of The Power of Design this past weekend. The discussion was centered on “How Can We Design For A Better Experience.”

By the way, TED’s acronym actually stands for the convergence of Technology, Entertainment and… Design.


Interview with Tony Fadell, one of the main designers of the iPod (Apple) and founder of Nest (Google).

 “Design begins by also noticing all those little problems that many ignore (…) we go through our lives accepting these design flaws that actually don’t improve our lives.”

“Steve Jobs challenged us to see our products through the eyes of the customer, the new customer, the one that has fears and possible frustrations, and hopes and exhilaration that the new technology can work straight away for them. He called it “staying beginners” and wanted to make sure that we focused on those tiny little details to make things work faster and seamless for the new customers.”

“There is this positive emotional momentum that builds on itself at each step of the process (…) when you hit a brick wall you lose all the momentum (…) and throw away an entire great experience.”

“There are two halves to design, just as there are two halves to your brain, the emotional part and the rational part. If you want people to truly adopt your product it has to have an emotional component, something that grabs you (…) that unlocks your curiosity, it also needs to rationally work (…) because people see value beyond the sexiness.”


Interview with Joe Gebbia, Airbnb cofounder.

“Any time that you see duct tape in the world, that’s a design opportunity (…) it’s an indicator that something is broken, that something did not perform the way it was designed to and that there is an opportunity to improve it.”

“Design is the key to (Airbnb) success (…) and as a competitive advantage, design is the thing that can separate you (…) the next thing that can differentiate you. All things being equal, two comparable products side by side with the same technical features and components… you would be crazy to choose the one that is harder to use.”

“Airbnb’s design decisions not only made the service easy to use but also helped millions of complete strangers trust each other (…) and open their homes (…) design is more than the look and feel of something, it is the whole experience.”


Related Posts:

Human Factors Engineering: Big Data & Social Analytics to #MakeTechHuman


“Netflix’s analytical orientation has already led to a high level of success and growth. But the company is also counting on analytics to drive it through a major technological shift […] by analytics we mean the extensive use of data, statistical and quantitative analysis, explanatory and predictive models, and fact-based management to drive decisions and actions.” – Competing on Analytics by Thomas H. Davenport and Jeanne G. Harris.

“Big data changes the nature of business, markets, and society […] the effects on individuals may be the biggest shock of all […] this will force an adjustment to traditional ideas of management, decision making, human resources and education.” – Big Data by Viktor Mayer-Schonberger and Kenneth Cukier.

“Social physics functions by analyzing patterns of human experience and idea exchange within the digital breadcrumbs we all leave behind as we move through the world […] the process of analyzing the patterns is called reality mining […] one of the ten technologies that will change the world [according to MIT Technology Review].” – Social Physics by Alex Pentland.


image

It’s Saturday night and I am happy to share that I just submitted my last two Jupyter notebooks and, therefore, completed MIT’s first certificate course on Big Data and Social Analytics.

This was one intensive summer with very little time left for anything else beyond work, day-to-day family life, and spending most evenings and weekends studying. MIT BD&SA course developers estimated a weekly workload of 8 to 12 hours over 9 weeks. Though, many of us spent north of 15 hours a week to cover videos and readings, Python programming and written assignments, quizzes, and forum discussions. By the way, all definitely worthwhile.

While taking this course, I couldn’t help recalling the kind of scarce data we used to work with when I earned my postgraduate degree in Human Factors Engineering at BarcelonaTech in the early 90s, graduating with the program’s first class.

By means of an example, one of the industrial ergonomics projects kicked off with statistical data provided by the military: stats on Marines fit for service were the only readily available physiological data for designing a local civilian application. We knew that wasn’t a representative model of the target user base for the industrial workstation under design. Back then, undertaking a proper data collection study was costly and beyond project means.

Our group worked with small data by testing things on ourselves and leveraging in-house dogfooding to some extent. Unfortunately, these kinds of findings might not adequately reflect the reality of human variability. If overlooked, that can result in designs that optimize for a set of “proficient some” while undermining ease of use for many others, missing the mark in the process. Let’s keep in mind that, as clearly outlined in Crossing the Chasm, early success among devoted early adopters might not translate into mainstream praise and popularity, failing to grow the user base and failing in the market.
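
As a back-of-the-envelope illustration of that variability problem, the sketch below uses made-up stature distributions (not real anthropometric data) to show how a design range derived from a narrow sample can leave out a sizable share of the broader target population.

```python
# Illustrative sketch (made-up numbers): how much of a broader population falls
# within a design range derived from a narrower reference sample.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stature distributions (cm): a narrow "fit for service" sample
# versus a broader civilian workforce.
sample = rng.normal(loc=176, scale=5, size=10_000)       # narrow reference sample
population = rng.normal(loc=170, scale=9, size=10_000)   # broader target users

# Design range set to the 5th-95th percentile of the narrow sample.
lo, hi = np.percentile(sample, [5, 95])
coverage = np.mean((population >= lo) & (population <= hi))
print(f"Design range {lo:.1f}-{hi:.1f} cm covers {coverage:.0%} of the target population")
```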


image

To be clear, working with secondary research (e.g. reference data sets from third parties) and conducting primary research by testing things on ourselves, coupled with in-house dogfooding, are all valuable practices. Though not necessarily enough to make a compelling difference in today’s “big data” day and age.

MIT BD&SA discusses the benefits of working with living labs driven by UCD, User Centered Design. We now have commercial off-the-shelf technologies (smartphones, Internet of Things, sensing networks, machine learning) at our disposal, which allow us to capture user actions and behavior on location and, most importantly, with greater data resolution.

Couple that with ethnographic research focusing on understanding human factors by observing users in their own environment and usage context and, most importantly, capturing their PoV, Point of View at each step.

So, those of us working on Human Factors Engineering and driven by User Centered Design to deliver processes, tools, products and services can create new experiences that take the human possibilities of technologies to unprecedented levels, with analytics becoming of the essence to #MakeTechHuman.



image

Big Data Revolution. TED Radio Hour. NPR.

image

The Human Face of Big Data. PBS.

image

Source: Business Innovation Demands Accelerated Insights. Intel.

image 

MakeTechHuman. Nokia.


See you at RecSys 2016 next week : )

#MakeTechHuman

Lean NFV Ops Workshops at CIC.


image

We have hosted a number of workshops with customers, partners, analysts and public officials worldwide. Back in November we welcomed Light Reading analysts in Naperville, IL, who had meetings with IP Platforms’ leaders and joined a live demonstration at the Cloud Innovation Center (CIC).


“‘[R&D work] is in the operations,’ Bhaskar Gorti, Alcatel-Lucent (NYSE: ALU)’s president of IP Platforms, told Light Reading without hesitation at a recent on-site visit: ‘Getting a network function to run in a virtualized network is fine, but the reality is that there will be a hybrid world of virtual and physical networks. How do you operate it?’ In fact, its Naperville, IL offices are full of Lean Ops demos that show evidence of this R&D work.” – “Alcatel-Lucent’s NFV Boss: Operations Is Key R&D” by Sarah Thomas, Editorial Operations Director at Light Reading.

“A visit to Alcatel-Lucent’s Naperville, IL campus, west of Chicago, shed light on how the company is focusing its efforts around software and the cloud.” “Ted East showed us how easy it is to spin up services like firewalls in the cloud, thanks to SDN and NFV. Then he had us cause a network failure by playing Whack-a-Mole in order to demonstrate how fast the network can rebuild itself.” – “Alcatel-Lucent Field Trip” by Elizabeth Miller Coyne, Editor at The New IP.


image


CIC’s NFV Solutions Hub provides a truly immersive operational experience of running a cloud based telco using NFV technologies. The hub advances collaboration with network operators and ecosystem partners by enabling real solutions to be built and validated as well as providing a hosted cloud facility for the Alcatel-Lucent community.


The most visible outcome involves Proof of Concept (PoC) projects, high impact demonstrations and practical assistance with onboarding and validating Virtual Network Functions (VNF). Light Reading’s team experienced CIC’s Lean NFV Ops program, which showcases a fully virtualized end-to-end VoLTE environment and a wide range of service lifecycle use cases and operations.


An interactive demonstration deploys Rapport Cloud Communications and IPR&T Cloud Mobile Core (vEPC) working with Motive Dynamic Operations (MDO), CloudBand Management System, and Nuage Networks’ Virtualized Services Platform (VSP). This all runs live on commercially available software, including CloudBand Nodes, which leverage OpenStack and are widely considered among the most stable and mature platforms for NFV. Furthermore, this session promoted the role of our Ecosystem Program, now over 60 members strong; positioned our Services and Consulting practices, which are currently helping to deliver six commercial NFV projects; and discussed the impact of Bell Labs research findings.


image

The team at the Cloud Innovation Center (CIC).


The Lean NFV Ops demonstration experience was launched at Mobile World Congress in March and has been turned into a program that has been featured in 70+ events and workshops worldwide, engaging 1,600+ industry representatives.

These engagements not only help discuss and validate fundamental concepts but also gather invaluable customer insights in the process. Intel’s Data Center Group Leadership singled out our program in their Investor Day, where Alcatel-Lucent was the only partner featured in the demonstration zone. Intel has also funded and sponsored our most recent video, which is now available on TelecomTV.

Customized Lean NFV Ops workshops, whether at Executive Briefing Centers (EBC), at customer premises or online, can be easily booked online.


image

LEFT: Experience Lean NFV Operations includes “book a workshop”. RIGHT: Become a Dynamic Lean NFV Operator.


I would like to take this chance to thank Ted East and Phil Tilley for their input.

Lean NFV Ops: scaling basics.


“Just-in-Time means making only what is needed, when it is needed, and in the amount needed […] it is necessary to create a detailed production plan […] to eliminate waste, inconsistencies, and unreasonable requirements, resulting in improved productivity.” – “Just-in-Time, Philosophy of Complete Elimination of Waste” by Toyota.

“Each unpredictable feature demanded by customers is considered an opportunity […] this requires rapid adjustment of production capability. Dynamic and flexible network utilizations in functional modules can maximize the strength of each resource and the overall risk and costs are reduced.” – “Flexible Manufacturing System for Mass Customization Manufacturing” by Guixiu Qiao, Roberto Lu and Charles McLean.

“Providing capacity in a more expedient fashion allows us to deploy a functioning and consumable business service more quickly […] at the core of our self-service functionality is a hosting automation […] On-demand self-service is a critical aspect of our cloud environment; however, without underlying business logic, controls, and transparency, an unconstrained on-demand enterprise private cloud will quickly exceed its capacity by doling out allocations beyond its supply.” – “Implementing On-Demand Services” by Intel.

“Elasticity is commonly understood as the ability of a system to automatically provision and deprovision computing resources on demand as workloads change […] in a way that the end-user does not experience any performance variability.” – “Elasticity in Cloud Computing: What It Is, and What It Is Not” by Nikolas Roman Herbst, Samuel Kounev and Ralf Reussner.


image

These past few months I’ve followed a few discussions on virtualization and scalability.

There is such a thing as becoming a victim of success when pent up demand strikes and a business fails to scale accordingly.

Capacity management has typically prompted over-engineering decisions and long lead times, taking a year or more in the telecoms industry. This can result in concerns about delayed breakeven points, underutilized precious resources, and limited offerings due to the higher cost of oversubscribing.

Lean means staying nimble at any size, streamlining and keeping lead times as short as possible by design. Effective and efficient capacity management relies on understanding economies of scale and scope. The former relates to larger scales driving more efficient utilization levels and, therefore, lower and more competitive average costs.
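
A minimal sketch of that first point, with purely illustrative numbers: spreading a fixed platform cost over growing output drives the average cost per unit down.

```python
# Illustrative numbers only: average cost per unit falls as scale spreads
# fixed costs over more output.
fixed_cost = 1_000_000          # e.g. platform and facilities, per period
variable_cost_per_unit = 2.0    # per service instance served

for units in (10_000, 100_000, 1_000_000):
    avg_cost = (fixed_cost + variable_cost_per_unit * units) / units
    print(f"{units:>9,} units -> average cost {avg_cost:,.2f} per unit")
```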

Scope means taking advantage of synergies and common infrastructure and platforms to deliver a variety of services, application multi-tenancy being an example in NFV’s (Network Functions Virtualization’s) context.

Active portfolio management follows: complementary application lifecycles can share resources and raise overall utilization levels in the process. Moreover, some applications can be deconstructed and modularized so that specific subsets become standalone services available to (or reused by) other applications. These can be decoupled to join a common pool and scale independently.

In some discussions we refer to growth models where “scale” follows a “vertical” approach while “scope” adds breadth with new functions and is, therefore, a horizontal expansion model. This breakdown allows for plotting and segmenting growth/de-growth scenarios in a simple matrix. I am experimenting with new ways of helping visualize these concepts. This is work in progress and the final result will look different from the early drafts posted here. Though, I think they can serve for the time being.


One other thought… elasticity relates to following demand curves: supply meets demand by dynamically adapting capacity. This entails provisioning, deprovisioning, and a virtuous circle of gracefully tearing down resources, which are freed up and exposed for other applications to leverage. Elastic computing may make us think of unlimited just-in-time capacity, but there are upper and lower boundaries involving diminishing returns. It just so happens that virtualization has pushed the envelope by considerably widening and shifting these constraints.
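
A hedged sketch of that elasticity loop follows: capacity tracks demand within explicit lower and upper bounds. The sizing numbers are hypothetical and the function is a stand-in for whatever orchestrator hook would apply.

```python
# Hedged sketch: elastic capacity follows the demand curve within bounds.
def target_instances(demand_rps, rps_per_instance, min_instances=2, max_instances=50):
    """Return how many instances the observed demand calls for, within bounds."""
    needed = -(-demand_rps // rps_per_instance)   # ceiling division
    return max(min_instances, min(max_instances, needed))

# Example: demand spikes and then recedes; capacity provisions and deprovisions gracefully.
for demand in (800, 4_500, 12_000, 3_000):
    print(f"demand {demand:>6} rps -> target {target_instances(demand, 500)} instances")
```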

It is worth reflecting on Gordon Moore’s law in this context: many incremental and disruptive innovations yield exponential performance improvements in today’s cloud age. That can be coupled with NFV’s (Network Functions Virtualization’s) shift from lengthy lead times, cumbersome operations and costly dedicated hardware to automated systems working with a wide supply of more affordable COTS (Commercial Off-The-Shelf) hardware and open source solutions.


image

Let’s now focus on the notion of service decomposition and how that impacts scaling.

This exercise often starts with deconstructing monolithic systems typically relying on vertically integrated architectures, then looking at the actual services involved, dependencies, flows… and figuring out what is best to keep integrated vs. modularized, centralized vs. distributed.

This also entails looking at opportunities to streamline development time, such as code reuse and processes worth exposing by means of APIs (Application Programming Interfaces). Note that many applications do not need to duplicate assets and can become distributed systems consuming resources and processes running elsewhere.

In this section’s graphic, the application is a VNF (Virtual Network Function) which has been decomposed and right-sized to run in three VMs (Virtual Machines) of different sizes instead of procuring a single physical server for just this application.

Lighter gray blocks at the back end represent a pool of services available to that and other applications. As an example, decoupling an application’s logic from the app’s data lets us leverage DaaS (Database as a Service) as one of the shared services.

These are the “scaling” terms provided by ETSI (European Telecommunications Standards Institute) NFV reference documents, with a toy sketch right after the list:

  • Scaling up: adding resources (compute, memory, storage) to a given VM.
  • Scaling down: decreasing resource allocation.
  • Scaling out: creating a new instance, adding VMs.
  • Scaling in: removing VMs.
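
Here is the toy sketch mentioned above, mapping the four directions onto a simple VNF record. This is illustration only, not an ETSI interface or any vendor API.

```python
# Illustration only (not an ETSI interface or vendor API): the four scaling
# directions applied to a toy VNF record.
vnf = {"name": "vFirewall", "vms": 2, "vcpus_per_vm": 4}

def scale_up(vnf, extra_vcpus):    # extend resources of existing VMs
    vnf["vcpus_per_vm"] += extra_vcpus

def scale_down(vnf, fewer_vcpus):  # decrease resource allocation
    vnf["vcpus_per_vm"] = max(1, vnf["vcpus_per_vm"] - fewer_vcpus)

def scale_out(vnf, new_vms):       # create new instances, add VMs
    vnf["vms"] += new_vms

def scale_in(vnf, removed_vms):    # remove VMs
    vnf["vms"] = max(1, vnf["vms"] - removed_vms)

scale_out(vnf, 2)
scale_up(vnf, 2)
print(vnf)  # {'name': 'vFirewall', 'vms': 4, 'vcpus_per_vm': 6}
```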

Circling back to service decomposition: there are scaling scenarios where there is no need to go through the trouble of scaling out an entire application, just the specific service at stake, such as one of the VMs or the database in the previous example.

In some other scenarios scaling can prompt application updates and/or upgrades to enable new functionality. Suitable “upgrade windows” can be hard to find when multiple services are in demand and expected to remain always on. A stateless architecture means that the session’s state is kept outside of the application, with the shared database in this example. Traffic can be redirected to an application’s mated pair, that is, a second instance kept in active standby mode until the maintenance event.
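
A minimal sketch of that stateless pattern, with a plain dictionary standing in for the shared database: because session state lives outside the application, traffic redirected to the mated-pair instance picks up the same sessions.

```python
# Minimal sketch: session state lives in a shared store, not in the app instance,
# so traffic can be redirected to a mated-pair instance without losing sessions.
shared_state_store = {}   # a dict stands in for the shared database / DaaS

class StatelessAppInstance:
    def __init__(self, name, store):
        self.name = name
        self.store = store   # external, shared state

    def handle(self, session_id, message):
        session = self.store.setdefault(session_id, {"messages": []})
        session["messages"].append(message)
        return f"{self.name} handled {session_id} ({len(session['messages'])} msgs)"

primary = StatelessAppInstance("instance-A", shared_state_store)
standby = StatelessAppInstance("instance-B", shared_state_store)

print(primary.handle("sess-42", "hello"))
# Maintenance window: redirect traffic; the session continues uninterrupted.
print(standby.handle("sess-42", "still here"))
```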

This also means going beyond 1+1 models where everything is duplicated (mated pair concept) for failover sake. There often are more efficient n+k systems in HA (High Availability) environments. Note that, paradoxically enough, rolling out upgrades happens to be a primary source of maintenance issues thereafter, adding to the need for sustaining service continuity at all times coupled with zero touch and zero downtime.


Zero touch is delivered by automation, which relies on continuous system monitoring, engineering triggers and preceding work with recipes, templates and/or playbooks (alternative terms based on different technologies) detailing what needs to happen to execute a lifecycle event. Scaling is the subject of this post; onboarding, backup, healing and termination are other lifecycle events, just to name a few.
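
To make the recipe idea tangible, here is a toy scale-out “recipe” with hypothetical fields. It is not written in any specific template or playbook language; it only shows the kind of trigger, action and post-check detail automation needs up front.

```python
# Toy "recipe" (hypothetical fields, not a specific template/playbook language):
# what a declarative scale-out lifecycle event might capture for automation.
scale_out_recipe = {
    "lifecycle_event": "scale_out",
    "trigger": {"metric": "cpu_utilization", "threshold": 0.75, "window_s": 300},
    "action": {"add_vms": 1, "flavor": "medium", "max_vms": 8},
    "post_checks": ["health_probe", "register_with_load_balancer"],
}

def should_fire(recipe, observed_metric_value):
    """Automation hook: decide whether the recipe's trigger condition is met."""
    return observed_metric_value >= recipe["trigger"]["threshold"]

print(should_fire(scale_out_recipe, observed_metric_value=0.81))  # True -> execute action
```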

Programmability drives flexible automation, which is data driven and based on analytics. Predictive analytics goes a step further to project and address trends so that actions can be taken in advance. In our Lean NFV Ops demonstration we purposely stimulate network traffic with a load generator to exemplify this. We run scenarios illustrating both (a) fully automated scaling and (b) autonomation by switching to manual controls that put the operations team in charge at every step.
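
The sketch below contrasts those two modes: (a) fully automated scaling and (b) autonomation, where the recommendation is surfaced and the operations team stays in charge. The “forecast” is a naive linear extrapolation standing in for real predictive analytics.

```python
# Hedged sketch: automated scaling vs. autonomation (a human confirms the action).
def forecast_next(load_samples):
    """Naive projection: extend the last observed trend one step forward."""
    return load_samples[-1] + (load_samples[-1] - load_samples[-2])

def decide(load_samples, capacity, mode="automated"):
    projected = forecast_next(load_samples)
    if projected <= capacity:
        return "no action"
    if mode == "automated":
        return "scale out now"
    # Autonomation: surface the recommendation, keep the operator in charge.
    return f"recommend scale out (projected {projected} > capacity {capacity}); awaiting approval"

samples = [600, 700, 820]  # e.g. requests per second driven by a load generator
print(decide(samples, capacity=900, mode="automated"))
print(decide(samples, capacity=900, mode="autonomation"))
```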

Autonomic computing is powered by machine learning. Research on NFV autonomics points to the ability to self-configure, especially under unplanned conditions. Looking into automation and distribution modes helps define maturity levels for NFV, a topic for another article.


image

Let’s zoom out to discuss scaling in the context of the platform.

ETSI NFV defines MANO as the Management and Orchestration system. “Managing” refers to addressing the application’s lifecycle needs, scaling being one of them. The notion of “orchestrating” focuses on the underlying resources to be consumed.

The MANO layer is thought out as NFV’s Innovation Platform, which I show in purple color: the thickness of that layer conveys the degree to which an application uses more (right) or less (left) of MANO’s capabilities. This is an application multi-tenant environment where VNF1 shows a monolithic app example in contrast to VNFn which is meant to take full advantage of MANO’s automation.

This cross-section shows a horizontal architecture, as the platform supports multiple applications as well as back end systems. Horizontal and vertical solutions scale differently. A common platform presents à la carte features and starts small, growing and scaling to enable homogeneous end-to-end management across applications, while the monolithic approach moves forward with siloed operations on an application-by-application basis.

One more example: growing by adding interdependent services is a discouraging endeavor when reconfiguring multiple functions becomes overwhelming. SFC (Service Function Chaining) comes to the rescue in a virtual environment by providing network programmability and dynamic automation to create networks connecting new services. NFV’s scaling needs make a good case for SDN (Software Defined Networking), the technology behind SFC.
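
As a toy illustration of the point (not an actual SDN/SFC API), a service chain can be treated as an ordered list of functions that programmability reconfigures in one step instead of touching every function by hand.

```python
# Toy illustration only (not an actual SDN/SFC API): a service chain as an
# ordered list of functions that can be reconfigured programmatically.
chain = ["classifier", "firewall", "nat"]

def insert_function(chain, function, after):
    """Insert a new service function right after an existing one."""
    idx = chain.index(after) + 1
    return chain[:idx] + [function] + chain[idx:]

# Adding a new service no longer means manually reconfiguring every function:
chain = insert_function(chain, "video_optimizer", after="firewall")
print(" -> ".join(chain))  # classifier -> firewall -> video_optimizer -> nat
```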


image

Now moving to what’s under the hood.

NFVI stands for Network Functions Virtualization Infrastructure. Most typically, what we can see and touch is a data center environment providing resources consumed by the applications such as compute, memory, storage and networking to begin with.

The visual in this section shows a conceptual server farm right under the platform. Blue nodes on the left and brown ones on the right are physically placed at different geographic locations, yet form part of the same NFVI orchestrated by MANO. The gray one is being added: scaling out the existing infrastructure. The green node lies outside and can be leveraged when bursting, as sketched right after the list:

  • Scaling out: adding more servers (gray cube).
  • Scaling up: leveraging clusters and/or distributed computing to share the load (blue and brown cubes).
  • Bursting: tapping into third party infrastructure to address capacity spikes (green cube).
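
Here is the sketch mentioned above: a toy placement rule, with made-up numbers, that fills in-house NFVI capacity first and bursts to third-party infrastructure only for the portion of a spike that exceeds it.

```python
# Toy placement rule (made-up numbers): prefer in-house capacity, burst for the spike.
def place_workload(requested_vms, in_house_free, burst_limit):
    local = min(requested_vms, in_house_free)
    burst = min(requested_vms - local, burst_limit)
    unmet = requested_vms - local - burst
    return {"in_house": local, "burst": burst, "unmet": unmet}

print(place_workload(requested_vms=12, in_house_free=8, burst_limit=10))
# {'in_house': 8, 'burst': 4, 'unmet': 0}
```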

Note that, in this context, scaling up can also mean upgrading servers to handle larger workloads. This can also be about using an existing chassis while replacing a server with a new node featuring more processing, data acceleration, lower energy needs, etc.


Early on we talked about COTS being easier to scale out when compared to proprietary dedicated hardware. That has partly to do with standardization, centralized management and consolidation, the existing supply chain for x86 systems, and node automation.

We can also factor in consumption-based models where a given application’s business case is not impacted by up-front CAPEX (Capital Expenditures). Instead, the application’s business case accounts for resource usage levels which, once again, benefit from economies of scale and scope. The notion of elasticity makes infrastructure planning transparent to the application.

Capacity and performance management skills remain of the essence: the move to applications based on stateless architectures means that scaling distributed applications places a greater emphasis on API behavior, addressing capacity and speed in terms of RPS (Requests Per Second). Nonetheless, the telecommunications industry is known to require high-capacity, low-latency SFC, which is driving data plane acceleration solutions.
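
A quick, illustrative sizing sketch in those RPS terms (all numbers hypothetical): translate a peak demand target into an instance count while reserving headroom for failover and latency.

```python
# Illustrative sizing sketch: a stateless, API-fronted service expressed in RPS terms.
import math

peak_rps = 20_000          # hypothetical peak demand
rps_per_instance = 1_200   # measured per-instance capacity (hypothetical)
headroom = 0.3             # keep 30% spare for failover and latency targets

instances = math.ceil(peak_rps / (rps_per_instance * (1 - headroom)))
print(f"{instances} instances needed for {peak_rps} RPS with {headroom:.0%} headroom")
```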


image

We can now zoom out.

Scaling is not a new thing or need. Conventional architectures can scale; they just don’t do it fast, effectively or cost-effectively enough. Taking months or years to get the job done risks missing markets and taxing resources that would have been needed to create innovative services.

Admittedly, one of the objectives behind writing this was wrestling with jargon by outlining “scaling” terms in context, whether related to application, platform or infrastructure. Hopefully, that goal was accomplished. Otherwise, please let me know.

One other thought… NFV is a change agent. Hence, cool technical wizardry alone does not suffice. These emerging technologies are sparking interest in connecting the dots across behavioral economics (and not just the business case), organizational cultures, and decision making in the telecoms sector. Understanding the human factor matters.

As usual, I will be glad to continue the conversation by exchanging emails, over LinkedIn or in person if you happen to be around at IDF15, Intel Developers Forum, in San Francisco’s Moscone Center on August 18-20.

Lean NFV Ops: automation and self-service.


“The automatic telephone switchboard was introduced in 1892 along with dial telephones. By 1929, 31.9% of the Bell system was automatic. Automatic telephone switching originally used vacuum tube amplifiers and electro-mechanical switches, which consumed a large amount of electricity. Call volume eventually grew so fast that it was feared the telephone system would consume all electricity production, prompting Bell Labs to begin research on the transistor. The logic performed by telephone switching relays was the inspiration for the digital computer.” – “Automation” by Wikipedia.


image


image

We kept extremely busy in Q1 delivering the Lean NFV Ops demo at Mobile World Congress back in March. I am glad to share that the project’s success led to a hectic roadshow in Q2: our live demo system has been showcased at a number of industry and private events as well as in customer workshops worldwide.

Each conversation with network operators, partners, analysts and public officials has delivered a wealth of insights: most validating the project’s objectives while some challenging us to do even more to take things to the next level.

Q3 is about furthering the Lean NFV Ops conversation, and we will soon make available a brief paper and a full-length video sharing design principles. Stay tuned. I would first like to start with a brief discussion on S2O (Self-Service Ops), given a recent batch of questions on what that entails.


This is just a quick note: all conversations regarding Lean NFV Ops involve data driven automation and the human factor. This is a live demonstration system that couples (a) flexible “automation” involving correlated metrics, predictive analytics, directories, policies and research findings on “autonomics” (machine learning) with (b) visibility and controls where “autonomation” engages human intelligence in terms of situational awareness, supervision, root cause analysis, programmability… and new skills involving workstyles and organizational behaviors. There you have it: managed to get “automation”, “autonomics” and “autonomation” in just one paragraph : )


S2O, this post’s focus, reflects the fact that a number of CSPs (Communication Service Providers) are developing B2B (Business to Business) markets by providing services to other network operators under the carrier’s carrier model, to MVNOs (Mobile Virtual Network Operators), and to enterprise verticals and customers of all sizes. Though, we are also learning about lengthy, resource-consuming operations that make services costlier than planned and/or limit offerings to what can effectively be managed under the current PMO (Present Mode of Operations).

Thinking of Network Functions Virtualization (NFV) means shifting to a FMO (Future Mode of Operations) based on cloud economics. More specifically, this means enabling business models such as Infrastructure and Platform as a Service (IaaS and PaaS) which are driven by self-service interactions.

This 10+ minute video shows the first version of the Lean NFV Ops demo, where our emphasis was on communicating what NFV can deliver to CSPs’ in-house ops teams. The above graphic portrays the S2O use case (a toy self-service sketch follows the list below) where:

  1. B2B: A CSP is in business with several customers (other carriers, MVNOs, enterprises, public administration).
  2. XaaS: A given CSP’s customer works with the same toolset leveraged by the CSP’s own in-house ops team and benefits from the “X” (anything) as a Service model.
  3. DevOps: That CSP customer’s own IT team embraces self-service by deploying apps and creating service chains at multiple sites, scaling and reconfiguring systems as needed.
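
As the toy sketch promised above, here is the kind of self-service request a business customer’s DevOps team might submit under S2O, plus the quota check that keeps self-service within agreed business controls. This is generic illustration only, not the actual Motive, CloudBand or Nuage Networks APIs.

```python
# Generic illustration only (not the actual Motive/CloudBand/Nuage APIs):
# a self-service deployment request under the S2O use case.
deployment_request = {
    "tenant": "enterprise-customer-07",     # hypothetical tenant identifier
    "service_chain": ["vFirewall", "vLoadBalancer", "webApp"],
    "sites": ["dc-east", "dc-west"],        # deploy at multiple sites
    "scaling_policy": {"min_vms": 2, "max_vms": 10, "metric": "cpu_utilization"},
}

def validate_request(request, allowed_quota_vms=20):
    """Business logic and controls: self-service still runs within agreed quotas."""
    requested_max = request["scaling_policy"]["max_vms"] * len(request["sites"])
    return requested_max <= allowed_quota_vms

print(validate_request(deployment_request))  # True -> hand off to orchestration
```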

image

Left: Screen capture of the demo’s NFV Ops Center – S2O View. Right: Screen captures of support systems involved: Motive Dynamic Operations, CloudBand Management System, Nuage Networks, Bell Labs Analytics.


In a nutshell: under the S2O use case, a significant share of operations have been outsourced by the CSP to the business customer. This is a mutually beneficial arrangement as follows:

  • The CSP’s business customer is empowered to best conduct timely operations as they see fit.
  • The CSP leverages automation to reap self-service efficiencies whether that involves in-house teams or those engaged by business customers themselves.

S2O prompts CX (Customer Experience) implications encompassing fulfillment and assurance, as well as consumption based pricing models, in a highly dynamic environment, which makes Lean NFV Ops’ end-to-end system engineering approach of the essence.

As usual, I will be happy to address your comments, exchange emails or trade messages over LinkedIn. Our team will be doing demos at IDF 2015 (Intel Developers Forum) in San Francisco on August 18-20 at Alcatel-Lucent’s booth. Hope to see you there : )