“The Mother of All Demos is a name given retrospectively to Douglas Engelbart’s December 9, 1968 […] The live demonstration featured the introduction of a complete computer hardware and software system called the oN-Line System or more commonly, NLS. The 90-minute presentation essentially demonstrated almost all the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor (collaborative work). Engelbart’s presentation was the first to publicly demonstrate all these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.” – The Mother of All Demos, Wikipedia.
Compelling demonstrations can make all the difference when introducing emerging technologies. There is no slideware or paper substitute for the kind of revelations, quality insights, and lasting emotions that we all get when experiencing things live and firsthand. On the research side, interactive demonstrations have become invaluable tools that expose and test concepts. Moreover, they prompt valuable feedback by questioning and validating assumptions and unveiling unsuspected issues, as well as winning hearts and minds to further advance a cause.
Those are some of the reasons why I prioritize demo development, and my research process involves activities such as field trips and ethnographic insights captured in environments like the Museum of Science and Industry (MSI) in Chicago and open-door showcases at renowned institutions like Fermilab. Successful science exhibits make complex topics approachable and engaging. They are carefully designed, with a craftsman’s pride, to be astute and immersive and to appeal to both our brain’s intuition and intellect.
The above graphic features quotes from Albert Einstein and Nicholas Negroponte on the left, coupled with Salvador Dalí and Arthur C. Clarke on the right. I created that poster’s first version a few years ago and it has become my reference framework for prototyping and demonstration ever since. The photographs are courtesy of Wikipedia. Here are further insights on what these quotes mean to me:
1.- DEMO OR DIE – The introduction of inventions and the diffusion of innovations rely on effectively conveying clear and concise value. Interacting with engaging demonstrations can be best supported by well-thought-out whiteboarding sessions. This communication strategy works best when allowing dynamic conversations instead of long agendas packed with presentation monologues. Most people can talk about the many times when they were either overwhelmed, underwhelmed or just bored to death by slideware… and became suspicious of hype. Note that we all deal with an unfavorable Signal-to-Noise (S/N) ratio in today’s information-rich environment and, therefore, compete for customers’ and/or users’ undivided attention. Once again, memorable hands-on demonstrations can make all the difference.
2.- GROW TO LOOK LIKE THE PORTRAIT – High tech is a fast-paced industry. One can be left wondering if the technology, toolset, application and/or overall system being discussed will grow and scale as needed beyond day one. There can also be concerns around maturity levels, roadmapping options and future-proofing when working with emerging technologies. Demos can be used to convey a tangible vision based on attainable end goals. They can also be used for what-if analysis and sunny- and rainy-day scenarios (which can include full lifecycle and stress tests), and to plot plausible journeys from A to B and any steps in between. Helping everyone come to terms with what lies ahead is key to defining product strategies and planning decisions “to grow to look like the portrait.”
3.- EXPLAIN IT SIMPLY – Apparently unavoidable jargon and well-intended technical kumbaya can easily become entangled. Complex explanations suffer from information overload. Convoluted narratives pleasing the presenter’s ego can obscure what specific problem or pain point he or she is solving, and what the sought-after benefits and priorities are. When “less is more,” it definitely pays to define a vantage point, zoom out, distill fundamentals and synthesize the essence. Knowing your audience and getting the job done in the clearest and most effective terms possible means striking a balance and staying away from oversimplifying or overcomplicating matters. This is an iterative exercise that often demands more time, effort and reviews than the usual information dump. We also need to be able to step-zoom to deliver the next level of detail and to conduct deep dives… without incurring information overload. Humanizing technology, storytelling techniques and information visualization are key to developing a coherent narrative.
“The meaning of a communication is defined by the Change and Affect it creates for the audience. Stories are concerned with transformation. In stories something Changes to create an emotion […] The Change has to resonate with the Audience to generate an Affect; a feeling, a reaction or an insight […] We shall consider these two defining characteristics of narrative to clarify the purpose of any communication […] Change and Affect create meaning.” – “Crackle and Fizz. Essential Communication and Pitching Skills for Scientists.” – Caroline van den Brul. Imperial College Press.
4.- IT’S MAGIC – This is all about the so-called X-FACTOR: an unsuspected quality that makes something different and special in unequivocal terms. To be more precise, the X-FACTOR’s experience can be broken down as follows:
- SURPRISE FACTOR – this relies on managing perceptions and the discovery process, the tipping point being delivered by a timely and unsuspected clever twist and a defining punch line – the “aha” moment.
- WOW FACTOR – high impact, impressive, awe-inspiring outcome, benefits and results that can be easily understood and embraced – the “I didn’t know we could do that” and “I want to know more” moment.
- COOL FACTOR – elegant sophistication and grace, a clear object of desire – the “I want that” moment, this being most demos’ ultimate Call-To-Action (CTA).
The art and science behind the above is known as “affective design.” Techniques such as perceptual learning and emotional intelligence in design (emotional design, in short) are applied in Human-Computer Interaction (HCI) to foster pleasant ease of use, driving further engagement and productive usage in the process. Widespread digitalization and the advent of wearables make HCI commonplace, which is influencing product design.
The above is a demo’s “full disclosure” chart, which breaks down what’s real and what’s not. This is needed because vaporware can be an issue of concern.
1.- PRIOR ART – In the above example, a given percentage of the demonstration system involved known technologies, some from third party partners.
2.- STATE OF THE ART – The latest and greatest features, cutting edge delivered by technologies that are available today.
3.- FUTURE ART – A sneak preview of new features and capabilities that are planned, undergoing development and/or committed, but not yet available.
4.- ART OF THE POSSIBLE – Proof of Concept illustrating experimentation results and potential, bleeding edge capabilities that are not yet committed.
By the way, vaporware is the result of positioning 3 and 4 as part of 2. Avoiding unpleasant misunderstandings prompts the need for disclosing these four different maturity levels. Note that one graphic applies to a comprehensive demonstration system encompassing those four aspects and their relative weight.
One other thought: there is a difference between incremental and disruptive innovation. The first delivers improved qualities, such as better performance in A/B comparison testing, “A” being prior art and “B” state of the art. Most would agree on defining disruptive innovations as game changers which deliver unique capabilities that clearly supersede legacy and conventional systems. That alone renders “A” obsolete. A/B comparison testing leads to discussions on the difference between Present Mode of Operations (PMO) and Future Mode of Operations (FMO).
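As a minimal sketch of what such an A/B comparison boils down to, the snippet below computes the relative gain of “B” (state of the art, or FMO) over “A” (prior art, or PMO) for each shared metric. All names and figures here are hypothetical placeholders, not measurements from any real system, and it assumes metrics where higher is better:

```python
def ab_improvement(a_metrics, b_metrics):
    """Relative gain of the state of the art (B) over prior art (A)
    for each metric present in both; positive values mean B improved on A.
    Assumes metrics where higher is better (throughput, not latency)."""
    return {k: (b_metrics[k] - a_metrics[k]) / a_metrics[k]
            for k in a_metrics.keys() & b_metrics.keys()}

# Hypothetical before/after figures for an incremental upgrade
pmo = {"throughput_gbps": 10.0, "requests_per_sec": 2000.0}
fmo = {"throughput_gbps": 12.5, "requests_per_sec": 3000.0}
gains = ab_improvement(pmo, fmo)  # e.g. 0.25 means a 25% improvement
```

A disruptive innovation, by contrast, would add a capability that simply has no “A” column to compare against, which is why the A/B framing only fits the incremental case.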
“Humanists must be educated with a deep appreciation of modern science. Scientists and engineers must be steeped in humanistic learning. And all learning must be linked with a broad concern for the complex effects of technology on our evolving culture.” – Jerome B. Wiesner.
“See inner relationships and make connections that others usually don’t see; we learn to think the unthinkable. On the other hand, we may be uncomfortable with the insights that arise from seeing the world differently. However, we need innovation and creativity that stems from seeing things differently […] I recommend that you start to manage your own dilemmas.” – Get There Early: Sensing the Future to Compete in the Present by Bob Johansen. 2007 edition published by Berrett-Koehler.
“They preferred to think they worked not in a laboratory but in what Kelly once called ‘an institute of creative technology.’ This description aimed to inform the world that the line between the art and science of what Bell Labs scientists did wasn’t always distinct […] many of Kelly’s colleagues might have been eccentrics […] working within a culture, and within an institution, where the very point of new ideas was to make them into new things.” – The Idea Factory by Jon Gertner. 2012 edition published by The Penguin Press.
“He kept asking Kay and others for an assessment of trends that foretold what the future might hold for the company. During one maddening session, Kay, whose thoughts often seemed tailored to go directly from his tongue to wikiquotes, shot back a line that was to become PARC’s creed: the best way to predict the future is to invent it.” – The Innovators by Walter Isaacson. 2014 edition published by Simon & Schuster.
Inventions involve the creation of a novelty, which is, therefore, something new and different. Note that innovations take matters further since they entail realization, introduction and adoption processes. I am fortunate enough to have experienced both. My research work is credited in patents and awards, as either inventor or co-inventor. But that alone does not necessarily imply actual development. Getting into innovating as such came to fruition when undertaking product management responsibilities.
Those of us thinking of the commercialization of inventions and the so-called diffusion of innovations are attracted to qualitative and quantitative metrics. These are valuable insights and data speaking to the correlation between inventing and innovating, which leads to articulating best practices, processes, budget and resource allocations. However, it is also true that success can often be powered by outliers.
As the “black swan theory” states: there can be easily dismissed and hard-to-predict impactful events that end up changing everything. Long story short, the art of serial innovation is a dynamic endeavor: just relying on what you think you know well can cloud and betray your otherwise better judgment. This is when “objects in the mirror are closer than they appear,” metaphorically speaking, and things just happen at unprecedented speed.
Most would agree that good ideation can come to the surface anytime and anywhere, from subject experts and users themselves as well as unusual suspects. Inventing takes a higher commitment level to address how things should work… and there can be alternative and competing solutions to a given problem. Serial innovation becomes a greater challenge since it is measured by repeated success.
I created the above framework in the context of the high tech sector. It conveys a need for striking an equilibrium point between unmanageable complexity (right) and either self-defeating oversimplification or undifferentiated simplicity for that matter (left).
Semantics matter: anyone can argue the merits and faults of simplicity and complexity. Delivering elegant sophistication, though, commands consensus thanks to a clear level of quality and refinement, functional depth and differentiation, effortless operations and ease of use. One other thought: I would also like to claim that purposely engineering effortless ops and ease of use frees everyone’s energy to focus on value-based activities. We democratize innovation in the process.
The first chart became a vehicle to discuss the difference between invention and serial innovation. Let’s now look at the difference between incremental and disruptive innovation.
Innovating drives changes. Nonetheless, legacy systems can continue to benefit from incremental innovation. This means bettering and further optimizing current technologies and operations. Existing footprint and know-how combined with economies of scale, as well as risk aversion, expensive switching costs when considering emerging tech and possible resistance to change… all favor that phenomenon. So, it pays to understand Daniel C. Snow’s teaching on “old technologies’ last gasp” when outlining transition and/or transformation plans.
The lower right quadrant is where new paradigms are set to deliver disruptive innovation. B2 is clearly set beyond the reach of legacy systems: diseconomies of scale and diminishing competitiveness with declining returns being key reasons. B2 means that legacy tech is clearly outdated and superseded.
Disruptive innovation is the game changer. That’s the kind of paradigm shift that new entrants and green field players will take advantage of. The so-called industry establishment can continue to skim incremental innovation, though only up to a point at which they are rendered “old guard” and obsolete. That is the essence behind Clayton Christensen’s Innovator’s Dilemma.
The upper row shows quadrants A and B1, and an obvious intersection zone in between. Established players can operate hybrid environments to cross G.A. Moore’s chasm. They can gradually transform or fully re-invent themselves at that intersection. The above chart is designed to help leaders and management consultants plot portfolios in each quadrant as well as their evolution (e.g., course and speed) based on KPIs (Key Performance Indicators) or set phased discontinuity.
Quick recap. Incremental innovation delivers better (technical, operational, financial) performance, which is usually presented in the form of A/B (before and after) comparison tests. Disruptive innovation brings about unique capabilities that legacy systems cannot match. We are talking about emerging technologies, so capability and maturity models come into play. I will discuss that in one of my next posts on Lean Ops Redefined.
We have discussed insights around invention and serial innovation, and incremental and disruptive innovation. My next tool is designed to map out where value exists, where new value is created, and value migration across the two.
No doubt, disruptive innovation alters the landscape: value migrates (or circles back) to any of the above quadrants. Some markets are placing a premium on the upper right quadrant already. That’s where end-to-end solutions and services create new value and dominate, which commands higher margins. Service focus seeks understanding and developing customers’ experiences instead of a product push or pull approach. Solution focus forces a more holistic systems engineering approach encompassing the value (supply) chain and relevant ecosystems.
That combination delivers significant competitive advantages with the advent of virtualization and cloud computing technologies. Early draft versions of that chart showed a different breakdown, namely: hardware, platforms, applications and services. When testing and putting this kind of chart to work, I could plot everything by applying color coding; then the size of the addressable market, revenue and growth would determine each circle’s size. In any case, that basic template can be customized as needed.
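The sizing rule described above can be sketched in a few lines. In this hedged example, the segment names and market figures are invented placeholders, and the key design choice is keeping marker areas (not radii) proportional to market size so the visual comparison stays honest:

```python
def bubble_sizes(markets, max_area=1000.0):
    """Scale each segment's addressable market to a plot-marker area,
    with the largest segment pinned at max_area and the rest proportional.
    Keeping areas (rather than radii) proportional avoids exaggerating
    differences when circles are drawn."""
    peak = max(markets.values())
    return {name: max_area * size / peak for name, size in markets.items()}

# Illustrative segments only; a real chart would use actual market data
segments = {"hardware": 40, "platforms": 25, "applications": 60, "services": 120}
areas = bubble_sizes(segments)  # "services" gets the full max_area
```

The resulting `areas` dictionary can feed any plotting library that accepts per-point marker areas; color coding would then carry the portfolio categorization.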
“Inventing the future” can certainly take unique instincts, skills, workstyles and eccentric behaviors. When acknowledging that talent is a critical success factor, we then need to get serious about equipping individuals to make a difference while understanding that it takes a cross-functional team to make things happen. Serial innovation takes foresight, situational awareness, leadership and organizational agility. I hope that the above tools helped with mapping and discussing concepts such as (a) moving the needle, (b) transformation, and (c) defining value, with elegant sophistication as the defining delivery.
Wondering about the last chart on Lean Ops? That one is just a sneak preview of an upcoming post also centered on “Innovation Management Essentials.”
As usual, looking forward to comments and emails, as well as meeting at any of these venues:
“A strategic relationship with Alcatel-Lucent, intended to bring new levels of innovation to network operations. The agreement establishes a joint reseller and OEM agreement between the two companies (…) the transformation to a software-defined data center is going to keep picking up steam. Working with Alcatel-Lucent we increase our ability to help enterprise customers and carriers be a part of it.” – Dell and Alcatel-Lucent Team Up for Network Virtualization by Arpit Joshipura.
“The digitization of our world will be the driver of change. The enabler will be the “cloud integrated network,” which has seemingly infinite capacity and scales from local to global with sustainable economics. And the benefit will be the emergence of automated systems that provide augmented intelligence to any critical analysis task.” – The Future X Network: A Bell Labs Perspective.
Link to Dell World 2015 photo album.
Our team was in Austin at Dell World 2015 on October 20-22. Alcatel-Lucent’s booth featured the Lean NFV Ops demonstration, which captured quite a bit of attention. This was remarkable given NFV’s (Network Functions Virtualization) narrow focus on the telecommunications sector (the carrier environment, to be more specific) and the event’s broader scope, which covered a wide range of enterprise IT (Information Technology) systems.
We were asked a few times if we “brought our stuff from the future.” That recurrent compliment referred to the trip to October 21, 2015 taken by Marty McFly, the main character in the blockbuster “Back to the Future” movie series. Dell World’s DeLorean (above) is a replica of the movie’s time machine.
The fact is that adapting known, proven-to-work Lean principles and, moreover, creating new ones for the cloud age has been consistently well received and praised since we first unveiled this program. The Lean NFV Ops Roadshow has been running for about a year already and keeps growing strong.
Working with a set of virtualization technologies that are readily available, and doing so by operating a sophisticated end-to-end system delivering 4G mobile broadband and VoLTE (Voice over Long Term Evolution) services, gives Lean NFV Ops a very compelling value proposition. We embrace Lean by engineering systems that are effective and highly efficient.
Effectiveness refers to operating in an HA (High Availability) environment driven by SLA (Service Level Agreement) compliance, which entails performance and QoS (Quality of Service) requirements. This is an environment where the services’ QoE (Quality of Experience) is paramount.
High efficiency means getting all of that done with a holistic approach (end-to-end systems engineering x TCO, total cost of ownership), optimizing “cost per bit” delivered coupled with “cost per workload.” TCO is key as we operate at the intersection between “effectiveness” and “high efficiency” at any scale: it is imperative to start and stay nimble by keeping virtualization systems from sprawling, inflicting overhead and becoming bloated over time.
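To make those two efficiency metrics concrete, here is a minimal sketch deriving both from the same end-to-end TCO figure. The dollar amount, traffic volume and workload count below are invented for illustration only, not figures from the Lean NFV Ops system:

```python
def unit_costs(tco_usd, bits_delivered, workloads_served):
    """Two Lean efficiency metrics derived from a single end-to-end
    TCO figure: cost per bit delivered and cost per workload served."""
    return tco_usd / bits_delivered, tco_usd / workloads_served

# Hypothetical numbers: $2M TCO, 10^15 bits carried, 500 workloads hosted
per_bit, per_workload = unit_costs(2_000_000, 1e15, 500)
# per_bit is in dollars per bit; per_workload in dollars per workload
```

Tracking both numbers together matters because a system can look efficient per bit while sprawling on a per-workload basis, which is exactly the bloat the paragraph above warns against.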
What’s really exciting is that the end result powers a new generation of dynamic services, spurring innovation and continuous improvement in the process.
By the way, we do have a “time machine” viewer in our system. No joke, this is the one feature that we use to review events and deployments that have already taken place, step by step and in a time-lapse mode.
One more thing… : ) Bell Labs has released a new book, which I look forward to reading. My understanding is that “The Future X Network” addresses the landscape for networking technologies in 2020 and beyond.
“The Future X Network: A Bell Labs Perspective outlines how Bell Labs sees this future unfolding and the key technological breakthroughs needed at both the architectural and systems levels. Each chapter of the book is dedicated to a major area of change and the network and systems innovation required to realize the technological revolution that will be the essential product of this new digital future.”
Where to meet next?
- Mexico City workshops, October 26-28
- Carrier Network Virtualization. Palo Alto, November 30 – December 3.
See you there ; )
“The GSMA’s Regional Interest Groups provide forums in which the mobile industry can discuss and address issues that are specific to particular regions or the regional angle on global issues. The GSMA has Regional Interest Groups in Europe, Latin America, Asia, the Arab World, Africa and North America.” – GSMA Regional Interest Groups.
“GSMA North America is steered by its CTO Advisory Group, which drives the activities of its various technical working groups. These working groups, which meet several times per year, include the Services Working Group, the Smart Card Group, the Terminal Working Group, the Fraud and Security group, the Inter-Working, Roaming Expert Group (IREG), the Billing, Accounting & Roaming Group (BARG) and the Standards & Wireless Alerts Task Force.” – GSMA North America.
“GSMA NA NG (formerly IREG): This group specifies technical, operational and performance issues supporting international roaming. It focuses on the study, from a compatibility and interoperability perspective, of the signaling and inter-working of roaming issues between PLMNs (Public Land Mobile Networks), PSTNs (Public Switched Telephone Networks), ISDNs (Integrated Services Digital Networks) and PPDNs (Public Packet Switched Networks) modes, to define guidelines and test procedures for voice and data services.” – GSMA NA #67.
Last week I attended the GSMA North America – Regional Interest Group (RIG) event in Chicago. Steven Wright, AT&T, moderated the “Operator Panel Discussion,” which followed presentations from his company as well as Sprint and Verizon on Network Functions Virtualization (NFV) and Software Defined Networking (SDN). These talks addressed network operators’ expectations, strategies and progress made. By the way, Steven also discussed the variety of activities involving standards bodies and technical communities.
My tech talk on Lean NFV Ops was scheduled for the following day as part of the Network Group’s (NG) agenda. I would like to thank Nars Haran, NG Track Chair, for all of his help and for extending my session from 60 to approximately 90 minutes given the topic’s interest, following and audience participation. This was the presentation I covered for this event, which I delivered on behalf of our Cloud Innovation Center (CIC) at Alcatel-Lucent:
Note that I have updated page 14 to better show the difference between Present Mode of Operations (PMO) and Future Mode of Operations (FMO). I am glad to share that the room was full and I am grateful for everyone’s positive response. Right after my talk I became engaged in a couple of hallway conversations, which focused on “NFV Deployment Strategies” on page 3, the “ETSI NFV Use Case #5” outlined on page 10, and Lean NFV Ops’ definition on page 18. Please see the table below for questions on additional materials:
Additionally, here is the Lean NFV Ops Roadshow brochure should you be interested in scheduling a live demonstration:
Once again, I’d like to thank everyone attending my tech talk this past week in Chicago and I sincerely appreciate the encouraging emails received to date. Last but not least, there is a need for crediting Jack Kozik’s coaching, his invaluable insights and time spent when preparing for this event.
Crossing the chasm between inventing and innovating has a lot to do with a technology’s diffusion level and depth of adoption. Generally speaking, inventions talk to new forms, functions and applications while innovations have more to do with whether that novelty becomes a game changer.
Innovations qualify as such because of causing a significant industry impact. This is beyond just filing for a patent or making something commercially available. Otherwise, we would just be talking about inventions.
When an emerging technology is first conceived, those of us rallying behind it might do so because we sense and foresee potential. We strive to work with all of the assumptions involved in how it will unfold, evolve and even transform and mutate. But, accidental innovation happens. Moreover, a majority of entrepreneurs would acknowledge that what made a business successful might not necessarily be the source concept they started with.
As Antonio Machado (Spanish poet, 1875-1939) stated in one of his most popular writings: “we make paths as we go.” While NFV (Network Functions Virtualization) has crossed a point of no return and aims to shift from invention to innovation status, we cannot yet benefit from defining maturity levels in hindsight. Moving forward in the midst of changes and uncertainty calls for exercising thought leadership.
NFV qualifies as an emerging technology of great interest in the telecommunications sector, jointly with SDN (Software Defined Networking). When I worked on the above visuals, my goal was to convey a dynamic service delivery environment resembling neural activity, with different but interconnected layers. I see this as an application-driven and constantly morphing system where new connections get instantiated, any needed assets surface just in time and resources get fired up without incurring self-defeating trade-offs; hence the above right chart.
Note that this is in contrast with today’s rigid network systems, where service innovativeness can be either halted or negatively impacted by lengthy lead times and cumbersome operational constraints. Couple that with performance and reliability concerns. As a matter of fact, the way Clayton Christensen defines his “innovator’s dilemma” is worth reviewing and understanding in this context.
Having depicted a desirable vision within the scope of what’s eventually possible, when undertaking technology roadmapping my next batch of questions is more about [a] what fruitful immediate steps can be undertaken now, as well as [b] looking for a sense of direction by outlining the journey… while keeping things flexible and agile enough to pivot as needed. The first question’s answer has to do with the notion of “incremental innovation,” while the second question can be addressed in terms of “disruptive innovation.”
As an example, the journey (above left chart) starts with getting to leverage tools and technologies that currently exist, such as today’s many virtualization projects, which deliver early success stories. In some cases, this just means achieving better asset utilization levels as new virtual machines can be easily created. This circumvents lead times for new equipment orders and, hence, can also translate into faster time to market. Early success stories are like taking baby steps that build confidence and momentum: learning to walk before we can run.
By looking at how to best address [a] and [b], and zooming in and out in the process, we can also make decisions on which projects should be future-proofed at each step, and which ones make sense to keep pursuing as trail-blazing, yet experimental, bets.
About a year later I delivered the other two charts that you see right above this paragraph. We are talking about a turning point where pilot virtualization projects were already expected to level the playing field. Time flies in the cloud age, and table stakes prompt a need for moving up the value chain. This means seeking the kind of advances and differentiation that convert into new competitive advantages… and determining whether first movers set the pace, and whether they can sustain and further their lead to get a better overall deal.
This 2×2 matrix maps out capabilities (vertical axis) and readiness (horizontal axis) progressing from proof of concept prototyping all the way to live deployments. It also helps discuss two other significant chasms based on what it really takes for initiatives in the lower left quadrant to move forward from the labs and pre-production to live environments, as well as whether they evolve upward toward a pure carrier cloud environment, that being the ultimate end goal.
The spiral on the right evolved from a radar chart. Admittedly, I keep toggling between the above spiral and my source radar chart depending on what I need to communicate. When showing the spiral version I discuss a fast-evolving and complete delivery and operations lifecycle. Then, by switching to the radar version, I can plot how far a given project is on any of those axes. What usually follows in that discussion is drawing a polygon connecting the multivariate data and seeing how the resulting shapes can differ across projects due to product management decisions.
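The polygon-drawing step can be sketched with a bit of trigonometry: each axis gets an evenly spaced direction around a circle, and each score becomes a vertex at a proportional radius. The four lifecycle axis names and the 0-5 scores below are hypothetical placeholders, not the actual axes from my radar chart:

```python
import math

def radar_polygon(scores, max_score=5.0):
    """Place one vertex per axis, evenly spaced around a circle, with
    the radius proportional to that axis' score; connecting the vertices
    in order yields the project's radar polygon."""
    n = len(scores)
    return [
        ((s / max_score) * math.cos(2 * math.pi * i / n),
         (s / max_score) * math.sin(2 * math.pi * i / n))
        for i, s in enumerate(scores)
    ]

# Hypothetical project scored 0-5 on four lifecycle axes
scores = {"design": 4, "deploy": 3, "operate": 2, "improve": 1}
polygon = radar_polygon(list(scores.values()))
```

Plotting several projects’ polygons on the same axes makes the shape differences the paragraph mentions immediately visible.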
Last but not least, here is another multivariate view for a more recent talk, with observations and insights linking how infrastructure, analytics, management systems and services evolve and, once again, where a given project might lie. This time around, my goal was to lead the discussion with four hot and attention-grabbing topics: cloud fabrics, big data, automation and software-defined elements as a service.
This is an animated chart uncovering a column for each topic. Once completed, it becomes easier to engage in a meaningful conversation on the bigger picture, where these four pillars turn out to be interdependent. The above display was the result of a whiteboarding exercise where a fifth column outlined ecosystem items and a sixth one was dedicated to human factors and organizational behaviors.
You can see these and other charts in context as part of Alcatel-Lucent presentations such as:
These materials are also available in the “content” section of this blog.