“Argyle Executive Forum is bringing together senior digital & IT executives from a variety of industry verticals for our biannual CIO Chicago Forum. Throughout a full day of content and networking, we will focus on the most pressing issues facing IT executives with regards to leading the business through digital transformation, with an agenda geared specifically towards Chief Information Officers, Chief Data Officers, Chief Digital Officers, as well as Data/Analytics/MIS VPs, Directors, and Architects in a leading role.”
It is worth noting that this event featured partners we work with, such as HP Enterprise (Thought Leader Sponsor) and IBM (Breakout Session Sponsor).
That speaks to the criticality of collaborative undertakings as Digital Transformation becomes a pressing objective across industry, academia, public service and government.
What follows are my notes and personal insights. While all the sessions and discussions were quite relevant, I would like to highlight the opening keynote, which set the tone and narrative for the event.
James P. MacLennan, SVP & CIO at IDEX, discussed “The Five Components of a Great Digital Strategy,” which addressed the fact that “Design Thinking”, “Human Factors” and a collaborative culture involving interdisciplinary workstyles and “Great Teams” have become of the essence.
Moreover, he stated that “a Digital Business will only succeed when it understands how to connect with people.” The “human element” and, therefore, “people centered” strategies turn out to be critical success factors.
I would like to add that this entails engineering a continuum of (a) stakeholders, who are all human personas by definition, and to do so across (b) UX (user experience) and CX (customer experience) domains.
This job takes (c) a holistic understanding of customer facing (front end) and resource facing (back end) elements forming a coherent end-to-end system. Otherwise, operational fragmentation will take a toll and will deny the intended DX benefits.
James’ presentation displayed the convoluted UI (user interface) shown in this picture to illustrate the paradox of well-intended yet counterproductive implementations that negate transformation initiatives.
Here is another valuable insight coming out of Argyle’s Executive Forum: information technologies (IT) and the technologies and processes for operations can no longer be worlds apart, which demands superb cross-functional teamwork.
Cognitive overload, deficient information architecture, and poor usability translate into human error, risk aversion, costly budget overruns, and missing or deviating from goals, among other problems.
Any of these issues, let alone several combined, can silently impact quality or simply lower the bar as a business muddles through noisy and cluttered operational environments. That is hardly the stuff that operational excellence calls for.
Obviously, in the context of CX, customer satisfaction becomes harder and harder to attain, and harder still to attain consistently.
Predictability and consistency are key objectives for any Quality Management program. If that scenario alone wasn’t troublesome enough, Customer Delight (rather than just satisfying agreed upon requirements) is Design Thinking’s ultimate performance indicator, which commands a premium clearly beyond reach under those circumstances.
Quality management wise, “satisfaction” is the fulfilment of expected specifications while “delight” is about great pleasure, or great satisfaction if you will. “Satisfaction” can be rationalized and is the acceptance ticket to be in business. “Delight” accounts for human affects (emotions) and is a powerful source of differentiation. Those who think that’s just about splitting hairs should take a pause and think twice because DX is set to enable game changing experiences on all counts and fronts.
Throughout the forum, session after session, Jim’s “Design for Humans” principle gained critical mass as presenters and panelists discussed why we should be mindful of the user journey and how best to improve all touch points along the way.
In one of the panel discussions this became even more evident when a question on aligning people, processes and technologies pointed to difficult prioritization exercises. Note that there was immediate consensus on the need to put people first and humanize technology and processes by applying Design Thinking, a human-centered methodology that is a cornerstone of the work of creative technologists.
That means projects that are driven by clear missions and specific experiential outcomes and lifecycles (Goal Directed Design) rather than just an I/O approach. It also means rapid experience prototyping and A/B multivariate testing to explore possibilities since Design Thinking is a serial innovation engine.
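For those who like to see the arithmetic behind that kind of A/B testing, here is a minimal sketch of a two-proportion z-test; the conversion numbers and the helper function are purely illustrative, not from any project discussed at the forum:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, p_value) for H0: variants A and B convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: prototype B converts 460/4000 vs. A's 400/4000
z, p = two_proportion_z(400, 4000, 460, 4000)
print(z, p)  # z ≈ 2.17, p ≈ 0.03 → significant at the 5% level
```

A test like this is what turns “prototype B feels better” into a defensible product decision.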
Chicago’s NPR station aired a rerun of “The Power of Design” this past weekend. The discussion was centered on “How Can We Design For A Better Experience.”
By the way, TED’s acronym actually stands for the convergence of Technology, Entertainment and… Design.
Interview with Tony Fadell, one of the main designers of the iPod (Apple) and founder of Nest (Google).
“Design begins by also noticing all those little problems that many ignore (…) we go through our lives accepting these design flaws that actually don’t improve our lives.”
“Steve Jobs challenged us to see our products through the eyes of the customer, the new customer, the one that has fears and possible frustrations, and hopes and exhilaration that the new technology can work straight away for them. He called it “staying beginners” and wanted to make sure that we focused on those tiny little details to make things work faster and seamless for the new customers.”
“There is this positive emotional momentum that builds on itself at each step of the process (…) when you hit a brick wall you lose all the momentum (…) and throw away an entire great experience.”
“There are two halves to design, just as there are two halves to your brain, the emotional part and the rational part. If you want people to truly adopt your product it has to have an emotional component, something that grabs you (…) that unlocks your curiosity; it also needs to rationally work (…) because people see value beyond the sexiness.”
Interview with Joe Gebbia, Airbnb cofounder.
“Any time that you see duct tape in the world, that’s a design opportunity (…) it’s an indicator that something is broken, that something did not perform the way it was designed to and that there is an opportunity to improve it.”
“Design is the key to (Airbnb) success (…) and as a competitive advantage, design is the thing that can separate you (…) the next thing that can differentiate you. All things being equal, two comparable products side by side with the same technical features and components… you would be crazy to choose the one that is harder to use.”
“Airbnb’s design decisions not only made the service easy to use but also helped millions of complete strangers trust each other (…) and open their homes (…) design is more than the look and feel of something, it is the whole experience.”
“One area receiving a lot of focus this cycle is NFV (Network Functions Virtualization). We’ve started an upstream NFV sub-team for OpenStack that is tracking and helping to drive requirements and development efforts in support of NFV use cases […] The main consumers of NFV are Service providers (telecommunication providers and the like) who are looking to accelerate the deployment of new network services, and to do that, need to eliminate the constraint of slow renewal cycle of hardware appliances, which do not autoscale and limit their innovation. […] The opportunities for OpenStack in the NFV space appear to be huge.”
Left: with CloudBand’s Guy Shemesh at Alcatel-Lucent’s Tech Symposium’s demo station.
Right: Bell Labs “Networked Cloud” demonstration presented at Tech Symposium – Silicon Valley.
I just finished listening to Red Hat’s Nicolas Lemieux and CloudBand’s Idan Green who delivered a 30 minute webinar on OpenStack for NFV. This is worth watching. Here is the link to CloudBand’s NFV Mashup Series, which is hosted by Valerie Noto. On that webpage you will find this and 9 other presentations at the time of posting this article.
Today’s webinar reminded me of a Bell Labs project that we unveiled at Mobile World Congress in 2012 and further developed for Alcatel-Lucent’s Tech Symposium Silicon Valley. Bell Labs’ “Networked Cloud” PoC (Proof of Concept) helped illustrate benefits behind distributed “cloud-and-network” systems while taking full advantage of CloudBand’s management system and cloud nodes.
I ended up conducting quite a few demonstrations of this project for network operators and industry and financial analysts because predictive analytics, fueled by smart algorithms, cleverly figured out where best to place a given load at any given time. This exercise factored in both cloud node and network capacity, resource optimization practices, and the actual application requirements and load impact, coupled with deterministic behaviors subject to SLAs (Service Level Agreements).
There were several use cases worth considering. Demonstration wise, it made sense to first focus our conversation on the one that could be best visualized and experienced. As an example, sudden demand growth led to the automatic spinning up of VMs (Virtual Machines), onboarding the right applications, instantiating and deploying a given service (enterprise productivity and collaboration applications in that demo), and scaling in the process.
- This scenario’s narrative spoke to taking down silos and gaining visibility to improve both server utilization levels and network capacity, all under a centralized management system such as CloudBand. This assumed dramatically shorter lead times, more efficient power consumption and, subsequently, higher ROA (Return on Assets). The wow factor, though, was delivered by operating under QoS (Quality of Service) parameters, such as latency constraints with an SLA in place, as a result of intelligently placing loads at the edge of the network, closer to the end user, for performance’s sake.
- Concepts such as monitoring, data correlation, predictive analytics and service continuity came to the surface in a second use case. It is worth emphasizing that both use cases take advantage of the distributed nature of the networked cloud paradigm, which the above map (right screen) helped visualize as the demo progressed.
- This second use case showed what specific node-and-link combination would be best performing at the time of re-instantiating an application. The objective was to prevent service degradation when network traffic worsens for any reason. There were A/B comparison scenarios facing the same issues, such as a network link being compromised.
- “A” showed the known behavior of a conventional architecture where the user experience would either be negatively impacted or, alternatively, addressed by means of costly and lengthy over-engineering and, therefore, extremely poor and self-defeating ROA.
- “B” presented the benefits of distributed systems under the “networked cloud” paradigm, where performance was sustained in an unparalleled cost efficient fashion with loads dynamically placed and relocated as needed; all being back-end stuff that is completely transparent to the end user.
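To make the placement idea concrete, here is a toy sketch in Python; the node names, capacities and latency figures are invented for illustration, and the actual CloudBand algorithms were considerably richer than this greedy filter-and-pick:

```python
def place_load(nodes, cpu_needed, max_latency_ms):
    """Pick the best cloud node for a workload.

    nodes: list of dicts with 'name', 'free_cpu' and 'latency_ms'
    (latency measured toward the end user).
    Returns the lowest-latency node that satisfies both the capacity
    need and the SLA latency bound, or None if nothing fits.
    """
    candidates = [n for n in nodes
                  if n["free_cpu"] >= cpu_needed
                  and n["latency_ms"] <= max_latency_ms]
    if not candidates:
        return None  # upstream logic would scale out or renegotiate the SLA
    return min(candidates, key=lambda n: n["latency_ms"])["name"]

# Hypothetical topology: one central data center, two edge nodes
nodes = [
    {"name": "core-dc", "free_cpu": 64, "latency_ms": 40},
    {"name": "edge-a",  "free_cpu": 8,  "latency_ms": 5},
    {"name": "edge-b",  "free_cpu": 2,  "latency_ms": 7},
]
best = place_load(nodes, cpu_needed=4, max_latency_ms=20)
print(best)  # edge-a: it fits the load and is closest to the end user
```

The same filter-then-optimize shape generalizes to the A/B scenario above: when a link is compromised, the latency figures change and the placement decision changes with them, transparently to the end user.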
More recently, our EPC (Evolved Packet Core) team conducted a similar NFV demonstration at Mobile World Congress 2014 where the end user’s mobile experience featured video streaming instead. NFV’s distributed architecture is key to managing not just service continuity and self-healing, but also: resource isolation in multi-tenant environments, security, RAS (Reliability, Availability and Serviceability) and overall service delivery and lifecycle assurance under SLA.
Some other use cases are related to regulatory compliance, which can involve: lawful intercept, local data protection mandates, as well as regional coverage requirements and engineering for no-single-point-of-failure.
Source: courtesy of Alcatel-Lucent and Red Hat. CloudBand’s NFV Mashup Series #10.
FOSS (Free Open Source Software) is becoming a de facto standard in the telecommunications industry. Some years ago, my team used Eucalyptus to deploy and manage cloud computing infrastructure. We needed to create a number of virtual machines, and that initiative helped us work in a hybrid, AWS-compatible (Amazon Web Services) environment. When projects became more focused on communication networks we took advantage of CloudStack, which is also positioned as turnkey IaaS (Infrastructure as a Service). Here is a link to a presentation discussing CloudStack in the context of NFV.
More recently, OpenStack has made significant inroads in this nascent space and is part of trials for virtual: CPE (Customer Premises Equipment), CDN (Content Delivery Network), DNS (Domain Name System), AAA (Authentication, Authorization, Accounting), SBC (Session Border Controller), EPC (Evolved Packet Core), and IMS (IP Multimedia Subsystem). In many cases NFV’s MANO (Management and Orchestration) interfaces directly with OpenStack, and in some others that is the job of the application’s EMS (Element Management System, or virtual equivalent), depending on the workflow.
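As a rough sketch of that workflow split (all names and fields below are hypothetical; real implementations follow the ETSI NFV MANO reference points), the routing decision can be thought of as:

```python
def dispatch_lifecycle_request(vnf, action):
    """Route a VNF lifecycle action either straight to the VIM
    (e.g. OpenStack) or through the VNF's own EMS, depending on
    the workflow declared for that VNF."""
    if vnf["workflow"] == "direct":
        return f"VIM/OpenStack: {action} {vnf['name']}"
    return f"EMS({vnf['ems']}): {action} {vnf['name']}"

# Hypothetical examples of both paths
via_ems = dispatch_lifecycle_request(
    {"name": "vIMS-1", "workflow": "ems", "ems": "ims-ems"}, "scale-out")
direct = dispatch_lifecycle_request(
    {"name": "vCDN-1", "workflow": "direct"}, "instantiate")
print(via_ems)
print(direct)
```

The point of the sketch is simply that the orchestrator does not own a single path to the infrastructure: which component drives OpenStack is itself a per-application design decision.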
So, is OpenStack enough to support NFV? I presented Bell Labs’ “Networked Cloud” research demo as an example where “OpenStack-as-is” is not yet equipped to address NFV’s own challenges. To be more specific, we are talking about those presented by distributed carrier cloud systems, sophisticated networking, more complex transactions, CPU-intensive packet processing and high availability in multi-tenant environments.
As discussed by Nicolas in today’s webinar, NFV injects workload dependencies spanning control and data planes, signal processing, and storage, along with stricter requirements for performance, determinism and RAS. These items impact the projects shown in the upper part of the above graphic, and there are OpenStack Foundation teams looking into them:
Table source: OpenStack NFV Use Cases.
Note that Red Hat is also addressing KVM (Kernel-based Virtual Machine) as the open source hypervisor, which creates and runs the VMs; supports libvirt for node management APIs beyond what hypervisors provide; and works with DPDK, Intel’s Data Plane Development Kit, whose drivers accelerate packet processing on x86 platforms.
What follows is the architecture of our integrated joint solution aiming to bring together the best of carrier and IT (Information Technologies) worlds with NFV in mind. This also takes SDN (Software Defined Networking) into consideration.
Picture source: courtesy of Alcatel-Lucent and Red Hat. See CloudBand’s NFV Mashup Series #10.
“Both companies are creating the basic building blocks of a distributed cloud based on OpenStack and the foundational infrastructure for a best of breed open NFV Platform. Red Hat and CloudBand are set to help accelerate NFV for service provider networks.”
I am glad to share that I will be presenting, as well as joining a panel discussion, at Software Telco Congress. I picked the two topics below because the more we talk about operations and making things happen in the context of NFV (Network Functions Virtualization), the more making the business case and understanding behavioral economics become of the essence.
When working on emerging technologies, technological prowess alone might not move the needle close enough to the tipping point. The fact is that “brain-ware” and organizational dynamics can be overlooked and, in turn, become harder to address than figuring out and debating hardware and software roadmaps.
Presentation: The Impact of NFV on Service Provider Economics.
Tuesday, 08/12/14 10:00-10:45am
“While many of the questions around NFV focus on the roadmap to achieving a virtualized network infrastructure, the ultimate question is how will it impact the service provider from an economic perspective. Without that proof point, it’s difficult to make the case for NFV. What kind of impact should operators expect NFV to have on OPEX? What markets will NFV open for SPs to pursue with greater success? Will the introduction of NFV open up the opportunity for new lines of revenue and new service offerings? How will this transition impact end customers in both SMB and Enterprise markets?”
Panel discussion: Making the Transition to Software – Are We Ready?
Wednesday, 08/13/14 10:45-11:30am
“One of the barriers to achieving the software telco is the support of the existing infrastructure, especially at the edge of the network, and while most providers can ill afford to make wholesale network changes for fear of negatively impacting revenues, they also realize the move to software is a necessity. How can new SDN and NFV technologies be deployed? What strategies are being discussed and which make sense and which don’t? There are a number of opportunities that do not require service providers to boil the proverbial ocean.”
Looking forward to seeing you at Software Telco Congress, co-located with IT EXPO, which is promoted as the “business technology” event.
Crossing the chasm between inventing and innovating has a lot to do with a technology’s diffusion level and depth of adoption. Generally speaking, inventions talk to new forms, functions and applications while innovations have more to do with whether that novelty becomes a game changer.
Innovations qualify as such because they cause a significant industry impact. This goes beyond just filing for a patent or making something commercially available. Otherwise, we would just be talking about inventions.
When an emerging technology is first conceived, those of us rallying behind it might do so because we sense and foresee potential. We strive to work with all of the assumptions involved in how it will unfold, evolve and even transform and mutate. But, accidental innovation happens. Moreover, a majority of entrepreneurs would acknowledge that what made a business successful might not necessarily be the source concept they started with.
As Antonio Machado (Spanish poet, 1875-1939) stated in one of his most popular writings: “we make paths as we go.” While NFV (Network Functions Virtualization) has crossed a point of no return and aims to shift from invention to innovation status, we cannot yet benefit from defining maturity levels in hindsight. Moving forward in the midst of changes and uncertainty calls for exercising thought leadership.
NFV qualifies as an emerging technology of great interest in the telecommunications sector, jointly with SDN (Software Defined Networking). When I worked on the above visuals, my goal was to convey a dynamic service delivery environment resembling neural activity with different but interconnected layers. I see this as an application-driven and constantly morphing system where new connections get instantiated, any needed assets surface just in time and resources get fired up without incurring self-defeating trade-offs, hence the above right chart.
Note that this is in contrast with today’s rigid network systems, where service innovation can be halted or negatively impacted by lengthy lead times and cumbersome operational constraints. Couple that with performance and reliability concerns. As a matter of fact, the way Clayton Christensen defines his “innovator’s dilemma” is worth reviewing and understanding in this context.
Having depicted a desirable vision within the scope of what’s eventually possible, when undertaking technology roadmapping my next batch of questions is more about [a] what fruitful immediate steps can be undertaken now, as well as [b] looking for a sense of direction by outlining the journey… while keeping things flexible and agile enough to pivot as needed. The first question’s answer has to do with the notion of “incremental innovation,” while the second can be addressed in terms of “disruptive innovation.”
As an example, the journey (above left chart) starts with getting to leverage tools and technologies that currently exist, such as today’s many virtualization projects, which deliver early success stories. In some cases, this just means achieving better asset utilization levels as new virtual machines can be easily created. This circumvents lead times for new equipment orders and, hence, can also translate into faster time to market. Early success stories are like taking baby steps that build confidence and momentum: learning to walk before we can run.
By looking at how to best address [a] and [b], and zooming in and out in the process, we can also decide which projects should be future proofed at each step, and which ones make sense to continue pursuing as trailblazing, yet experimental, initiatives.
About a year later I delivered the other two charts that you see right above this paragraph. We are talking about a turning point where pilot virtualization projects were already expected to level the playing field. Time flies in the cloud age, and table stakes prompt a need to move up the value chain. This means seeking the kind of advances and differentiation that convert into new competitive advantages… and asking whether first movers set the pace, can sustain or extend it, and get a better overall deal.
This 2×2 matrix maps out capabilities (vertical axis) and readiness (horizontal axis) progressing from proof of concept prototyping all the way to live deployments. It also helps discuss two other significant chasms based on what it really takes for initiatives in the lower left quadrant to move forward from the labs and pre-production to live environments, as well as whether they evolve upward toward a pure carrier cloud environment, that being the ultimate end goal.
The spiral on the right evolved from a radar chart. Admittedly, I keep toggling between the above spiral and my source radar chart depending on what I need to communicate. When showing the spiral version I discuss a fast-evolving, complete delivery and operations lifecycle. Then, by switching to the radar version, I can plot how far a given project is along any of those axes. What usually follows in that discussion is drawing a polygon connecting the multivariate data and seeing how the resulting shapes differ across projects due to product management decisions.
Last but not least, here is another multivariate view for a more recent talk with observations and insights linking how infrastructure, analytics, management systems and services evolve and, once again, where a given project might lie. This time around, my goal was to lead the discussion with four hot, attention-grabbing topics: cloud fabrics, big data, automation and software defined elements as a service.
This is an animated chart uncovering a column for each topic. Once complete, it becomes easier to engage in a meaningful conversation on the bigger picture, where these four pillars turn out to be interdependent. The above display was the result of a whiteboarding exercise where a fifth column outlined ecosystem items and a sixth was dedicated to human factors and organizational behaviors.
You can see these and other charts in context as part of Alcatel-Lucent presentations such as:
These materials are also available in the “content” section of this blog.
“Mobile World Congress set additional records, with more than 1,800 exhibiting companies showcasing cutting-edge products and services across 98,000 net square meters of exhibition and hospitality space. More than 3,800 international media and industry analysts attended the event to report on the many significant industry announcements made at the Congress. Preliminary independent economic analysis indicates that the 2014 Mobile World Congress will have contributed more than €356 million and 7,220 part-time jobs to the local economy.”
“The four-day conference and exhibition attracted executives from the world’s largest and most influential mobile operators, software companies, equipment providers, Internet companies and companies from industry sectors such as automotive, finance and healthcare, as well as government delegations from across the globe. Over 50 per cent of this year’s Mobile World Congress attendees hold C-level positions, including more than 4,500 CEOs and 18 per cent of attendees for the 2014 show were women.”
This year’s Mobile World Congress registered in excess of 85,000 attendees, an 18% increase over last year’s record attendance.
At Alcatel-Lucent’s booth our NFV (Network Functions Virtualization) teams handled more than 300 private demonstrations, all by invitation only and most scheduled in advance of MWC. That not only signals strong interest in cloud computing in the telecommunications industry, but also our company’s leading edge in the carrier cloud space.
We have already crossed the chasm between researching the art of what’s possible and reality. This is my fourth consecutive year discussing cloud computing technologies for network operators at MWC. Our head start has equipped us with know-how and compelling solutions, which our customers are deploying today.
This was a very successful MWC for our teams, which called for a celebration after MWC closed its doors to the public (right picture).
Earlier in the day I visited Catalonia’s booth, which promoted a number of local start-ups. In the afternoon I joined the SCTC (Society of Communications Technology Consultants) for “Consultants Day” in Barcelona, an off-site event sponsored by Banc de Sabadell and Argelich Networks.
This conference’s presentations involved consultants and service providers, as well as a thought-provoking panel discussion on cloud computing. Among many other topics of interest, I was glad to hear Orange’s J. Nabet state the need for SLAs (Service Level Agreements) engineered to take care of both cloud and network services, a key differentiator for network operators.
I would like to thank the organizers and, especially, A. Argelich for inviting me to join the conference’s speakers and continue discussions over dinner.
Another intensive day at Alcatel-Lucent’s booth, packed with back-to-back meetings with service providers, press, analysts and public officials at our demo station.
Our discussions on the Cloud Communications Platform ranged from deep dives on what it takes to implement and operationalize NFV today to conversations on cloud fundamentals in the context of telecommunication networks. This signals that the industry is no longer wondering whether to adopt cloud technologies. Our customers are now zeroing in on what it takes to make things happen and which specific use cases make the most sense to start with.
Later that evening, Alcatel-Lucent’s CloudBand held the “NFV Mixer” co-sponsored by Intel, a reception for ecosystem partners in the hospitality section of our booth.
Opening day. Alcatel-Lucent’s MWC14 slogan is “Mobile Meets Cloud.”
Our booth was designed to host a number of demonstration stations. Two of them were exclusively dedicated to NFV: vIMS (virtual IP Multimedia Subsystem) and vEPC (virtual Evolved Packet Core), both supported by CloudBand’s Management System and Cloud Node.
Telefonica’s and China Mobile’s own press releases featured Alcatel-Lucent NFV solutions. Our customers also followed up on our collaboration with Intel, announced just the day before, a program that aims to accelerate the development of key cloud technologies such as data plane acceleration.
That night I attended TechCrunch’s & BoB gathering at El Palauet, an Art Nouveau mansion built in the early 1900s. I would like to thank Deutsche Telekom’s J. Noronha and Concise Software’s L. Hostynski for sharing interesting insights on emerging technologies and markets. This event focused on connecting influencers from around the world with innovators in town for MWC.
Our team met with M. Combes, Alcatel-Lucent’s CEO, the day before MWC’s official opening.
The Cloud Communications Platform experience involved two large touch screens portraying a live demonstration (left screen) and interactive infographics (right screen). While focusing on value and DevOps (Development & Operations), we also took care to exemplify end user benefits, which were clearly illustrated with tablets and real-time communications applications.
That night I also enjoyed a good conversation with C. Chappelle at Heavy Reading’s gathering. She is a well respected industry analyst whose research focuses on the service provider’s adoption of cloud computing technologies. I would like to thank Heavy Reading for their kind invitation and S. Reedy for letting me know about Chicago’s “Big Telecom” event, scheduled for June of this year.
MWC remained under construction and we all were asked to comply with safety rules. We conducted demo dry runs supported by M. Gerdisch and S. Furge of Alcatel-Lucent’s CIC (Cloud Innovation Center).
Our Cloud Communications Platform was ready for prime time, a solution featuring vIMS and the NFV Platform with SDN (Software Defined Networking). Given its sophistication, we could have deployed three standalone demonstrations instead. In hindsight, showing how these technologies come together to deliver greater value for our customers was the right approach.
Landed in Barcelona and partnered with D. Johnson, IMS Virtualization & Cloud Enablement Head, at Alcatel-Lucent’s booth, which was under construction. Our objective that day was to sync on the latest developments and to discuss messaging updates for MWC.
In my first evening in Barcelona I was fortunate to make it to my high school’s 50th Anniversary celebration. I couldn’t be happier about reconnecting with faculty and joining distinguished guests such as I. Rigau, Minister of Education with the Government of Catalonia, and J. Jané, Vice President, Parliament of Spain. I would like to thank Principal Fr. M. Muñoz for his kind invitation.
Click below to access my photo set on Flickr.
Greetings from London.