The preceding post on QXbD research notes – Part 1.1 shared a retrospective with insights from my early college years, which were influenced by Bruno Munari‘s “projected methodology” and the Bauhaus‘ design principles.
Research Notes Part 1.2 (this post) takes me back to BarcelonaTech’s school of engineering in the early 90s, which I joined to study Human Factors Engineering while pursuing my last year of Industrial Design at Escola Massana, an art & design school.
Those days, Donald Norman’s “The Psychology of Everyday Things” and Henry Petroski’s “The Evolution of Useful Things: How Everyday Artifacts Came to Be as They Are” became must-read books for anyone interested in seeing thoughtful design principles in a new light. Norman was an Apple Fellow and became VP of the Advanced Technology Group in the mid 1990s. He popularized the term User Experience.
Petroski was an engineer whose best known work focuses on failure analysis. He stated that the best Industrial Design involves “seeing into the future of a product” and that Human Factors Engineering is concerned with “how anything will behave at the hands of its intended and not intended users.” Here is a summary of some of his design principles:
- Tools make tools.
- Artifacts multiply and diversify in an evolutionary way.
- There is always room for improvement.
- Good can be better than best.
- Efficacy can be subjective, want overpowers need.
- Form follows failure: inventors should be technology’s severest critics.
- Focusing on different faults means different solutions to the same problem.
- Engineering is invention institutionalized.
- Sometimes it is about a new job, sometimes about a better or faster job.
“Though the best designs deal successfully with the future, that does not mean they are futuristic […] There is an apparent reluctance to accept designs too radically different from what they claim to supersede […] if things are redesigned too dramatically, the function that they perform can be less obvious”.
“Loewy summarized the phenomenon by using the acronym MAYA, standing for most advanced yet acceptable. Dreyfuss emphasized the importance of a survival form, thus making the unusual acceptable to many people who would otherwise reject it […] [Industrial Designers] have learned to strive for a delicate balance between innovation in order to create interest, and reassuringly identifiable elements”.
Donald Norman pointed to design issues that lead to human error and, unfortunately, make users blame themselves in the process. He claimed that the “paradox of technology” takes effect when added functionality comes with unwanted complexity, which denies the sought-after benefits. These are some of his design principles:
- Design should be user-centric and consistent.
- Identify the true root cause of a problem.
- Well designed products teach the user how to use them.
- Make things visible, give clear clues, enough information and feedback.
- Get mapping and system state right, simplify task structure.
- Design for error, exploit the powers of constraint.
- Make it possible to reverse actions, and make it harder to do what cannot be reversed.
Following up on the topic of technology’s paradoxes, it is worth reviewing Geoffrey A. Moore’s “Crossing the Chasm“, which was published in 1991. He explored why emerging technologies often fail to take hold.
There can be a deep chasm between enthusiasts and early adopters and the broader user groups shaping the mass market. Avoiding the Valley of Death starts with understanding the adoption lifecycle: different user groups come along with different expectations. That prompts the need to design specific transitions and adaptations.
“Whole Product R&D […] begins not with creative technology but with creative market segmentation. It penetrates not into protons and processes but rather into habits and behaviors […] it implies a new kind of cooperation between organizations traditionally set apart from each other.”
HUMAN-MACHINE-SYSTEM DESIGN PRINCIPLES
BarcelonaTech’s teaching addressed Human-Machine Systems as an interdisciplinary undertaking. Human dynamics entailed the study of individuals and collectives such as teams and organizations. That encompassed the following disciplines, each with its own strengths and limitations:
- Psychology – skills, cognitive appraisal and workload, workstyles…
- Physiology – form factors, motions, anthropometry, biomechanics…
- Social Sciences – teamwork, organizational behaviors, culture…
Tools and machines involved hardware and software components. HMS’ holistic approach consistently tackled end-to-end solutions, placed in context and in specific physical environments. The sought-after outcomes of “Designing for People” zeroed in on:
- The delivery of capable, high performance systems as defined by productivity, effectiveness and efficiency metrics, and success rates.
- Designing for users’ wellbeing and safety.
- Recognizing that human error is often a consequence of poor design.
- Addressing the broadest user base possible, typically set at 95% coverage with adaptations, accounting for diversity rather than designing for just averages.
- Extreme case and stress testing, factoring in life-long / lifecycle changes as solutions evolve and/or can be deployed in other contexts and environments.
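The 95% coverage principle above can be made concrete with percentile-based anthropometric design. The sketch below is illustrative only: it assumes a normally distributed body dimension and hypothetical stature numbers, not data from any real survey.

```python
from statistics import NormalDist

def design_range(mean, sd, coverage=0.95):
    """Return the (low, high) band of a normally distributed body
    dimension that covers the central `coverage` fraction of the
    population -- designing for this band, rather than the average
    alone, accounts for user diversity."""
    tail = (1 - coverage) / 2           # probability left in each tail
    nd = NormalDist(mu=mean, sigma=sd)
    return nd.inv_cdf(tail), nd.inv_cdf(1 - tail)

# Hypothetical stature data in cm: mean 170, standard deviation 9.
# 95% central coverage means designing from the 2.5th to the 97.5th
# percentile, with adaptations for users outside that band.
low, high = design_range(170, 9, coverage=0.95)
```

A reach dimension would typically be designed to the low percentile and a clearance dimension to the high one, which is why the band matters more than the mean.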
We followed this iterative methodology, starting with due diligence on:
- Initial problem statement and goal setting.
- Operations assessment: use cases’ current state / present mode.
- User Taxonomy and Analysis: jobs, tools, work motion studies (tasks, workflows, success and failure rates) often relying on instrumentation.
- Data collection, processing, analysis and insights.
- Identification of value based activities, waste and risks.
- Critical success factors and possible scenarios at play.
- Information, process, hardware and software specifications.
- Contextual and environmental considerations.
The next phase focused on Human-Machine-System design, including all relevant subsystems and interactions across them:
- Operations review: new target state and mode.
- Interaction Matrix* correlating human and design factors.
- Prioritization criteria and conflict resolution.
- Job and process streamlining, often leading to redesign, or new design.
- Goal setting based on metrics optimizing for system wide operability.
- Iterative improvement cycling through experiments, prototyping, simulations and testing.
The *Interaction Matrix correlated human factors (rows) for a given design option with the following “realization” factors (columns), scoring the degree to which each relationship was weak, medium or strong.
- Customer acceptance criteria.
- Operability levels, including safety.
- Conformance with functional requirements.
- Reliability and performance levels, as well as maintenance.
- Productization feasibility and costs.
- Aesthetics and affective considerations.
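The Interaction Matrix lends itself to a simple data-structure sketch. The factors and cell scores below are hypothetical, and the 1-3-9 weighting is an assumption borrowed from common QFD practice, not something the original matrix prescribed.

```python
# QFD-style weights for weak / medium / strong relationships (assumed).
STRENGTH = {"weak": 1, "medium": 3, "strong": 9}

REALIZATION_FACTORS = [
    "customer acceptance", "operability & safety", "functional conformance",
    "reliability & maintenance", "productization cost", "aesthetics & affect",
]

def score(matrix):
    """Sum weighted relationship strengths per realization factor,
    showing where one design option concentrates its human-factors impact."""
    totals = {col: 0 for col in REALIZATION_FACTORS}
    for (human_factor, realization_factor), strength in matrix.items():
        totals[realization_factor] += STRENGTH[strength]
    return totals

# Hypothetical cells for one design option: (human factor, realization factor).
option_a = {
    ("cognitive workload", "operability & safety"): "strong",
    ("reach and posture", "operability & safety"): "medium",
    ("error tolerance", "reliability & maintenance"): "strong",
    ("cognitive workload", "customer acceptance"): "weak",
}
totals = score(option_a)
```

Comparing `totals` across design options makes the prioritization and conflict-resolution step above a side-by-side numeric exercise rather than a purely visual one.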
A quick reminder that this article is still discussing topics set all the way back in the early 90s. Those days, Total Quality Management (TQM) and Lean led the way. Note that ISO 9000 standards had first been released in 1987.
The top three key values were: Customer Intimacy, Operational Excellence and Product Leadership:
“customer intimacy: tailoring offerings to match demand […] detailed customer knowledge with operational flexibility […] customizing a product and fulfilling special requests […] engendering tremendous customer loyalty“.
“operational excellence: providing customers with reliable products or services at competitive prices and delivered with minimal difficulty or inconvenience“.
“product leadership: continuous stream of state-of-the-art products and services. First, they must be creative […] Second, must commercialize their ideas quickly […] business and management processes have to be engineered for speed. Third, product leaders must relentlessly pursue new solutions”.
High operational performance was broken down as follows:
- Productivity & scalability.
- Flexibility & adaptability.
- Mix complexity.
J.M. Juran discussed quality in the context of “Big Q” and “Little Q” where the former addresses a business problem and is all encompassing, while the latter is siloed and focuses on tackling technical issues. Big Q delivers the sort of value that users can appreciate.
Strategic Quality Management was meant to learn from customer experiences, and leveraged House of Quality charts as a design tool.
The first step was to map out a taxonomy of customer attributes (CA) decomposed into primary, secondary and tertiary levels, the latter being the most granular list of customer requirements and expectations… all largely based on surveys and user feedback. This was done for the value chain consisting of end users, consumers, retailers, distributors, regulators, etc. Weightings were set to prioritize attributes based on contextual relevance.
CA items would then be placed on the left-hand rows of the House of Quality chart for the purpose of cross-checking them against technical features shown as column headers. That was done by correlating CAs and engineering characteristics (EC). The resulting center matrix was used to assess which items were positively and negatively impacted, co-variance, and to what extent. Each cell featured icons and color coding for strong, medium and weak relationships.
The pyramidal roof at the top was filled out afterwards to look into technical synergies and conflicts alone: becoming aware of how engineering characteristics interact, and making decisions on optimizations and conscious trade-offs.
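The center matrix mechanics can be sketched in a few lines. Everything here is hypothetical: the attributes, weights and relationships are invented for illustration, and the 1-3-9 scale is a common QFD convention assumed rather than quoted from the House of Quality paper.

```python
# Weak / medium / strong relationship weights (assumed QFD convention).
STRENGTH = {"weak": 1, "medium": 3, "strong": 9}

# Hypothetical customer attributes (rows) with importance weightings.
customer_attributes = {
    "easy to open": 5,
    "stays sealed in transit": 4,
    "looks premium": 2,
}

# Hypothetical (CA, EC) relationships from the center matrix.
relationships = {
    ("easy to open", "lid torque"): "strong",
    ("stays sealed in transit", "lid torque"): "medium",
    ("stays sealed in transit", "seal material grade"): "strong",
    ("looks premium", "surface finish"): "strong",
}

def ec_importance(cas, rels):
    """Weight each CA-EC relationship by the attribute's importance and
    sum per engineering characteristic: the column totals rank which
    ECs matter most to customers."""
    totals = {}
    for (ca, ec), strength in rels.items():
        totals[ec] = totals.get(ec, 0) + cas[ca] * STRENGTH[strength]
    return totals

ranking = ec_importance(customer_attributes, relationships)
```

The column totals are what make the chart a prioritization device: engineering effort flows first to the characteristics with the highest customer-weighted scores.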
SOME OTHER THOUGHTS…
Attending both Art and Engineering schools was a fascinating experience to say the least. The opportunity to cross-pollinate across disciplines could make anyone feel like being in a reenactment of the Renaissance’s blending of arts and sciences.
Both Industrial Design and Human Factors Engineering optimize for the human experience and, therefore, make their professions about “Designing for People”. Technology that does not account for human skills, strengths and limitations, all in context and in the scenarios and environments it will operate under… becomes greatly exposed to failure.
Striving to make designs that fit people’s potential, rather than just expecting users to fit the design… does require an interdisciplinary and iterative practice, with painstaking attention to detail being critical. At that point, it also became clear that addressing the Big Q also had to do with articulating the business value of design.
- D. Norman. The Psychology of Everyday Things. Basic Books, 1988.
- G.A. Moore. Crossing the Chasm. Harper Business, 1991.
- H. Petroski. The Evolution of Useful Things. Vintage Books, 1992.
- J. Krafcik and J. Womack. Triumph of the Lean Production System. MIT Sloan Management Review, 1988. Accessed on May 18, 2019 http://www.lean.org/downloads/MITSloan.pdf
- J.R. Hauser and D. Clausing. The House of Quality. Harvard Business Review, May 1988. Accessed on May 18, 2019 https://hbr.org/1988/05/the-house-of-quality
- T.S. Clark and E.N. Corlett. La Ergonomia de los Lugares de Trabajo y de las Maquinas. Taylor and Francis, 1984.
- M. Treacy and F. Wiersema. Customer Intimacy and Other Value Disciplines. Harvard Business Review, January – February 1993. Accessed on May 18, 2019 https://hbr.org/1993/01/customer-intimacy-and-other-value-disciplines
- House of Quality Template. QFD Online. Accessed on May 19, 2019 http://www.qfdonline.com/templates/
“Together with his identical twin brother, Scott, he has laid the groundwork for the future of space exploration as the subjects of an unprecedented NASA study on how space affects the human body, which is featured in Scott’s New York Times best-selling memoir, Endurance: A Year in Space, A Lifetime of Discovery.”
“Currently, Mark is on the Commercial Crew Safety Board at Space X […] and is the co-founder of World View, a full-service commercial space launch provider.”
Endeavour to Succeed. College of DuPage, Department of Physics. February 14 2019.
I managed to attend Captain Mark Kelly’s talk in Chicago just the day before I was leaving for Barcelona’s Mobile World Congress. M. Kelly’s presence and insightful remarks commanded both admiration and utmost respect.
Among many other fascinating topics, he discussed NASA’s “None of Us is as Dumb as All of Us“ as a reminder of the negative impact of ‘groupthink‘ in the context of faulty decision making. Specifically, he referred to dramatic mistakes leading to the Space Shuttle Columbia disaster, in which the orbiter disintegrated upon re-entry in 2003.
“Large-scale engineered systems are more than just a collection of technological artifacts. They are a reflection of the structure, management, procedures, and culture of the engineering organization that created them.”
“They are also, usually, a reflection of the society in which they were created. The causes of accidents are frequently, if not always, rooted in the organization—its culture, management, and structure.”
“Blame for accidents is often placed on equipment failure or operator error without recognizing the social, organizational, and managerial factors that made such errors and defects inevitable.”
Nancy G. Leveson, MIT. Technical and Managerial Factors in the NASA Challenger and Columbia Losses: Looking Forward to the Future. Controversies in Science and Technology Volume 2, Mary Ann Liebert Press, 2008.
Groupthink is part of the taxonomy of well-known cognitive biases and takes hold when divergent thinking and disagreement are discouraged (and even repressed) as part of group dynamics.
Hindsight is 20/20 and, statistically speaking, ‘black swan’ events are characterized by seemingly random surprise factors. Groupthink can obfuscate the early detection of predictors such as leading outliers and anomalies, which, left unattended, can overwhelm a given system over time… and be the source of cascading effects and critical failure.
Groupthink’s negative impact compromises any best intentions such as organizational cohesiveness in the spirit of consensus, agility, productivity, timely project progress and de-escalation management.
Oftentimes, there is neither adequate situational and risk awareness nor a basis for sense-making drawn from the comparative analysis that comes with diligent scenario planning.
Individuals and organizational cultures with a successful track record can also experience complacency. Over-confidence fosters the sort of behaviors and decision making that served the group well in the past.
Yet, in a changing environment defined by new parameters flying under the radar, operating only within the perimeter of a given set of core competencies and comfort zones turns those same behaviors into blind spots that betray the team’s mission and purpose.
Many plans do not survive first contact (or a subsequent phase for that matter) as their implementation creates ‘ripple effects’ of various shapes and propagating speeds. Some of that can be experienced as ‘sudden risk exposure.’ Once past the ‘point-of-no-return,’ if that challenge is met with neither contingency planning nor the ability to timely course correct, pivot or even deploy a basic safety-net offsetting the impact, the project fails to ‘cross the chasm’ and is headed for what’s technically known as the ‘valley of death.’
This was one of the key issues discussed by Clayton M. Christensen when I took his Harvard class on the ‘Innovator’s Dilemma,’ and is also a key point behind Risto Siilasmaa’s ‘Paranoid Optimism’ as well as Paul Romer’s ‘Conditional Optimism,’ all of which advocate for scenario planning and sensing optimization to be able to calibrate or re-assess the path forward.
“Michael Shermer stated in the September 2002 issue of Scientific American, ‘smart people believe weird things because they are skilled at defending beliefs they arrived at for nonsmart reasons.’”
Groupthink can also manifest itself by means of ‘echo chamber’ effects as misguided consensus amplifies what becomes a “self-serving” bias. That is, in effect, a closed feedback loop process that magnifies logical fallacies. These can come across as reasonable enough postulates, though if based on rushed judgment and selective focus they can also suffer from ‘confirmation bias.’ This is the case when new evidence is only used to back up the existing belief system rather than shed new light.
In the context of Decision Support Systems and Cognitive Analytics, the above reasoning deficits become root causes of errors impacting operations. That can involve both (a) Human-Human and (b) Human-Machine interactions, as well as impacting programming work resulting in (c) biased algorithms and automation pitfalls when left unsupervised.
Carisa Callini. Human Systems Engineering. NASA, August 7 2017. https://www.nasa.gov/content/human-systems-engineering
Carisa Callini. Spaceflight Human Factors. NASA, December 19 2018. https://www.nasa.gov/content/spaceflight-human-factors
Clayton M. Christensen. The Innovator’s Dilemma. Harvard Business Review Press, 1997.
COD Welcomes Astronaut Mark Kelly. Daily Herald, February 13 2019. https://www.dailyherald.com/submitted/20190201/cod-welcomes-astronaut-mark-kelly-feb-17
Geoffrey Moore. Crossing the Chasm. Harper Collins, 1991.
MIT Experts Reflect on Shuttle Tragedy. MIT News, February 3 2003. http://news.mit.edu/2003/shuttle2
Tim Peake. The Astronaut Selection Test Book. Century. London, 2018.
Scott Kelly. Endurance: A Year in Space, a Lifetime of Discovery. Knopf. New York, 2017.
Scott Kelly. Infinite Wonder. Knopf. New York, 2018.
Steve Young. Astronaut: ‘None of Us is as Dumb as All of Us.’ USA Today – Argus Leader, May 13, 2014. https://www.argusleader.com/story/news/2014/05/13/astronaut-none-us-dumb-us/9068537/
Will Knight. Biased Algorithms are Everywhere, and No One Seems to Care. MIT Technology Review, July 12 2017. https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/
Every once in a while we get to experience Murphy’s (dreaded) Law. This time around that had to do with stability issues with a media webcasting platform. We are now working on rescheduling NOKIA HFE18 under a different format. In parallel, we are also kicking off planning for HFE19… and we will take full advantage of lessons learned.
We regret any inconvenience that this eleventh hour change in plans might cause, and remain extremely grateful to both speakers and volunteers who have already invested time and efforts, which should not go to waste.
In the meantime, I’d like to volunteer just a handful of insights on the session that I was scheduled for and, therefore, keep the discussion going. The objective is to further improve what’s already available and allow for an even better session when we get to reconvene. Here is my session’s abstract to begin with.
THE SOFT & HARD NATURE OF ANYTHING DIGITAL
“Our quest to deliver productivity tools yielding operational excellence for DSPs, Digital Service Providers, leads to the design of signature experiences by innovating in the process.”
“The Studio at Nokia Software’s Solutions Engineering is set to work with deceptively simple techniques and elegant sophistication… because neither oversimplification nor self-defeating complexity allow end-to-end systems to efficiently operate at digital speed and global scale.”
“This discussion intersects the soft and hard natures of dynamic systems by modeling Human Machine Systems (HMS) and the design of cybernetics. This practice focuses on critical success factors for the early acceptance and broader adoption of emerging technologies.”
“The work at the Studio embraces a renewed approach to QbD, Quality by Design, which is set to left-shift and unveil instrumental considerations at early design stages. The result is Nokia Studio’s QXbD, Quality Experiences by Design, optimizing for customer delight rather than table-stakes customer satisfaction.”
NI – WHAT IS NATURAL INTELLIGENCE? At the time of writing this, we humans possess NI, Natural Intelligence. NI involves naturally developed cognitive functions and models leveraged by the biological beings that humans happen to be. Intelligence (a) captures, (b) generates, (c) applies and (d) evolves knowledge. Our individual and collective brainpower can be gauged in terms of (e) skills and (f) talent levels, jointly with an understanding of (g) the underlying decisioning process and (h) our perceived experiences in context.
AI – WHAT IS ARTIFICIAL INTELLIGENCE? Intelligence that is not naturally occurring; simulated knowledge, in other words. It is generated by programmable artifacts consuming, processing and producing data under closed loop models. Whether working with individual or networked machine intelligence, there is neither information derived from mindfulness nor the type of general purpose sense making that matches that of the human experience. The year is 2018… and that is where the state of the art is today.
GI – WHAT IS GENUINE INTELLIGENCE? Earlier in the year I introduced this topic at Design Thinking 2018 (plenary session) and at the IEEE Emerging Technologies Roundtable (invitation-only workshop). Coincidentally, both were held in Austin, TX, back in May. I proposed thinking about GI as the outcome of NI powered by AI.
By the way, “genuine” means acting in good faith (bona fide). To be clearer: with honesty and without the intention to deceive. Given the trade-offs (pros and cons) that NI and AI bring to the table, GI gets us a step closer to productive, bona fide systems.
GI is, therefore, the outcome of purposely crafting optimal technology solutions that augment human possibilities. This is addressed by Human Factors Engineering, an interdisciplinary science, given HFE’s holistic approach and focus on value driven Human-Machine-Systems, HMS.
Quick side note: those of you into Lean and Lean Six Sigma can approach this topic with Jidoka (autonomation). Ditto for anyone working on Human-in-the-Loop Computing, Affective Computing, RecSys (Recommender Systems), Human Dynamics and Process Mining with Machine Learning or, better yet, XAI, Explainable Artificial Intelligence.
DDESS – The most tangible design work entails the delivery of DDESS, Digital Decision & Execution Support Systems. This is where GI gets interesting because we need to apply new optics to take a fresh look at what Operational Excellence is (and is not) moving forward.
In a nutshell DDESS’ purpose is to reveal and inform decisions and to make decisions, all in context. But, I will pause here as this topic will be better covered in subsequent posts… just one more thought: DDESS addresses decision support for (NI) humans, (AI) machines, and (GI) human-machine systems. Coming to terms with that one insight alone becomes a critical success factor.
One other thought… it turns out that, in today’s day and age, techno-centric projects only succeed a fraction of the time, 10% or so by some estimates. Selective memories tend to focus on and celebrate the 10% that make it… but that is a terrible ROI, Return on Investment, which inflicts (1) severe technical debt, (2) latency costs in systems engineering and (3) a huge opportunity cost, as funding and good efforts could have been put to work on more productive endeavours.
By many other well documented and more recent accounts, HCD, Human Centered Design, happens to flip that ratio, as designers obsess over optimizing for user acceptance and frictionless adoption from day one. HFE entails painstaking work on purposeful and value driven technological solutions, where a smart combination of Outside-IN innovation and Inside-OUT ingenuity happens to make all the difference.