Tagged: Human Factors

Driving AI with QXbD, Quality eXperiences by Design @ Design Thinking 2021


We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence (…) so let’s start researching this today rather than the night before the first strong AI is switched on.

Stephen Hawking. Reddit Science AMA Series, July 27, 2015.

I would first like to thank those of you who participated in the live AI session at Design Thinking 2021 this past month. I hope the information and insights shared during our discussion were of value to everyone. I appreciate the positive reviews and the encouragement to keep moving in this direction. Some of you were interested in just understanding how to approach the subject, while others have been working on AI projects for years already and met in the hallway to continue the discussion.

The definition of ‘deep tech’ has evolved. Just for this discussion’s sake and in the context of this specific session, we can think of it as the set of technologies that are not directly developed for end user services. For instance, core research in quantum computing is set to foster pivotal advancements in computational thinking, which leads to game changing performance and capabilities in AI, unleashing new possibilities as a result.

Design wise, our work is driven by devising meaning. We look at benefits and drawbacks, opportunities and risks, and anything in between. These matters become quite tangible when architecting outcomes, which materialize when specific value happens to be created in the process.

That is different from just producing an artifact because we simply can. Design involves experimental undertakings as a deliberate practice. That was the reason behind positioning TOPP, Test Oriented Progressive Prototyping, in our session. TOPP’s concept is key to generating data, analytics, insights and decisions (learning and training in other words) in the realm of QXbD, Quality eXperiences by Design.

Value and quality are correlated. Both are evolving human considerations resulting from our experience with a given service doing some meaningful job for us. This thought applies whether the service is enabled by a conventional product or delivered as a pure play service as such. That statement remains true for physical, digital and hybrid solutions.

De-risking investments in advanced technologies and elevating the chances of success in the process can best be approached by ‘designing to value’ and, therefore, by being quality minded. It takes a purposeful human-centric orientation, which is achieved by intersecting HCD (Human Centered Design) and HFE (Human Factors Engineering). This not only makes sense, it is of the essence in HCAI, Human Centered AI.

As discussed during the session at Design Thinking 2021, investing in ‘AI’s deep tech’ is a critical success factor, but not the only one. We now see a growing share of R&D being devoted to HCAI. System visualization, observability, explainability, intuitive programmability, ease of command and control… all being very relevant and the substance behind the list provided on page 24 in the presentation.

Trending-wise, that happens to be consistent across the board when looking at digital transformation projects. In a discussion with peers at MIT a couple of years ago, I learned that close to 50% of R&D is best devoted to human centric technologies, so that we can reap the benefits and make the most of digital solutions, which turns out to be instrumental to technological leadership. Our conversations at Design Thinking 2021 concurred.

I have made my deck available on the Design Thinking 2021 site, where those who registered for the conference have access to the event’s presentations. As usual, I have also posted a copy on my SlideShare and, this time around, the same file is also available from my LinkedIn profile. Glad to continue the conversation.



QXbD, Quality Experiences by Design – Research Notes, Part 1.2

The preceding post on QXbD research notes – Part 1.1 shared a retrospective with insights from my early college years, which were influenced by Bruno Munari’s “projected methodology” and the Bauhaus’ design principles.

Research Notes Part 1.2 (this post) takes me back to BarcelonaTech’s school of engineering in the early 90s, which I joined to study Human Factors Engineering while pursuing my last year of Industrial Design at Escola Massana, an art & design school.

In those days, Donald Norman’s “The Psychology of Everyday Things” and Henry Petroski’s “The Evolution of Useful Things: How Everyday Artifacts Came to Be as They Are” became must-read books for anyone interested in seeing thoughtful design principles in a new light. Norman was an Apple Fellow and became VP of the Advanced Technology Group in the mid 1990s. He popularized the term User Experience.

“Everything that touches upon your experience”.
“Today, that term [User Experience] is terribly misused”.

Petroski was an engineer whose best known work focuses on failure analysis. He stated that the best Industrial Design involves “seeing into the future of a product” and that Human Factors Engineering is concerned with “how anything will behave at the hands of its intended and not intended users.” Here is a summary of some of his design principles:

  • Tools make tools.
  • Artifacts multiply and diversify in an evolutionary way.
  • There always is room for improvement.
  • Good can be better than best.
  • Efficacy can be subjective, want overpowers need.
  • Form follows failure: inventors should be technology’s severest critics.
  • Focus on different faults means different solutions to the same problem.
  • Engineering is invention institutionalized.
  • Sometimes it is about a new job, sometimes about a better or faster job.

“We are all in this together. Artists and engineers are part of the same effort”.

“Though the best designs deal successfully with the future, that does not mean they are futuristic […] There is an apparent reluctance to accept designs too radically different from what they claim to supersede […] if things are redesigned too dramatically, the function that they perform can be less obvious”.

Loewy summarized the phenomenon with the acronym MAYA, standing for Most Advanced Yet Acceptable. Dreyfuss emphasized the importance of a “survival form, thus making the unusual acceptable to many people who would otherwise reject it […] [Industrial Designers] have learned to strive for a delicate balance between innovation in order to create interest, and reassuringly identifiable elements”.

Donald Norman pointed to design issues that lead to human error and, unfortunately, make users blame themselves in the process. He claimed that the “paradox of technology” takes effect when added functionality comes with unwanted complexity, which denies the sought-after benefits. These are some of his design principles:

  • Design should be user-centric and consistent.
  • Identify the true root cause of a problem.
  • Well designed products teach the user how to use them.
  • Make things visible, give clear clues, enough information and feedback.
  • Get mapping and system state right, simplify task structure.
  • Design for error, exploit the powers of constraint.
  • Make it possible to reverse actions, and make it harder to do what cannot be reversed.

Following up on the topic of technology’s paradoxes, it is worth reviewing Geoffrey A. Moore’s “Crossing the Chasm“, which was published in 1991. He explored the rationale behind why emerging technologies fail to take hold.

Illusion and Disillusion: Cracks in the Bell Curve.
Technology Adoption Lifecycle. Crossing The Chasm, 1991.

There can be a deep chasm between enthusiasts and early adopters on one side and the broader user groups shaping the mass market on the other. Avoiding the Valley of Death starts with an understanding of the adoption lifecycle: different user groups come along with different expectations. That prompts the need to design specific transitions and adaptations.

“Whole Product R&D […] begins not with creative technology but with creative market segmentation. It penetrates not into protons and processes but rather into habits and behaviors […] it implies a new kind of cooperation between organizations traditionally set apart from each other.”


BarcelonaTech’s teaching addressed Human-Machine Systems as an interdisciplinary undertaking. Human dynamics entailed the study of individuals as well as collectives such as teams and organizations. That would encompass the following disciplines, together with human strengths and limitations:

  • Psychology – skills, cognitive appraisal and workload, workstyles…
  • Physiology – form factors, motions, anthropometry, biomechanics…
  • Social Sciences – teamwork, organizational behaviors, culture…

Tools and machines involved hardware and software components. HMS’ holistic approach consistently tackled end-to-end solutions, placed in context and in specific physical environments. The sought-after outcomes of “Designing for People” zeroed in on:

  1. Delivering capable, high-performance systems as defined by productivity, effectiveness and efficiency metrics, and success rates.
  2. Designing for users’ wellbeing and safety.
  3. Recognizing that human error is often a consequence of poor design.
  4. Addressing the broadest user base possible, typically set at 95% coverage with adaptations, accounting for diversity rather than designing for just averages.
  5. Extreme-case and stress testing, factoring in life-long / lifecycle changes as solutions evolve and/or get deployed in other contexts and environments.


We followed this iterative methodology, starting with due diligence on:

  1. Initial problem statement and goal setting.
  2. Operations assessment: use cases’ current state / present mode.
  3. User Taxonomy and Analysis: jobs, tools, work motion studies (tasks, workflows, success and failure rates) often relying on instrumentation.
  4. Data collection, processing, analysis and insights.
  5. Identification of value based activities, waste and risks.
  6. Critical success factors and possible scenarios at play.
  7. Information, process, hardware and software specifications.
  8. Contextual and environmental considerations.

The next phase focused on Human-Machine-System design, including all relevant subsystems and interactions across them:

  1. Operations review: new target state and mode.
  2. Interaction Matrix* correlating human and design factors.
  3. Prioritization criteria and conflict resolution.
  4. Job and process streamlining, often leading to redesign, or new design.
  5. Goal setting based on metrics optimizing for system wide operability.
  6. Iterative improvement cycling through experiments, prototyping, simulations and testing.

The Ergonomics of Workspaces and Machines. A Design Manual.

The *Interaction Matrix correlated human factors (rows) for a given design option with the following “realization” ones (columns) and the degree to which those relationships were weak, medium or strong (matrix).

  1. Customer acceptance criteria.
  2. Operability levels, including safety.
  3. Conformance with functional requirements.
  4. Reliability and performance levels, as well as maintenance.
  5. Productization feasibility and costs.
  6. Aesthetics and affective considerations.
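As a rough illustration of the mechanics (not part of the original coursework), such an Interaction Matrix can be sketched as a mapping from (human factor, realization factor) pairs to relationship strengths; all factor names and ratings below are hypothetical, and the 1/3/9 scoring scale is a common QFD-style convention rather than anything prescribed in the manual:

```python
# Illustrative Interaction Matrix: human factors (rows) crossed with
# "realization" factors (columns). Relationship strengths are scored
# weak=1, medium=3, strong=9 (a common QFD-style scale).
STRENGTH = {"weak": 1, "medium": 3, "strong": 9}

human_factors = ["cognitive workload", "reach and posture", "error tolerance"]
realization_factors = ["customer acceptance", "operability", "reliability"]

# Hypothetical ratings for one design option; unrated pairs are omitted.
matrix = {
    ("cognitive workload", "operability"): "strong",
    ("cognitive workload", "customer acceptance"): "medium",
    ("reach and posture", "operability"): "medium",
    ("error tolerance", "reliability"): "strong",
}

def column_scores(matrix):
    """Aggregate relationship strengths per realization factor."""
    scores = {c: 0 for c in realization_factors}
    for (_, col), strength in matrix.items():
        scores[col] += STRENGTH[strength]
    return scores

print(column_scores(matrix))
# → {'customer acceptance': 3, 'operability': 12, 'reliability': 9}
```

Summing a column hints at which realization factor is most entangled with human factors for that design option, which is where prioritization and conflict resolution would start.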

Quality Management

Just a quick reminder that this article is still discussing topics set all the way back in the early 90s. In those days, Total Quality Management (TQM) and Lean led the way. Note that the ISO 9000 standards had first been released in 1987.

The top three key values were: Customer Intimacy, Operational Excellence and Product Leadership:

“customer intimacy: tailoring offerings to match demand […] detailed customer knowledge with operational flexibility […] customizing a product and fulfilling special requests […] engendering tremendous customer loyalty“.

“operational excellence: providing customers with reliable products or services at competitive prices and delivered with minimal difficulty or inconvenience“.

“product leadership: continuous stream of state-of-the-art products and services. First, they must be creative […] Second, they must commercialize their ideas quickly […] business and management processes have to be engineered for speed. Third, product leaders must relentlessly pursue new solutions”.

High operational performance was broken down as follows:

  • Productivity & scalability.
  • Flexibility & adaptability.
  • Mix complexity.

J.M. Juran discussed quality in the context of “Big Q” and “Little Q” where the former addresses a business problem and is all encompassing, while the latter is siloed and focuses on tackling technical issues. Big Q delivers the sort of value that users can appreciate.

Quality Function Deployment (QFD) – House of Quality Template.

Strategic Quality Management was meant to learn from customer experiences and leveraged House of Quality charts as a design tool.

The first step was to map out a taxonomy of customer attributes (CA) decomposed into primary, secondary and tertiary levels, the latter being the most granular list of customer requirements and expectations… all largely based on surveys and user feedback. This was done across the value chain consisting of end users, consumers, retailers, distributors, regulators, etc. Weightings were set to prioritize attributes based on contextual relevance.

CA items would then be placed on the left rows of the above spreadsheet in order to cross-check them with technical features shown as column headers. That was done by correlating CA and engineering characteristics (EC). The resulting center matrix was used to assess which items were positively or negatively impacted, their co-variance, and to what extent. Each cell featured icons and color coding for strong, medium and weak relationships.

The pyramidal roof at the top was filled out afterwards to look into technical synergies and conflicts alone: becoming aware of how engineering characteristics interact, and making decisions on optimizations and conscious trade-offs.
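The CA/EC cross-checking described above can be sketched in code. The 1/3/9 strength scale is a conventional QFD scoring choice, and all attribute names, weights and relationships below are made up for illustration:

```python
# Illustrative House of Quality center matrix: weighted customer
# attributes (CA) are correlated with engineering characteristics (EC).
# Strengths use the conventional QFD 1/3/9 scale.
STRENGTH = {"weak": 1, "medium": 3, "strong": 9}

customer_attributes = {  # tertiary-level CAs with relative weights
    "easy to open": 0.5,
    "stays sealed in transit": 0.3,
    "feels sturdy": 0.2,
}

relationships = {  # (CA, EC) -> relationship strength
    ("easy to open", "lid torque"): "strong",
    ("stays sealed in transit", "lid torque"): "medium",
    ("stays sealed in transit", "seal compression"): "strong",
    ("feels sturdy", "wall thickness"): "strong",
}

def ec_importance(relationships, weights):
    """Rank engineering characteristics by weighted CA impact."""
    scores = {}
    for (ca, ec), strength in relationships.items():
        scores[ec] = scores.get(ec, 0) + weights[ca] * STRENGTH[strength]
    return dict(sorted(scores.items(), key=lambda kv: -kv[1]))

print(ec_importance(relationships, customer_attributes))
# "lid torque" ranks highest: it serves two weighted attributes.
```

The ranked output is what tells the engineering team which characteristics deserve the most design attention, before the roof matrix surfaces trade-offs between the characteristics themselves.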


Attending both art and engineering schools was a fascinating experience to say the least. The opportunity to cross-pollinate across disciplines could make anyone feel like being in a reenactment of the Renaissance’s blending of arts and sciences.

Both Industrial Design and Human Factors Engineering optimize for the human experience and, therefore, make their professions about “Designing for People”. Technology that does not account for human skills, strengths and limitations, all in the context, scenarios and environments it will operate under… becomes greatly exposed to failure.

Striving to make designs that fit people’s potential, rather than expecting people to fit the design, requires an interdisciplinary and iterative practice, with painstaking attention to detail being critical. At that point, it also became clear that addressing the Big Q had to do with articulating the business value of design.


The Impact of Groupthink in Decision Systems


“Together with his identical twin brother, Scott, he has laid the groundwork for the future of space exploration as the subjects of an unprecedented NASA study on how space affects the human body, which is featured in Scott’s New York Times best-selling memoir, Endurance: A Year in Space, A Lifetime of Discovery.”

“Currently, Mark is on the Commercial Crew Safety Board at Space X […] and is the co-founder of World View, a full-service commercial space launch provider.”

Endeavour to Succeed. College of DuPage, Department of Physics. February 14 2019.


I managed to attend Captain Mark Kelly’s talk in Chicago just the day before leaving for Barcelona’s Mobile World Congress. Kelly’s presence and insightful remarks commanded both admiration and utmost respect.

Among many other fascinating topics, he discussed NASA’s “None of Us is as Dumb as All of Us” as a reminder of the negative impact of ‘groupthink‘ in the context of faulty decision making. More specifically, he referred to the dramatic mistakes leading to the Space Shuttle Columbia disaster, when the orbiter disintegrated upon re-entry in 2003.

“Large-scale engineered systems are more than just a collection of technological artifacts. They are a reflection of the structure, management, procedures, and culture of the engineering organization that created them.”

“They are also, usually, a reflection of the society in which they were created. The causes of accidents are frequently, if not always, rooted in the organization—its culture, management, and structure.”

“Blame for accidents is often placed on equipment failure or operator error without recognizing the social, organizational, and managerial factors that made such errors and defects inevitable.”

Nancy G. Leveson, MIT. Technical and Managerial Factors in the NASA Challenger and Columbia Losses: Looking Forward to the Future. Controversies in Science and Technology Volume 2, Mary Ann Liebert Press, 2008.

Groupthink is part of the taxonomy of well-known cognitive biases and takes hold when divergent thinking and disagreement are discouraged (and even repressed) as part of group dynamics.

Hindsight is 20/20 and, statistically speaking, ‘black swan’ events are characterized by seemingly random surprise factors. Groupthink can obfuscate the early detection of predictors such as leading outliers and anomalies which, left unattended, can overwhelm a given system over time… and become the source of cascading effects and critical failure.
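As a minimal illustration of surfacing such leading outliers before a group rationalizes them away, a simple z-score filter can flag readings that stray far from the mean; the data, metric and threshold below are entirely made up:

```python
# Minimal sketch: flag readings that deviate from the mean by more
# than `threshold` sample standard deviations (a basic z-score test).
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.0):
    """Return (index, value) pairs lying beyond the z-score threshold."""
    mu, sigma = mean(readings), stdev(readings)
    return [(i, x) for i, x in enumerate(readings)
            if abs(x - mu) > threshold * sigma]

# Hypothetical sensor stream with one spike that merits discussion
# rather than dismissal.
readings = [10.1, 9.8, 10.0, 10.2, 9.9, 17.5, 10.0]
print(flag_anomalies(readings))
# → [(5, 17.5)]
```

The point is not the statistics but the process: making anomalies visible as first-class signals forces the group to explain them, rather than letting consensus quietly absorb them.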

Groupthink’s negative impact compromises any best intentions such as organizational cohesiveness in the spirit of consensus, agility, productivity, timely project progress and de-escalation management.

Oftentimes, there might be neither adequate situational and risk awareness nor a basis for sense making that draws from the comparative analysis which comes with diligent scenario planning.

Individuals and organizational cultures with a successful track record can also experience complacency. Over-confidence fosters the sort of behaviors and decision making that served the group well in the past.

Yet, in a changing environment defined by new parameters flying under the radar, operating only within the perimeter of a given set of core competencies and comfort zones turns those very behaviors into blind spots that betray the team’s mission and purpose.

Many plans do not survive first contact (or a subsequent phase for that matter) as their implementation creates ‘ripple effects’ of various shapes and propagation speeds. Some of that can be experienced as ‘sudden risk exposure.’ Once past the ‘point of no return,’ if that challenge is met with neither contingency planning nor the ability to course correct in time, pivot, or even deploy a basic safety net offsetting the impact, the project fails to ‘cross the chasm’ and is headed for what’s known as the ‘valley of death.’

This was one of the key issues discussed by Clayton M. Christensen when I took his Harvard class on the ‘Innovator’s Dilemma,’ and is also a key point behind Risto Siilasmaa’s ‘Paranoid Optimism’ as well as Paul Romer’s ‘Conditional Optimism,’ all of which advocate for scenario planning and sensing optimization to be able to calibrate or re-assess the path forward.

Michael Shermer stated in the September 2002 issue of Scientific American that “smart people believe weird things because they are skilled at defending beliefs they arrived at for nonsmart reasons.”

Groupthink can also manifest itself by means of ‘echo chamber’ effects, as misguided consensus amplifies what becomes a “self-serving” bias. That is, in effect, a closed feedback loop that magnifies logical fallacies. These can come across as reasonable enough postulates, though if based on rushed judgement and selective focus they can also suffer from ‘confirmation bias.’ This is the case when new evidence is only used to back up the existing belief system rather than shed new light.

In the context of Decision Support Systems and Cognitive Analytics, the above reasoning deficits become root causes of errors impacting operations. That can involve both (a) Human-Human and (b) Human-Machine interactions, as well as impacting programming work resulting in (c) biased algorithms and automation pitfalls when left unsupervised.



Carisa Callini. Human Systems Engineering. NASA, August 7 2017. https://www.nasa.gov/content/human-systems-engineering

Carisa Callini. Spaceflight Human Factors. NASA, December 19 2018. https://www.nasa.gov/content/spaceflight-human-factors

Clayton M. Christensen. The Innovator’s Dilemma. Harvard Business Review Press, 1997.

COD Welcomes Astronaut Mark Kelly. Daily Herald, February 13 2019. https://www.dailyherald.com/submitted/20190201/cod-welcomes-astronaut-mark-kelly-feb-17

Geoffrey Moore. Crossing the Chasm. Harper Collins, 1991.

MIT Experts Reflect on Shuttle Tragedy. MIT News, February 3 2003. http://news.mit.edu/2003/shuttle2

Tim Peake. The Astronaut Selection Test Book. Century. London, 2018.

Scott Kelly. Endurance: A Year in Space, a Lifetime of Discovery. Knopf. New York, 2017.

Scott Kelly. Infinite Wonder. Knopf. New York, 2018.

Steve Young. Astronaut: ‘None of Us is as Dumb as All of Us.’ USA Today – Argus Leader, May 13, 2014. https://www.argusleader.com/story/news/2014/05/13/astronaut-none-us-dumb-us/9068537/

Will Knight.  Biased Algorithms are Everywhere, and No One Seems to Care. MIT Technology Review, July 12 2017. https://www.technologyreview.com/s/608248/biased-algorithms-are-everywhere-and-no-one-seems-to-care/