• Ozzie Paez

Can damaging biases affect AI? Yes!

Updated: Nov 18

Decision-making biases degrade the quality of decisions by undermining our ability to objectively perceive and analyze problems and the world around us. Confirmation bias, for example, undermines analysis by inflating the perceived value of confirming information while filtering out and demoting the value of contrarian inputs. Its capacity to steer us toward and entrench false conclusions and beliefs is anchored in our innate desire to be validated and proven right. Research suggests that biases lurk largely undetected in our minds, which complicates efforts to identify and tame their damaging influences.



I have encountered damaging biases throughout my career in many situations and contexts. They include nuclear weapons strategy[1] and defense policy, executive leadership and decision-making, terrorism response and homeland security policy[2], and corporate strategy development. I have also noted their damaging influences on our understanding, reactions, and decision-making during the COVID-19 pandemic.


Biases in smart technologies and artificial intelligence


There is a general assumption that smart technologies and artificial intelligence are largely immune from human-type biases, prejudices, and stereotypes. In theory, they should be capable of making objective, data-driven decisions free of cognitive glitches; in practice, they can't. The reasons are foundational and include the roles people play in their design, operation, and use. To paraphrase a Roman syllogism: People are biased; smart technologies and AI are created and run by people; therefore smart technologies and AI are also biased.

I’ve learned through study and many years of involvement with smart tech and AI that models, data, and programs can and do reflect biases, including cultural biases. The processes are different from those that affect people, but they can be just as damaging to decision-making. These biases also affect decision-making in smart systems like self-driving vehicles, leading to unforeseen "normal accidents" that human supervision cannot prevent[3]. The human factors implications of AI and smart tech remain largely unknown and unproven.
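To make the point concrete, here is a minimal, hypothetical sketch (not from the article, and deliberately simplified): a toy decision rule "trained" on a biased historical record faithfully reproduces that record's bias, even though the algorithm itself applies the same objective arithmetic to every group. The data, groups, and threshold are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical historical hiring data: (group, hired).
# All candidates below are equally qualified, but group "B"
# was historically hired less often -- a bias baked into the record.
history = (
    [("A", True)] * 80 + [("A", False)] * 20 +
    [("B", True)] * 40 + [("B", False)] * 60
)

# "Train": estimate each group's historical hire rate.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, hired in history:
    counts[group][1] += 1
    if hired:
        counts[group][0] += 1

def recommend_hire(group):
    """Objective-looking majority rule that inherits the data's bias."""
    hired, total = counts[group]
    return hired / total >= 0.5

print(recommend_hire("A"))  # True  -- group A candidates recommended
print(recommend_hire("B"))  # False -- equally qualified B candidates rejected
```

The arithmetic is impartial, yet the output discriminates, because the model can only mirror the biased record it was given. Real machine-learning systems are far more complex, but the underlying mechanism, biased inputs yielding biased outputs, is the same.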


Summary

Biases undermine the quality of analysis, decisions, and decision-making. Many believe that artificial intelligence and smart technologies will eliminate their damaging influences, but experience and research suggest otherwise. Engineers, organizational leaders, and decision-makers should keep this in mind when they design and introduce AI and smart tech into their organizations. OPR continues to research these technologies so we can help our clients maximize their benefits and reduce related risks.


References

[1] Ozzie Paez, Decision-Making in a Nuclear Middle East, 2016, Amazon, https://www.amazon.com/Decision-Making-Nuclear-Middle-East/dp/1532837577

[2] Ozzie Paez, Stop confusing my assumptions!, August 30, 2011, Ozzie Paez Decisions Blog, https://ozziepaezdecisions.wordpress.com/2011/08/30/stop-confusing-my-assumptions/

[3] Ozzie Paez, Dean Macris, Who's responsible for Uber's self-driving vehicle accident?, June 15, 2018, Ozzie Paez Research, https://www.ozziepaezresearch.com/post/2018/06/15/uberselfdrivingvehicleaccident