Ozzie Paez, Dean Macris

Human risks of the robotic revolution

Updated: Oct 28, 2020

According to the Guardian, 39-year-old Bhavesh Patel was captured on camera sitting in the passenger seat of his Tesla S 60 as it travelled along the M1 motorway. The self-driving car did not cause an accident and arrived at its destination as expected. Mr. Patel admitted his actions in court. He received an 18-month driving ban and was ordered to attend a ten-day driving program and pay £1,800 in court costs.

PC Kirk Caldicutt, a traffic policeman, described Patel’s behavior as “grossly irresponsible.” He stressed that automatic controls “are in no way a substitute for a competent motorist in the driving seat who can react appropriately to the road ahead.” Tesla engineers also asserted that the autopilot function “was only intended to assist a fully attentive driver.”[1]

In this case, all parties, including the officer and Tesla engineers, misjudged the implications. Mr. Patel should have been in the driver’s seat in case he noticed a problem with the autonomous driving system, or the car warned him of one. Unfortunately, he still would have been unable to intervene in time to prevent many accidents. Our upcoming analysis of a tragic Uber accident in Tempe, Arizona[2] will show how and why this is the case.

The traffic officer's expectations were also unrealistic. Self-driving cars don't rely on human drivers to cope with road conditions. They are designed to cope and adapt on their own as road conditions change. The driver is secondary until he chooses to intervene. The Tesla engineers’ characterization is equally unrealistic. Self-driving cars do the driving. Human drivers are the ones expected to assist the technology if something goes wrong. The policeman and engineers got the human-factors implications of autonomous driving vehicles precisely backwards!

Self-driving cars are not alone in fomenting discord between technology and human operators. Experience with smart Flight Management Systems (FMS) in modern jetliners illustrates the logic trap created by automation. Modern FMS carry out most tasks during flights. Some can perform unassisted take-offs and landings. Pilots in these environments operate in caretaker mode, which reduces their situational awareness. The effects can be catastrophic during unexpected emergencies. The tragic 2009 loss of Air France Flight 447 is a case in point[3].

The effects of automation are also reflected in incidents of pilots falling asleep during flights. Their stories have captured the attention of the media and of safety boards, including the US National Transportation Safety Board (NTSB).

The crossroads of technology and the human element

The conflicts between humans and automation point to shortcomings in human factors engineering (HFE). HFE emerged in the late 19th century as a discipline dedicated to improving human-machine interactions. It expanded in the early 20th century to address safety and accident prevention in industrial environments.

Initially, workers were blamed for most accidents. Those injured on the job were often tagged as injury-prone and were reassigned or fired. Safety was treated as a personal moral imperative. Operating increasingly complicated machines during World War II changed these attitudes. Engineers recognized that many designs were inherently unsafe. Their realization changed the underlying assumptions and expectations. New standards were developed that required designs to prevent even klutzes from hurting themselves and others[4].

21st century technologies like self-driving cars are again changing the relationship between humans and machines. Specifically, they are introducing new factors that can confuse and put people at risk. Accidents have already claimed many lives. Existing standards are proving inadequate to address and correct these deficiencies. The underlying causes are technical and cultural. Engineers and software developers often assume that smart systems are smart enough to prevent people from making tragic mistakes. They also assume that operators will be fast and aware enough to intervene when necessary. Operating experience suggests that these assumptions are false.


Automation and artificial intelligence are promoting new business strategies and business models. They are expected to deliver improved performance, efficiency and competitive advantage. Unfortunately, they are also introducing safety weaknesses that go unrecognized until catastrophe happens. Potentially unsafe systems, processes and practices are being unintentionally architected into business models and operations. These flaws put lives and businesses at risk.

Human factors engineering has yet to catch up with these implications. The practice is benefiting from growing experience and has been spurred forward by unfortunate tragedies. We have been studying, researching and writing about these issues for years. There are practical solutions available, just not simple, off-the-shelf ones. Addressing these issues requires context, analysis and careful consideration of the interactions between technologies and the human element.

The marriage between technology and people has become increasingly complex. We are learning that, while technology can enhance performance, it still can't replace all human judgment. Maximizing the benefits of these technologies for competitive advantage is a process, not an event. Mastering that process is the key to improving competitiveness, while ensuring safety and containing risks.



[1] Ruth McKee, “‘Autopilot driver’ who sat in passenger seat is banned for 18 months,” The Guardian, April 28, 2018.

[2] Ozzie Paez, “Safety implications of self-driving cars,” Ozzie Paez Research, April 3, 2018.

[3] Ozzie Paez, Dean Macris, “Automation and the unaware caretakers,” Ozzie Paez Research, May 1, 2018.

[4] Sidney Dekker, “Making the world a better place,” in Safety Differently: Human Factors for a New Era, CRC Press, 2015.
