Automakers and their tech partners may be looking toward a future of fully autonomous cars, but Mica Endsley, author of the recently released Human Factors paper "From Here to Autonomy," contends that safety cannot be assured without considering the cognitive constraints of the driver. People will still be needed to oversee, direct, and intervene in the actions of autonomous systems for the foreseeable future. Automobiles are only one area of implementation; the same is true of cybersecurity, unmanned aircraft, space exploration, cargo delivery, health monitoring, and more.
Endsley, who conceptualized human situation awareness and has worked extensively on it, examines four decades of human factors research on human-automation interaction (also called autonomy) and from it develops the Human-Autonomy System Oversight (HASO) model. The model identifies the critical cognitive processes and user-interface features that must be considered when designing and testing any autonomous system.
The challenge for humans monitoring autonomous systems is to remain aware of and attentive to the system's state and its environment, so that they can respond when the system alone cannot act safely and can work in partnership with the automation to achieve optimal results. Behind this challenge lie several cognitive factors: complacency when nothing is going wrong, overtrust that the automation will take the correct action when needed, and the sheer boredom of not being actively engaged in automated tasks. All of these get worse as the automation gets better, creating what Endsley posits is an automation conundrum:
"The more automation is added to a system, and the more reliable and robust that automation is, the less likely it is that human operators overseeing the automation will be aware of critical information and able to take over manual control when needed."
According to Endsley, autonomous systems should be, among other things,
- Used only where necessary to keep human engagement high
- Consistent to help people understand the system
- Focused on improving user awareness through information integration
- Mapped to users' goals and mental models, minimizing cognitive load
- Salient in making mode transitions highly obvious to the human (see the sketch after this list)
- Transparent, providing understandability and predictability
- Noncomplex to improve understanding and reduce user errors
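As one concrete and entirely hypothetical illustration of the "salient mode transitions" and "transparent" guidelines, a design might route every automation mode change through a single announcer that states the new mode, the reason for the change, and what the human is now responsible for. The class, field, and hook names below are invented for this sketch; they are not from the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    MANUAL = auto()
    ASSISTED = auto()
    AUTONOMOUS = auto()

@dataclass
class ModeTransition:
    old_mode: Mode
    new_mode: Mode
    reason: str         # transparency: why the system changed mode
    operator_must: str  # what the human is now responsible for

def announce(t: ModeTransition) -> None:
    """Make the transition salient, so the operator cannot miss
    that their responsibilities just changed."""
    message = (f"MODE CHANGE: {t.old_mode.name} -> {t.new_mode.name}. "
               f"Reason: {t.reason}. You must now: {t.operator_must}.")
    print(message)
    # play_chime(); flash_banner(message)  # hypothetical multimodal hooks

announce(ModeTransition(
    old_mode=Mode.AUTONOMOUS,
    new_mode=Mode.MANUAL,
    reason="lane markings lost in construction zone",
    operator_must="steer and watch for workers",
))
```

The design choice here follows the guidelines above: the transition is announced through a single consistent channel, the reason supports predictability, and the message maps directly to the operator's goal of knowing what to do next.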
Endsley notes, "The article includes specific guidelines that designers can implement in building the operator interfaces for autonomous systems to better keep people in the loop and able to interact with the system autonomy effectively. This will help engineers and organizations that are trying to figure out what to do about the problem."
Endsley concludes, "As long as human oversight of autonomous systems and intervention is required to achieve successful performance, the automation conundrum will undermine performance and safety in many applications."
###
To obtain a copy of "From Here to Autonomy: Lessons Learned from Human-Automation Research" for media-reporting purposes, contact HFES Communications Director Lois Smith, 310-394-1811.
The Human Factors and Ergonomics Society is the world's largest scientific association for human factors/ergonomics professionals, with more than 4,500 members globally. HFES members include psychologists and other scientists, designers, and engineers, all of whom have a common interest in designing systems and equipment to be safe and effective for the people who operate and maintain them. "Human Factors and Ergonomics: People-Friendly Design Through Science and Engineering."
Media Contact
Lois Smith
[email protected]
310-394-1811
@HFES
http://hfes.org
Story Source: Materials provided by Scienmag