Automation refers to methods of production that rely on mechanical or electronic
technologies as a replacement for human labor.
The use of automation within high-risk industrial
production systems has increased markedly during the last 50 years. Automatic systems have
gained in autonomy and authority, whereby the activity of the systems has become less
dependent on operator interventions. This has brought forward the suggestion that
human-automation transactions should be conceptualized within the framework of
cooperation, and consequently that automatic systems should be designed to be cooperative.
The question is then how design can promote human-automation cooperation, and how the
quality of cooperation can be assessed. The OECD Halden Reactor Project performed two
closely related experiments, which allowed assessments of whether the quality of
human-automation cooperation would be promoted by a human-machine interface designed
to increase the observability of the automatic system's activity using graphical and
verbal feedback, as compared to a conventional human-machine interface.
The experiments were performed in a full-scale nuclear
power plant simulator, using licensed operators as subjects, and applied a 2×2
within-subject design. The quality of human-automation cooperation was assessed from
subjective operator judgements. The experiments demonstrated a clear improvement in
human-automation cooperation quality when the observability of the automatic system's
activity was increased. The relationship between human-automation cooperation quality and
the effectiveness of the joint human-machine system's performance was also explored, but
no clear results were found. As the trend in automation design seems to imply an increase
in system autonomy and authority, the issue of human-automation cooperation can be
expected to gain further importance in future settings. - The quality of human-automation
cooperation in human-system interface for nuclear power plants - Ann Britt Miberg Skjerve
and Gyrd Skraaning, Jr., Institute for Energy Technology, Industrial Psychology, OECD
Halden Reactor Project, Halden, Norway -
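The within-subject logic of the Halden experiments can be illustrated with a minimal
sketch. The operator labels, rating scale, and numbers below are hypothetical (the paper
reports subjective judgements, not this data); the point is only that in a within-subject
design every operator is rated under both interface conditions, so cooperation-quality
ratings can be compared pairwise per operator:

```python
from statistics import mean

# Hypothetical subjective cooperation-quality ratings (1-7 scale) for each
# operator under a conventional interface and an observability-enhanced one.
# In a within-subject design every operator serves in both conditions, so
# each operator contributes a paired difference.
ratings = {
    "op1": {"conventional": 3.5, "enhanced": 4.8},
    "op2": {"conventional": 4.0, "enhanced": 5.6},
    "op3": {"conventional": 3.0, "enhanced": 4.1},
}

# Paired differences: enhanced minus conventional, per operator.
diffs = [r["enhanced"] - r["conventional"] for r in ratings.values()]
print(f"mean paired improvement: {mean(diffs):.2f}")
```

Because each difference is taken within the same operator, between-operator variation in
rating style cancels out, which is the usual motivation for a within-subject design.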
Humans and Automation Lab -
Research in the Humans and Automation Lab (HAL) focuses on the multifaceted interactions
of human and computer decision-making in complex sociotechnical systems.
With the explosion of automated technology, the need for humans as supervisors of complex
automatic control systems has replaced the need for humans in direct manual control. A
consequence of complex, highly automated domains in which the human decision-maker is more
on-the-loop than in-the-loop is that the level of required cognition has moved from that
of well-rehearsed skill execution and rule following to higher, more abstract levels of
knowledge synthesis, judgment, and reasoning.
The central focus of HAL is employing human-centered design principles in human
supervisory control problems, and identifying ways in which humans and computers can
leverage each other's strengths to achieve superior decisions together. Current research
projects include collaborative human-computer decision making for command and control
domains; human understanding of multivariable optimization algorithms and visualization
of cost (objective functions); the need for bounded collaboration; design of Lunar Lander
displays; human supervisory control of multiple heterogeneous unmanned vehicles;
collaborative time-sensitive targeting; and metrics for evaluating
display complexity. - Current research sponsors include the Office of Naval Research,
NASA, Boeing, Ford, and the FAA.
Designing human-centered automation: trade-offs in collision-avoidance system design
Goodrich, M.A., and Boer, E.R., Research & Development, Nissan Cambridge Basic Research, MA, USA
This paper appears in: IEEE Transactions on Intelligent Transportation Systems
Publication Date: Mar 2000 - ieeexplore.ieee.org Volume: 1, Issue: 1
Abstract: Human-centered automation problems have multiple attributes: an attribute
reflecting human goals and capabilities, and an attribute reflecting automation goals and
capabilities. In the absence of a general theory of human interaction with complex
systems, it is difficult to define and find a unique optimal multiattribute resolution to
these competing design requirements. We develop a systematic approach to such problems
using a multiattribute decomposition of human and automation goals. This paradigm uses
both the satisficing decision principle, which is unique to two-attribute problems, and
the domination principle, which is a common manifestation of the optimality principle in
multiattribute domains. As applied to human-centered automation in advanced vehicle
systems, the decision method identifies performance evaluations and compares the safety
benefit of a system intervention against the cost to the human operator.
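The two principles in this abstract can be sketched concretely. In the illustrative
Python sketch below, the intervention names and benefit/cost numbers are hypothetical
(not taken from the paper): the satisficing principle keeps interventions whose safety
benefit is at least their cost to the operator, and the domination principle then
discards any option that is Pareto-dominated by another.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Intervention:
    name: str
    benefit: float  # safety benefit of the system intervention
    cost: float     # cost imposed on the human operator

def satisficing(options):
    """Satisficing principle: keep options whose benefit is at least their cost."""
    return [o for o in options if o.benefit >= o.cost]

def dominated(a, b):
    """True if option b is at least as good as a on both attributes,
    and strictly better on at least one."""
    return (b.benefit >= a.benefit and b.cost <= a.cost
            and (b.benefit > a.benefit or b.cost < a.cost))

def nondominated(options):
    """Domination principle: keep only the Pareto-optimal options."""
    return [a for a in options
            if not any(dominated(a, b) for b in options)]

# Hypothetical candidate interventions (numbers are illustrative only).
options = [
    Intervention("no action",       benefit=0.0, cost=0.0),
    Intervention("warning tone",    benefit=0.6, cost=0.2),
    Intervention("brake assist",    benefit=0.8, cost=0.5),
    Intervention("full auto-brake", benefit=0.7, cost=0.9),
]

# "full auto-brake" fails the satisficing test (cost exceeds benefit);
# the rest form a Pareto front, so all three survive the domination filter.
acceptable = nondominated(satisficing(options))
for o in acceptable:
    print(o.name)
```

Applying the two filters in sequence mirrors the decomposition in the abstract: the
satisficing step trades the automation attribute (safety benefit) against the human
attribute (operator cost), while the domination step removes options no rational
designer would pick regardless of how the two attributes are weighted.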