Issues Concerning Human-Centered Intelligent Systems:

What's "human-centered" and what's the problem?

 

Charles E. Billings

Cognitive Systems Engineering Laboratory

The Ohio State University, Columbus, Ohio

 

 

Introduction

In medicine, an expert used to be defined as, "some specialist from the Mayo Clinic, lost in a railroad station with slides". I am not an expert and I don't have any slides, but I do have viewgraphs and I hope I'm at the right place in the right city.

You could fairly ask why I'm on this program at all. My background is in aerospace medicine, not computer science. Some of your position papers terrify me. Furthermore, much of what I am going to say has already been expressed in some eloquent position papers by Emilie Roth, Jane Malin and others.

I think perhaps I'm here because I am a user - a consumer - of the concepts and products you have given much of your lives to developing. My domain, aviation, has done as much to stimulate advanced technology, by buying it and using it, as any endeavor since the dawn of the industrial revolution. We in the aviation community have been working with complex human-computer systems in a highly dynamic, distributed, real-time environment for over two decades - shortly after people in the computer business figured out how to make computers small enough so we could get them off the ground. These computers have helped us move from an aviation system in which operators never had enough information to one in which we can drown operators in information.

In the course of these two decades of constant exposure, we have learned some lessons about how computers and people can work together, regardless of where they are located, to accomplish difficult tasks under sometimes difficult conditions. Sadly, we have also failed to learn some lessons we should have learned about how to do exactly the same thing - and we have left some shards of steel and aluminum in various odd spots in the process. Those lessons - the ones we have failed to learn - are what I would like to share with you today, as you begin this workshop on Intelligent Human-Machine Systems. It is my real hope that you can avoid some of the mistakes we have made as we have conceptualized, constructed and operated high-technology devices in pursuit of social goals. Foremost among the mistakes I hope you will avoid is the mistake of conceptualizing human-centered systems, then designing and building technology-centered systems. Dave Woods has said that, "The road to technology-centered systems is paved with human-centered intentions". I shall try to point out that he was quite right.

What Does it Mean to be "Human-Centered"?

Investigators have been studying human-machine systems for as long as such systems have been around. The problems people have in interacting with such systems have long been recognized. Ever since World War II, investigators have tried to lay down principles by which such systems should be constructed. These principles have been variously called "user-centered", "use-centered", "user-friendly", "human-centered", and more recently, "practice-centered". What do these terms mean? What principles must be embodied in a human-machine system to warrant such appellations?

As a user, I am not going to become involved in which of these terms or constructs is the best to describe what we are trying to conceptualize. Instead, I am going to offer some more principles I believe are necessary in what I will continue to call "human-centered" systems, simply because I'm comfortable with that term. Though most of my experience has been in the aviation domain, and my illustrations will reflect that, I am convinced that these principles apply to many human-machine systems in a variety of domains, and that they are therefore deserving of careful attention by designers and operators of any intelligent system. I'm going to describe what I'll call some "first principles": principles that I believe are essential elements in any over-arching philosophy for such systems.

First Principles of Human-Centered Systems

Premise: Humans are responsible for outcomes in human-machine systems.

I shall proceed from a premise which, stated in human-centered intelligent systems terms, is that human operators are entirely responsible for the outcomes of processes conducted by humans and machines.

Axiom: Humans must be in command of human-machine systems.

If one accepts that premise, I think it is axiomatic that humans must be in command of all components of the systems that undertake those processes. They must have full authority over the systems, which means that they must have the means to intervene constructively in the processes. I shall try to justify this axiom as we go along.

This axiom implies certain corollaries, which appear to be consistent with our experience with human-machine systems in aviation. Briefly stated, they are as follows.

Corollary: Humans must be actively involved in the processes undertaken by these systems.

Many human-machine systems distance the operator from ongoing processes, some by intention, others by default. Without continuing active involvement in a process, the human operator will be unable to understand the problem and reenter the performance loop in case of machine failure.

Corollary: Humans must be adequately informed of human-machine system processes.

Without good information concerning an ongoing process, a human operator cannot remain actively involved in that process. If this happens, the machine, not the human, is in control.

Corollary: Humans must be able to monitor the machine components of the system.

As machines have progressed from simple inner-loop control tasks to management of information and, more recently, to management of entire processes, it has become harder to follow what they are doing. This creates a need to inform the human that such machines are still functioning properly, rather than simply alerting the human when they have failed.

Corollary: The activities of the machines must therefore be predictable.

Unless a machine behaves predictably, a human cannot form an internal model of how it functions, and thus cannot remain involved in the ongoing process.

Corollary: The machines must also be able to monitor the performance of the humans.

Humans fail too. Machines know a good deal about human-machine processes, and this knowledge can permit machines to monitor human performance for errors, just as humans must be able to monitor machine performance for errors or failures.

Corollary: Each intelligent agent in a human-machine system must have knowledge of the intent of the other agents.

In order to understand what outcome is desired, any agent in a human-machine system must understand what the other components of the system are trying to accomplish. This requires knowledge of the intentions of each of the agents, by all of them.
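
To make these corollaries more concrete, here is a minimal sketch in Python - hypothetical classes and names invented for illustration, not drawn from any fielded system - of how they might shape the interface between a human operator and a machine agent: each party declares its intent, reports its activity, and can monitor the other for departures from the declared plan.

    # Illustrative sketch only: hypothetical classes showing how the corollaries
    # (involvement, information, mutual monitoring, predictability, shared intent)
    # might shape the interface between agents. Not drawn from any real system.

    from dataclasses import dataclass
    from typing import List, Optional


    @dataclass
    class Intent:
        """What an agent is trying to accomplish, stated so others can read it."""
        goal: str                      # e.g. "descend to 5000 ft and hold"
        rationale: str                 # why the goal was adopted
        planned_actions: List[str]     # the predictable steps the agent will take


    class Agent:
        """Any intelligent component of the system, human or machine."""

        def __init__(self, name: str):
            self.name = name
            self.current_intent: Optional[Intent] = None
            self.activity_log: List[str] = []

        def declare_intent(self, intent: Intent) -> None:
            # Corollary: every agent must know what the others are trying to do.
            self.current_intent = intent

        def report(self, activity: str) -> None:
            # Corollary: agents must keep others informed of ongoing processes,
            # including normal operation, not only failures.
            self.activity_log.append(activity)

        def monitor(self, other: "Agent") -> List[str]:
            # Corollary: each agent monitors the other for errors or failures,
            # here by flagging any activity outside the declared plan.
            if other.current_intent is None:
                return [f"{other.name} has not declared an intent"]
            return [a for a in other.activity_log
                    if a not in other.current_intent.planned_actions]


    # Usage: the pilot and the autoflight system declare intent to each other;
    # any activity outside the declared plan shows up when either side monitors.
    pilot = Agent("pilot")
    autoflight = Agent("autoflight")
    autoflight.declare_intent(Intent("descend to 5000 ft", "ATC clearance",
                                     ["reduce thrust", "pitch down",
                                      "level at 5000 ft"]))
    autoflight.report("reduce thrust")
    autoflight.report("add nose-up trim")      # not in the declared plan
    print(pilot.monitor(autoflight))           # -> ['add nose-up trim']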

Why Was it Necessary to Construct Yet More Principles for HCIS?

My enunciation of these principles was motivated by serious and in some cases spectacular failures of human-machine systems in aviation. Consider, for example, the operating modes of the flight control system in a modern transport. Modern aircraft automation is very capable, very flexible - and sometimes very hard to understand. There are ten altitude-change modes on some modern airplanes, several of which interact with or are conditional on other modes.

Aircraft automation has become highly autonomous. A flight from New York to Tokyo may require very little active participation by the pilots once the machine has been programmed. Removing authority over aircraft control or management of systems from the human operator may require only a line or two of source code.

Yet the human always remains responsible for the outcomes. The "first principles" I have enumerated are an attempt to go back to basics: to state what the relationship between the human and machine components of the system must be if the human is to be able to remain in command of the system. Let me state more specifically what the problem is, in terms of hard data from the domain in which I work.

Since the mid-1970s, a number of incidents have come to light that were associated with, and in some cases were enabled by, complex machine systems. Table 1 shows a partial list of those with which I am familiar. I have taken a few liberties with this list of relevant factors in these incidents; not all were signed out this way by investigating authorities, though I am certain that the factors shown were critical to the outcomes.


 

MISHAP                                    COMMON FACTORS

DC-10 landing in CWS mode                 Complexity, mode feedback
B-747 upset over Pacific Ocean            Lack of feedback
DC-10 overrun at JFK, New York            Trust in autothrust system
B-747 uncommanded roll, Nakina            Trust in automation behavior
A320 accident at Mulhouse-Habsheim        System opacity and autonomy
A320 approach accident at Strasbourg      Inadequate feedback
A300 approach accident at Nagoya          System complexity and autonomy
A330 takeoff accident at Toulouse         System complexity, inadequate feedback
A320 approach accident at Bangalore       System complexity and autonomy
A320 approach at Hong Kong                System coupling, lack of feedback
B-737 wet runway overruns                 System coupling and autonomy
A320 landing overrun at Warsaw            System coupling and autonomy
B-757 climbout at Manchester              System coupling
A310 approach at Orly Airport, Paris      System coupling and autonomy
B-737 go-around at Charlotte              System autonomy, lack of feedback
B-757 approach to Cali, Colombia          System complexity, lack of feedback


Table 1: Common factors in aviation mishaps associated with automation (Billings, 1996)

 

For each accident shown, there have been from a few to many incidents incorporating the same problems, but under circumstances in which the pilots were able to avert a disaster. But Ruffell Smith has reminded us that no error or failure is trivial if it occurs often enough; sooner or later, it will occur under the worst possible circumstances.

Let me emphasize that it is not only these accidents, which are classic rare events, that motivate my interest in human-centered systems. Experience and research in simulators and aircraft, data from the NASA Aviation Safety Reporting System and other sources, and knowledge elicitation sessions all converge on certain automation attributes that seem to be causing problems for human operators of today's complex systems.

What Attributes are Common to these Occurrences?

Certain attributes of advanced human-machine systems appear repeatedly in these untoward occurrences. To summarize them succinctly, the common factor in these mishaps is:

  Loss of situation or state awareness, associated with: 

automation complexity;

interdependencies, or coupling, among machine elements;

machine autonomy;

inadequate feedback to human operators (opacity).

There's a simpler way to put it. In 1994, Dave Woods said this: "Automation that is strong, silent, and hard to direct is not a team player".

Other problems are also seen in these mishaps, and I shall discuss them, but most are derivatives of these fundamental attributes. Because of their central importance to the design and realization of human-machine systems, each of these attributes deserves some attention here.

Automation Complexity

Complexity makes the details of machine performance more difficult for humans to learn, understand, model, internalize, and remember when that knowledge is needed to explain machine behavior. This is especially true when a complex function is invoked only rarely. The details of machine functions may appear quite simple because only a partial or metaphorical explanation has been provided, yet the true behavior may be extremely complex. Woods (1996) has discussed "apparent simplicity, real complexity" of aircraft automation behavior.

One example was an accident on approach to Bangalore, when the flying pilot, transitioning to a new airplane and descending in "open idle descent" mode, forgot that both flight directors had to be disengaged to arrest the descent at the proper altitude. The airplane crashed before the consequences of the error could be corrected. I should note that the same problem was detected more quickly on another approach, to San Francisco, which otherwise would have resulted in a landing in the bay.
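
As a purely illustrative sketch (invented Python logic, not the actual flight guidance software), the kind of hidden dependency involved might look like this: the apparently simple rule "the descent is arrested at the selected altitude" silently depends on the state of both flight directors.

    # Sketch only: invented mode logic illustrating "apparent simplicity, real
    # complexity". The apparently simple rule -- "the aircraft levels off at the
    # selected altitude" -- silently depends on the state of BOTH flight directors.

    def descent_is_arrested(altitude_capture_armed: bool,
                            flight_director_1_engaged: bool,
                            flight_director_2_engaged: bool) -> bool:
        """Return True if the open idle descent will level off as the pilot expects."""
        if not altitude_capture_armed:
            return False
        # Hidden condition: in this invented logic, altitude capture is honored
        # only when BOTH flight directors are disengaged. A pilot who disengages
        # only one sees no change, and the descent continues.
        both_fd_off = (not flight_director_1_engaged) and (not flight_director_2_engaged)
        return both_fd_off


    # The surface behavior looks simple...
    print(descent_is_arrested(True, False, False))   # True  -- levels off as expected
    # ...but the partial action that "should" be enough quietly fails.
    print(descent_is_arrested(True, False, True))    # False -- descent continues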

Coupling Among Machine Elements

Coupling refers to internal relationships or interdependencies between or among machine functions. These interdependencies are rarely obvious; many are not discussed in system documentation available to users of the machine. As a result, human operators may be surprised by apparently aberrant machine behavior, particularly if it is driven by conditions not known to the human and thus appears inconsistently. Perrow (1984) discussed coupling in machine systems and its potential for surprises.

One example occurred during an approach to Orly Airport, in Paris. When the airplane's speed exceeded the flap limit speed, the autopilot autonomously reverted to "level change" mode; the plane added power and tried to climb, while the pilot continued his attempt to descend. The autopilot added nose-up trim in direct proportion to the pilot's attempt to push the nose down. The autopilot won, for a while, and the airplane nearly stalled at a low altitude before the pilots recovered and completed the landing.
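
A hedged sketch of this kind of coupling (again invented Python, not the actual autoflight logic): a protection rule, triggered by a condition the pilot is not attending to, silently changes the mode and then works against the pilot's input.

    # Sketch only: invented coupling between an overspeed-protection rule and the
    # pitch mode. When the flap limit speed is exceeded, the autoflight reverts to
    # a climb-oriented mode and trims against the pilot's nose-down input.

    FLAP_LIMIT_SPEED_KT = 195          # invented value, for illustration only

    def autoflight_step(mode: str, airspeed_kt: float, pilot_pitch_input: float):
        """Return (new_mode, trim_command) for one control cycle.

        pilot_pitch_input < 0 means the pilot is pushing the nose down.
        """
        if airspeed_kt > FLAP_LIMIT_SPEED_KT:
            mode = "LEVEL CHANGE"                 # autonomous, unannounced reversion
        if mode == "LEVEL CHANGE" and pilot_pitch_input < 0:
            # Coupling: the harder the pilot pushes, the more nose-up trim is added.
            trim_command = -pilot_pitch_input
        else:
            trim_command = 0.0
        return mode, trim_command


    # The pilot is trying to descend; the machine is trying to protect the flaps.
    print(autoflight_step("VERTICAL SPEED", 190, -2.0))   # ('VERTICAL SPEED', 0.0)
    print(autoflight_step("VERTICAL SPEED", 200, -2.0))   # ('LEVEL CHANGE', 2.0)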

Machine Autonomy

Autonomy is a characteristic of advanced automation in aircraft and elsewhere; the term describes real or apparent self-initiated machine behavior, which is often unannounced. If autonomous behavior is unexpected by a human operator, it is often perceived as "animate"; the machine appears to have a "mind of its own". The human must decide whether the perceived behavior is appropriate, or whether it represents a failure of the machine component of the system. This decision can be rather difficult, especially if the system is not well documented or does not provide feedback, not unheard-of problems in complex machine systems.

Another case of the crew fighting with the autoflight system occurred at Nagoya, Japan, when an inexperienced copilot, who was flying, accidentally activated the go-around switch during the final stages of an approach. The autopilot added power and nose-up trim, though the pilots had no indication of these actions. The flying pilot continued to push forward on the control column; the more he pushed, the more rapidly nose-up trim was added. The autopilot could not be disengaged below 1500 feet; when the Captain was able to disengage the autopilot, the airplane was at full nose-up trim. It pitched up to a 50° angle, then stalled and slid backward to the ground, killing nearly all on board.

Inadequate Feedback

Inadequate feedback, or opacity, denotes a situation in which a machine does not communicate, or communicates poorly or ambiguously, either what it is doing, or why it is doing it, or in some cases, why it is about to change, or has just changed, what it is doing. Without this feedback, the human must understand, from memory or a mental model of machine behavior, the reason for the observed behavior. A pilot friend has described this problem succinctly: "If you can't see what you've got to know, then you've got to know what you've got to know".

Perhaps the most obvious case of inadequate feedback occurred at Charlotte a couple of years ago. The pilots were aware of thunderstorms in the vicinity of the airport, but they had a clear view of their runway until very late in the approach, when the runway became obscured by very heavy rain. They initiated a missed approach, but were caught in a severe wind shear and crashed. The airplane had a wind shear warning system, but it failed to warn the pilots because they were retracting their flaps in the go-around maneuver. What they did not know, and had not been told during their training on the system, was that their wind shear advisory system is desensitized while flaps are in transit. The system thus gave no warning of the shear they had entered, nor any indication that it was less effective while the flaps were in transit. They could not see what they needed to know, and they did not know what they needed to know.
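
The design issue can be illustrated with a small sketch (invented Python, not the actual wind shear system): the same temporary degradation of capability, with and without an annunciation that tells the crew what they can no longer "see".

    # Sketch only: an invented advisory system illustrating opacity versus feedback.
    # Detection capability is reduced while flaps are in transit; the human-centered
    # version says so, while the opaque version stays silent.

    from typing import List

    def windshear_advisory(shear_detected: bool, flaps_in_transit: bool,
                           announce_degradation: bool) -> List[str]:
        messages = []
        sensitivity = 0.3 if flaps_in_transit else 1.0   # invented degradation factor
        if announce_degradation and flaps_in_transit:
            # Adequate feedback: tell the crew the system is temporarily degraded,
            # so they know what they cannot currently "see".
            messages.append("ADVISORY: windshear detection degraded (flaps in transit)")
        if shear_detected and sensitivity >= 1.0:
            messages.append("WARNING: windshear")
        return messages


    print(windshear_advisory(True, True, announce_degradation=False))  # [] -- silence
    print(windshear_advisory(True, True, announce_degradation=True))
    # ['ADVISORY: windshear detection degraded (flaps in transit)']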

What Effects do these Attributes Have on Humans?

Peripheralization

Complex machines tend to distance operators from the details of an operation. Over time, if the machines are reliable, operators will come to rely upon them, and may become less concerned with the details of the process. Though this has the desirable effect of moderating human operator workload, it also has the undesirable effect of making the operator feel less involved in the task being performed.

Recent accidents, among them those I have mentioned here, have demonstrated how easily pilots can lose track of what is going on in advanced aircraft. The mishaps that have occurred serve as a warning of what lies ahead unless we learn the conceptual lessons these accidents can teach us. An important lesson is that we must design human-machine interfaces so that the human operator is at the locus of control of the human-machine system, and cannot perceive him- or herself as being anywhere else, regardless of the tools being used to assist in or accomplish that control.

Another important lesson is that machines must keep us involved by keeping us informed of what they are doing, and sometimes why they are doing it. As the machines become more complex and the software more tightly coupled, it becomes more and more difficult for the human to keep up with machine behavior. None of the pilots I have just mentioned really understood what their automation was doing to them. The automation "knew", but it didn't tell them clearly enough.

The result, particularly in extremely complex machine processes, can be that human operators encounter situations in which they cannot possibly keep up, as they could not in some of these cases. Such situations can lead to "learned helplessness", in which operators simply "throw up their hands" and "let the machine do its thing". This is not a solution open to pilots, or indeed to operators in many critical industrial processes, though it has occurred in both domains and has been followed by disasters. It is extremely demoralizing when it occurs, because it defeats the operator's attempts to remain in command of the process.

If it seems that these sorts of problems may indeed be problems in aviation or other real-time processes in which risk is high, but that they are trivial for the person simply operating a complex information system for research, think again. In varying degrees, these machine attributes cause an erosion of trust in the machines being used to perform difficult work.

Some of you may recall what was involved in a multi-disciplinary literature search before computer search and retrieval systems came along. I still have 3x5 file cards - hundreds of them - with handwritten notes on articles dug out of obscure journals. But I know what's there and where it came from. When I was managing the NASA Aviation Safety Reporting System, I worked, through a computer, to gain new knowledge using perhaps 40,000 reports of aviation incidents. I often wondered, and still do, whether the database management system I used was actually doing what I thought I had asked it to do, even though I had participated in the design of the information system.

How can we question the trustworthiness of a search or other process conducted with such modern technology? How often do we even know what sources that machine may have accessed on its way to providing us with information or data? If the machines have transformed, or collated, or screened and filtered the data available to them, do we know what they have done or how they have done it? The machines rarely tell us, yet if we can't "see what we need to know, then we've got to know what we need to know", if we are to evaluate the results of the process. Without such evaluation, can we really be comfortable with the results of the processes we have invoked? Who - we, or the computer - is really in command in such a situation?

Brittleness

I mentioned some derivative problems we have encountered in these data. One is the problem of brittleness. The system performs well while it is within the envelope of tasks allocated to it, but when a problem takes it to the margins of its operating envelope (as defined in advance, by its designers) it behaves unpredictably. I should point out that the designer is usually home in bed when this occurs, leaving it to the operators on the scene to sort out the problem.

This attribute has been a factor in several air accidents, notably a test flight at Toulouse, France, in which the autopilot's operating limits were being deliberately tested. When the pitch angle of the airplane rose above 25° on takeoff, the display of flight modes decluttered, denying the pilots essential information at a time when they critically needed it. Despite the best efforts of the pilots, the airplane stalled at an altitude too low to permit them to regain control before impacting the ground, still within the airport boundaries.

The lesson to be learned from this is that all systems are underspecified at the design stage. Even if designers are experts in the domain for which they are designing new technology, it is unlikely that they will be able to foresee all of the environmental and other problems that the devices may encounter in service. Given the complexity of many modern machine systems, it is also unlikely that a new machine will be tested truly exhaustively prior to its introduction.

Clumsiness

Clumsiness is another attribute that causes problems for the operators of a human-machine system. A clumsy system leaves the human with little to do when things are going well, but demands more activity at times when workload is already high. This is a more serious problem in real-time systems, but it can tax anyone who is under time pressure to accomplish a task, as one always is during an approach to a busy airport. I suffer from Locke's tabula rasa with respect to tabular formatting programs, as do some of the word processing programs I use. The design of a complex table brings my creative activities to a total halt while I attend to the machine's requirements.

Surprises

I have mentioned surprises. These can be a real problem in flying; an example is the software that occasionally caused an advanced airplane to turn away from, rather than toward, the runway during instrument approaches - a nasty surprise, though usually one occurring at an altitude at which recovery can be effected easily once the problem is detected.

On the other hand, while preparing this lecture, I was surprised a few times by my computer's newfound habit of freezing when I reduced a figure slightly while incorporating it into the text of this paper. The behavior was consistent - once I learned the one specific (and infrequent) action that caused it to occur - but I wasted a lot of time waiting for the computer to re-boot after each occurrence, and figuring out how to avoid the problem. It did little to increase my trust in the reliability of my normally reliable machine.

This sort of machine behavior, no doubt quite understandable to the software engineer who programmed my machine, was not, and is not, understandable to me, the user. To someone like myself, these machines do indeed appear to be animate - to have minds of their own - when they behave this way. Surprises are not liked by people who require predictability in the tools they use in their daily work, and we all require that predictability.

Are there Solutions for these Problems?

It seems clear that we have not solved all of the problems that confront us. The attributes I have discussed detract from the usability of our human-machine systems, and thus diminish our effectiveness when we use the machines to perform essential work. In critical domains, the failure of these systems may have catastrophic results, as in the cases I have cited. Can we do anything about this?

To begin to respond to this question, let me back away from the specifics of the problems cited here. There are two larger lessons to be learned from these data.

Earl Wiener has observed (1989) that pilots in automated aircraft frequently ask three questions: "What's it doing now? Why is it doing that? What's it going to do next?" These questions reflect the first lesson: that our data indicate that at a fundamental level, we sometimes do not understand our tools. We do not understand their behavior, and we often do not understand exactly what they were designed to do, and how. Why do we not understand? Sometimes, it's because they don't work as advertised, but more often, it is because we were not told. Somewhat less often, it is because even the designers of the tools did not understand, either what they could do, or the conditions under which they would be asked to do it, and they therefore could not tell us, even if they wanted to.

The second lesson: these tools, however cleverly designed, can operate only in the ways they have been programmed to operate. Humans tailor, or adapt, their tools in accordance with their perceptions of the demands of their jobs. They are highly creative, and they will use tools as they think they can be used, not necessarily in the ways designers intended them to be used. Whether the tools are up to these tasks is often not known to the users in advance.

Given these issues, what might we do to minimize the problems we have in dealing with advanced technology? I believe we must think harder about the capabilities and limitations of the components of human-machine systems. Let us consider first a range of attributes of the machines in our systems, as shown in Table 2. Fadden (1990) has pointed out that these attributes can be thought of as bipolar.


Computers can be:

Self-sufficient <-------> Subordinate

Adaptable <-------> Predictable

Flexible <-------> Comprehensible

Independent <-------> Informative


Table 2: Characteristics of computers in various applications (Billings, 1991)

In a robotic or fully-autonomous system, the attributes at the left are clearly desirable, but this workshop is not concerned with robotic systems. It is oriented toward systems in which people and machines must work together, to pursue goals that are specified, not by the machines, but by the people. In such systems, I suggest that the attributes on the right are the ones required. It hardly needs to be pointed out that if humans specify the goals of a particular endeavor, the tools they use must be subordinate to their goals.

But let us not think of the relationship between the system components as one of master and slave; in fact, in many advanced systems, it is not. The relationship should be complementary, as suggested by Nehemiah Jordan (1963). Computers cannot do things that humans cannot conceive, but once those things have been conceived, computers are better at many tasks than any human can possibly be: complex calculations, monitoring for infrequent errors or improbable outcomes, retrieving and utilizing very large amounts of data in reaching conclusions.

Perhaps it is not necessary to demand that our computers also have the attributes of flexibility, creativity, a comprehensive knowledge of world states, and the ability to reason in the face of uncertainty and ambiguity. These are precisely the attributes that humans bring to any cooperative endeavor pursued by a human-machine system. Perhaps it is sufficient that the system be designed so that humans can clearly, quickly and unambiguously indicate their desires - their intent - to their machines, then can follow, or be informed of, the machines' progress toward their joint destination. The humans, of course, should remain involved in helping the machines, where necessary, by contributing their greater knowledge of the state of the world in which the endeavor is being undertaken and profiting from the machines' greater ability to manipulate, integrate and transform complex data into information.

Computers as Intelligent Assistants

Whether we are technologists or scientists, we should not lose sight of our goal, which is to accomplish useful work. At one time, scientists and engineers alike worked by themselves, with pencils, paper, and a slide rule. They reached conclusions based on knowledge or empirical research, and they disseminated those results in terms of reports or hardware. But those days are long gone, and we are no longer self-sufficient. We cannot do our jobs without assistants, who often bring as much value to an enterprise as we do, and sometimes more. Intelligent devices can be such assistants. Jordan's principle of complementarity suggests that they should be such assistants, and those of us who have had good graduate students to help us know that humans as well can be such assistants.

Assistants must possess certain attributes and perform certain functions to help us. Let me suggest that an intelligent machine assistant should be able to do these things, among others:

It should be able to manage data or information, to ease our cognitive burdens;

It should coordinate among independent processes and integrate their results;

It should provide us with decision and action options and support us in the execution of our plans;

It should keep us informed of its progress so that we are able to monitor its actions;

It should monitor our actions, to shield against human errors, just as we must be able to monitor its behavior to shield against its "errors" or failures.

A machine that could assist us by performing these functions for us would truly be an intelligent assistant with which, working as a system, we could engage in collaborative problem-solving, whatever the nature of the problem we are trying to solve.
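
As one hypothetical rendering (a Python sketch with invented names, not a specification), these functions might form the public interface of such an assistant.

    # Sketch only: a hypothetical interface capturing the assistant functions named
    # above. The names and structure are invented for illustration.

    from abc import ABC, abstractmethod
    from typing import Any, Dict, List


    class IntelligentAssistant(ABC):

        @abstractmethod
        def manage_information(self, raw_data: List[Any]) -> Dict[str, Any]:
            """Filter, integrate and transform data to ease the operator's cognitive load."""

        @abstractmethod
        def coordinate_processes(self, processes: List[str]) -> Dict[str, Any]:
            """Coordinate independent processes and integrate their results."""

        @abstractmethod
        def propose_options(self, situation: Dict[str, Any]) -> List[str]:
            """Offer decision and action options; support execution of the chosen plan."""

        @abstractmethod
        def report_progress(self) -> str:
            """Keep the operator informed so the assistant's actions can be monitored."""

        @abstractmethod
        def monitor_operator(self, operator_actions: List[str]) -> List[str]:
            """Check operator actions against the shared intent and flag likely errors."""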

Conclusion

I am not a Luddite. More, and more sophisticated, machines will be needed to help us solve our increasingly complex problems. So let me return to Dr. Woods' maxim and modify it a bit, in order to close this talk on a more optimistic note. I suggest that machines that are compliant with our demands, communicative regarding their processes, and cooperative in our endeavors can indeed be team players - and team play is at the heart of a human-centered intelligent system.

I have taken my examples from aviation, because that is the domain I know best. But make no mistake; these principles do not apply only to real-time human-machine systems. The problems I have discussed exist in many domains, including yours. It is not only in real-time domains that we encounter complexity, coupling, autonomy and lack of feedback, nor is it only in such domains that we observe brittleness, or clumsiness, or surprises.

A dear friend and long-time mentor, the late Hugh Patrick Ruffell Smith, admitted in 1949 that "Man is not as good as a black box for certain specific things. However, he is more flexible and reliable. He is easily maintained and can be manufactured by relatively unskilled labour." This maxim is as true today as when he formulated it almost 50 years ago.

I think it comes down to this. We have fought with computers for many years to get them to do our bidding. Computers are now smart enough either to go off and do their own thing, dragging us along for the ride, or to work with us to accomplish our things, but much more effectively than we can do them without such help. But it is easier to design technology-centered systems than human-centered systems, and Woods was right: the road to technology-centered systems is paved with human-centered intentions. If this state of affairs is to be improved, it is people like yourselves who will have to do it.

References

Billings, C.E. (1991), "Human-centered aircraft automation: A concept and guidelines," NASA Technical Memorandum 103885, Moffett Field, CA: NASA-Ames Research Center.

Billings, C.E. (1996), Aviation Automation: The Search for a Human-Centered Approach, (Mahwah, NJ: Lawrence Erlbaum Associates).

Fadden, D.M. (1990), "Aircraft automation challenges," in Challenges in Aviation Human Factors: The National Plan, abstracts of AIAA-NASA-FAA-HFS Symposium, (Washington, DC: American Institute of Aeronautics and Astronautics).

Jordan, N. (1963), "Allocation of functions between man and machines in automated systems," Journal of Applied Psychology, 47(3), 161-165.

Perrow, C. (1984), Normal Accidents, (New York: Basic Books).

Wiener, E.L. (1989), "Human factors of advanced technology (Glass Cockpit) transport aircraft," NASA Contractor Report 177528, Moffett Field, CA: NASA-Ames Research Center.

Woods, D.D. (1996), "Decomposing automation: Apparent simplicity, real complexity," in Parasuraman, R. and Mouloua, M. (eds.), Automation and Human Performance: Theory and Applications, (Mahwah, NJ: Lawrence Erlbaum Associates).