THE CHALLENGE OF HUMAN-CENTERED DESIGN

T. Winograd and D. D. Woods

Report from Working Group 3

Version of April 9, 1997

>>>>>>> DRAFT: DO NOT CITE OR DISTRIBUTE <<<<<<<

Working Group Leaders:

T. Winograd (Stanford University)

D. D. Woods (Ohio State University)

Group Members:

J. Miller (Apple),

R. Jeffries (Sun Microsystems),

G. Fischer (University of Colorado),

O. Garcia (Wright State University),

G. McConkie (University of Illinois),

M. Holloway (Netscape),

P. Ehn (Malmo University),

V. De Keyser (University of Liege),

J. Grudin (University of California, Irvine),

P. Agre (University of California, San Diego),

S. J. Mountford (Interval Corp.)

Table of Contents

1. What Is A Human-Centered Approach?

1.1 Wide Interpretations of the Label "Human-Centered"

1.2. Technology-Driven Development

1.3 Why Are These Interpretations Insufficient?

1.4 The Strong Interpretation of Human-Centered Design

1.5 How Do We Foster (Strong) Human-Centered Design?

2. Research To Advance Strong Human-Centered Design

2.1 New Modes of Relating Design and Research: Complementarity

2.1.1 The "experimenter as designer" and the "designer as experimenter"

2.1.2 Data and theories about sets of people working with artifacts within a context

2.1.3 Supporting human-centered design and innovation

2.2 Developing Cognitive And Social Technologies to Complement Computational Technologies

2.3 Measures / Quality Metrics

2.4 Examples of Contexts and Challenges in Human-Centered Design and Research

2.4.1 Human-centered system integration

2.4.2 Integrated collaborative tools

2.4.3 Tools for "co-bots"

3. Models For Research And Education

3.1 Testbed-Style Research Projects within Situated Contexts

3.2 Human-Centered Reflection on Design Activities

3.3 Case-based Research

3.4 Researcher Training in Conjunction with Apprenticeship

4. Recommendations

5. References

1. WHAT IS A HUMAN-CENTERED APPROACH?

To create truly human-centered systems, we need to shift the focus of research and design, putting human actors and the field of practice in which they function at the center of technology development. This shift will make a significant difference in our ability to harness the power of computers for the expanding variety of people who use them and the expanding range of activities in which computers and computer-based technologies are used.

The term "human-centered" is used by many people in a variety of related but non-identical ways. It is important to understand the consequences of taking a "strong" interpretation of the term, which we recommend. It can be contrasted with "wide" interpretations that may be useful for other groups or contexts.

1.1 Wide Interpretations of the Label "Human-Centered"

One can identify areas of computer science research as being human-centered in several ways:

Wide Interpretation 1: The motivation for technology development is grounded in a statement about human needs.

Priority choices in research directions can be motivated either by the abstract logic of the discipline, or by a prediction of how the research results will be applied to meeting human needs. For example, research on medical technology has a clear need basis, while research on graph theory (though it may end up having medical and other applications) is "abstraction-driven" -- not directly motivated by considerations of how the results will be used. We might say that research is human-centered if it is need-driven: motivated by considerations of its applications.

Wide Interpretation 2: People are "in the loop" or part of the system to be developed.

For some computer systems and applications, the role of human-computer interaction is secondary -- there may be some human startup and interventions, but to a large extent, the "beef" is in the computing, not the interaction. For a large and growing class of systems at every level, human-computer interaction plays a central role, and attention to this dimension can be thought of as human-centered. By this definition, "human-centered computing" is another phrase for describing the field of Human-Computer Interaction.

Wide Interpretation 3: Technology that happens to be about interacting with or across people is human-centered.

Work to advance the development of computer-based visualizations, natural language capabilities of computers, intelligent software agents to digest and filter information autonomously, networking tools to link diverse people in diverse locations, and many other examples are human-centered in the sense that the technology under development is intended to interact with or to support interactions across people. The research and development work focuses on expanding the capabilities of the computer with the assumption that advancing these technologies will in and of itself produce benefits. These benefits are sometimes presumed to flow directly from technological advances. In other cases the developers may make allowance for a usability testing and refinement stage after the technology is sufficiently mature.

Wide Interpretation 4: Technology development and change is justified based on predicted improvements in human cognition, collaborations across people, or human performance.

Developments in new computational technologies and systems often are justified in large part based on their presumed impact on human cognition, collaboration and performance. The development or introduction of new technology is predicted to reduce practitioner workload, reduce errors, free up practitioner attention for important tasks, give users greater flexibility, hide complexity, automate tedious tasks, or filter out irrelevant information, among other claims. In effect, prototypes and designs embody hypotheses about how technology change will shape cognition, collaboration and performance. As a result, technology is often based on human-centered intentions, in the sense of changing cognition and collaboration. Whether those intentions are matched by human-centered practice is another question, one addressed by the strong interpretation of the label.

Making such predictions presumes some research base of evidence and models about how related technology changes have affected cognition, collaboration, and performance, and it implies empirical tests of whether the predictions embodied in systems match actual experience. These are one part of a stronger interpretation of what it means to be human-centered in system development.

1.2 Technology-Driven Development

All of the wide interpretations of human-centered design still leave the development process in the position illustrated in Figure 1. The diagram shows a sequence from left to right.

* First, technologies are developed which hold promise to influence human cognition, collaboration and activity. The primary focus is pushing the technological frontier or creating the technological system. The technologist is at the heart of this step.

* Eventually, interfaces are built which connect the technology to users. These interfaces typically undergo some usability testing and usability engineering to make the technology accessible to potential users. Human-computer interaction and usability specialists come into play at this stage.

* When the technologies are put into use, they have social and other larger consequences which can be studied by social scientists.

* Presumably, the human factors and social consequences from past developments have some influence on future development (the small arrows back towards the left).

Figure 1: A technology-driven approach

This sequential approach is fundamentally technology-driven because developing the technology in itself is the primary activity around which all else is organized. Norman (1993) illustrates this by pointing to the original technology-centered motto of the 1933 Chicago World's Fair:

Science Finds,

Industry Applies,

Man Conforms

1.3 Why Are These Interpretations Insufficient?

As the powers of technology explode around us, developers recognize the potential for benefits and charge ahead in pursuit of the next technological advance. Expanding the powers of technology is a necessary activity, but research results have shown that it is rarely sufficient in itself. Sometimes, useful systems emerge from the pursuit of technological advances. However, empirical studies on the impact of new technology on actual practitioner cognition, collaboration and performance have revealed that new systems often have surprising consequences or even fail (e.g., Norman, 1988; Sarter, Woods and Billings, in press). Often the message from users, a message carried in their voices, their performance, their errors, and their adaptations, is one of complexity. In these cases, technological possibilities are used clumsily: systems intended to serve the user instead add new burdens, often at the busiest times or during the most critical phases of the task, and create new types of error traps.

For example, users can be surprised by new autonomous technologies that are strong but silent (Billings, 1996), asking each other questions like:

* What is it doing now?

* What will it do next?

* Why did it do this?

In other words, new technology transforms what it means to carry out activities within a field of practice -- changing what knowledge is required and how it is brought to bear to handle different situations, changing the roles of people within the overall system, changing the strategies they employ, changing how people collaborate to accomplish goals.

A large set of breakdowns in the human-computer interaction have been identified. These have been compiled (e.g., Norman, 1988) sometimes as "ways to design things wrong" from a human-centered point of view or as "classic" design errors in that they occur over and over again. These problems include:

Bewilderment

For every user at some time, and for some users at almost every time, computers are hard to use, hard to learn, and puzzling to work with. Even experienced users find that they don't remember how to do infrequent tasks, aren't aware of capabilities the system has, and end up with frustrating breakdowns in which it isn't clear how to proceed. Many potential users of computer systems throw up their hands altogether because of the complexity (real or perceived) of the computer systems they encounter.

Overload

As computerization increasingly penetrates a field of activity, the power to collect and transmit data outstrips our ability to interpret the massive field of data available. This problem has expanded beyond technical fields of activity (an airplane cockpit or power plant control room) to everyday areas of activity as access to and the capabilities of the Internet have grown explosively. Our problem is rarely getting the needed data; instead, the problem is finding what is informative, given one's interests and needs, in a very large field of available data. From email overload to the thousands of "hits" returned by a web query, people find that they don't have the tools to cope with the huge quantities of information that they must deal with.

Error and Failure

Computerization transforms tasks, eliminating some types of error and failure while creating new types of errors, sometimes with larger consequences (Woods et al., 1994). Some forms of error exist only in the interaction of people and computers -- for example, mode error. As Norman (1988) puts it, if you want to create errors, "... change the rules. Let something be done one way in one mode and another way in another mode."

Clumsiness

Computer systems intended to help users by reducing workload sometimes have a perverse effect. Studies have revealed clumsy automated systems (e.g., cockpit automation), that is, systems which make even easier what was already easy, while they make more difficult the challenging aspects of the job. This clumsiness arises because designers have incomplete and inaccurate models of how workload is distributed over time and phase of task and of how practitioners manage workload to avoid bottlenecks in particular fields of activity.

Fragmentation and Creeping Featurism

As we continually expand the range of activities that people do with computers, we also tend to increase the diversity of ways in which they interact. From the technical point of view, there is a plethora of "systems," "applications," "interfaces," and "options" which the user combines to get things done. From the human point of view, each individual has a setting of concerns and activities that is not organized according to the characteristics of the computing system, software application, or computerized device. The machine environment becomes more and more complex and confusing as new technologies overlap in the service of the user's spheres of activity.

More to Know and Remember

Computer systems, despite their information processing and display capabilities, seem to keep demanding that users know more and remember more. Enter a workplace and we almost always find that users keep paper notes as a kind of external memory to keep track of apparently arbitrary things that a new computer system demands they remember to be able to interact with the system. There seems to be a "conspiracy against human memory" in the typical way that computer systems are designed (Norman, 1988).

Displeasure

In the early days of computing, the point was to get a job done, which could not have been done without the computer. The "user experience" was not a consideration -- if operators could be trained to do the ballistics or code calculations, that was sufficient. In today's computing world, the axis has shifted. People use computers at their discretion, not just because they need the capabilities, but because they find the experience to be positive. In many cases, however, they are bored, frustrated, or forced to operate in ways they don't find appropriate. The effect on how they respond is not just emotional. It has a direct impact on their ability to learn and use systems effectively. Concern with what is pleasing or displeasing to the user is not a "frill", but a key tool in creating effective systems and deploying them to the people who need them. The underlying principles of human-centered design apply for everything from weapons control systems to video games.

As computers become more pervasive in everyday life, people are increasingly confronted with interactions that are both important and difficult. As computing systems and networks move into a central position in many spheres of work, play, and everyday activity, they seem to take on more functions and increase in complexity. As a result, the kinds of breakdowns described above take on new urgency.

For example, there are an expanding variety of people using computers. Computers are no longer the exclusive province of science and business. We see computers in schools, in homes, in public spaces, and in every place where people lead their lives. A major goal of the government's efforts in developing the information infrastructure is to bring universal transparent affordable access to an information society. This universal reach magnifies the breakdowns, both in number and in consequences.

Systems have become more integrated and collaborative. Most early computer systems were designed to get some specific task done. Today, the "system" encompasses a wide variety of users and tasks within a closely linked collection of devices and activities. The Internet can be thought of as an extreme example of this integration. With it comes complexity and all the other problems mentioned above.

Increasingly, there is software that mediates the use of computer systems. In an attempt to deal with the breakdowns of computing, a number of researchers and software producers are developing programs that can be thought of as "agents," which mediate between a person and the computer systems that are useful to her or him. The motivation is admirable, and sometimes these agents can be quite effective. However, such mediators can create more of the complexity they are intended to reduce, as well as new forms of error and user frustration, if they are not designed with human-centered principles in mind.

Ultimately, technological advances are needed, but they are not sufficient to produce useful and successful systems. The actual impact of technology change on cognition, collaboration and performance varies depending on how the power and possibilities afforded by technology are integrated into ongoing fields of human activity. As Norman puts it, "technology can make us SMART and technology can make us DUMB" (Norman, 1993). Our central problem is often not whether we can develop it, but what we should develop. Our central problem is not less or more technology, but rather skillful or clumsy use of the wide range of technological possibilities available to us.

1.4 The Strong Interpretation of Human-Centered Design

The goal for this workshop was to look beyond current directions and approaches in order to support future development of systems that are human-centered in ways that are now difficult or impossible to achieve.

For a truly human-centered design, we need to move beyond the current bounds of what is popularly thought of as "usability" or "user friendliness." We need to shift our focus beyond the immediate interactions between person and machine, toward the role those interactions play in a larger picture of human activity.

Norman (1993) indicated the challenge by rewriting the technology-driven motto of the 1933 World's Fair to create a new, human-centered motto:

People Propose,

Science Studies,

Technology Conforms.

Basically, in a user-centered approach designers consider, up front, the impact of introducing new technology and automation on the role of people in the system and on the structure of the larger system of which the technology is a part. Human-centered design is not a call for less technology. Rather, it calls for developing technology that is adapted to the characteristics and pressures of different fields of activity.

This is a strong interpretation of the label "human-centered," and we can characterize this perspective in terms of three basic attributes: Human-centered research and design is problem-driven, activity-centered, and context-bound.

1. Human-centered research and design is problem-driven.

We distinguish "problem-driven" research and development from "need-driven" and "abstraction-driven" as described earlier (although there is overlap). A problem-driven approach begins with an investment in understanding and modeling the basis for error and expertise in that field of practice. What are the difficulties and challenges that can arise? How do people use artifacts to meet these demands? What is the nature of collaborative and coordinated activity across people in routine and exceptional situations?

There is a particular perspective that emerges from being situated in a specific human problem situation. The specificity of the problem gives both a focus and a context. The powerful (and difficult) part is to use the specific problem as a grounded basis for developing generally applicable theories and mechanisms -- to use the particulars as a lever, without reducing the project to special-case problem solving.

2. Human-centered research and design is activity-centered.

In building and studying technologies for human use, researchers and designers often see the problem in terms of two separate systems (the human and the computer) with aspects of interaction between them. Although this can reveal interesting questions, the focus is on the participants in isolation, not the activity that brings them together. The strong interpretation of human-centered means that we are trying to make new technology sensitive to the constraints and pressures operating in the actual field of activity.

New possibilities emerge when the focus of analysis shifts to the activities of people in a field of practice. These activities do or will involve interacting with computers in different ways, but the focus becomes the practitioner's goals and activities in the underlying task domain. The question then becomes (a) how do computer-based and other artifacts shape the cognitive and coordinative activities of people in the pursuit of their goals and task context and (b) how do practitioners adapt artifacts so that they function as tools in that field of activity.

3. Human-centered research and design is context-bound.

Human cognition, collaboration, and performance depend on context. A classic example is the representation effect -- a fundamental and much reproduced finding in Cognitive Science. How a problem is represented influences the cognitive work needed to solve that problem, either improving or degrading performance (e.g., Zhang and Norman, 1994). In other words, a problem that is formally the same, when represented differently, can lead to different cognitive work and therefore different levels of performance. Another example is the data overload problem. At the heart of this problem is not so much the amount of data to be sifted through. Rather, this problem is hard because what data is informative depends on the context in which it appears. Even worse, the context consists of more than just the state of other related pieces of data; the context also includes the goals, the expectations, and the state of the problem solving process of the people acting in that situation.

Working within a context and at the same time being able to generalize about that context is both fruitful and difficult. The traditional power of the sciences comes from their ability to abstract away from the particular context of a problem, and to develop general rules or "laws" that can be stated in a context-free form (typically mathematical equations) and applied to a wide variety of problems and situations. Much of computer technology rests on a scientific and engineering basis of this classical kind, but when we approach the complexities of interactive human-computer systems, the questions that need answers are often not the ones to which formal context-free techniques apply successfully.

The three attributes of the strong interpretation of human-centered design make the problems of research and technology design much more challenging than they would be if the relevant domains were amenable to traditional formal modeling and prediction. People with backgrounds and experience in classical areas of science and engineering often view the three characteristics of a strong human-centered view as reasons why there cannot be a coherent scientific and research agenda on human-centered systems. We draw the opposite conclusion -- the domain is challenging and will be advanced by new ideas, not just about the systems we design, but about the nature of design and research in human-relevant technologies.

These new ideas have begun to emerge and take hold over the last decade. For example, one can point to a series of books that use the strong interpretation of human-centered as the basis for research and design -- Norman and Draper, 1986; Winograd and Flores, 1986; Norman, 1988; Ehn, 1989; Norman, 1993; Hutchins, 1995; Billings, 1996. This framework has been used as the basis for research and design in

* computers in medicine (e.g., Cook and Woods, 1996; Smith et al., 1996),

* cockpit automation (e.g., Hutchins, 1995; Sarter, Woods and Billings, in press),

[others]

If it were possible to advance the design of human-centered systems within the traditional framework of research and development, we could avoid having to deal with these difficult issues. But as the motivation for this workshop indicates, there is a broad consensus that there are problems and disappointments with today's computer systems that will require new thinking and new directions. The rest of this report sketches an initial set of issues and questions that frame those directions.

1.5 How Do We Foster (Strong) Human-Centered Design?

When a person uses a computer, there is a specific set of interactions going on, which can be analyzed in terms of cognitive processes and usability considerations. While the detailed design of these interactions is important, it is only a part of the picture. A truly human-centered design must bring in the larger context, shifting the focus from the immediate interactions between person and machine toward the role those interactions play in a larger picture of human activity. Figure 2 illustrates the relationships in a strong human-centered approach, suggesting revised priorities for research and design.

Figure 2: A Human-Centered Approach

This perspective begins with the activity of people and other components in complex networks of action, viewing each component as both a potential cause and a potential locus of change. "Human-centered" implies putting human actors and the field of practice in which they function at the center of focus; this implies a "practice-centered" approach that depends on a deep analysis of how people work individually, in groups, and in organizations, and of the actual demands of the field of practice (Ehn, 1989).

As an example, consider the use of the world-wide web for education. The Web was developed in the technology driven style of Figure 1. When it was applied to education, the obvious mode was to use its information distribution capability to automate the mechanics of traditional educational structures: distributing handouts, automating exams, etc. But a different starting point would take web-like mechanisms as a potential area of development, driven by considerations of what new possibilities they create for how education is done. A constructivist approach is possible, in which students learn by doing and sharing what they do with other learners. This might require rethinking some of the technical aspects of the web (e.g., the asymmetry of providing and receiving information), which might in turn lead to yet other uses. By considering a specific context of activity (in this example, of facilitating the learning of a subject by some group of learners), there is a potential for creativity in all three circles, with each pushing the others.

In Figure 2, the context of the user's activity is made explicit in the background circle. The differences among applications, constituencies, and settings will require attention to different contextual factors for each design. It is impossible to consider all factors in interaction at once, and every design process will include simplifications and specializations of this general picture. What we are arguing for in common is a focus -- a stance -- in which attention to the human and social context plays an explicit and central role in the design of any system.

2. RESEARCH TO ADVANCE STRONG HUMAN-CENTERED DESIGN

We can identify directions for future research in terms of our goals:

* Designing for the full diversity of what people do.

* Putting people on top of the technology change curve.

* Bringing a human scale to the increasing complexity of interacting with interconnected computer systems.

These are, of course, goals for success, not a recipe for how to achieve that success. The activities undertaken by researchers and designers will need to produce new understanding, innovate new ways to use technological powers, and be based on new ways of working together.

These advances have already begun, paced by researchers and designers who step outside of traditional roles. Those contributing to human-centered research and design of computing systems stand at the intersections of:

* research and design

* the lab and the field

* individual and social perspectives

* work activities and more playful, engaging activities

* application and theory

* technological areas of inquiry and behavioral/social science areas of inquiry.

Similarly, the issues that we raise in the following sections arise at these intersections and will be solved by work there.

2.1 New Modes of Relating Design and Research: Complementarity

Research on human-centered design requires a complementarity between research and design because of the desire to influence what systems are developed and to make those systems more effective in terms of supporting people acting in some field of practice.

Building a research base that informs design means "... developing a theoretical base for creating meaningful artifacts and for understanding their use and effects" (Winograd, 1987, p.10). Put another way, the research needed to advance strong human-centered design should advance our understanding of the relationship between technology change and cognition, collaboration and activity. This includes both how technology change shapes cognition and collaboration, and how people adapt technology to serve their ends.

In the sequential process diagrammed in Figure 1, there are independent research agendas for each of the circles -- technological research, human factors research, and social impact research -- each with its own well-developed traditions, methodologies and established results. But the shift to the activity-centered, context-bound view diagrammed in Figure 2 requires a different approach, grounded in data and theory on the relationship between technology change and cognition, collaboration and other forms of human activity.

There are several characteristics that reflect this complementarity of design and research in a strong human-centered agenda:

* the designer, in part, becomes an experimenter, because new computer-based prototypes and systems also embody hypotheses about what would be useful;

* the experimenter, in part, becomes a designer, since aspects of technology become variables in studies to understand how technology change shapes and is shaped by human activity;

* converging studies help build a base of empirical results and models derived from the study of the development and use of artifacts in different contexts;

* in turn, this broader knowledge about technology change and human activity helps guide and focus innovation and design practice for particular cases.

2.1.1 The "experimenter as designer" and the "designer as experimenter"

It is important to recognize that artifacts have a dual status: new computer-based prototypes and systems exist as objects, but these designs also embody hypotheses about what would be useful, i.e., hypotheses about how technology change shapes cognition, collaboration and other human activities.

The possibilities of technology seem to afford designers great degrees of freedom. These possibilities seem less constrained by questions of feasibility than by concepts about how to use them skillfully to meet operational and other goals. Computer technology is very often justified based on predictions about how the new systems will improve aspects of human cognition, collaborations across people, or other aspects of human activity.

This means that designs embody hypotheses about the relationship between technology and useful changes in human cognition, collaboration, and activities. The adaptive response of people and organizations to new systems tests the hypotheses about what would be useful embodied by particular prototypes or systems. To develop operationally effective systems for particular contexts, designers, at least in the long run, should adopt the attitude of an experimenter, trying to understand and model the interactions of task demands, artifacts, cognition, collaboration across agents, and organizational context. Such data allow designers

* to see if their implicit models about the relationship of technology and human activity are on track,

* to modify and develop better models for future development, and

* to learn more about the field of activity to guide further innovation and concept generation.

This is a process of reflective design practice and serves as the base for deriving more generic lessons from particular contexts and systems.

From another point of view, it is important to see that artifacts play a role in most human activities, especially cognitive and collaborative ones. As studies have shown (e.g., Winograd and Flores, 1986), technology change transforms cognitive and collaborative activity through the introduction of agent-like machines, through new couplings across people, and through the introduction of tools that constrain cognitive work. As a result, the introduction of new technology is a kind of experimental manipulation of an ongoing field of activity. How do artifacts shape cognition and collaboration given organizational context and problem demands in a field of practice? How do practitioners, individually and as groups, informally and formally, shape artifacts to meet the demands of the field of activity within the pressures and resources provided by larger organizations? These are interesting questions to guide research about intercoupled systems of people and machines that perform or influence cognitive work and other human activities.

The development of prototypes and new systems is one important resource for advancing this research agenda. New technology functions as a kind of experimental manipulation that can be exploited to help understand the dynamics of task demands, artifacts, cognition, collaboration across agents, and organizational context indicated by Figure 2.

2.1.2 Data and theories about sets of people working with artifacts within a context

One of the problems posed by a strong human-centered approach is the development of research methodologies that can be context-bound yet still produce generic results. If context is critical to design, then what can we say theoretically about context? How do we identify the deeper factors at work behind the unending variety of individual settings and particular systems and technologies?

The needed research base can be built, and is in fact in the process of being built, from reflective investigations of practice and specific cases of technology and human change. These studies:

* use technology as a variable in terms of how it shapes and is shaped by human activity,

* trace the impact of technology on human strategies, collaboration, and activities,

* identify how users adapt and reshape artifacts and their own strategies to accommodate the constraints of their activities and goals,

* document how technology change actually transformed what it means to act in some field of practice.

A few examples of this type of research include:

* Hutchins (1992) on the interplay of navigation tools and collaborative activity in maritime navigation,

* Ehn (1989) on how the same technological power can be used in more technology-driven or more work-oriented ways,

* Hutchins (1995b) on how simple artifacts influence the cognitive activities of the flight crew during a descent in commercial transport aircraft,

* Cook and Woods (1996) on how physicians adapted to the introduction of integrated computer systems for patient monitoring in the operating room,

* the work of Fischer and his colleagues (e.g., Fischer and Reeves, 1995; Fischer et al., 1991) on developing the critiquing style of cooperative interaction.

These kinds of studies:

* use actual situations as natural laboratories or as models for more laboratory-based settings,

* are based on field-oriented research techniques including direct observation, building corpuses of critical incidents, ethnography, and observation of activities during simulated problems,

* tend to use protocol analysis to analyze the interplay of people and technology in the observed situations,

* allow the investigators to shape the conditions of observation through scenario design and through artifact-based methods -- where prototypes function as experimental probes,

* sometimes trace over time how groups of users change strategies and other activities in response to technology change.

How do we develop useful models that include contextual factors as fundamental parameters? One approach is simply to model contexts explicitly, shifting the figure/ground relationship, so that what was previously background becomes a part of the representation which can be explicitly manipulated. Often this provides useful insights and new developments, but in many ways it is a brute force method to deal with context. It anticipates what aspects of context will be relevant and moves them into the foreground. However, there are limits to what can be anticipated, and unanticipated situations are the norm, not the exception.

There is a need to develop a coherent and applicable context-bound theory that can serve as a basis for human-centered design. For example, there are approaches (such as phenomenology, work analysis, ethnography) that attempt to deal explicitly with the interplay of context and action, for human activity in general. There are beginnings of design theories that are based on this work (e.g., Winograd and Flores, 1986; Hutchins, 1995a).

Inevitably, relevant theories will be ontological, not formal in nature. That is, they will not provide a set of equations and formulas to be systematically applied, but will give a conceptual framework and orientation within which to develop systems.

2.1.3 Supporting human-centered design and innovation

The classical modes of theory application are based on formal methods that take research-derived knowledge (e.g., laws expressed in mathematical relationships) and apply them to the solution of practical problems.

In human-centered areas of inquiry, formal theories tend to be applicable only to narrowly circumscribed phenomena, while the practical problems depend on grappling with much wider and less well-defined issues. This does not mean, however, that every problem is tackled from scratch with no means at hand for applying previously developed knowledge. In design disciplines, principles, examples, and experience are applied in regular ways to solve specific problems and develop new designs.

In the design of interactive computer systems, we are still at an early stage in formulating the knowledge/practice base that can allow us to learn from the iterative design process and generalize beyond the specific artifact. We do not yet have a good understanding of how the use of systematic (research-derived) knowledge fits in with creativity and user experience in the design process.

A priority for research activities is to develop methods by which "generic" research (grounded in, but not limited to, a specific situation) can be leveraged to provide better design guidance, reducing the need for costly iterative design and test. To link research and design through methodologies that work in practice, we need to balance the quest for rigor and generality with the exigencies of the practical world. There can be a spectrum of methodologies, from fundamental frameworks for problem setting to more specialized methodologies for dealing with specific problem types.

An example of this kind of link between research and design is occurring in part of the work on computer-supported cooperative work (Grudin, 1994), where principles derived from studies of human-human collaboration, such as common ground (Clark and Brennan, 1991), provide guidance about how to design displays of current state and ongoing activities to support human-computer collaboration (e.g., explanations) and technology-mediated remote human-human collaboration.

Research relevant to design must be sensitive to but not overwhelmed by the constraints on actual development (e.g., production pressures). For example, why do products demonstrate a "creeping featurism" that implicitly but inevitably produces operational complexities? Since the computer medium is multi-function, it is easy to create more options, modes, menus, and displays--software can make the same keys do different things in different combinations or modes, or provide soft keys, or add new options to a menu structure; the CRT or other visual display unit (VDU) allows one to add new displays which can be selected if needed to appear on the same physical viewport. What factors push developers to proliferate modes, to proliferate displays hidden behind the narrow viewport, to assign multiple functions to controls, to devise complex and arbitrary sequences of operation--in other words, to create devices with classic deficiencies that produce new cognitive burdens for users, more operational complexities, and new forms of error and failure?

The issue of creeping featurism also illustrates the need to broaden the view of what is being designed. If a company designs a new navigation system for aircraft, the design needs to extend to corresponding changes in air traffic control and management roles and systems, maintenance systems, training systems, and more. In the user's world, devices are not used independently, but are part of an interacting network of equipment with interdependencies that are visible only through considering the field of activity as an integrated whole and are not visible when looking at any one of the designs in isolation.

Activity-based design moves away from the design of a particular software application or an individual computerized box toward the design of a working environment for the people active in that context. Simply physically combining multiple devices and sources of data in a single multi-function computer system is rarely sufficient to produce an activity-centered design, and it often creates new problems such as workload bottlenecks if developers do not have a good model of the integrated activities that can go on in that context (e.g., Cook and Woods, 1996). Such integrated design is both technically and organizationally difficult, since it requires a view that does not divide neatly along the component lines of the computing systems or of the organizations that produce, disperse, and maintain them.

Activity-centered design considers the unity of different aspects of human cognition, attention, and collaboration that come together in a particular context. As a simple example, a design for how an individual's attention will be distributed and shifted among several tasks may cut across different systems (from the machine perspective), across active machine agents and passive forms of visualization, and across the interactions among multiple people in an open or more private workspace. Whether people can focus on the right data at the right time in a changing environment depends on all of these elements working together: a unified understanding of attentional skill, visualizations that support control of attention, training experiences that enhance attentional skills, active alerting or intelligent systems that cooperate smoothly rather than interrupting too often or in the wrong contexts, and cooperative activity across multiple people supported by shared views and open workspaces.

Research on integration requires innovation both in the models that are used to understand and anticipate what human activity will be supported, and in the practical methods for design and development that allow for taking a human-centered view that cuts across traditional system boundaries.

Another broad issue that affects the quality of the computer systems that we design and build is how to close the gap between user-centered intentions and the technology-driven nature of actual design practice (Grudin, 1996). It is relatively easy to subscribe to the ideals of human-centered design, and all designers in some sense believe they are taking a 'human-centered' approach. But somehow the exigencies of design environments (organizational pressures, time pressures, economic pressures), and the overconfidence of designers in their own intuitions of 'what the user needs' often get in the way.

Design is a continual cycle, in which each level of design becomes an opportunity for testing the concepts and mechanisms that went into it.

Even commercial products represent temporary commitments in a larger cycle of feedback, evaluation, re-conceptualization, and design evolution. There are different degrees of design plasticity at different points. What can be changed early in the product definition stage will become frozen into place by the beta release. Different product types carry with them different implications for successful design cycles, as do the different academic and industrial settings in which design work is done.

Today's "seat of the pants" intuitions about the cycle of design, prototyping, and testing can be augmented with more systematic understanding of the nature of this process, enabling human-centered design to be more effective and closer to the needs it is attempting to address (Poltrock and Grudin, 1994). Thinking from a scientific/empirical perspective, we need to understand how to use artifact-based methods where prototypes function as a vehicle for learning -- as a tool for discovery. Each design is an experimental probe in the space of possible designs (Carroll, Kellogg, and Rosson, 1991; Woods, in press). Along these lines, what role does the formulation and application of specific hypotheses play in the design of innovative products/systems?

A particular problem that arises at the intersection of design and research is the "envisioned world problem." The design cycle as described in the previous paragraph implicitly assumes a fixed background -- the needs and practices of potential users -- which is "probed" by the experiments that are constituted by new designs. But in many cases, the introduction of new technology will transform the nature of practice. New technology introduces new error forms; new representations change the cognitive activities needed to accomplish tasks and enable the development of new strategies; new technology creates new tasks and roles for the different people at different levels of a system. In other words, new technology is not simply an experiment, but an experimental intervention into fields of ongoing activity.

The introduction of new technology changes the nature of the task, not always in ways that are anticipated, and not always for the better. This has implications for how analyses of existing work practice are used to inform design, and implies a need for post-product release field work and other techniques to assess the actual impact of new systems on field practice (see for examples, De Keyser, 1992; Jordan and Henderson, 1995; Robert et al., 1996; Tschudy et al., 1996). Research needs to address several related questions:

* How can data collected at one time be applicable to design activities that will produce a world different from the one studied?

* How does one envision or predict the relation of technology, cognition and collaboration in a domain that doesn't yet exist or is in a process of becoming?

* How can we predict the changing nature of expertise and new forms of failure as the workplace or field of activity changes?

2.2 Developing Cognitive And Social Technologies to Complement Computational Technologies

In talking about the design of technologies, people often focus on the material or capabilities side: a technology is a collection of devices, features, and options which displays certain autonomous capabilities. Technology-driven research attempts to expand the power of those technologies. One of the wide interpretations of "human-centered" is based on expanding the power of technologies that happen to be concerned with how computers interact with people (e.g., the acceptability, naturalness, or intelligibility of machine-generated speech).

But the strong interpretation of "human-centered" helps us see that there are also "cognitive and social technologies" based on how technology shapes the cognitive and collaborative activities of practitioners (e.g., Winograd and Flores, 1986; Agre, 1995). These are often better thought of as "methodologies," techniques or ways of getting things done, or as "conceptual properties" which are exhibited by manipulating properties of objects. Strong human-centered research is concerned with understanding these cognitive and social technologies as they are expressed in the design of technological objects in relation to a field of human activity.

For example, based on analysis of how the human perceptual system functions (how we know where to look next in a changing natural environment) and based on innovative prototyping of designs, a generic but context-bound concept of "focus plus context" has been extracted (e.g., Lamping, Rao, and Pirolli, 1995; Woods and Watts, in press). This concept has been shown to aid navigation in a large network of data or displays in multiple investigations in different fields of activity. Another example, derived from research on human-human communication, is the common ground concept (Clark and Brennan, 1991). This concept has been shown to be a basic aspect of cooperative work and has led to the design concept of a visible shared frame of reference that integrates current state and ongoing activities as part of an open workspace. It has also been used to integrate state data and output from intelligent advisory systems to create more effective forms of explanation for real-time environments.
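The "focus plus context" idea can be given a minimal computational sketch in the spirit of degree-of-interest ("fisheye") filtering: each item's visibility is its a priori importance minus its distance from the current focus, so detail concentrates near the focus while remote context thins out. The data, function names, and threshold below are invented for illustration and are not taken from the systems cited above.

```python
# Illustrative sketch of "focus plus context" via a degree-of-interest rule.
# All names, data, and the threshold are hypothetical.

def degree_of_interest(depth, distance_to_focus):
    """A priori importance (shallower items matter more) minus the
    distance from the current focus item."""
    return -depth - distance_to_focus

def visible(pages, focus_index, threshold=-4):
    """Keep only pages whose degree of interest clears the threshold."""
    focus_pos = pages[focus_index]["pos"]
    return [p for p in pages
            if degree_of_interest(p["depth"], abs(p["pos"] - focus_pos)) >= threshold]

# A toy sequence of display pages; nesting depth cycles 0, 1, 2.
pages = [{"name": f"page{i}", "depth": i % 3, "pos": i} for i in range(10)]
shown = [p["name"] for p in visible(pages, focus_index=5)]
```

Note that an important but distant item (a shallow page far from the focus) can still survive the filter, which is exactly the "context" half of the concept.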

In both of these examples concepts about how artifacts shape cognition and collaboration have been developed. They illustrate the character of cognitive and social technologies. The development of these ideas opportunistically involved:

* observing people use artifacts in real contexts,

* extracting a common pattern of experiences or a phenomenon, including typical breakdowns,

* noticing how concepts from other fields (perception and linguistics) were relevant,

* innovating and creating artifacts,

* the insight to extract a generic concept that goes beyond the properties of particular technologies and particular settings, yet remains useful for guiding the use of technology in design for particular settings,

* artifact-based investigations.

It is also interesting to note that the concepts are not about the technologies themselves (for example, they are not framed in terms of what it takes to build a computer system). Instead, they express general characteristics about an activity that occurs in many settings and that inherently involves technological artifacts. These examples illustrate the target of strong human-centered research and design, even though one cannot provide a list of such topics that should be investigated, since such insights are part of the research process itself.

2.3 Measures / Quality Metrics

In order for a discipline of human-centered design to become the basis for a community of practice, there need to be consensual understandings of what constitutes success. If we have no way to measure whether a design result or a design process satisfies the criteria, then there can be no body of agreement about what is good and what should be done.

On the other hand, it is all too easy to misinterpret ease of measurement as indicating the practical value of a measure. Many of the central elements of human-centered design, such as appropriateness to context, do not lend themselves to easy quantitative measurement.

Situated Measures

In finding ways to relate measures to overall qualitative assessments, research will be required to develop new kinds of situated measures. These measures will need to be sensitive to the context of use and to the context of the design process. They may differ for different points in the maturity cycle (e.g., assessing novelty vs. comparing commodities), for different software categories and genres, for different audience priorities (e.g., ease of use vs. functionality), and so on.

Resource Tradeoffs

Measures also need to be related to resource tradeoffs. The question is not just how good a design is (with respect to some method for assessment), but how that goodness fits into a cost structure: Given limited design funds, what would you invest in that would make the most difference? This integration of assessment measures into cost structures has been explored in economics, and needs to be extended to make contact with measures of software design.
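The resource-tradeoff question can be made concrete with a toy budget-allocation sketch: given candidate improvements, each with a cost and an assessed benefit, choose the portfolio that maximizes benefit within the budget. The candidate improvements and all cost/benefit numbers below are wholly hypothetical; the point is only that assessment measures become actionable once they enter a cost structure.

```python
# Hypothetical budget-allocation sketch: which assessment-guided design
# investments fit the budget with the greatest total benefit?
from itertools import combinations

candidates = {              # name: (cost, assessed usability benefit)
    "user testing round": (30, 8),
    "field observation":  (50, 12),
    "redesign menus":     (20, 5),
    "online help":        (40, 6),
}

def best_portfolio(options, budget):
    """Exhaustively pick the subset of improvements with the highest
    total benefit that fits the budget (fine for a handful of options)."""
    best, best_benefit = (), 0
    for r in range(len(options) + 1):
        for subset in combinations(options, r):
            cost = sum(options[name][0] for name in subset)
            benefit = sum(options[name][1] for name in subset)
            if cost <= budget and benefit > best_benefit:
                best, best_benefit = subset, benefit
    return set(best), best_benefit

chosen, benefit = best_portfolio(candidates, budget=100)
```

Exhaustive search is only for illustration; the real difficulty, as the text notes, lies in obtaining credible benefit estimates, not in the optimization.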

Predictive Measures

The value of measurements is in their use. Post hoc use provides certain benefits, but even more benefit can come from predictive use. The goal is to shortcut the trial and error process, to be able to anticipate results in certain dimensions without going through the expense of full design and implementation. Developing predictive measures in the domain of human-centered software design is difficult. What kinds of predictions can you make? What can be simulated? How much can we shortcut the trial and error process? It is clear that progress depends on building up the empirical base on "artifacts, their uses and effects."

Complexity Measures

Many of the problems with current computer systems, as described in section 2, are exacerbated by complexity: the complexity of software; the complexity of tasks; and the complexity of the overall environment in which the human interacts with computer-based systems. In order to design for situations with inherent complexity, it will be important to have better measures of the complexity of an environment or task. Complexity has been notoriously hard to measure in all but the most formalized domains (such as algorithmic complexity), and it is a challenge to devise measures that will have meaningful application to system design.
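One crude quantitative handle, offered purely as an illustration (it is not proposed in the text above), is the Shannon entropy of how a user's activity is spread across a device's modes: a device that forces work across many equally likely modes scores higher than one dominated by a single mode. The usage traces below are invented.

```python
# Illustrative sketch: mode-usage entropy as a rough dispersion measure.
import math
from collections import Counter

def mode_entropy(visits):
    """Shannon entropy (in bits) of the distribution of mode visits --
    a crude proxy for how dispersed the user's activity is."""
    total = len(visits)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(visits).values())

# Hypothetical usage traces: one device with a dominant mode, one without.
simple_device = ["main"] * 9 + ["setup"]
moded_device  = ["nav", "comm", "plan", "alert"] * 5
```

Such a measure captures only one narrow facet of complexity, which is precisely the point of the paragraph above: meaningful complexity measures for design remain an open challenge.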

2.4 Examples of Contexts and Challenges in Human-Centered Design and Research

One of the fundamental tenets of human-centered research is that the research needs to be grounded in actual problem situations and contexts, or else it will abstract away from context in a way that severely limits its applicability to any situation. This means that in order to achieve the theoretical research goals, we need to work in the setting of designing specific technologies.

The following are examples of design projects that could serve as a focus for theoretical development in human-centered design. They are by no means the only such examples, and they will overlap with projects that are done from more technology-driven starting points.

2.4.1 Human-centered system integration

Earlier we mentioned the problem of fragmentation for users of computer systems. As uses and user communities grow, there is an increasing burden on each user to master and move between multiple interactive modes and contexts in order to cope with the complexities of their computing environment.

Some designers have begun to explore a kind of system integration that is centered on the user's experience, rather than on the underlying functionality of the systems. The goal is to provide a consistent context that cuts across activities, applications, settings, sessions, and the like, in order to bring order and uniformity to the user's world. As a simple example, I might want to take a figure from that paper I saw on the web yesterday and combine it with some text from an email I received last week, to include in the document I am working on today. From a user's point of view, these are pieces of material associated with particular events and topics in my environment. In order to do this task with existing systems, I need to have mastered a complex set of operations for hotlists, file transfers, cut and paste of different media, etc. In a user-centered integration, I would deal with them in terms of my field of activities, rather than in terms of multiple applications and commands.
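The contrast can be sketched as a data-structure question: instead of files owned by applications, materials are indexed by the events and topics they belong to. Every name and field below is hypothetical, invented to make the idea concrete.

```python
# Illustrative sketch: indexing materials by activity and topic rather than
# by owning application. All names and fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Material:
    content: str
    kind: str                  # e.g., "figure", "text"
    source_event: str          # e.g., "paper seen on the web yesterday"
    topics: set = field(default_factory=set)

class ActivityStore:
    """Retrieve materials by topic, regardless of which application
    produced them."""
    def __init__(self):
        self.items = []

    def add(self, material):
        self.items.append(material)

    def by_topic(self, topic):
        return [m for m in self.items if topic in m.topics]

store = ActivityStore()
store.add(Material("fig1.png", "figure", "web paper yesterday", {"project-x"}))
store.add(Material("quoted text", "text", "email last week", {"project-x", "budget"}))
store.add(Material("notes", "text", "meeting", {"budget"}))

project_materials = store.by_topic("project-x")
```

The sketch deliberately omits the hard part, which the surrounding text identifies: modeling user activities well enough that such an index reflects the user's world rather than the system's.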

Providing such an environment organized around user activities requires advances in modeling those user activities and linking such modeling results to computational mechanisms.

A key aspect of human-centered integration is the ability to provide coaching to users that is based on their previous activities and knowledge. Rather than a generalized help command, a coaching system can provide guidance in context -- both the context of activity at the moment, and the larger context of the user's previous activities, preferences, specialized tools and tailoring, etc. One obvious example of this is for users with disabilities, whose interactions with every system they touch will be shaped in parallel ways by the particular limitations on their use, whether it be lack of sight, inability to use a keyboard, or cognitive dysfunctions.

The work on user activity integration will require (and will inspire) research on shared context, control, intervention, attention and other phenomena of how people interact with the worlds they inhabit.

2.4.2 Integrated collaborative tools

With the recent -- and quite sudden -- emergence of mass-appeal Internet-centered applications, it has become glaringly obvious that the computer is not a machine whose main purpose is to get a computing task done. The computer, with its attendant peripherals and networks, is a machine that provides new ways for people to communicate with other people. The excitement that infuses computing today comes from the exploration of new capacities to manipulate and communicate all kinds of information in all kinds of media, reaching new audiences in ways that would have been unthinkable before the networked computer.

Communication is more than just getting items from one machine (or one person) to another. It is based on the fundamental nature of language as a way of coordinating action and sharing meaning. Supporting human communication well requires more than just having high bandwidth means of moving multimedia data. It requires analysis and understanding of the communicative functions, and a corresponding direction of innovation in the design.

To pick a simple example, consider the distribution of information on the world-wide web. Technology-driven research can speed up the transmission, provide for key-word search, and the like. But what about support for considerations of value? What kind of mechanism is required for a reader to know what a given page means -- in what context was it produced, for what intended audience, with what purpose in mind, with what degree of integrity? These social dimensions are not reducible to HTML headers, but they are of crucial importance to the practical use of information on the web. Simple labeling schemes, such as PICS, are an example of technologies being proposed to address this problem. It is clear that the hard issues that PICS and its successors must address are those in the human domain.

PICS (and the whole internet, for that matter), can be thought of as a kind of collaboration tool. But it is a fragmented collaboration tool without unifying modes and operations that cut across the different technologies. The development of integrated collaborative tools will draw on the existing work on Computer-Supported Collaborative Work (CSCW) and will require new research that provides foundations for dealing with questions of trust, authority, and the ways in which activities by one party make claims on the attention of another.

2.4.3 Tools for "co-bots"

The shift toward a 'human-centered' approach changes the questions asked from how to compute better solutions to how to determine what assistance is useful, and how to situate and deliver it in the interface. Given the difficulties of developing broad artificial intelligence, one of the key challenges in the design of human-centered intelligent systems will be to structure multi-agent teams (which may include multiple persons and intelligent machine agents) to maximize the opportunity for correct problem solving and decision-making. This requires the development of joint person-machine architectures that can effectively handle unanticipated situations.

We need to highlight the importance of considering the performance of the distributed joint person-machine system in designing and evaluating intelligent aids. The challenge for human-centered research in support of this technology is to develop our understandings of:

* intervention modes in co-situated human-computer action

* cooperation, compliance, communication

* distribution of control/autonomy

* appropriate dimensions of co-adaptation (how the machine adapts to the human behavior and the human adapts to the machine behavior)

There is a large base of good work to draw on in this area (see, e.g., Roth et al., in press; Billings, 1996).

3. MODELS FOR RESEARCH AND EDUCATION

The preceding sections propose research that will require innovative design of initiatives and projects. These often will require forms of interdisciplinary knowledge and synthesis. The following are examples of types of research initiatives that are likely to be a part of a strong human-centered research agenda.

3.1 Testbed-Style Research Projects within Situated Contexts

There is often a dichotomy between research activities and development. Research is done in the laboratory, driven by a conceptual framework, while development is done in industry, driven by commercial practicality. Some projects, such as the current Digital Libraries Initiative of NSF, ARPA, and NASA, have developed a model that provides a link between these two often-isolated ends. The research is directed to long-term generalizable results, while being grounded in a specific situation of use and users. The teams that work on the projects include computational researchers, designers, and social science researchers, who jointly analyze the needs, identify the problems, and set directions for design.

Problems come up in this style of work: resource conflicts between long-term goals and immediate implementation needs; difficulties in cross-disciplinary discussion, especially during the phases when it can have the most effect on the designs; and even lack of mutual respect among the disciplines. However, there are also important benefits that will pay off in the ability to develop theories and methods for human-centered design. The model needs further refinement and exploration in future projects.

3.2 Human-Centered Reflection on Design Activities

In addition to working with projects that are created from the beginning to provide a testbed for human-centered design (as described in Section 3.1), it is possible to build human-centered observation into other design-oriented research projects, where the goal is to observe, not guide, the design process and its results. A researcher concerned with questions about artifacts, their use, and effects can do prospective analyses, observations, and measurements of a project in a research lab or in industry. The researcher would document the dynamics of the process and analyze it in terms of the criteria for human-centered design (both in how it satisfies those criteria and how it violates them).

3.3 Case-based Research

The integration of research and design in a single project is possible only in those limited cases where the funding and execution of the project are set up with a conscious effort to develop design knowledge. However, there is also a wealth of experience in projects that have already carried out design activities, which can be researched and analyzed on a post hoc basis. This analysis can draw on various methods, including ethnographic description, quantitative measures of resources and outcomes, and conceptual analysis of the kind that drives case studies in other professional disciplines, such as business, law, and architecture.

In software design, there is precious little analysis of past cases, either successful or unsuccessful. The large-scale feedback loop on design is not working effectively, and most designers start with little or no awareness of lessons learned from previous efforts. In a young field, this is understandable. But as computers move into their second half century, it is increasingly important to create ways of capturing and reusing knowledge from the collective experience of the field.

Research could both identify relevant cases and their features, and explore the ways in which the analysis of exemplars serves a role similar to (though not in the same style as) the use of theory in other areas of technology. A long-term goal is to build a collection rich in diversity and make it widely available. Working with a shared body of examples, researchers in the profession as a whole can identify the relevant consistencies and differences and develop a working vocabulary that serves in the practice of design.

This activity can include funding of individual case-based research projects, and also group efforts such as workshops and distributed material collection and analysis (using the web).

3.4. Researcher Training in Conjunction with Apprenticeship

In disciplines where the development of abstract knowledge is detached from questions of practical application (e.g., theoretical physics), the appropriate setting for training researchers is the academic laboratory. But in design, as in other professions such as medicine, theory and practice are intertwined in a way that requires extensive experience in practice settings, combined with theoretical teaching. This is true not just for students who will become clinicians, but also for those who will participate in medical research -- the clinical aspects of training provide an understanding of the practice context which is critical to understanding medicine as a whole.

Software design (and, in particular, human-centered software design) requires the connection of theory and practice, context and abstraction. It cannot be learned in academic isolation. Students need to be able to connect to a real setting to understand the centrality of tradeoff problems in design. Many students gain practical experience by working for software companies during summers, part time, or before entering graduate school. But this experience is hit-and-miss. There is no structure to provide directed mentoring and integration of practical and conceptual learning.

Programs can be developed under government sponsorship that use cooperation with industry to provide a well-designed and effective interweaving of academic and applied work, training a new generation of researchers who will have their feet planted firmly on both sides. Many of the questions posed in the previous sections are difficult to solve, or even to approach, for researchers today who have not had this kind of experience. They will ultimately be solved by students who have had a design-centered training that brings together the "strong" aspects of human-centered design. The education, like the practice, needs to be problem-driven, activity-centered, and context-bound.

4. RECOMMENDATIONS

The development of human-centered computing systems depends on empirical work, modeling, and design activity that is activity-centered, context-bound, and problem-driven. Such work will make use of the possibilities afforded by the growth of technology and will rechannel technology development into new directions and uses based on the data, models and innovation in design. This work is neither about people alone nor about technology alone. Rather it is based on examining the mutual shaping of technology developments and human activity -- research that cuts across traditional boundaries between disciplines.

Developing technology further in a context-free manner (such as higher-resolution visual displays, large borderless display media, more natural-sounding machine speech, and similar technology-driven research questions in interconnectivity and other areas) can and does already go on. Usability specialists already work in software development organizations to polish and refine products based on user testing prior to final release. Social scientists already document the usually surprising changes in human activity produced by technology changes that have already occurred. No initiative in human-centered computing is needed if the resulting activities match those in Figure 1.

The strong interpretation of human-centered research and design can lead us to see new activities that can be fostered and grown (Figure 2). These activities are already underway driven by the commonplace failures of technology to meet user needs and the continuing allure of the new power and design freedom that technology creates. The people who do this do not fit traditional categories:

* they design and they collect data;

* they are knowledgeable about and develop technology, yet they are deeply interested in understanding and influencing the impact of technology on human activity (things that make us smart);

* they work in the field influencing particular fields of activity, while they develop models, concepts, and innovations that are relevant across fields of activity;

* in product development, they are concerned with work activities (what is usable and useful) but also with what is desirable, engaging, or even playful to people.

While these activities have been going on and continue to expand, these efforts are fragile because they do not have direct institutional support and roles. Those located in design organizations can be overwhelmed by production pressures as the dynamism and pace of developing computer-based products increase. Some in research environments find it difficult to connect to people in actual contexts and fields of activity (or are not rewarded for making such connections). Other researchers who work in particular "application" areas or industries must focus on local improvements and moment-to-moment hot-button issues. Progress on understanding artifacts, their uses, and their effects demands that we find ways to relax some of these constraints to build a research base that is relevant both to design and to specific contexts.

Institutional investments can reinforce and grow the base of strong human-centered research and design. The projects to be funded under human-centered initiatives should meet certain criteria. Projects should:

1. be activity-centered, context-bound, and problem-driven.

2. include empirical studies and model building about how technology change shapes human activity and how human activity shapes technology development.

3. develop generalizable phenomena, concepts and techniques that are relevant to specific contexts.

4. address issues at the intersections of

* empirical inquiry and design

* the lab and the field

* individual and social perspectives

* work activities and more playful, engaging activities

* application and theory

* technological areas of inquiry and behavioral/social science areas of inquiry.

5. link understanding human activity and the role of artifacts in human activity with the processes of creating designs as complementary and mutually informing activities, for example, through artifact-based methods.

6. foster innovation about how to use technological possibilities in ways that are useful and desirable to people engaged in different types of activities.

7. develop human resources and expertise at the intersections of traditional areas of inquiry. One example is the need to cross-train people so they can link together design, user testing, field research techniques, cognitive sciences, and prototyping technologies.

These are broad criteria to guide future research, but they focus attention on a family of activities that are needed to meet Norman's human-centered motto and that are not already being supported directly as part of the long-term research infrastructure.

5. REFERENCES

Agre, P. (1995). From high tech to human tech: Empowerment, measurement, and social studies of computing, Computer Supported Cooperative Work 3(2), 162-195.

Billings, C. E. (1996). Aviation Automation: The Search For A Human-Centered Approach. Hillsdale, N.J.: Lawrence Erlbaum Associates.

Carroll, J. M., Kellogg, W. A. and Rosson, M. B. (1991). The Task-Artifact Cycle. In J. M. Carroll (ed.) Designing Interaction: Psychology at the Human-Computer Interface, Cambridge University Press, (pp. 74-102).

Clark, H. H. and Brennan, S. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine, and S. D. Teasley (Eds.) Perspectives on Socially Shared Cognition (pp. 127-149). Washington DC.: American Psychological Association.

Cook, R. I. and Woods, D. D. (1996). Adapting to new technology in the operating room. Human Factors, 38(4), 593-613.

De Keyser, V. (1992). Why field studies? In M.G. Helander & M. Nagamachi (eds.) Design for Manufacturability, London: Taylor & Francis.

Ehn, P. (1989). Work-Oriented Design of Computer Artifacts.

Fischer, G. and Reeves, B. (1995). Beyond Intelligent Interfaces: Exploring, Analyzing, and Creating Success Models of Cooperative Problem Solving. In R. Baecker, J. Grudin, W. Buxton and S. Greenberg (eds.), Readings in Human-Computer Interaction: Toward the Year 2000, Morgan Kaufmann, (pp. 822-831).

Fischer, G., Lemke, A., Mastaglio, T. and Morch, A. (1991). The Role of Critiquing in Cooperative Problem Solving. ACM Transactions on Information Systems, 9(2), 123-151.

Flores, F., Graves, M., Hartfield, B. and Winograd, T. (1988). Computer systems and the design of organizational interaction. ACM Transactions on Office Information Systems, 6, 153-172.

Grudin, J. (1994). Groupware and Social Dynamics: Eight Challenges for Developers. Communications of the ACM 37(1): 92-105.

Grudin, J. (1996). The Organizational Contexts of Development and Use. Computing Surveys, 28(1): 169-171.

Guerlain, S., Smith, P. J., Obradovich, J. H., Rudmann, S., Strohm, P., Smith, J., & Svirbely, J. (1996). Dealing with brittleness in the design of expert systems for immunohematology. Immunohematology, 12(3), 101-107.

Hoffman, R. and Crandall, B. (in press). Critical Decision Method. Human Factors.

Hutchins, E. (1990). The technology of team navigation. In J. Galegher, R. Kraut, and C. Egido (Eds.), Intellectual teamwork: Social and technical bases of cooperative work. Hillsdale, NJ: Lawrence Erlbaum Associates.

Hutchins, E. (1995a). Cognition in the Wild. Cambridge, MA: MIT Press.

Hutchins, E. (1995b). How a cockpit remembers its speeds. Cognitive Science, 19, 265-288.

Jordan, B. and Henderson, A. (1995). Interaction Analysis: Foundations and Practice. The Journal of the Learning Sciences, 4(1), 39-103.

Lamping, J., Rao, R. & Pirolli, P. (1995). A focus+context technique based on hyperbolic geometry for visualizing large hierarchies. In CHI 95 ACM Conference on Human Factors in Computing Systems, New York: ACM Press.

Nielsen, J. Interface Design for Sun's WWW Site. http://www.sun.com/sun-on-net/uidesign

Norman, D. A. and Draper, S. (1986). User-Centered System Design. Hillsdale, NJ: Erlbaum.

Norman, D. A. (1988). The Psychology of Everyday Things. New York: Basic Books.

Norman, D. A. (1993). Things That Make Us Smart. Reading, MA: Addison-Wesley.

Poltrock, S. E. and Grudin, J. (1994). Organizational Obstacles to Interface Design and Development: Two Participant Observer Studies. ACM Transactions on Computer-Human Interaction 1(1): 52-80.

Robert, J. M., Pavard, B. & Decortis, F. (1996). Guidebook for User Needs Analysis. Transport Telematics - DG13, version 1, September 1996.

Roth, E. M. Malin, J. and Schreckenghost, D. (in press). Intelligent Interfaces. In M. Helander et al., editors, Handbook of Human-Computer Interaction, second edition, North-Holland, New York.

Sanders, E. B.-N. (1992). Converging perspectives: Product development research for the 1990s. Design Management Journal, Fall.

Sarter, N., Woods, D. D. and Billings, C. (in press). Automation Surprises. In G. Salvendy, (ed.), Handbook of Human Factors/Ergonomics, second edition, Wiley, New York.

Tschudy, M., Dykstra-Erickson, E. and Holloway, M. (1996). PictureCARD: A Storytelling Tool for Task Analysis. In Proceedings of Participatory Design '96.

Winograd, T. and Flores, F. (1986). Understanding computers and cognition. Reading, MA: Addison-Wesley.

Winograd, T. (1987). Three responses to situation theory. Technical Report CSLI-87-106, Center for the Study of Language and Information, Stanford University.

Woods, D. D. (in press). Designs are hypotheses about how artifacts shape cognition and collaboration. Ergonomics.

Woods, D. D. and Watts, J. C. (in press). How Not To Have To Navigate Through Too Many Displays. In M. Helander et al., editors, Handbook of Human-Computer Interaction, second edition, North-Holland, New York.

Woods, D. D., Johannesen, L., Cook, R. I. and Sarter, N. (1994). Behind Human Error: Cognitive Systems, Computers and Hindsight. Crew Systems Ergonomic Information and Analysis Center, WPAFB, Dayton, OH.

Zhang, J. and Norman, D. A. (1994). Representations in distributed cognitive tasks. Cognitive Science, 18: 87-122.