Can social theory be of help in designing machines for human work practices? With only a few connections to the burgeoning field of constructivist science and technology studies (STS),(1) authors at the crossroads between the social and computer sciences have started to explore how "better" technologies could be designed by drawing upon theories from sociology, anthropology, and social philosophy. These constitute attempts to cross the "great divide" between social scientists and computer engineers--to traffic across and maybe even populate this border zone that seemed so inhospitable (Bowker et al. 1997). Of course, STS crosses this divide by studying technology. Yet, here the traffic remains mostly unidirectional: STS researchers venture into the lands of engineers, but the latter are not very interested in joining them on the return trip. To come full circle, actually to employ social theory in design, is a fascinating additional step. The homes of those passing through here are diverse: the recently emerged field of computer-supported cooperative work (CSCW) is explicitly dedicated to this bridge-building. Others are united under the rubric participatory design. Individuals and small research groups are scattered across departments of artificial intelligence, computer science, software engineering, sociology, anthropology, cognitive science, and so forth.
This article has a double purpose: to bring these developments to the attention of STS researchers and, vice versa, to see what STS insights could contribute to them. It is an analysis of the ways "humans," "machines," and their interrelation in work practices are conceptualized in these different approaches. Specifically, I am interested here in what Annemarie Mol and Jessica Mesman (1996) have called the "politics of theory": What are the consequences of these different conceptualizations for design and what is their political leverage? How do different ways of depicting "technology," "work practice," and their interrelations yield different politics of technology? The question of "politics of technology," here, is a double one. First of all, it recognizes that technology is an important, shaping element in modern worlds, and it asks what type of worlds (specific) technologies help bring into being. Assuming that technologies participate in the ordering of social collectives (such as communities, nations, work practices), we ask just what orderings a technology seems to bring about. Second, the question of the politics of technology is also a question of how the development of technologies can be affected so that more desirable or justly ordered social collectives can be brought into being. Is this a matter of the democratization of technological development, and if so, how to achieve such a feat? Are there other ways of "doing" politics when technologies are involved?
The next section briefly outlines what is taking place at the crossroads I have mentioned. Subsequently, I analyze the way "technology" and "work practice" are put to work theoretically within a specific region of this field by those authors for whom "better" technologies refer to hopes for more democratic and worker-oriented workplaces. This will not be an innocent representation of the diverse approaches populating that section of the border zone. Rather, at the risk of being unfair to the nuanced differences in positions of individual authors, I will examine the analytical and political power of this approach which, I will argue, thrives upon the assertion of a crucial, ontological difference between the realms of "technology" and "human work." In the last section I will critically analyze the "politics" of this approach and contrast it to recent developments within STS that criticize this very distinction.(2)
Putting Social Theory to Work in Design: Discovering the User
Since the field is large and scattered, it is impossible to give a complete overview. To give a flavor of some of the core developments and issues that mark the field, this section draws a simplified sketch of how, and with what, the divide came to be populated. This sketch is based on several accounts, each with a different origin. Although they draw on different strands of social theory, and have different goals, they are interrelated and have all contributed to the discovery of the user.(3)
"Traditional systems design" is the starting (and counter) point of most stories. Design here starts with system requirements: a list of functional demands the information technology (IT) should meet. The list should be detailed, exhaustive, and clear-cut, so that computer scientists can focus fully on their real task of designing a system that fulfills these requirements efficiently, smoothly, and aesthetically. This usually implies making a model of the work practice that the system should function in: the flow of work, the sequencing of tasks, the hierarchy of responsibility and control, and so forth. The acquisition of the requirements is seen to be a preliminary step, a prerequisite for the real work to start. It consists primarily of asking those who have ordered the system what exactly they want the system to do.
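The kind of work-practice model that traditional systems design produces can be caricatured in a few lines of code: work is assumed to decompose into a fixed sequence of discrete tasks, each assigned to a responsible role. The sketch below is an illustrative simplification; the task names and roles are hypothetical, not drawn from any actual requirements document.

```python
# A caricature of the formal work-flow model underlying traditional
# systems design: a fixed sequence of tasks, each tied to a role in
# the hierarchy of responsibility and control. All names here are
# hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    role: str  # who is responsible, per the hierarchy of control

# The "system requirements" fix the sequencing of tasks in advance.
WORKFLOW = [
    Task("receive order", "clerk"),
    Task("approve order", "manager"),
    Task("process order", "clerk"),
    Task("archive record", "system"),
]

def run(workflow):
    """Execute the modeled flow of work step by step, in the fixed order."""
    return [f"{t.role}: {t.name}" for t in workflow]

print(run(WORKFLOW))
```

It is precisely this assumption, that the flow of work can be exhaustively specified in advance, that the critiques discussed later in this section take aim at.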
Those who order new information technologies, however, are often not the same people who work with the technologies, and often those who put together the requirements are not the same people who do the actual design. According to the accounts, then, traditional system design is characterized by a large gap between those who design the technology and those who actually use it. The lack of contact between users and designers is reinforced by their prevalent institutional and cultural separation in different worlds (companies having their own, isolated IT departments full of "techies"). Designers, then, complain that they do not know what users want, and users complain about systems that do not fit their work practices (Greenbaum and Kyng 1991).
In the 1970s, companies such as IBM started experimenting with structured meetings with users as a more efficient means to generate system requirements. Involving users was a matter of speeding up the design process and enhancing efficiency: only by optimizing the requirement-gathering process would the recurrent problem of suboptimal systems be solved. In addition, cognitive psychologists started to investigate human-computer interaction. By studying how humans model tasks, for example, computer interfaces could be made to fit workers' thinking processes and to optimize the "cognitive coupling" between the worker and the machine (Bannon 1997; Agre 1995).
The coming of the "flexible corporation" in the 1980s furthered this discovery of the user. Management theories stressed that organizations that wanted to survive the increasingly competitive and fast-changing markets of the late twentieth century had to be client-oriented and had to "empower" their workers. Direct and in-depth knowledge of the customer was seen to be crucial. The company's employees needed to be freed from overly structured hierarchies and had to be broadly informed and skilled to fulfill the increasingly varying tasks that the new organization imposed on them. Competitiveness required paying much attention to the user (whether client or employee) in the system design process. Only then could better IT products be developed and the efficiency of the system design process increased.
While these developments put the "user" on many research, design, and corporate agendas, others interpreted the gap between designers and users in a different, much more critical way. The so-called "Scandinavian approach" which emerged in the 1970s and 1980s emphasized the political nature of the gap between the users and the designers (for a recent overview of this approach, see Greenbaum and Kyng 1991). The Scandinavian researchers argued that traditional systems design was thoroughly management centered and strengthened the already existing tendencies to increase managerial control over workers by increasing the division of labor and deskilling them. Building upon the work of Noble and on the Labor Process Theory of Braverman, they argued that the rationalistic, engineering methods of traditional systems design embodied the interests of management by cutting up complex tasks into standardized subtasks and trying to explicate all such tasks as sets of general rules. Designed in such a way, information technology could not but work against workers' interests. In the Scandinavian approach, then, "involving the user" was a thoroughly political strategy. Creating "alternative technology," centered around the worker's skills and interests, was a move in the class struggle. Information technology had to "empower" the users and erode class divisions, rather than controlling, deskilling, and ultimately dehumanizing the workers. Empowerment depended on the early involvement of end-users in the design process. Design of a new technology had to rely on studies of the users' tacit skills and on mapping and strengthening their interests.
Several founding works for the field of CSCW merge the political drive of the Scandinavian approach with sociological and epistemological critiques of traditional systems design. In their Understanding Computers and Cognition, Terry Winograd (an acclaimed AI scientist)(4) and Fernando Flores (a former Minister in the Chilean Allende government) present a radical epistemological alternative for "the rationalistic tradition" which they see as pervading cognitive science, much organizational theory, and traditional design practices (1986). Attempting to replace a tradition characterized by an atomistic, mechanistic worldview, they draw upon Heidegger, Maturana, and Searle to sketch a more holistic, hermeneutic alternative. In their book, they present a technology designed according to this alternative worldview: the COORDINATOR™, which has become one of the best known (and most contested) CSCW applications.(5) The COORDINATOR builds on Searle's speech act theory by incorporating its fundamental idea that language is not primarily a description of the world (as the rationalist tradition would have it), but "a form of human social action." Language acts create commitments, whose precise meaning and appropriateness are highly context-dependent (p. 76). The program is basically an enhanced e-mail application that attempts to facilitate communication in organizations by making users aware of which commitments are made to whom, of their nature and subsequent development. Winograd and Flores argue that by explicating the "basic conversational building blocks" within "networks of recurrent conversations," tools such as the COORDINATOR can help avoid breakdowns and misunderstandings in ongoing conversations. This is a means to "improve productivity" of organizations by "developing [the employees'] communicative competence" rather than by attempting to make "intelligent" systems that impose rationalistic simplifications on the ongoing work (pp. 137-38, 159-62).
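The speech-act idea behind the COORDINATOR can be sketched as a small state machine: messages are typed as conversational moves (a request, a promise, a report of completion), and the system tracks the commitments these moves create. The move names and transitions below are my illustrative simplification of a "conversation for action," not Winograd and Flores's actual design.

```python
# A minimal sketch of speech-act-typed messaging: each message is a
# typed move in a conversation, and the conversation's state records
# what commitment is currently outstanding. The specific moves and
# states here are hypothetical illustrations.

# Allowed moves in a simplified "conversation for action"
TRANSITIONS = {
    ("open", "request"): "requested",
    ("requested", "promise"): "promised",
    ("promised", "report"): "reported",
    ("reported", "declare_complete"): "closed",
}

class Conversation:
    def __init__(self, requester, performer, content):
        self.requester, self.performer, self.content = requester, performer, content
        self.state = "open"

    def move(self, act):
        """Advance the conversation with a typed speech act."""
        key = (self.state, act)
        if key not in TRANSITIONS:
            raise ValueError(f"'{act}' is not a valid move in state '{self.state}'")
        self.state = TRANSITIONS[key]
        return self.state

conv = Conversation("Ana", "Ben", "draft the quarterly report")
conv.move("request")   # Ana requests; a pending request now exists
conv.move("promise")   # Ben commits; an explicit commitment is recorded
print(conv.state)
```

The point of typing messages this way is exactly what the critics of the COORDINATOR later seized on: the system makes commitments visible by forcing every utterance into a predefined conversational grammar.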
Whereas Winograd and Flores launch an all-encompassing, epistemological critique, Lucy Suchman's Plans and Situated Actions is a much more fine-grained study of human-machine interaction (1987). Suchman works at Xerox PARC, one of the first companies to employ anthropologists in design. Her highly influential book launched the ethnomethodological concept of "situated action" as a core notion of CSCW. According to Suchman, traditional systems design perpetuates the gap between systems and users because its notion of "purposeful action" is overly restrictive. Most cognitive scientists, AI researchers, and computer scientists believe that human action consists of the execution of mental plans; yet, designing interactive computer systems on the basis of this assumption results in recurrent, important problems. "Plans," according to Suchman, "are themselves located in the larger context of some ongoing practical activity." They are "resources for situated action, but do not in any strong sense determine its course." This course itself unfolds only in the doing, in constant interaction with the actual, concrete, and contingent circumstances that make up the situation (pp. 49, 52). In the words of Mike Robinson, Suchman has formulated an "impossibility theorem": "there can be no a priori or algorithmic connection between any particular plan and any specific action" (1991a, 15). Since most current computer systems disregard this impossibility theorem, they continually run into trouble. Displays of "intelligent" copiers, for example, often disregard that statements can be interpreted in different ways in different situations.
Similarly, copiers that attempt to "sense" what the user is doing (by sensing whether a drawer is opened, or whether the document cover is closed) often too easily attribute a clear, static intent to such user-actions--as if users indeed followed clear-cut "plans" and were not in fact tinkering toward a practical solution by ad hoc tryouts and seeing where these lead (Suchman 1987, 118-70).
Suchman was not alone in wanting to study "cognition" as a social phenomenon rather than as a process taking place within an individual's mind. Emphasizing that cognition is distributed "over mind, body, activity and culturally organized settings" (Lave 1988, 1; and see Hutchins 1995), several authors called for a sociological/anthropological reappraisal of human-computer interaction (Bannon 1991; Luff and Heath 1993). The gap, in other words, could not be bridged without a thorough reconsideration of what actually happened with and around computers in modern workplaces.
One rich source for this reconsideration was the sociology of work that had been developed by symbolic interactionist researchers, and in particular by Anselm Strauss (Strauss et al. 1963, 1985). In a series of articles, Star brought this approach to bear on the problem of designing information technology for workplaces.(6) In a thesis resembling Suchman's "impossibility theorem," Star undermines the idea that work practices can be meaningfully modeled by means of predetermined, formal work flows and task descriptions. Complex social organizations, Star argues, are characterized by ongoing negotiations about the nature of the tasks and the relationships between individuals in the organization, by ad hoc reactions to upcoming contingencies, by distributed decision making, by "multiple viewpoints" and "inconsistent and evolving knowledge bases" (Gerson and Star 1986). What keeps such "open systems" (Hewitt 1986) going is the often invisible work of articulation:
    all the tasks involved in assembling, scheduling, monitoring, and coordinating all of the steps necessary to complete a production task. This means carrying through a course of action despite local contingencies, unanticipated glitches, incommensurable opinions and beliefs, or inadequate knowledge of local circumstances.... [Since] no formal description of a system (or plan for its work) can ... be complete, every real world system thus requires articulation to deal with the unanticipated contingencies that arise. Articulation resolves these inconsistencies by packaging a compromise that "gets the job done," that is, that closes the system locally and temporarily so that work can go on. (Gerson and Star 1986, …