Computer supported collaborative work perspective on collective intelligence

From Handbook of Collective Intelligence

Ellis et al. (1991) [1] define CSCW tools or groupware as "computer-based systems that support groups of people engaged in a common task or goal and that provide an interface to a shared environment."

What is collective intelligence?

Engelbart and Rulifson (1998) [2] suggest that a collective intelligence would:

  • be able to make sense of the environment
  • make decisions with some logic
  • marshal resources and solve problems
  • learn; in particular, they emphasize that collective intelligences can improve the way they improve.

Examples of collective intelligence

Engelbart's vision

Engelbart and Rulifson (1998) [3], in reviewing their earlier work, suggest that a collective intelligence should possess:

  1. a dynamic knowledge repository
  2. a state-of-the-art summary or handbook of the repository.

They further argue that a collective intelligence should have:

  1. a foundational layer of basic human capabilities, which comes from humans' genetic make-up and might partially determine dimensions such as motivation and intrinsic drive (they do not mention non-human capabilities, but these seem like a natural extension of what they have in mind)
  2. a tool system, which consists of facilities, media, tools (and presumably the training to use the tool system)
  3. a human system, which comprises the humans' paradigms, organization, procedures, attitudes, knowledge, and learning.

The tool and human systems need to be integrated, and to co-evolve. The challenge is that the evolution of tool systems tends to out-pace that of human systems.

According to Engelbart (1995) [4], the integration could come from exploiting a collection of capabilities among entities. He calls such integrated entities NICs (Networked Improvement Communities). These are "cooperative alliances of organizations, employing advanced networked computer tools and methods to develop and apply new collective knowledge." Their systems would then be:

  • Tool systems. Engelbart calls this CoDIAK, for Concurrent Development, Integration, and Application of Knowledge. The goal is to "get better at getting better." Another example is his OHS, the Open Hyperdocument System. In a more recent paper, Bieber et al. (2002) [5] propose a Collaborative Knowledge Evolution Support System (CKESS), another realization of the idea.
  • Human systems. Engelbart and Rulifson (1998) [6] also conjecture that human systems should be improved exogenously, by recruiting, forming, and training high-performance teams. These they call HPASTs, for high-performance augmentation support teams. They seem best introduced into organizations as support teams, providing services to incumbent teams. A more specific example is what Engelbart calls HPSTs, for high-performance scholarship teams, which are HPASTs for an academic environment. For example, a key output of these is dynamically updated textbooks.

Hyperscope

Hyperscope is a reincarnation of the 1960s oN-Line System (NLS), whose commercial version was called AUGMENT. It is a client-side implementation built with the Dojo JavaScript toolkit, Ajax, and DHTML.

Hyperscope's designers maintain several key ideas [7]:

  1. Expert-oriented interfaces. The analogy the designers use is that while tricycles are easy to ride, it is bicycles that most people will want after the initial trial. So it is important to build bicycles.
  2. Addressability is more than linking. Addresses are generalized ways to get to a point in a document, combining today's scroll bars, hypertext links, and views.
  3. Speed. Hyperscope allows users to use faster keystrokes instead of slower scrolling.
  4. Interoperability.

Color Fields

"This group has a simple concept: take photos of a frame-filling field of A SINGLE COLOR and post that. It can be textured, have imperfections, etc...but the frame should be filled with one color. They should be abstract enough that the color is clearly the focus, and the most important thing in the photo." [8]

Place and space

Dourish (2006) [9] observes that spaces ("geometrical arrangements that might structure, constrain, and enable certain forms of movement and interaction") and places ("ways in which settings acquire recognizable persistent social meaning in the course of interaction") are distinct.

An interesting example of collective behavior is Lehn et al. (2001) [10], who observe that spaces are defined not so much by physical boundaries as by collective human behavior.

Ling and Yttri (2002) [11] argue that wireless mobile technologies give rise to new forms of coordination, which (to us) might qualitatively change the nature of collective intelligence. They describe two new types of coordination among mobile phone users in Norway:

  • Micro-coordination. This includes changes such as:
    • Redirecting trips: asking someone to stop by the grocery store to pick up milk
    • Softening time: calling ahead to say one will be late for a meeting
    • Progressively arranging a meeting: two parties agree to meet at a general location, but settle on the exact lunch restaurant only after they have started their journeys to that location
    • Delaying the exact meeting point until meeting time: calling someone at the approximate meeting place and raising one hand to ask: "can you see the guy with a hand raised?"
  • Hypercoordination: connecting the "last 100 yards" (usually needed because of constraints such as parent-imposed curfews among teenagers).
    • Expressive, not just instrumental, use of phones
    • Self-presenting uses of phones as artifacts that define who the user is (e.g., the size, age, edition, and price of the phone, how it is carried, where it is used).

Ito and Okabe (2005) [12] report that Japanese teenagers do not consider themselves late if they are participating in a meeting virtually.

Typology of CSCW tools

Coordination theory

Malone and Crowston (1994) [13] classify CSCW tools by the type of coordination process they address:

  • managing shared resources: Coordinator by Winograd and Flores (1986) [14] and the Information Lens by Malone et al. (1986) [15].
  • managing producer/consumer relationships: POLYMER by Croft and Lefkowitz (1988) [16], used to support office work
  • managing simultaneity: Visual Scheduler by Beard et al. (1990) [17]
  • managing task/subtask relationships: POLYMER by Croft and Lefkowitz (1988) [18]
  • making decisions in groups: gIBIS by Conklin and Begeman (1988) [19], SIBYL by Lee (1990) [20], Xerox PARC's Colab (Stefik et al., 1987 [21]) and Cruiser (Root, 1988 [22])
  • communicating: Xerox PARC's Colab (Stefik et al., 1987 [23]) Information Lens by Malone et al. (1986) [24], and Quilt (Fish, 1988 [25]).

Dourish (2006) [26] considers the distinction of space versus place.

Ellis et al. (1991) [27] propose a few typologies of CSCW tools:

  • Common task versus shared environment. For example, time-sharing systems are low on common task and shared environment. Electronic classrooms are high on both.
  • Time versus space. For example, there are systems that support interactions at different times at the same place.
  • Application-level taxonomy. This gathers systems according to their primary use. The authors observe that there are:
    • messaging systems, like the Information Lens (Malone et al., 1986 [28])
    • multi-user editors, like ForComment (Opper, 1988 [29]), Shared Books (Lewis and Hodges, 1988 [30]), Quilt (Fish, 1988 [31]), and the Collaborative Editing System (Grief et al., 1988 [32]).
    • group decision support systems and electronic meeting rooms, like the Arizona PlexCenter Planning and Decision Support Lab (Applegate et al., 1986 [33]).
    • computer conferencing, like the Xerox PARC's Colab (Stefik et al., 1987 [34]) and Cruiser (Root, 1988 [35])
    • intelligent agents, like LIZA (Gibbs, 1989 [36])
    • coordination systems (such as process programming and conversation-oriented models), like the Coordinator (Flores, 1988 [37]).
  • Perspectives. Examples of these are:
    • Distributed systems
    • Communications
    • Human-computer interaction
    • Artificial intelligence
    • Social theory

The authors also discuss some common issues arising from the introduction of groupware:

  • Increases information access
  • Encourages parallel work
  • Makes discussion more difficult, because most tools are not fully duplex
  • Makes group focus more difficult
  • Cuts down on social interaction

Ellis and Wainer (1994) [38] classify CSCW tools (groupware, in their terminology) according to:

  • ontology, the objects and the operations on them. For example, CA-ForComments uses objects such as comment, revision, and dialog, while PREP uses a generalized comment object.
    • They also make the operational distinction between intended semantics (e.g., CA-ForComments' comment object is intended to contain comments) and operational semantics (e.g., CA-ForComments' revision object is restricted so that it can only be attached to other objects).
    • Another distinction is operational versus (read) access rights.
  • coordination, the control and data flow. They observe that activities (sequence of operations on objects, such as filling a form) are different from actions (such as clicking on a submit button; there are start and terminate actions, which they call s- and t-actions, which change the stage of collaboration). Procedures are sequenced sets of activities, and endeavors are instances of procedures.
    • Another aspect of this dimension is inspection, in which the system provides information to users. There are four types of inspections: participant-based, endeavor-based, total, and second-order.
    • Then there are different kinds of concurrency: sequential, parallel, and additive (tasks in an endeavor that can be simultaneously active but have no modify rights)
    • Currency is how up to date the system is. This could be automatic, manual, or fully concurrent (two or more simultaneous tasks have modify rights to the same objects).
  • user interface. The authors propose these different aspects of the interface:
    • views of objects
    • views of participants
    • views of contexts

Activity theory

There is a long-standing debate in sociology on whether the individual or a group of individuals should be the primary unit of study; see the section on the Social psychology perspective on collective intelligence. Focusing on the individual runs the risk of not fully considering the social context. Focusing on the group runs the risk of not fully accounting for individual agency. Activity theory proposes an intermediate unit of analysis: the activity. For a contemporary description of the properties, origins, and mediators of activities, see Cluts (2003) [39].

Kuutti and Arvonen (1992) [40] propose that a typology of CSCW tools can be obtained by examining two dimensions (see their Figure 3):

  1. The area of support. This is one of instrument, rules, division of labor, subject, object, and community
  2. The role of a user towards the support system in an activity. This is one of pre-determined, active, or expansive.

Bardram (1998) [41]

Bikson and Eveland (1996) [42]


Others

Weiseth et al. (2006) [43] propose a "wheel of collaboration tools" based on three layers:

  1. Collaboration interface, which includes the type of:
    1. Physical workspace
    2. Devices
    3. Portals
  2. Collaboration functions, which include type of:
    1. Coordination, such as mutual adjustment, planning (these first two as in March and Simon (1958) [44]) and standardization (see Thompson, 1967 [45]).
    2. Production, such as capturing, authoring, mailing, publishing, and search and retrieval
    3. Decision-making, such as query, survey, reporting, evaluation and analysis, and choice.
  3. Content management and process integration, which include the type of:
    1. Content model management, such as the management of metadata, taxonomies, templates
    2. Content lifecycle management, such as version control
    3. Process integration, such as workflow and change notification.

The authors use this framework to identify the functions present and absent in the collaboration tool developed in-house at Statoil, called Collaboration@Statoil or C@S.

Origins and mechanisms in CSCW

Social interactions

Girgensohn and Lee (2002) [46]

What people do in online social communities

Lampe et al. (2006) [47] conduct a survey of Michigan State University students on their use of Facebook. They conclude that students use Facebook more to learn about people they already know offline than to meet new people.

Why people contribute

Herlocker et al. (2004) [48] examine recommender systems and classify reasons for why we contribute recommendations:

  1. Enhance the quality of what we get. For example, many recommender systems rely on matching our interests against others with similar interests. Therefore, putting in as much as possible about ourselves (say, by rating books online) improves the accuracy of the recommendations given to us.
  2. Express ourselves. There are issues around whether expression requires an audience, and how anonymity figures.
  3. Help or influence others.

Group size

Forlines et al. (2006) [49] document that larger groups make fewer visual errors when the task is visual search for prohibited items on x-ray monitors.

Shared information

Wilson et al. (2006) document the changes when a large shared display of hand-over information is introduced in a medical ward. Shared information is now pushed, rather than pulled from hand-written summaries. This in turn leads to greater scrutiny of the information by senior staff. As the authors indicate, this might lead to its becoming "an idealized record of the work rather than the useful, but less formal, artifact to support the work."

Munkvold et al. (2006) [50] also study the introduction of electronic handover summaries among nurses, and find that the summaries induce greater redundancy.

Tee et al. (2006) [51] report a software tool that allows group members to see the artifacts in each other's workspaces, which helps group coordination. For example, it is easier to find out if someone is willing to be interrupted.

Distributed cognition

Palen and Aalokke (2006) [52] conduct an ethnographic study of elders to show that they set up and rely on artifacts in their homes, such as pill boxes and piano benches, to remind them of when to take their medication. Healthcare workers also participate in this "distributed cognition" involving artifacts.

Guilds

Nardi and Harris (2006) [53] undertake an ethnographic study of World of Warcraft (WoW) players. They argue that learning is enhanced because it is not just a cognitive, but also a social, activity. Also, WoW provides a structure in which players can first engage in lightweight collaborations before they sign up for "knots" (groups of strangers formed for temporary purposes). They contend that, contrary to Putnam (2001) [54], massively multiplayer online games (MMOGs) can provide Gemeinschaft-like community relationships that are social and productive.

Vocabulary sharing

Shilad et al. (2006) [55] investigate how people develop, share, and use tags. They find that these processes depend on our existing investment in the tags already in use and on the degree to which we want to conform with the tagging community. The community's tags in turn depend on what we use and on the tag selection algorithm. They also find that subjective tags are used mostly for self-expression, factual tags for learning and finding, and personal tags for organizing.

Lee (2006) [56], in a study of del.icio.us tags, finds that users who perceive greater degrees of social presence are more likely to annotate their bookmarks with information that could facilitate the sharing and discovery of bookmarks by other del.icio.us users.

See also work by Almulla and Szuba (1999) [57] and Weiss (2005) [58].

Aggregation

Aggregating recommendations

Typology

Herlocker et al. (2004) [59] classify systems according to their uses:

  1. Filter out unwanted material. For example, Goldberg et al. (1992) [60] describe Tapestry, a Xerox PARC collaborative filtering system in which the more eager readers of email annotate messages, and these annotations form the basis for filtering for themselves and other readers. Resnick et al. (1994) [61] describe GroupLens, in which raters rate net-news messages, and servers called Better Bit Bureaus disseminate these ratings and aggregate them for other readers. Wexelblat and Maes (1999) [62] describe Footprints, in which users can use each others' previous search paths to help in their own searches.
  2. Find some good material. Shardanand and Maes (1995) [63] describe Ringo, which makes personalized recommendations for music albums and artists, based on what past, similar users determine are good albums and artists. Hill et al. (1995) [64] describe a similar system, but for video selection.
  3. Find all good material. Herlocker et al. (2004) [65] suggest that lawyers, for example, would have this goal if they are building up their cases.
  4. Recommend a sequence. Herlocker et al. (2004) [66] cite Yahoo's Launch radio site, which recommends songs in a specific sequence.
  5. Pre-purchase learning. Herlocker et al. (2004) [67] suggest that many non-purchasers in MovieLens (see Miller et al., 2003 [68]) and Amazon.com benefit from recommendations. A twist on this is when users first search for a credible expert whom they could trust when they actually purchase (see Bos et al., 2002 [69]). There are two kinds of learning that are interesting:
    1. Novelty, such as when a recommendation is about a new movie by our favorite movie director.
    2. Serendipity. This is a recommendation that the user might not otherwise have discovered, so an example is a recommendation about a movie that we like even though it is not by our favorite director.

McNee et al. (2006) [70] argue that recommender systems today often do not build confidence and are not knowledgeable. Most importantly, they are not tailored well enough to users' needs. They claim that a recommendation has two dimensions, dialog (such as correctness, transparency, salience) and personality (such as boldness, affirmation, adaptability), and that these must be matched against the user's needs (such as concreteness and expectations).

Challenges in evaluation

The evaluation of recommender systems might inform the evaluation of aggregation mechanisms in collective intelligence.

For Herlocker et al. (2004) [71], the challenges arise because:

  1. an algorithm might fit some data sets better than others. For example, a system that recommends word-processing commands has users who might already know 5 to 10% of the commands, but another system that recommends books on Amazon.com is unlikely to find users with that level of familiarity.
  2. evaluation goals differ, e.g., accuracy, coverage, or intuitive explanations to users.

Algorithms

McNee et al. (2006) [72] test four recommender algorithms, using research paper recommendations as a setting:

  1. User-based collaborative filtering, in which we aggregate the weighted-average opinions of some k nearest neighbors (determined by some a priori criteria); see Resnick et al. (1994) [73]. A minimal sketch of this approach appears after this list.
  2. Naïve Bayesian, in which co-cited papers are treated as training examples; see Breese et al. (1998) [74]
  3. Probabilistic latent semantic indexing. This is similar to the Bayesian method. One difference is that it uses the EM algorithm (expectation maximization). Another difference is that it runs its model not on individual papers, but on clusters of papers.
  4. Content-based filtering with term frequency/inverse document frequency (TF-IDF). Porter (1997) [75] proposes this method, which recommends papers based on content analyses of the candidates.
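
To make the first of these approaches concrete, here is a minimal sketch of user-based collaborative filtering: predict a user's rating of an item as a similarity-weighted average of the rating offsets of the k most similar users who have rated that item. The Pearson similarity, the data, and all names below are hypothetical; this illustrates the general technique rather than any specific system cited above.

  from math import sqrt

  def pearson(a, b, ratings):
      # Pearson correlation between users a and b over their co-rated items.
      common = set(ratings[a]) & set(ratings[b])
      if len(common) < 2:
          return 0.0
      mean_a = sum(ratings[a][i] for i in common) / len(common)
      mean_b = sum(ratings[b][i] for i in common) / len(common)
      num = sum((ratings[a][i] - mean_a) * (ratings[b][i] - mean_b) for i in common)
      den_a = sqrt(sum((ratings[a][i] - mean_a) ** 2 for i in common))
      den_b = sqrt(sum((ratings[b][i] - mean_b) ** 2 for i in common))
      if den_a == 0 or den_b == 0:
          return 0.0
      return num / (den_a * den_b)

  def predict(user, item, ratings, k=2):
      # Predict user's rating of item as the user's mean rating plus a
      # similarity-weighted average of the k nearest neighbors' offsets.
      neighbors = sorted(
          ((pearson(user, other, ratings), other)
           for other in ratings
           if other != user and item in ratings[other]),
          reverse=True)[:k]
      mean_user = sum(ratings[user].values()) / len(ratings[user])
      den = sum(abs(w) for w, _ in neighbors)
      if den == 0:
          return mean_user
      num = sum(w * (ratings[o][item] - sum(ratings[o].values()) / len(ratings[o]))
                for w, o in neighbors)
      return mean_user + num / den

  ratings = {  # hypothetical user-item ratings on a 1-5 scale
      "ann": {"paper1": 5, "paper2": 3, "paper3": 4},
      "bob": {"paper1": 4, "paper2": 2, "paper3": 5, "paper4": 4},
      "cho": {"paper1": 2, "paper2": 5, "paper4": 1},
  }
  print(round(predict("ann", "paper4", ratings), 2))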

Based on a survey of 138 users, the authors find that the naïve Bayesian and probabilistic latent semantic indexing algorithms produce outlier results that generate negative reactions. They conclude that it is hard to generate good recommendations, and very critical not to generate bad ones.

Metrics for evaluating algorithms

Herlocker et al. (2004) [76] propose the following metrics:

  1. Predictive accuracy. This measures how close predicted ratings are to those that would have been provided by the users themselves. These include variants such as mean absolute error, mean squared error, root mean squared error, and normalized mean squared error. They are easy to understand, and their statistical properties are well known. (Mean absolute error is illustrated in the sketch after this list.)
  2. Classification accuracy. This measures type 1 and type 2 errors, i.e., whether a recommender system recommends something when it should not, and whether it misses recommending something when it should.
  3. Precision and recall measures. Precision is the ratio of selected items that are relevant. Recall is the ratio of relevant items that are selected.
  4. Relative Operating Characteristic (ROC) curves. This measures how discriminating the algorithm is in detecting signal from noise; see Ogilvie and Creelman (1968) [77].
  5. Rank ordering measures. This compares the recommended ordering of items with what users would have done on their own.
  6. Half-life utility metric. This measures the difference between our utility, modeled with an exponential decay (half-life) as we browse down an ordered list, and the utility of the recommended ordering; see Breese et al. (1998) [78] and Heckerman et al. (2001) [79].
  7. Normalized distance-based performance measure (NDPM); see Yao (1995) [80]. This is similar in spirit to the common Spearman and Kendall tau rank correlation coefficients, but it accounts better for tie situations.
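
As a small, hypothetical illustration of the first and third metrics above, the following sketch computes mean absolute error, precision, and recall; the data and function names are invented for the example.

  def mean_absolute_error(predicted, actual):
      # Average absolute difference between predicted and true ratings.
      return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

  def precision_recall(recommended, relevant):
      # Precision: share of recommended items that are relevant.
      # Recall: share of relevant items that were recommended.
      hits = len(set(recommended) & set(relevant))
      return hits / len(recommended), hits / len(relevant)

  predicted = [4.5, 3.0, 2.0, 5.0]
  actual    = [5.0, 3.0, 1.0, 4.0]
  print(mean_absolute_error(predicted, actual))   # 0.625

  recommended = ["a", "b", "c"]
  relevant    = ["a", "c", "d", "e"]
  print(precision_recall(recommended, relevant))  # (0.666..., 0.5)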

McCarthy and Anagnost (1998) [81]

Aggregating user inputs

The CSCW and information retrieval literatures offer guidance on how a user (or all users collectively) might aggregate inputs from other users.

Agent-based aggregation

Kozierok and Maes (1993) [82] and Malone et al. (1986) [83] describe agents that users can employ to filter out unwanted user inputs (in their settings, the inputs are email messages).

Kahn and Cerf (1988) [84] call their agents knowbots and Wiederhold (1992) [85] calls his mediators.

User modeling

Allen (1991) [86] and Karlgren (1994) [87] describe how to model users' utility and characteristics to guide the aggregation of inputs from other users.

Aggregating preferences

McCarthy and Anagnost (1998) [88] describe MusicFX, a system for aggregating the preferences of people for music played at the Andersen Consulting fitness center. Users rate their preferences for different music genres, and the system totals the ratings. To widen the selection (the researchers believe that variety is otherwise not captured), the system selects songs from among, say, the top 20 genres. The paper reports that 71% of the 71 respondents (a 25% response rate) feel the music selection is better.
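
Below is a simplified sketch of this kind of preference aggregation: sum each member's genre ratings, then pick randomly among the top-rated genres to preserve variety. The data, function names, and the random top-n selection rule are illustrative assumptions, not the actual MusicFX algorithm.

  import random

  def aggregate(preferences):
      # Sum per-person ratings into one score per genre.
      totals = {}
      for person_ratings in preferences.values():
          for genre, rating in person_ratings.items():
              totals[genre] = totals.get(genre, 0) + rating
      return totals

  def pick_genre(totals, top_n=3):
      # Choose randomly among the top_n highest-scoring genres to keep variety.
      ranked = sorted(totals, key=totals.get, reverse=True)
      return random.choice(ranked[:top_n])

  preferences = {  # hypothetical ratings from -2 (hate) to +2 (love)
      "ann": {"jazz": 2, "rock": 1, "ambient": -1},
      "bob": {"jazz": -1, "rock": 2, "ambient": 2},
      "cho": {"jazz": 1, "rock": 0, "ambient": 2},
  }
  totals = aggregate(preferences)
  print(totals)              # {'jazz': 2, 'rock': 3, 'ambient': 3}
  print(pick_genre(totals))  # one of the top 3 genres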

Want et al. (1995) [89] describe PARCTAB, a system of hand-held computers used for ubiquitous computing at Xerox PARC. One of the ways the system is used is for audience members in a seminar to provide live collective feedback to speakers on the quality and pace of the latter's presentations, as well as to control room lighting and temperature.

Elrod et al. (1993)

What factors facilitate collective work?

Geographic dispersion and organization structure

Hinds and McGrath (2006) [90], after studying 33 R&D teams, find that although flatter organizational structures are more effective for coordination in collocated teams, they are less so for geographically dispersed teams.

Congruence between task dependence and coordination activity

Cataldo et al. (2006) [91] use archival data to measure task dependence, and find that the congruence between dependence and the type of coordination activity facilitates productivity. Specifically, they consider 4 types of dependencies, corresponding to 4 types of congruences:

  1. Geographic
  2. Structural
  3. Modification request (MR). Their setting is a software firm in the data storage industry.
  4. Internet Relay Chat (IRC)

They also find that while structural congruence decreases across releases of the software, MR and IRC congruences increase. They interpret these changes as people increasingly finding MR and IRC congruence more effective.

Network centrality

Hossain et al. (2006) [92] find that actors who are central are the best facilitators of coordination productivity.

Improvement infrastructure

Engelbart and Rulifson (1998) [93] conjecture that the tool system needs to be structured to improve the human system. Advances in the tools, called outposts, need to be introduced as experiments, and in configurations, not as isolated gadgets.

Awareness

Dourish and Bellotti (1992) [94] examine how mechanisms to share awareness in Quilt (see Fish et al., 1988 [95]), PREP (see Hiltz and Turoff, 1993 [96]), GROVE (see Conklin, 1987 [97]), and ShrEdit (see Olson et al., 1992 [98]) enhance collaborative work. These mechanisms may be classified as:

  • Informational. For example, Quilt's authoring system is annotated with email messages.
  • Role-restrictive. For example, the author role might allow editing, while a reviewer role does not.

What factors inhibit collective intelligence?

Grudin (1988) [99] lists some lessons from CSCW implementation that might be instructive for collective intelligence systems:

  1. Divergence in incentives between users and maintainers. Perhaps more accurately, for everyone the benefit must outweigh the cost; e.g., Carasik and Grantham (1988) [100] attribute the failure of implementing the Coordinator at Pacific Bell to the high cost of training relative to the benefits that could be derived.
  2. Breakdown of intuitive decision-making. Decision makers may over- or under-state the cost-benefit ratio for segments of user populations that they are not familiar with.
  3. Underestimation of difficulty in evaluating CSCW.

See also Egido (1988) [101].

Implementation issues

Bikson and Eveland (1996) [102] describe CSCW implementation issues which might also be pertinent to the implementation of collective intelligence systems. Among the key findings of their qualitative study at the World Bank:

  1. Changes occur in both social and technical systems (the former like work groups, jobs, and interdependencies; the latter like hardware, software, and networks)
  2. Implementation has as strong an influence as technology on outcomes
  3. Outcomes evolve over time.

Enhancing social interaction

Kollock (1998) [103] and Lee et al. (2001) [104] are examples of studies in how we can increase the level of social interaction in online communities.

Girgensohn and Lee (2002) [105] more recently describe the features used in the CHIplace (website for the 2002 ACM CHI conference) and Portkey (website for IBM interns) websites:

  • Establishing common ground: folklore and trivia about CHI's history, pre-publication of research papers for discussion, tips provided by predecessor interns, photos provided by current interns
  • Increasing awareness: listing of what has changed, chronological display of threads, selection of today's active forums
  • Enhancing social interaction mechanisms: polls, writing and voting on trivia, discussion forums
  • Making "place"; see Harrison and Dourish (1996) [106] and Kollock (1998) [107]: discussion forums which become places for discussants to set norms about what should be posted

Technologies

Social browsing

Root (1988) [108]

P2P

Androutsellis-Theotokis and Spinellis (2004) [109] survey the P2P landscape.

Oster (2006) [110] describes how P2P technology, traditionally used for distributing files, can be exploited to support collaborative editing of content.

Maintaining consistency

Sun and Sun (2006) [111] summarize the state of the art in "group undo" via the dominant paradigm of operational transformation (see, for example, Prakash and Knister (2004) [112]), based on the theory of causality (see the early paper by Lamport, 1978 [113]). The consensus is that the theory is inadequate; see Sun and Ellis (1998) [114]. For example, concurrency is not a sufficient condition for operational transformation: the two operations must also be defined on the same document state. This leads to a number of fixes and assumptions (such as the Reversibility Property, Convergence Property, and some Inverse Properties). Sun and Sun propose a new context-based operational transformation algorithm that does not require these assumptions.
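
To illustrate the basic operational-transformation idea (not Sun and Sun's context-based algorithm), here is a minimal sketch in which two sites insert text concurrently and each transforms the remote operation against its own local operation so that both converge. The document, positions, and text are hypothetical, and real algorithms also need a tie-breaking rule when positions are equal.

  def transform_insert(op_a, op_b):
      # Shift op_a's position if op_b inserted at or before it.
      # (Real systems break ties at equal positions by site priority.)
      pos_a, text_a = op_a
      pos_b, text_b = op_b
      if pos_b <= pos_a:
          return (pos_a + len(text_b), text_a)
      return op_a

  def apply_insert(doc, op):
      pos, text = op
      return doc[:pos] + text + doc[pos:]

  doc = "collective"
  op1 = (0, "a ")      # site 1 inserts "a " at position 0
  op2 = (10, " mind")  # site 2 concurrently appends " mind"

  # Site 1 applies op1, then op2 transformed against op1.
  site1 = apply_insert(apply_insert(doc, op1), transform_insert(op2, op1))
  # Site 2 applies op2, then op1 transformed against op2.
  site2 = apply_insert(apply_insert(doc, op2), transform_insert(op1, op2))

  print(site1)  # "a collective mind"
  print(site2)  # "a collective mind"
  assert site1 == site2  # both sites converge to the same document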

Ignat and Norrie (2006) [115] is a recent proposal that summarizes the state of the art. Their setting is collaborative graph editing, and their proposal involves a serialization based on nodes (tasks to be done) and edges (orderings of the tasks).

It has been suggested that Metcalfe's law, which states that the value of a telecommunications network is proportional to the square of the number of users of the system (n²), applies to such software.

  • Voting methods

Voting has many uses in collaboration software. Condorcet voting offers input from multiple experts or perspectives and may reduce intransitivity problems in decision making. In recommendation systems, rating or voting on many items can be used to formulate profiles for highly successful recommendations; and in document collaboration, such as Wikipedia, voting methods help to guide the creation of new pages.
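
As an illustration of the Condorcet idea, here is a minimal sketch that tallies pairwise contests over ranked ballots and reports an option that beats every other option head-to-head, if one exists. The ballots and option names are hypothetical, and real voting systems need additional rules for cycles and ties.

  from itertools import combinations

  def condorcet_winner(ballots, options):
      # Return the option that wins all pairwise contests, or None.
      wins = {opt: 0 for opt in options}
      for a, b in combinations(options, 2):
          a_over_b = sum(1 for rank in ballots if rank.index(a) < rank.index(b))
          b_over_a = len(ballots) - a_over_b
          if a_over_b > b_over_a:
              wins[a] += 1
          elif b_over_a > a_over_b:
              wins[b] += 1
      for opt in options:
          if wins[opt] == len(options) - 1:
              return opt
      return None  # a cycle (Condorcet paradox) or ties: no clear winner

  ballots = [  # each ballot ranks options from most to least preferred
      ["wiki", "forum", "email"],
      ["wiki", "email", "forum"],
      ["forum", "wiki", "email"],
  ]
  print(condorcet_winner(ballots, ["wiki", "forum", "email"]))  # wiki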

  • Mass collaboration as distinct from cooperation

Mass collaboration differs from mass cooperation in that the creative acts involved require the joint development of shared understandings. In contrast, group members involved in cooperation need not engage in a joint negotiation of understanding; they may simply execute instructions willingly.

  • Dodds et al. (2003) [116]

Evaluating CSCW

In some ways, we can learn lessons from evaluations of CSCW. Neale et al. (2004) [117] argue that evaluation is hard because:

  1. Difficulty of collecting observational data. However, collecting some form of archival data is probably manageable.
  2. Many complex variables. Besides the traditional human-computer interaction issues, CSCW issues involve many social and organizational factors that need to be controlled for. A natural experiment is also hard with more people involved.
  3. Validation of re-engineered group work. CSCW applications need much longer trial time to reach a stable state. If we study the impact of CSCW, it is harder to see when the trial ends and the stable state begins. And as usual, the Hawthorne effect, in which subjects change their behavior because they know they are under observation, is more pronounced with CSCW than with single-user systems.

They propose an evaluation framework that suggests evaluation be done along, or account for, the following dimensions:

  • Contextual factors. This comprises activities (see activity theory in Nardi (1995) [118]) and entities and relationships.
  • The degree of work coupling, such as loosely or tightly coupled (see Interdependencies)
  • Communication, ranging from light-weight interaction, information sharing, coordination to collaboration and cooperation; see Borghoff and Schlichter (2000) [119]
  • Coordination, which they view as an overhead or a process loss. For example, Brooks (1995) [120] observes that effort increases by n(n-1)/2 for each task that is separately coordinated (for n = 10 people, that is 45 pairwise communication paths).
  • Common ground, based on Clark's (1996) [121] idea that common knowledge among group members must exist for people to do joint work
  • Activity awareness, including social awareness, workspace awareness, situation awareness.

See also

GroupLens Project [122]

CSCW, an annotated bibliography, Greenberg (1991) [123]

A more recent survey, by Jacovi et al. (2006) [124]

Horn et al. (2006), Six degrees of Jonathan Grudin: a social network analysis of the evolution and impact of CSCW research. [125]

Ellis et al. (1991), Groupware: some issues and experiences, [126]

Kraemer and King (1988), Computer-based systems for cooperative work and group decision making [127]

Neale et al. (2004), Evaluating computer-supported cooperative work: models and frameworks [128]

Kling (1980), Social Analyses of Computing: Theoretical Perspectives in Recent Empirical Research [129]

Borghoff and Schlichter (2000), Computer-Supported Cooperative Work: Introduction to Distributed Applications [130]
