Organization theory perspective on collective intelligence

From Handbook of Collective Intelligence




Differentiation and integration

One of the key issues addressed in organization theory is how best to decompose management problems and then ensure that the decomposed sub-problems are solved in an aligned manner; see March and Simon (1958) [1]. For example, Lawrence and Lorsch (1967) [2] and [3] study organizations in 3 environments (plastics, food, and containers) with differing degrees of change. They find that organizations with differentiated sub-systems (sales, research, production) perform better in changing environments, but only if these differentiated sub-systems are also integrated (e.g., through a corporate head office or special integration teams). Other studies consider the different types of interdependencies between the sub-systems; see Malone and Crowston (1994) [4]:

  • pooled, in which activities share common resources
  • sequential, in which the output of one activity is the input of another
  • reciprocal, in which the outputs of each activity become inputs for the others; see Thompson (1967) [5].

Differentiation and integration under the influence of technology

The literature examines several factors that impact team performance when technology—especially information technology—is or can be used.

Co-located and distributed teams

All else being equal, the consensus is generally that geographically distributed teams face greater coordination difficulties than co-located ones; see Hinds and Kiesler (2002) [6]. Many reasons have been offered for why this might be so:

  • Cramton (2001) [7] suggests that difficulty in sharing mutual knowledge is a factor. She further classifies this failure into 5 types:
    • failure to communicate and retain contextual information
    • unevenly distributed information
    • difficulty in communicating and understanding the salience of information
    • differences in speed of access to information
    • and difficulty interpreting the meaning of silence.
  • Distance also makes conflict more severe. Hinds and Mortensen (2005) [8] study 22 collocated and 21 distributed teams and find that shared identity and shared context moderate the intensity of conflict.
  • Cramton (2002) [9] also suggests that attribution is more biased in distributed work.

Various coordination mechanisms have been suggested to address the above issues. For example, Grinter et al. (1999) [10] examine 4 methods that product development organizations in Lucent use to integrate multi-site teams (all of which suffer from 2 problems: the consequences of unequal distribution of project mass, and finding expertise):

  • locate each area of expertise at only one site. The benefits include having scale with a pool of expertise, better load balancing, and development of that expertise. The cost is that projects must now be managed across sites.
  • partition product development according to product structure, and locate the components at different sites. The benefit is the independence in operating environments. The cost is in integration testing of the components.
  • partition process steps. The benefit is closer proximity to customers. There is also better use of resources, such as test labs. The cost is in managing temporal dependencies and in handoffs.
  • customize products that are produced out of one baseline model made at one central site. This also enhances proximity to customers. There is also a good division of labor for code ownership. The costs are the need to build trust across product makers, and compatibility issues, since the customizing sites tend to need tools that match those at the baseline site. There is also the need for coordinating processes; the authors find that a lack of documentation hampers this process.

Social structure

Coleman (1988) [11] proposes that social networks enhance trust and rapport, and clarify and support obligations and expectations, information channels, and social norms. These lead to smoother coordination and more productive collective action; see Reagans and Zuckerman (2001) [12].

Organization structure

This has many aspects. In one, it is about the interdependencies of the work involved; see the section on Coordination theory. For example, Thompson (1967) [13] proposes that work flow can be independent, sequential, reciprocal, and team-based, each reflecting greater interdependency; see also Galbraith (1972) [14], March and Simon (1958) [15].

Van de Ven et al. (1976) [16] propose a different classification of coordination mechanisms, into impersonal, personal and group modes. They suggest that the effectiveness of each mode depends on task uncertainty, interdependence, and unit size; and show empirical support from a study involving 197 work units within a large employment security agency. This is a new twist on Weber's (1947) [17] observation that hierarchies in bureaucracies are efficient in certain settings, and Simon's (1969) [18] view that informal organizations also evolve to be hierarchical.

Burns and Stalker (1971) [19] argue that organic, non-hierarchical, informal structures are more suitable for innovation in unstable and dynamic environments, because they address the informational and social requirements. Mechanistic structures are more suitable otherwise. Allen (1977) [20] provides empirical support for this with a study of R&D teams.

Hinds and McGrath (2006) [21] argue the opposite might be true for distributed rather than collocated teams: in a study involving 455 individuals in 49 teams within a multinational firm, they find that hierarchical structures (albeit informal ones) do better than network structures. The reason is that network or organic structures are "difficult to address through electronically mediated exchange"; see also Hackman and Morris (1978) [22] and Nohria and Eccles (1992) [23]. Ahuja and Carley (1999) [24] find the same in an ethnographic 4-month study of a virtual organization.

Another research stream seeks ways to modularize work, to reduce interdependencies. Baldwin and Clark (1999) [25] suggest porting ideas about modularity from computer software engineering to design and manufacturing work. They also caution that modularization of work requires managers who are comfortable controlling less, and whose knowledge is more focused on their modules. Olson and Olson (2000) [26] also conclude that groups that are loosely coupled do better. Galbraith (1973) [27] contends that modularity reduces the need for information sharing. See also Krishnan et al. (1997) [28]. Modularity, of course, has its own issues. For example, Grinter et al. (1999) [29] note that in their setting, it contributes to more handoffs, leads to later detection of software bugs, results in divergent and isolated views among collaborators, and under-leverages the expertise of distant group members.

Jarvenpaa and Leidner (1999) [30] suggest another way, to build "swift trust," although such trust can be fragile and temporal.

Communications structure

It is arguably the consensus that dense communications structures enhance group productivity; see the pioneering work by Back (1974) [31]. One mechanism through which structures enhance productivity is by improving group identification and trust; see Portes and Sensenbrenner (1993) [32] on communications structures that do not have structural holes.

Leavitt (1951) [33], in a study of 100 MIT students, finds that communications patterns explain differences in group outcomes such as accuracy, total activity, satisfaction of group members, emergence of a leader, and organization structure.

Of course, dense communications structures are costly to maintain. And technology-mediated structures are associated with less communication (Kraut and Streeter, 1995 [34]), less satisfactory dialog (Kiesler and Cummins, 2002 [35]), and less spontaneity (Hinds and Mortensen, 2005 [36]).

Both theoretically and empirically, it is hard to pin down communications structure as an explanatory variable for team performance under technological influence, because of endogeneity issues. For example, Van den Bulte and Moenaert (1998) [37] find that R&D teams that are socially tight increase the density of their communications. Furthermore, there are interactions with organization structure. Guetzkow and Simon (1955) [38] report a study of 56 groups whose communications structure tend toward a hub-and-spoke organization structure.

Another issue with communications structures is that they may not address the type of complex and unplanned communications needed in dynamic settings; see Back (1974) [39]. So communications might be an issue in team performance only up to a limit; see Hinds and McGrath (2006) [40].

Tushman (1979) [41] argues that the communications-to-performance link is also moderated by task characteristics, environment, and interdependence.

Coordination theory

Malone and Crowston (1994) [42] argue that different coordination mechanisms in turn fit the above interdependencies; see also March and Simon (1958) [43]; Galbraith (1973) [44]; Mintzberg (1976) [45]:

  • standardization
  • direct supervision
  • mutual adjustment

Hutchins (1991) [46] documents how officers on the USS Palau work together to steer the carrier towards San Diego. He argues that the outcome of some tasks (such as finding the mathematical solution to triangulation) is optimal and emerges "without reflection from adaptations by individuals to what appear to them as local task demands. It is argued that while the participants may have represented and thus learned the solution after it came into being, the solution was clearly discovered by the organization itself before it was discovered by any of the participants."

Typology of interdependencies

Malone and Crowston (1994) [47] document different types of interdependencies and the known ways for managing them. The following largely follows their table 1:

Sharing resources

Some standard methods from operations research are first-come/first-served and priority-order scheduling. Resources may also be shared through budgets. Budgets might be explained by power differences; see the classic study by Pfeffer and Salancik (1974) [48] on university departments; and might explain power differences when executives attract resources; see Barnard (1954) [49]. Another perspective on resource sharing is to consider the organization structure set up to allocate resources. Williamson (1973) [50] proposes that, when transaction costs between agents are high enough, it is better to allocate resources within hierarchies rather than in a market structure. More recently, the issue of decision rights, including the right to allocate and use resources, is addressed within the paradigm of property rights; see Grossman and Hart (1986) [51]. The idea is that if it is difficult to evaluate the investment return from allocating resource usage to a party, then it is efficient for that party to simply own the resource.
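The two scheduling rules mentioned above can be sketched in a few lines; the following is a minimal illustration in Python, with hypothetical task data (the task names, arrival times, and priorities are illustrative, not from any cited study):

```python
import heapq

def fcfs(tasks):
    """First come/first served: process tasks in arrival order."""
    return sorted(tasks, key=lambda t: t["arrival"])

def priority_order(tasks):
    """Priority scheduling: process the highest-priority task first
    (lower number = higher priority); ties broken by arrival time."""
    heap = [(t["priority"], t["arrival"], t["name"]) for t in tasks]
    heapq.heapify(heap)
    order = []
    while heap:
        _, _, name = heapq.heappop(heap)
        order.append(name)
    return order

tasks = [
    {"name": "report", "arrival": 1, "priority": 3},
    {"name": "outage", "arrival": 2, "priority": 1},
    {"name": "audit",  "arrival": 3, "priority": 2},
]
print([t["name"] for t in fcfs(tasks)])  # ['report', 'outage', 'audit']
print(priority_order(tasks))             # ['outage', 'audit', 'report']
```

The two rules allocate the same shared resource in different orders: FCFS is fair with respect to arrival, while priority scheduling lets urgent work jump the queue.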

A specialized version of this dependency might be the sharing (or linking) of reputations between business units and the sharing of characteristics in design (e.g., the coordination of colors of various sub-components).

Coordinating timing

Examples of these include producer-consumer relationships, in which one task has to be done before another. This could also be viewed as a simultaneity relationship, in which the end of one task has to match the beginning of another. The key idea seems to be that coordination can be achieved by:

  • Enhancing buffer management

This helps manage quality when there are prerequisite relationships; a famous example is the Toyota Production System practice of pulling a cord when there is a problem anywhere in the production line; see Ohno (1988) [52].

  • Enhancing forecasting

Specialized versions of this idea include enhancing visibility along a supply chain with technologies such as RFID (see Bose and Pal (2005) [53]), participatory design (Schuler and Namioka, 1993 [54]), concurrent engineering (e.g., see survey in Jo et al. (1991) [55]), and better planning and scheduling. In operations research, the last usually involves PERT charts and other critical path methods, and the use of inventory to account for lead time and production smoothing; see Wagner (1969) [56].

  • Maintaining flexibility and standardization

This includes postponing the customization of production (see Brown et al. (2000) [57]) and standardization (Hoyt, 1919 [58]).
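The critical path methods mentioned under forecasting amount to a longest-path computation over task prerequisites: a task's earliest finish time is its duration plus the latest finish among its prerequisites, and the project length is the maximum over all tasks. A minimal sketch in Python, with a hypothetical four-task project:

```python
# Hypothetical project: task -> (duration, prerequisites)
project = {
    "design":    (3, []),
    "build":     (5, ["design"]),
    "test_lab":  (2, ["design"]),
    "integrate": (4, ["build", "test_lab"]),
}

def earliest_finish(task):
    """Earliest finish time: duration plus the latest prerequisite finish."""
    duration, prereqs = project[task]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

# The project length is the longest (critical) path through the graph.
project_length = max(earliest_finish(t) for t in project)
print(project_length)  # 12: design(3) -> build(5) -> integrate(4)
```

Here the critical path runs through design, build, and integrate; speeding up test_lab would not shorten the project, which is exactly the insight PERT charts make visible.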

It is also intriguing to consider what actually happens when the above techniques do not work well. Over- and under-stocking, incorrect deliveries, lost units, the bullwhip effect (Lee et al. (1997) [59]), and other real phenomena all conspire to make managing pre-requisite relationships hard. Indeed, much of operations management is about figuring out ways to better manage this dependency.
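The bullwhip effect can be illustrated with a toy simulation: a single stage that forecasts naively and orders enough to cover forecast demand and restore a target inventory will place orders that swing more widely than the demand it faces. All numbers below are hypothetical, and the ordering rule is a deliberately simple caricature of the behavior Lee et al. analyze:

```python
# Minimal sketch of the bullwhip effect: small fluctuations in end-customer
# demand are amplified into larger swings in the orders placed upstream.
demand = [10, 10, 12, 10, 8, 10, 14, 10, 10, 10]  # hypothetical demand

inventory, target = 20, 20
orders = []
for t, d in enumerate(demand):
    forecast = demand[t - 1] if t > 0 else d    # naive forecast: last period
    inventory -= d                              # satisfy this period's demand
    order = max(0, forecast + (target - inventory))  # cover forecast + gap
    inventory += order                          # order arrives (no lead time)
    orders.append(order)

def spread(xs):
    return max(xs) - min(xs)

print(spread(demand), spread(orders))  # orders fluctuate more than demand
```

Even in this one-stage sketch the order stream is more volatile than demand; in a multi-stage chain each stage repeats the amplification, which is why upstream suppliers see the largest swings.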


Decomposing goals

The early exposition of goal decomposition is by Simon (1947) [60], who introduces ideas such as means-ends analysis and the accomplishment of goals under bounded rationality.

A key consideration in whether goals and the accompanying business organizations should be decomposed is whether there are economies of scale and scope; see Chandler (1962) [61]. Another consideration is whether people can be incentivized to achieve goals that have been farmed out to them (see Agency Theory). Yet another is whether and how managers actually develop and decompose goals; see Mintzberg (1976) [62].

Coordination difficulties

Conflicting goals

Impact of information technology on collective work

Malone et al. (1987) [63] argue that the reduction in coordination costs by information technology affects how we work in three stages:

  1. First-order effect: we substitute more automated forms of coordination for human coordination (say, in the form of hierarchies)
  2. Second-order effect: we next coordinate more intensively than before, presumably to greater collective performance
  3. Third-order effect: we begin to use more coordination-intensive structures enabled by low coordination costs. An example is the use of adhocracies. As we coordinate on a much wider scale, we can now work even with strangers. So recommender systems and other agencies that support the use of trust and credibility might emerge.

Gurbaxani and Whang (1991) [64] propose that information technology may lead to greater centralization as it reduces the cost of making decisions. This is countered by an opposing force that leads to greater decentralization, as technology reduces the cost of agency.

Involving customers and users

See also the section on Human computation.

Crowdsourcing, democratization, open sourcing

Howe (2006) [65] popularized the term crowdsourcing. Sometimes called Collective Customer Commitment (CCC), this could range from corporations involving customers in the organizations' work, such as product design, to farming out micro-tasks to individuals, such as taking stock photography.

The Syracuse University FLOSS (Free/Libre Open Source Software) Project [66] reports a number of findings, such as:

  • Shared mental models, such as interpretative schemes, resources, and norms, are present in FLOSS developers; see Scozzi et al. (2006) [67]
  • Leadership is emergent in open source development. Open source teams do have core leaders: the Gnome project has 11 leaders (Koch and Sneider, 2002 [68]), the Apache httpd project has about 8 (Mockus et al., 2002 [69]), and SourceForge has about 8 (Crowston et al., 2006 [70]). While traditional leadership studies emphasize traits, context, or the relationship between leaders and followers (Crowston et al., 2006 [71]), open source teams might draw upon other theories such as:
    • leaderless self-managing teams—e.g., Manz and Sims (1980) [72]
    • shared leadership—e.g., Spillane (1999) [73]
    • Behavioral theories of leadership—e.g., Denison (1995) [74], and in particular, functional behavioral theories such as Bass (1980) [75]. The latter suggests a two-order theory, of first-order (related to actually doing the group's work) and second-order (related to maintaining the group's atmosphere) behavior. Crowston et al. (2006) [76] propose that effective teams have shared first-order leadership and centralized second-order leadership.

In a survey of 684 developers of 287 open source projects, emotional and intellectual stimulation, user need, and learning are primary motivations for contributions, and not the prospect of better jobs or career advancement; see Lakhani and Wolf (2005) [77] and other papers in Feller et al. (2005) [78].

Von Hippel (2005) [79] suggests that companies could do well by leveraging the energies of lead users, who are often important sources of innovation. About 10 to 40 percent of the users (lead users) of the products he studies (such as windsurfing boards, cyclists' drinking bottles, some kinds of medical equipment) engage in design and modification. He also proposes that the divergence of interest between lead users, who want their needs to be precisely met, and customizing manufacturers, who want to cater to the lowest common denominator, is an example of agency costs that explains why customization by manufacturers is rarer than lead users need. In a related study, Lilien et al. (2002) [80] suggest that 3M estimates that its sales revenues could go up by 8 times if it were to switch from traditional to open sources of innovation.


  1. Knowledge for problem-solving is widely distributed in society; see Hayek (1945) [81]. In particular, this knowledge is sticky—that is, costly to acquire, transfer, or use in a new location (von Hippel, 1994 [82])—so there is a good chance that knowledge relevant to solve a problem may occur in geographic lumps, some within the organization of interest and some outside. See also Raymond (1999) [83].
  2. The agency issue between the manufacturer (who prefers to make as general a product as possible) and the user (who prefers as customized a product as possible) leads to suboptimality for both; see von Hippel (2005) [84].
  3. Users often enjoy product customization anyway; see von Hippel (2005) [85].
  4. Users have incentive to reveal their innovations, because they, more than free riders, gain from improvements upon what they reveal (von Hippel and von Krogh (2003) [86]). This is especially so when the innovation can be developed modularly, such as in open software projects; see Baldwin and Clark (2003) [87].


  1. User innovations may be "minor", but Hollander (1965) [88] suggests that important technical progress comes from cumulative minor innovations. Eighty percent of the cost reduction in Du Pont's rayon manufacturing came from minor technical changes.


  • A Swarm of Angels [89] is a project to utilize a swarm of subscribers (Angels) to help fund, make, contribute to, and distribute a £1 million feature film using the Internet and all digital technologies. It aims to recruit early development community members with the right expertise as paid project members, film crew, and production staff.
  • ArmchairGM [90] is a promising extension of wiki with explicit user evaluation.
  • Assignment Zero [91], a "pro-am" collaboration launched by Wired and NewAssignment, allows citizen journalists to work with professional editors on a story, with the team's research available for re-use.
  • The BSD [92] project started in the 1970s and became freely distributable in 1991. However, a lawsuit and an injunction from AT&T put its legal status in question until 1994 and reduced its impact on open-source development.
  • Cairns [93] is graphical groupware designed to help those working collaboratively to evaluate and compare their own experiences and to search and learn from the experience of others. Cairns can help you to evaluate, design and manage your group project.
  • Cambrian House [94] applies a crowdsourcing model to identify and develop profitable software ideas. Using a simple voting model, they attempt to find sticky software ideas that can be developed using a combination of internal and crowdsourced skills and effort.
  • Citizendium [95] is intended to avoid the errors, vandalism, and lack of accountability of Wikipedia. Citizendium's volunteer contributors will be expected to provide their real names. Experts in given fields will be asked to check articles for accuracy. See article in USA Today.
  • Current TV [96], VH1's Web Junk 20 [97] and its sister company iFilm [98]: broadcast programming done by users.
  • del.icio.us [99] is a web service that helps people organize, annotate, and share their bookmarks. It was created in late 2003 and is mostly notable for its well-developed social tagging system and, recently, tools for building a content-centered social network. Users can subscribe to the tags of other users and add people who tag pages of interest to them to their network. The site uses the number of people who bookmarked a page as a measure of popularity and features the most popular pages to make even more people aware of them.
  • Digg [100] is a news discovery service that integrates submissions and evaluations of many people to discover the most interesting news stories. Interestingness is measured by the frequency estimate of positive user feedback (diggs) that a story received shortly after submission. The service was launched in 2004 in the US and has received probably the most attention among services of this kind.
  • Dmoz [101] (initially gnuhoo) was launched in June 1998 as a web directory edited by volunteers rather than employees. The initial name was misleading in its gnu- part, because the similarity to GNU went only half-way: the project invited contributions of volunteers like in GNU, but, unlike GNU, the end results, both software and content, were proprietary to the company that provided the platform for collaboration. However, Dmoz might be the first in the long line of commercial web 2.0 projects that derive their profit from contributions of volunteers and don't give their contributors a free license to use the created content.
  • Donationcoder [102] is a donationware development website based on the concept of user innovation and crowdsourcing, launched in March 2005. Users suggest what kind of software they would like to see implemented and how much they would be willing to donate to developers who decide to implement this functionality. This helps to identify the most useful projects, i.e., those in which many users are interested. At some point, the total donation amount from all interested users may become sufficient to implement the software. This micro-finance practice was mostly limited to small projects. The idea was quickly adopted by others.
  • Electric Sheep [103] is abstract art made by a cyborg mind composed of 350,000 computers and people.
  • Eureka Medical [104] connects innovation-seeking companies with medical inventors to develop medical solutions.
  • ESP Game [105] has people collaborate in labeling images.
  • Free Knowledge Exchange [106] project is a knowledge market that combines intelligent abilities of many people to identify and solve their problems using evolutionary computation. The goal is to create an open community and a kind of collective intelligence that effectively helps every participant to be more successful in solving everyday problems. It was launched as a research project in May 1998 in Russia and may be the first online project to specifically explore collective intelligence by outsourcing intelligent operations to a large number of people (essentially what was later called "crowdsourcing"). The mechanism of this service was published in 2000-2002 in several research papers. Since then, the elements of this service were adopted by others. The most notable similar service is Yahoo! Answers, launched by Yahoo! in December 2005 and currently the biggest service of this kind by the number of users.
  • Global Public Health Information Network [107] combines computer-based search, translation, and filtering software with human investigation to identify early warning signs of new epidemics and public health crises.
  • The GNU [108] project was started by Richard Stallman in 1984 with the goal of making software freely available to people. Free software allows many people to collaborate in software development: learn, reuse, modify, recombine, and adapt software to their needs. It enables open user innovation. But to achieve this goal, two major difficulties had to be dealt with: technical and legal. A free software development environment was necessary to produce free software, and a major legal innovation was needed to provide a legal basis for such an activity. The GNU project provided free tools to developers and created several content licensing mechanisms, providing a basis for the collaborative creation of software and other works of authorship.
  • Google [109] is a web search engine that evaluates web content based on aggregated implicit human evaluations contained in web references. It was created in September 1997 by Larry Page and Sergey Brin. The evaluation Google produces is known as the Google PageRank. It is based on the Markov chain model. The PageRank is essentially the share of the time that a web user will spend on the page by randomly following web references for a sufficiently long time.
  • [110] is a Harvard-MIT initiative using automated computer programs to scan news articles and produce a map of disease alerts
    • sources of disease alerts include ProMED, World Health Organization, Google News
    • Anyone can zoom in with the map to view specific areas, or opt to view disease alerts by category or by date
    • May not be perfect as it depends on automated indexing of news, but a great start
  • IdeaConnection [111] is a website that facilitates collaborative problem solving, creativity and idea exchange. People can use the service to solve problems, seek solutions, and create ideas and products. People can also utilize the site to buy, sell or license inventions, innovations, new products, ideas, intellectual property and patents.
  • InnoCentive [112], NineSigma [113], YourEncore [114]. NineSigma and InnoCentive are similar. The latter was started by Eli Lilly in 2001 and posts engineering and science problems from subscribing corporations such as Boeing, DuPont, and Procter & Gamble. The cash rewards for solutions range from thousands to hundreds of thousands of dollars. It claims that as of 2006, 30% of the problems posted have been solved. YourEncore works with retired scientists and engineers. An important difference between setups like these and our idea of collective intelligence is that these have individual problem solvers. However, it might be interesting to see which ideas about these settings also apply when there are multiple solvers. Some of the lessons from these settings are in Lakhani et al. (2007) [115], who, in examining participants in InnoCentive, observe that participants are more likely to actually solve the posted problems when: (1) the prize is bigger, (2) participants are more intrinsically motivated, (3) participants have more free time, (4) participants are non-experts in the field, and (5) participants are not participating due to career concerns, social motivations, or to beat others.
  • InnovationExchange [116] is somewhat similar to InnoCentive, but it also aims to facilitate collaboration between solvers and offers community rating of ideas as a means to help client companies sift ideas and examine only the best candidate solutions.
  • The Intermix [117] project, created by Roger Eaton in 1988, introduced the idea of collective communication, where people can write messages on any topic of concern and then rate each other's messages for interest and agreement, so that messages important to the community are identified and can be acted upon. A group can maintain a dialog with its leaders using these collective messages. It can seek advice from an individual outside the group, try to influence someone, and direct the activities of volunteers or hired agents.
  • iStockPhoto [118] is a website with over 22,000 amateur photographers who upload and distribute stock photographs. Because it is not burdened by the expenses of a professional organization like Getty Images it is able to sell photos for a lower price. It was recently purchased by Getty Images.
  • Kiva [119] provides microfinancing to entrepreneurs in the under-developed and developing world by connecting them to lenders from communities in the rest of the world, basically you and me. An agent or partner qualifies these applicants, then uploads their loan applications to Kiva's website, and anyone in the rest of the world may participate by selecting applicants to lend money to. An agent in Africa uses the collective intelligence of the village to qualify an applicant by holding a town-hall-style meeting, where the people who know the applicant best provide input to the agent on whether the borrower will be good at the business and whether the applicant is a good business risk.
  • Knowledge-iN [120] is a knowledge market service by Naver combined with a search engine. The Knowledge-iN service was launched in October 2002 in Korea, helping Naver to become a top web portal in Korea. The main difference from Free Knowledge Exchange seems to be the combination of Knowledge-iN with a search engine and a social network.
  • Lima Refinery [121] improvement story - Example of integrating Interpersonal and Distributed forms of CI
  • Linux [122] project was started by Linus Torvalds in 1991 to create a free operating system uninhibited by legal issues and produced it before the lawsuit about BSD was settled. As a result, GNU/Linux and not BSD became the major force behind the open-source movement. Linux development fully explored advantages of open innovation in software development and inspired many later projects.
  • LinkedIn [123] initially provided social networking connecting individuals with past professional and personal contacts; it has now evolved to allow questions to be asked through your social network or distributed group identities.
  • LiveOps [124] says it has 10,000 free agents signed up to become virtual call centers for corporations of many different industries.
  • Marketocracy [125] gathers top stock market investors around the world in head-to-head competition so it can run real mutual funds around these soon-to-be-discovered investment superstars.
  • Mechanical Turk [126] is a web site where people can advertise and perform "HITs" (human intelligence tasks) that are difficult for computers but easy for humans (e.g., determining if there is a pizza parlor in a photograph). People get paid for performing HITs.
  • Mountain Bike [127] See also [128]. The development of the mountain bike on Mount Tam and its diffusion to the Crested Butte-Aspen community is often cited as an example of the open-source approach to developing a non-software product.
  • OASIS [129] (Organization for the Advancement of Structured Information Standards) is a not-for-profit, international open standards consortium that drives the development, convergence, and adoption of e-business standards. The consortium produces more Web services standards than any other organization, along with standards for security, e-business, and standardization efforts in the public sector and for application-specific markets. Founded in 1993, OASIS has more than 5,000 participants representing over 600 organizations and individual members in 100 countries.
  • PicksPal [130], like Marketocracy, gets predictions from many to select the few who are good predictors.
  • Protégé [131] is an open source ontology editor and knowledge-base framework developed (since 1985) by Stanford Medical Informatics at the Stanford School of Medicine.
  • Public Insight Journalism [132], a project at American Public Media to cover the news by tapping the collective and specific intelligence of the public. Gets the newsroom beyond the usual sources, uncovers unexpected expertise, stories and new angles.
  • Sandia Lab [133] did an experiment on whether groups or individuals are better at solving "wicked" problems. They found that "people working as individuals were at least as effective and possibly more so than those brainstorming in a group over the web when trying to solve ‘wicked,’ tangled problems, both in terms of quality and quantity."
  • Sermo [134] is a closed community of health professionals (71,000+ U.S. physicians).
    • Individuals are vetted as real physicians before gaining access; once online, they operate via a pseudonym.
    • Members can ask any question of the community and answer any question; the community of peers ranks the value of questions and answers.
    • Sermo also provides incentives to post; several members say they participate for fun or to learn.
  • SitePoint [135] holds contests for design work.
  • Slashdot [136] has a comment moderation and meta-moderation system powered by its readers (1999). "Imagine this would work like each comment would have some sort of score. Comments could be given points or have points removed based on how many people vote somehow." (source). "Think of a news site like Slashdot without a guy like me, or a group of guys at the center. One where the best comments become the articles on the homepage." (source).
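The scoring idea in the quote above can be sketched as a toy model; the class, score range, and threshold below are illustrative assumptions, not Slashdot's actual implementation.

```python
# Toy sketch of reader-driven comment moderation (illustrative only;
# names, score range, and thresholds are assumptions, not Slashdot's system).

class Comment:
    def __init__(self, text, score=1):
        self.text = text
        self.score = score  # each comment starts at a baseline score

    def moderate(self, points):
        # readers with moderation points push the score up or down,
        # clamped to a fixed range as in threshold-based moderation
        self.score = max(-1, min(5, self.score + points))

def visible(comments, threshold=1):
    # readers browse at a chosen threshold, hiding low-scored comments
    return [c for c in comments if c.score >= threshold]

c1 = Comment("insightful analysis")
c2 = Comment("spam")
c1.moderate(+2)   # two readers mod it up
c2.moderate(-2)   # modded down below the default threshold
print([c.text for c in visible([c1, c2])])  # -> ['insightful analysis']
```

Meta-moderation would then, in the same spirit, let a second pool of readers rate whether each moderation itself was fair, guarding the scoring layer against abuse.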
  • SourceForge [137] is the leading open source software community environment and repository
  • StumbleUpon [138] is a web discovery service that integrates submissions and evaluations of many people to help them to discover quality content: news, photos, multimedia. Unlike most of the prior work in collaborative filtering using content relevance, StumbleUpon uses a more balanced approach combining peer endorsement with conceptual relevance. It also uses evolutionary computation to better match content to the interests of its users. This project was launched in February 2002 in Canada.
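The balance between peer endorsement and conceptual relevance described above can be sketched as a simple weighted blend; the function and weighting here are illustrative assumptions, not StumbleUpon's actual algorithm.

```python
# Illustrative blend of peer endorsement with content relevance
# (the weighting scheme is an assumption, not StumbleUpon's algorithm).

def blended_score(thumbs_up, thumbs_down, relevance, weight=0.5):
    """Combine a peer-endorsement rate with a 0..1 relevance score."""
    votes = thumbs_up + thumbs_down
    endorsement = thumbs_up / votes if votes else 0.5  # neutral prior
    return weight * endorsement + (1 - weight) * relevance

# A well-endorsed but less relevant page vs. a relevant but unrated one:
print(blended_score(90, 10, 0.4))  # 0.5*0.9 + 0.5*0.4 = 0.65
print(blended_score(0, 0, 0.9))    # 0.5*0.5 + 0.5*0.9 = 0.70
```

The neutral prior for unrated pages is one way a system can surface fresh content before any peer votes arrive.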
  • The AC/UNU Millennium Project [139] is a global participatory futures research think tank of futurists, scholars, business planners, and policy makers who work for international organizations, governments, corporations, NGOs, and universities. The Millennium Project commenced its work in 1992, and is currently managing a coherent and cumulative process that collects and assesses judgements from its 28 geographically dispersed nodes and hundreds of participants to produce the annual "State of the Future", the "Futures Research Methodology" series, and other special studies and reports.
  • The Goldcorp Challenge [140] is an example of how a traditional company in the mining industry used crowdsourcing to identify likely veins of gold on its Red Lake property. The challenge was won by Fractal Graphics and Taylor-Wall and Associates of Australia, who identified 110 drilling targets, 50% of which were new to the company.
  • The Ontolog Community of Practice [141] is an open, international, virtual community of practice devoted to advancing the field of ontological engineering and semantic technologies. Established in 2002, Ontolog (a.k.a. Ontolog Forum) advocates the adoption of ontologies (and ontological engineering methodologies) into mainstream applications and international standards.
  • Threadless [142] is an Internet-based clothing retailer that sells t-shirts designed by and rated by its users.
  • TWiki [143] is an open source wiki software targeted at enterprise collaboration. The project was started in July 1998 by Peter Thoeny. It is notable for introducing RCS-based revision control to wiki in October 1998. Revision control turned out to be a very important addition to wiki, as it provided for selection among different revisions and easy elimination of unhelpful edits. TWiki also introduced the concept of structured wikis.
  • USENET (USEr NETwork) was created in 1979 by Tom Truscott and Jim Ellis. USENET is a distributed message repository and communication medium, where messages are tagged by their authors using the names of the existing newsgroups. As a precursor of the modern tagging system, USENET has the extra advantage of being distributed; however, creating a new tag/newsgroup is not as easy as in modern tagging systems.
  • Wikinancial [144] is a website that allows an online community of users to share their stock picks for free. The goal of the site is to let investors backtest their strategies, and quotes are taken from Yahoo! Finance. Wikinancial ranks both members’ portfolios and stocks by their performance.
  • Wikipedia [145] is a project creating a free encyclopedia that anyone can edit as a wiki. It was launched on January 15, 2001 using UseModWiki software that was augmented with revision control and concurrent editing capability (between December 9, 2000 and February 1, 2001). It is not clear whether these features were requested by the founders of Wikipedia or their appearance was a lucky coincidence, but they turned out to be crucial to the success of Wikipedia. Apart from this, the Wikipedia project introduced many novel features into wiki technology, among them embedded images (January 2002), social tagging and parameterized templates (August 2004), permalinks (October 2005), and a revision undo feature (January 2007). Wikipedia releases its content under the GNU Free Documentation License (GFDL), which was drafted just one year before Wikipedia's launch.
  • Windparken [146] is being used in the Netherlands to plan wind turbines. The wiki, extended with a Google Maps plugin, presents maps with proposed wind turbine locations. The goal is to decide on locations for 6,000 3-megawatt turbines, enough to supply all electricity in the Netherlands.
  • [147] allows anyone on the internet to report whether they are feeling sick and to describe basic symptoms and their location.
    • It represents a form of radical transparency: anyone can provide information or view the information on the site.
    • It may not be a perfect idea (it could instead be an open invitation to hypochondriacs), yet it shows how the internet can include public participation in providing information.
  • Yahoo! Answers [148], launched by Yahoo! in December 2005, is notable mainly for its popularity and successful deployment worldwide. Yahoo! Answers was built as an English analog of Naver's Korean Knowledge-iN project and is very similar to it in most respects, except for the name and language. Despite the lack of technological innovation, it is one of Yahoo's recent successes: well executed, it has received the most attention recently among services of this kind. It may now be the second most useful reference resource after Wikipedia, and it is well integrated into Yahoo! search.
  • YRUHRN [149] used Amazon Mechanical Turk and other means of crowdsourcing to compile content for a book published just 30 days after the project was started.

Knowledge management

Nonaka and Konno (1998) [150] introduce the concept of "Ba" (from the Japanese, for "place"), which is a shared space for emerging relationships. They propose that knowledge, in contrast to information, cannot be separated from the context - it is embedded in Ba. They also explain how the concept of Ba is used in 3 companies to enhance knowledge creation.

Relevant perspectives from organization studies

Blau's (1970) [151] crucial variable is organizational size, which leads to structural differentiation (such as increasing numbers of subdivisions) and coordination problems within the organization. Blau's theory refers to organizations with paid employees and largely ignores technology, environmental factors, and individual psychology within organizations. Blau develops his theoretical framework with reference to quantitative research on structural differentiation in 53 governmental organizations.

  1. Blau's general propositions:
    1. Increasing organizational size generates differentiation at decelerating rates. As the size of an organization increases, its marginal influence on differentiation decreases.
      1. large size promotes structural differentiation
      2. large size promotes differentiation along several different lines
      3. the rate of differentiation declines with expanding size
      4. the subunits into which an organization is differentiated become internally differentiated in a parallel manner (differentiation is uniform throughout the organization)
    2. The larger an organization is, the larger the structural components of all kinds
    3. Proportionate size of the average structural component, as distinguished from its absolute size, decreases with increases in organizational size
    4. The larger the organization is, the wider the span of supervisory control (the greater the number of people under a manager's supervision)
    5. Organizations exhibit an economy of scale in management
    6. The economy of scale in administrative overhead declines with increasing organizational size
  2. Structural differentiation in organization enlarges the administrative component. The more differentiated the formal structure, the more administrative personnel of all kinds should be found in an organization of a given size, and the narrower the span of control of first-line supervisors and higher managers.
    1. The larger size of an organization raises the ratio of administrative personnel
    2. The direct effects of large organizational size in lowering the administrative ratio exceed its indirect effects in raising it through structural differentiation
    3. Differentiation of large organizations into subunits stems the decline in the proportion of managerial personnel that accompanies increasing size
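Blau's first proposition, that differentiation grows with size but at a decelerating rate, can be illustrated with a toy numeric sketch; the square-root form below is an illustrative assumption, not part of Blau's theory.

```python
# Toy illustration of Blau's decelerating-differentiation claim.
# The square-root growth curve is an illustrative assumption only.
import math

def subunits(size):
    # differentiation (number of subunits) rises with size,
    # but each quadrupling of size only doubles the subunit count
    return math.sqrt(size)

for size in [100, 400, 1600]:
    s = subunits(size)
    print(f"size={size:5d}  subunits={s:5.1f}  subunits per member={s/size:.3f}")
```

The declining "subunits per member" column mirrors the claim that the proportionate size of the average structural component shrinks even as absolute differentiation keeps growing.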

Barriers to effective collaboration

A main barrier to collaboration may be the difficulty of reaching agreement when diverse viewpoints exist, which makes effective decision-making more difficult. Even when members of a collaboration do manage to agree, they are likely to be agreeing from different perspectives. This is often called a cultural boundary. For example:

  1. A culture where rank or job title is important makes it hard for a lower-ranking person, who may be more qualified than their superior for the job at hand, to collaborate: the lower-ranking person is simply told what to do, and this is not collaboration.
  2. "stranger danger": a reluctance to share with people you do not know
  3. "needle in a haystack": people believe that others may have already solved their problem, but it is hard to find them
  4. "hoarding": people do not want to share knowledge because they see hoarding it as a source of power

See also

Robert's Rules of Order [152]

The Resilient Organization [153]
