Computer science and artificial intelligence perspective on collective intelligence


Mechanisms in collective intelligence

Although many of these ideas are developed in computer science, some believe they also apply in all-human and mixed human-machine environments, which Gelernter and Carriero (1992) [1] call Turingware.

Sharing resources

Brinch Hansen (1972) [4] and Hoare (1974) [3] develop the concept of a monitor, building on Dijkstra's (1968) [2] work, to synchronize tasks scheduled in an operating system. Implementations rely on semaphores and critical regions to enforce mutual exclusion.
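
In modern terms, a monitor is an object whose methods execute under mutual exclusion, with condition variables for scheduling. The bounded buffer below is a minimal sketch of the idea in Python, not Hoare's original notation:

    import threading

    class BoundedBuffer:
        """A minimal monitor sketch: every public method runs while holding
        one lock (mutual exclusion), and condition variables handle the
        scheduling."""
        def __init__(self, capacity):
            self.items = []
            self.capacity = capacity
            self.lock = threading.Lock()
            self.not_full = threading.Condition(self.lock)
            self.not_empty = threading.Condition(self.lock)

        def put(self, item):
            with self.lock:                      # enter the critical region
                while len(self.items) >= self.capacity:
                    self.not_full.wait()         # block until space frees up
                self.items.append(item)
                self.not_empty.notify()          # wake one waiting consumer

        def get(self):
            with self.lock:
                while not self.items:
                    self.not_empty.wait()        # block until an item arrives
                item = self.items.pop(0)
                self.not_full.notify()           # wake one waiting producer
                return item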

Managing process exceptions

Lampson and Sturgis (1979) [5] propose the idea of an atomic action, in which either all of a process's steps are executed or none are.
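
The all-or-nothing semantics can be sketched with a simple undo log: if any step fails, the completed steps are rolled back. This illustrates the property only; it is not Lampson and Sturgis's implementation:

    def run_atomically(steps):
        """Sketch of an atomic action: steps is a list of (do, undo) pairs.
        If any step fails, completed steps are undone in reverse order,
        so either all steps take effect or none do."""
        done = []
        try:
            for do, undo in steps:
                do()
                done.append(undo)
        except Exception:
            for undo in reversed(done):   # roll back what already ran
                undo()
            raise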

Decomposing tasks

Carriero and Gelernter (1989) [6] and Gelernter and Carriero (1992) [7] propose Linda, a parallel programming language, in which there are passive tuples (representing real-world entities such as patients in a medical ward or neurons in a brain) and active ones (representing scheduling managers in the medical ward or "simulation engines" that fire neurons). In this framework, tasks are assigned by task type, by subpart of the output, or by processor availability.
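
A toy tuple space conveys the flavor: passive tuples sit in a shared space until active workers withdraw whatever matches a pattern. This is a single-process sketch; real Linda provides out/in/rd operations with richer matching across processes:

    import threading

    class TupleSpace:
        """Toy Linda-style tuple space: passive tuples wait in the space,
        and active workers withdraw whichever tuples match a pattern."""
        def __init__(self):
            self.tuples = []
            self.cond = threading.Condition()

        def out(self, tup):
            """Post a tuple into the space."""
            with self.cond:
                self.tuples.append(tup)
                self.cond.notify_all()

        def inp(self, pattern):
            """Withdraw the first tuple matching pattern (None = wildcard),
            blocking until one appears."""
            def matches(tup):
                return len(tup) == len(pattern) and all(
                    p is None or p == t for p, t in zip(pattern, tup))
            with self.cond:
                while True:
                    for tup in self.tuples:
                        if matches(tup):
                            self.tuples.remove(tup)
                            return tup
                    self.cond.wait()

    ts = TupleSpace()
    ts.out(("patient", 17, "ward A"))        # a passive tuple
    print(ts.inp(("patient", None, None)))   # an active worker claims it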

Communicating

Early parallel programs use a message-passing paradigm (Hewitt, 1977 [8]), in which agents are decentralized and message passing among them can produce familiar control structures such as iteration and recursion.
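
A minimal sketch of this style, with one mailbox per decentralized agent, shows how iteration can arise from agents re-sending messages (illustrative Python, not Hewitt's actor formalism):

    import queue
    import threading
    import time

    class Actor:
        """Sketch of a decentralized agent: it owns a mailbox and applies
        its behavior function to each incoming message."""
        def __init__(self, behavior):
            self.mailbox = queue.Queue()
            self.behavior = behavior
            threading.Thread(target=self._run, daemon=True).start()

        def send(self, msg):
            self.mailbox.put(msg)

        def _run(self):
            while True:
                self.behavior(self, self.mailbox.get())

    def countdown(actor, n):
        """Iteration built from message passing: re-send to oneself."""
        if n > 0:
            print(n)
            actor.send(n - 1)

    Actor(countdown).send(3)
    time.sleep(0.1)   # give the mailbox thread time to drain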

Erman's (1976) [9] HEARSAY and Erman et al.'s (1980) [10] HEARSAY-II systems use a blackboard architecture in which agents post and read messages.
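
The architecture reduces to a shared store plus knowledge sources that post entries when triggered. The following is a minimal sketch; HEARSAY-II additionally organizes hypotheses by abstraction level:

    class Blackboard:
        """Sketch of a blackboard: a shared store that agents read and post to."""
        def __init__(self):
            self.entries = []

    def run(blackboard, knowledge_sources):
        """Fire knowledge sources opportunistically: each reads the board
        and may post new entries, until no source contributes anything new."""
        changed = True
        while changed:
            changed = False
            for ks in knowledge_sources:
                for entry in ks(blackboard.entries):
                    if entry not in blackboard.entries:
                        blackboard.entries.append(entry)
                        changed = True

    board = Blackboard()
    board.entries.append(("signal", "hhh-eee-lll-ooo"))
    words = lambda es: [("word", "hello") for kind, v in es if kind == "signal"]
    run(board, [words])
    print(board.entries)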

A third approach applies a "partial global plan" to distributed hypothesis formation [11] and distributed problem solving (see Durfee and Lesser, 1988 [12]). The approach "interleaves coordination with action and allows systems to make effective decisions despite incomplete and possibly obsolete information about network activity."

Swarm intelligence

A number of robotics researchers have developed ideas on what robotic intelligence could learn from the behavior of seemingly unintelligent animals.

In a dissenting view, Martin et al. (2002) [13] model the behavior of ants that seem to build cemeteries without planning. They conclude that the behavior is a statistical artifact rather than collective intelligence.

Measuring collective intelligence and the factors that affect it

Almulla and Szuba (1998) [14] propose measuring collective intelligence with an IQS (intelligence quotient of a social structure), based on the probability that a group of human or non-human agents can solve given problems. They simulate the group with PROLOG engines.
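
The probability at the core of IQS can be estimated by simulation. Below is a minimal Monte Carlo sketch; the group model is a made-up example, and Almulla and Szuba's own simulations run on PROLOG engines rather than Python:

    import random

    def estimate_solve_probability(simulate_group, trials=10000):
        """Monte Carlo estimate of the probability that a simulated group
        solves a problem; simulate_group() runs one randomized attempt."""
        return sum(simulate_group() for _ in range(trials)) / trials

    # Made-up group model: the group succeeds if any of n agents, each
    # with individual success rate p, solves the problem independently.
    p, n = 0.2, 5
    print(estimate_solve_probability(
        lambda: any(random.random() < p for _ in range(n))))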

Distributed problem solving

Davis and Smith (1988) [15] propose the Contract Net protocol, with which clients and servers communicate so that the system as a whole can efficiently allocate servers to clients.
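
One announce-bid-award round can be sketched as follows; the server representation and cost functions are illustrative assumptions, and the full protocol also covers progress reports from contractors:

    def contract_net_round(task, servers):
        """One announce-bid-award cycle: the client announces a task,
        capable servers bid, and the cheapest bidder wins the contract."""
        bids = [(server["cost"](task), server["name"])
                for server in servers if server["can_do"](task)]
        if not bids:
            return None                      # no server bid on the task
        cost, winner = min(bids)             # award to the lowest bid
        return winner

    servers = [
        {"name": "s1", "can_do": lambda t: True, "cost": lambda t: len(t)},
        {"name": "s2", "can_do": lambda t: t != "hard", "cost": lambda t: 1},
    ]
    print(contract_net_round("easy", servers))  # -> "s2"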

Kornfeld and Hewitt (1981) [16] propose that a community of programs called sprites can be used to allocate resources. Sprites can stifle other sprites whose work has become known to be unnecessary. They show that sharing intermediate results and managing computation in this way improves time performance, an effect they call combinatorial implosion.
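
Stifling amounts to cancelling work made redundant by another sprite's intermediate result. The sketch below uses a shared flag among Python threads as an analogy; it is not Kornfeld and Hewitt's Ether system:

    import threading

    def first_result(sprites):
        """Sketch of stifling: sprites share a 'found' event; once one
        posts an answer, the others notice and abandon their now-useless
        work, shrinking the overall search. Each sprite is a generator
        function yielding None until it finds an answer."""
        found = threading.Event()
        results = []

        def run(sprite):
            for candidate in sprite():
                if found.is_set():         # someone else succeeded: stifle
                    return
                if candidate is not None:  # this sprite found an answer
                    results.append(candidate)
                    found.set()
                    return

        threads = [threading.Thread(target=run, args=(s,)) for s in sprites]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return results[0] if results else None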

Social networks

Second-life communities

www.platial.com can be thought of as a www.wikipedia.org for places. Users annotate places of interest and submit photos of them.

Contributing

Geo-coding

Google Earth supports the Keyhole Markup Language (KML), with which users geo-code information onto the latitude-longitude grid. Other users can then access the published KML information.
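
For illustration, a minimal geo-coded placemark can be generated in a few lines of Python; the place name and coordinates below are made up:

    import xml.etree.ElementTree as ET

    def placemark_kml(name, lon, lat):
        """Build a minimal KML document holding one geo-coded placemark.
        KML orders coordinates as longitude,latitude[,altitude]."""
        kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
        placemark = ET.SubElement(kml, "Placemark")
        ET.SubElement(placemark, "name").text = name
        point = ET.SubElement(placemark, "Point")
        ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
        return ET.tostring(kml, encoding="unicode")

    print(placemark_kml("Example village", 24.0, 13.5))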

For example, the U.S. Holocaust Memorial Museum works with Google to geo-code 1,600 Darfur villages ravaged by violence [17]. This allows people to connect more closely with distant tragedies and may build empathy for finding solutions. Interestingly, U.S. government restrictions on software exports mean that the Sudanese themselves cannot download Google Earth; see Roush (2007) [18].

Philosophy of mind

There is an emerging consensus among cognitive scientists and artificial intelligence researchers that cognitive processes are computational; see Fodor (2001) [19].

Consciousness and understanding

No one seems to understand how consciousness and understanding emerge from our billions of wet neurons.

Philosophers such as Searle (1980) [20] argue that computational processes do not produce understanding. Searle's argument is a thought experiment that modifies the Turing test (Turing, 1950 [21]): suppose Searle, who does not understand Chinese, is locked in a room with a program that passes a Turing test in Chinese, and he acts as the executor of the program, carrying out its instructions by hand. The room then acts as if it understands Chinese, but nothing inside it does.

Dennett (1971) [22] proposes that wet neurons do lead to consciousness, even though no one knows how. He suggests that this could be because, unlike, say, a "hundred kilos of yeast," neurons in the brain perform different tasks.

Of course, it is reasonable to suppose that consciousness is outside the purview of artificial intelligence. But for us, it is interesting to see how a collective intelligence can inform us on this issue.

Creativity

Another area that has proved difficult is creativity. Fodor (1983) [23] observes that analogical reasoning is probably one of the most important components of intelligence, but one that nobody understands well. Again, it is interesting to see how a collective intelligence can inform us on this issue.

Human computation

The idea is that some parts of computing can be done better by humans (e.g., visual recognition of images) and some by computers. An important variant today motivates the public to contribute to the collective system through games. See the section on Crowdsourcing.

Mechanisms

Semi-formal systems

Malone et al. (1986) [24], in introducing the idea of semi-formal systems, argue that the goal of artificial intelligence could be re-framed in exactly that way: work (their example is information filtering and retrieval) is done seamlessly by both humans and computers.

Intrinsic motivation with games

Malone (1980) [25] addresses two questions: what makes computer games fun, and what does this mean for general heuristics for making learning fun? He examines games like Darts and Breakout and concludes that work and learning could be made more fun if they incorporate:

  • Challenge: the goal is clear, with varying levels of difficulty and performance feedback
  • Fantasy: the interface is emotionally appealing and there are metaphors with the physical world
  • Curiosity: there is an optimal level of informational complexity (audiovisual effects to reduce complexity, some randomness, some humor), capitalizing on users' need to have "well-formed" knowledge structures.

Incentives

Chklovski (2005) [26] argues that incentives are necessary, but that intense incentives can push participants so aggressively that they start to think creatively about gaming the system. He claims that his "1000 paraphrase" game avoids this because, "rather than reward for pure volume, the approach rewards for correctly guessing obfuscated previously known answers."
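
That scoring rule can be sketched as follows, assuming the system holds a set of previously known answers for each prompt (the function and data shapes are illustrative):

    def score_submission(guesses, known_answers):
        """Reward-for-matching sketch: points come only from guesses that
        match previously known (obfuscated) answers, so submitting sheer
        volume earns nothing."""
        return sum(1 for guess in guesses if guess in known_answers)

    print(score_submission(["quick", "speedy", "zzz"], {"speedy", "rapid"}))  # -> 1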

Example applications

Some pioneering examples are below.

Open Mind Project

Stock's (1999) Open Mind Project [27] aims to have netizens contribute data for algorithms built by developers. In Open Mind Speech, the goal is to collect speech data from netizens.

Image inspection

  • Neufeld et al. (2003) adopt the paradigm of a partnership between artificial and human intelligence to solve the problem of detecting edges in images.
  • Search for Jim Gray [28]. When Jim Gray and his boat Tenacious disappear off the California coast in January 2007, an online effort is undertaken to inspect over 560,000 satellite images for the boat and its skipper. Unfortunately, the effort does not succeed in locating either.
  • ESP Game [29] (see von Ahn and Dabbish (2004) [30]), Google Image Labeler. The goal is to make image tagging fun enough that netizens tag images willingly. Players compete in pairs; a pair earns points for an image whenever both players produce (synchronously or not) a common tag for it. For each image, tags already produced by a pre-set number of agreeing pairs are marked taboo, so that future pairs must produce new tags for the image. An image is considered fully tagged when it is judged no longer fun to tag or a pre-set number of taboo tags has been reached. In the four months after its launch on August 9, 2003, the game attracted 13,630 players, who produced 3.89 tags per minute. (A sketch of the matching logic appears after this list.)
  • Phetch; see von Ahn et al. (2006) [31]. The goal is to produce sentence labels rather than word labels. A player (the Describer) describes an image, and the others (Seekers) use the description to search for and guess the Describer's image. A wrong guess earns penalty points; the right guess earns both the Seeker and the Describer points and lets the Seeker become the next Describer.
  • Peekaboom; see von Ahn et al. (2006) [32]. A player (Boom) tags an image with a label and then asks another player (Peek) to guess the word by looking at pixels gradually revealed by Boom. Peek-Boom pairs compete with each other, and a correct guess earns the pair points. Since Boom wants Peek to guess as quickly as possible, she reveals the most essential parts of the image first. This process demarcates objects (e.g., a cow) from the rest of an image, thereby attaching metadata (pixel locations) to object labels.
  • LabelMe; see Russell et al. (2005) [33]. This also asks users to annotate images. It lacks Peekaboom's game element but claims higher-quality annotations of everyday scenes rather than small pictures.
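
As noted in the ESP Game entry above, the agreement-plus-taboo logic can be sketched as follows; the data structures and taboo threshold are illustrative assumptions, not the published implementation:

    from collections import defaultdict

    def record_guesses(image, tags_a, tags_b, taboo, agreed, taboo_threshold=3):
        """Sketch of ESP-Game matching: a pair scores on the first tag that
        both players produce and that is not yet taboo for the image; tags
        agreed on by enough pairs become taboo, pushing future pairs to
        produce new tags."""
        for tag in tags_a:
            if tag in tags_b and tag not in taboo[image]:
                agreed[image][tag] = agreed[image].get(tag, 0) + 1
                if agreed[image][tag] >= taboo_threshold:
                    taboo[image].add(tag)    # enough pairs agreed; retire the tag
                return tag                   # the pair's match (both score points)
        return None                          # no agreement on this image yet

    taboo, agreed = defaultdict(set), defaultdict(dict)
    print(record_guesses("img1", ["cow", "field"], ["field", "sky"], taboo, agreed))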

Collecting user inputs

  • Open Directory Project [34]. The Open Directory Project is the largest, most comprehensive human-edited directory of the Web. It is constructed and maintained by a vast, global community of volunteer editors.
  • WIKIPROTEINS [35]. A collaborative website focusing on proteins and their role in biology and medicine. See [36].
  • Wikipedia [37]. Collects facts (not for sale). A review by the New Yorker is at [38].
  • DreamsTime [39], iStockPhoto [40], ShutterStock [41]. These collect photos (for sale).
  • Friendly Knowledge Capture at MIT Media Lab [42]. This consists of several subprojects:
    • Open Mind Common Sense [43]. Singh et al. (2002) [44] describe how over 14,500 netizens participate in entering 700,000 sentences into ConceptNet [45]. Open Mind Commons [46] is a recent reincarnation.
    • Open Mind Experiences [47]. Singh and Barry (2003) [48] describe an evolution of the Open Mind Common Sense project, in which the objective is now to capture scripts rather than assertions.
    • LifeNet Acquisition System [49]. This is an application of the corpus from the Open Mind Common Sense project. It leverages a game experience for netizens to assess propositional statements as true or false. The interface is called StoryNet [50].
  • Verbosity; see von Ahn et al. (2006) [51]. This is set up as a game in which a Narrator gets a word from the system and sends hints to a Guesser about what that word is. The two players form a pair that competes with other pairs, and once the Guesser guesses right, both Narrator and Guesser get a point. The key is that the hints issued by Narrators are sentences that are true of the word.
  • Learner (reported in Chklovski (2003) [52]) and Learner2 are similar systems without the game element. The first gathers 3,400 volunteers, who enter 217,971 assertions about daily life.
  • Others: Internet Movie Database [53].

Secure computing

In general, this area incorporates "AI-complete" problems (easy for humans, hard for machines) to boost security.

Gentry et al. (2005) [54] propose that the payment for human computation can be an online currency.

Golle and Mironov (2001) [55] provide a framework in which the reward for computation (such as human computation) can be used as payment currency.

Collaborative filtering

Spamnet [56] and Razor [57] use human votes to determine whether an email message or source is spam. Zhou et al. (2003) [58], in an experiment involving 9,589 non-spam and 14,925 spam messages, find that the false-positive error rate of this human-computer system is negligible, on the order of 10^-8.
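
The core voting step can be sketched as a threshold rule over human reports for one message; this is a simplification, and it ignores details such as reporter reputations:

    def is_spam(votes, threshold=0.5, min_votes=3):
        """Sketch of vote-based spam detection: votes holds True/False human
        reports for one message; flag it as spam only once enough reporters
        have weighed in and a majority say spam."""
        if len(votes) < min_votes:
            return False                 # too few reports to decide
        return sum(votes) / len(votes) > threshold

    print(is_spam([True, True, False, True]))  # -> True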

Distributed tasks

  • The Distributed Proofreaders project [59] is originally set up to reduce OCR errors in books scanned under Project Gutenberg [60], but is now a stand-alone entity. Volunteers correct these books in several passes, matching OCR text against the scanned pages. There is no commitment required of volunteers.
  • Yahoo! attribute extraction, categorization, and entity resolution (ACE) applications; see Su et al. (2007) [61]. The authors describe System M, in which users can submit tasks to be completed by other users. They find that, in an experiment involving 81 questions selected from Yahoo! Answers, 45% of the answers provided by users are useful.
  • Amazon.com's Mechanical Turk. This is an exchange in which requesters post micro-tasks that pay small amounts when completed by others. Interestingly, it is claimed that "half of the people signed up are trying to pull a scam" (Howe, 2006 [62]).
  • SETI@HOME [63].
  • The Great Internet Mersenne Prime Search [64]

Synthesizing and visualizing collective inputs

  • History Flow; see Viegas et al. (2004) [65]. This explores social visualization, providing a way to visualize collaborations and conflicts in collective work. The authors look at Wikipedia in particular and conclude that it works because:
    • Watchlists provide a mechanism for rapid response to what watchers believe is vandalism
    • Talk pages allow meta-discussions off the main content pages
    • The neutral point of view stance helps clarify what is acceptable and what is not.
  • Photosynth

See also

Jagannathan and Dodhiawala (1986), Distributed artificial intelligence: an annotated bibliography [66]

Chaib-Draa et al. (1992), Trends in distributed artificial intelligence [67]

Open Mind Project [68]
