The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense, the National Science Foundation, and the National Aeronautics and Space Administration (NASA). Projects funded by these agencies and pursued at university-based research laboratories yielded an extensive pool of talented personnel in fields such as computer graphics, simulation, and networked environments and established links between academic, military, and commercial work. The history of this technological development, and the social context in which it took place, is the subject of this article.
Early work
Artists, performers, and entertainers have always been interested in techniques for creating imaginative worlds, setting narratives in fictional spaces, and deceiving the senses. Numerous precedents for the suspension of disbelief in an artificial world in artistic and entertainment media preceded virtual reality. Illusionary spaces created by paintings or views have been constructed for residences and public spaces since antiquity, culminating in the monumental panoramas of the 18th and 19th centuries. Panoramas blurred the visual boundaries between the two-dimensional images displaying the main scenes and the three-dimensional spaces from which these were viewed, creating an illusion of immersion in the events depicted. This image tradition stimulated the creation of a series of media—from futuristic theatre designs, stereopticons, and 3-D movies to IMAX movie theatres—over the course of the 20th century to achieve similar effects. For example, the Cinerama widescreen film format, originally called Vitarama when invented for the 1939 New York World’s Fair by Fred Waller and Ralph Walker, originated in Waller’s studies of vision and depth perception. Waller’s work led him to focus on the importance of peripheral vision for immersion in an artificial environment, and his goal was to devise a projection technology that could duplicate the entire human field of vision. The Vitarama process used multiple cameras and projectors and an arc-shaped screen to create the illusion of immersion in the space perceived by a viewer. Though Vitarama was not a commercial hit until the mid-1950s (as Cinerama), the Army Air Corps successfully used the system during World War II for anti-aircraft training under the name Waller Flexible Gunnery Trainer—an example of the link between entertainment technology and military simulation that would later advance the development of virtual reality.
Sensory stimulation was a promising method for creating virtual environments before the use of computers. After the release of a promotional film called This Is Cinerama (1952), the cinematographer Morton Heilig became fascinated with Cinerama and 3-D movies. Like Waller, he studied human sensory signals and illusions, hoping to realize a “cinema of the future.” By late 1960, Heilig had built an individual console with a variety of inputs—stereoscopic images, motion chair, audio, temperature changes, odours, and blown air—that he patented in 1962 as the Sensorama Simulator, designed to “stimulate the senses of an individual to simulate an actual experience realistically.” During the work on Sensorama, he also designed the Telesphere Mask, a head-mounted “stereoscopic 3-D TV display” that he patented in 1960. Although Heilig was unsuccessful in his efforts to market Sensorama, in the mid-1960s he extended the idea to a multiviewer theatre concept patented as the Experience Theater and a similar system called Thrillerama for the Walt Disney Company.
The seeds for virtual reality were planted in several computing fields during the 1950s and ’60s, especially in 3-D interactive computer graphics and vehicle/flight simulation. Beginning in the late 1940s, Project Whirlwind, funded by the U.S. Navy, and its successor project, the SAGE (Semi-Automatic Ground Environment) early-warning radar system, funded by the U.S. Air Force, first utilized cathode-ray tube (CRT) displays and input devices such as light pens (originally called “light guns”). By the time the SAGE system became operational in 1957, air force operators were routinely using these devices to display aircraft positions and manipulate related data.
During the 1950s, the popular cultural image of the computer was that of a calculating machine, an automated electronic brain capable of manipulating data at previously unimaginable speeds. The advent of more affordable second-generation (transistor) and third-generation (integrated circuit) computers emancipated the machines from this narrow view, and in doing so it shifted attention to ways in which computing could augment human potential rather than simply substituting for it in specialized domains conducive to number crunching. In 1960 Joseph Licklider, a professor at the Massachusetts Institute of Technology (MIT) specializing in psychoacoustics, posited a “man-computer symbiosis” and applied psychological principles to human-computer interactions and interfaces. He argued that a partnership between computers and the human brain would surpass the capabilities of either alone. As founding director of the new Information Processing Techniques Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA), Licklider was able to fund and encourage projects that aligned with his vision of human-computer interaction while also serving priorities for military systems, such as data visualization and command-and-control systems.
Another pioneer was electrical engineer and computer scientist Ivan Sutherland, who began his work in computer graphics at MIT’s Lincoln Laboratory (where Whirlwind and SAGE had been developed). In 1963 Sutherland completed Sketchpad, a system for drawing interactively on a CRT display with a light pen and control board. Sutherland paid careful attention to the structure of data representation, which made his system useful for the interactive manipulation of images. In 1964 he was put in charge of IPTO, and from 1968 to 1976 he led the computer graphics program at the University of Utah, one of DARPA’s premier research centres. In 1965 Sutherland outlined the characteristics of what he called the “ultimate display” and speculated on how computer imagery could construct plausible and richly articulated virtual worlds. His notion of such a world began with visual representation, but it did not end there; he also called for multiple modes of sensory input and output. DARPA sponsored work during the 1960s on output and input devices aligned with this vision, such as the Sketchpad III system by Timothy Johnson, which presented 3-D views of objects; Larry Roberts’s Lincoln Wand, a system for drawing in three dimensions; and Douglas Engelbart’s invention of a new input device, the computer mouse.
Within a few years, Sutherland contributed the technological artifact most often identified with virtual reality, the head-mounted 3-D computer display. In 1967 Bell Helicopter (now part of Textron Inc.) carried out tests in which a helicopter pilot wore a head-mounted display (HMD) that showed video from a servo-controlled infrared camera mounted beneath the helicopter. The camera moved with the pilot’s head, both augmenting his night vision and providing a level of immersion sufficient for the pilot to equate his field of vision with the images from the camera. This kind of system would later be called “augmented reality” because it enhanced a human capacity (vision) in the real world. When Sutherland left DARPA for Harvard University in 1966, he began work on a tethered display for computer images (see photograph). This was an apparatus shaped to fit over the head, with goggles that displayed computer-generated graphical output. Because the display was too heavy to be borne comfortably, it was held in place by a suspension system. Two small CRT displays were mounted in the device, near the wearer’s ears, and mirrors reflected the images to his eyes, creating a stereo 3-D visual environment that could be viewed comfortably at a short distance. The HMD also tracked where the wearer was looking so that correct images would be generated for his field of vision. The viewer’s immersion in the displayed virtual space was intensified by the visual isolation of the HMD, yet other senses were not isolated to the same degree and the wearer could continue to walk around.
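The head-tracking principle at work here (regenerating the image so that it matches wherever the wearer happens to be looking) can be illustrated with a short sketch. The simple yaw-and-pitch model and the function names below are assumptions chosen for illustration, not a reconstruction of Sutherland's apparatus.

```python
import math

def head_rotation(yaw, pitch):
    """Build a 3x3 rotation matrix from tracked head yaw and pitch (radians)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    # Rotation about the vertical axis (yaw) followed by the lateral axis (pitch).
    yaw_m = [[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]]
    pitch_m = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]
    return [[sum(pitch_m[i][k] * yaw_m[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def world_to_view(point, head_pos, yaw, pitch):
    """Express a world-space point in the viewer's head-centred coordinates."""
    r = head_rotation(yaw, pitch)
    d = [point[i] - head_pos[i] for i in range(3)]
    # Apply the inverse (transpose) of the head rotation to the offset vector.
    return [sum(r[j][i] * d[j] for j in range(3)) for i in range(3)]

# Example: a point one metre ahead of a viewer who has turned 30 degrees to the left.
print(world_to_view((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), math.radians(30), 0.0))
```

Every time the tracker reports a new head pose, the virtual scene is re-expressed in these head-centred coordinates and redrawn, which is what keeps the imagery aligned with the wearer's field of vision.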
Education and training
An important area of application for VR systems has always been training for real-life activities. The appeal of simulations is that they can provide training equal or nearly equal to practice with real systems, but at reduced cost and with greater safety. This is particularly the case for military training, and the first significant application of commercial simulators was pilot training during World War II. Flight simulators rely on visual and motion feedback to augment the sensation of flying while seated in a closed mechanical system on the ground. The Link Company, founded by former piano maker Edwin Link, began to construct the first prototype Link Trainers during the late 1920s, eventually settling on the “blue box” design acquired by the Army Air Corps in 1934. The first systems used motion feedback to increase familiarity with flight controls. Pilots trained by sitting in a simulated cockpit, which could be moved hydraulically in response to their actions.
Later versions added a “cyclorama” scene painted on a wall outside the simulator to provide limited visual feedback. Not until the Celestial Navigation Trainer, commissioned by the British government in World War II, were projected film strips used in Link Trainers—still, these systems could only project what had been filmed along a correct flight or landing path, not generate new imagery based on a trainee’s actions. By the 1960s, flight trainers were using film and closed-circuit television to enhance the visual experience of flying. The images could be distorted to generate flight paths that diverged slightly from what had been filmed; sometimes multiple cameras were used to provide different perspectives, or movable cameras were mounted over scale models to depict airports for simulated landings.
Inspired by the controls in the Link flight trainer, Sutherland suggested that such displays include multiple sensory outputs, force-feedback joysticks, muscle sensors, and eye trackers; a user would be fully immersed in the displayed environment and fly through “concepts which never before had any visual representation.” In 1968 he moved to the University of Utah, where he and his colleague David Evans founded Evans & Sutherland Computer Corporation. The new company initially focused on the development of graphics applications, such as scene generators for flight simulator systems. These systems could render scenes at roughly 20 frames per second in the early 1970s, about the minimum frame rate for effective flight training. General Electric Company constructed the first flight simulators with built-in, real-time computer image generation, first for the Apollo program in the 1960s, then for the U.S. Navy in 1972. By the mid-1970s, these systems were capable of generating simple 3-D models with a few hundred polygon faces; they utilized raster graphics (collections of dots) and could model solid objects with textures to enhance the sense of realism (see computer graphics). By the late 1970s, military flight simulators were also incorporating head-mounted displays, such as McDonnell Douglas Corporation’s VITAL helmet, primarily because they required much less space than a projected display. A sophisticated head tracker in the HMD followed a pilot’s eye movements to match computer-generated images (CGI) with his view and handling of the flight controls.
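The frame-rate figure translates directly into a per-frame time budget: at 20 frames per second a scene generator has at most 50 milliseconds to traverse, clip, and draw the scene before the next frame is due. The toy loop below, in modern Python with a placeholder render function, only illustrates that budgeting arithmetic; it is not how the Evans & Sutherland hardware worked.

```python
import time

TARGET_FPS = 20
FRAME_BUDGET = 1.0 / TARGET_FPS   # 50 ms per frame at 20 frames per second

def render_scene():
    """Placeholder for geometry traversal, clipping, and drawing."""
    time.sleep(0.01)              # pretend the scene takes 10 ms to draw

start = time.perf_counter()
for frame in range(5):            # a short demonstration run
    t0 = time.perf_counter()
    render_scene()
    elapsed = time.perf_counter() - t0
    # Sleep away whatever is left of the 50 ms budget; overruns lower the frame rate.
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed)
print(f"average frame time: {(time.perf_counter() - start) / 5 * 1000:.1f} ms")
```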
Advances in flight simulators, human-computer interfaces, and augmented reality systems pointed to the possibility of immersive, real-time control systems, not only for research or training but also for improved performance. Since the 1960s, electrical engineer Thomas Furness had been working on visual displays and instrumentation in cockpits for the U.S. Air Force. By the late 1970s, he had begun development of virtual interfaces for flight control, and in 1982 he demonstrated the Visually Coupled Airborne Systems Simulator—better known as the Darth Vader helmet, for the armoured archvillain of the popular movie Star Wars. From 1986 to 1989, Furness directed the air force’s Super Cockpit program. The essential idea of this project was that the capacity of human pilots to handle spatial information depended on these data being “portrayed in a way that takes advantage of the human’s natural perceptual mechanisms.” Applying the HMD to this goal, Furness designed a system that projected information such as computer-generated 3-D maps, forward-looking infrared and radar imagery, and avionics data into an immersive, 3-D virtual space that the pilot could view and hear in real time. The helmet’s tracking system, voice-actuated controls, and sensors enabled the pilot to control the aircraft with gestures, utterances, and eye movements, translating immersion in a data-filled virtual space into control modalities. The more natural perceptual interface also reduced the complexity and number of controls in the cockpit. The Super Cockpit thus realized Licklider’s vision of man-machine symbiosis by creating a virtual environment in which pilots flew through data. Beginning in 1987, British Aerospace (now part of BAE Systems) also used the HMD as the basis for a similar training simulator, known as the Virtual Cockpit, that incorporated head, hand, and eye tracking, as well as speech recognition.
Sutherland and Furness brought the notion of simulator technology from real-world imagery to virtual worlds that represented abstract models and data. In these systems, visual verisimilitude was less important than immersion and feedback that engaged all the senses in a meaningful way. This approach had important implications for medical and scientific research. Project GROPE, started in 1967 at the University of North Carolina by Frederick Brooks, was particularly noteworthy for the advancements it made possible in the study of molecular biology. Brooks sought to enhance perception and comprehension of the interaction of a drug molecule with its receptor site on a protein by creating a window into the virtual world of molecular docking forces. He combined wire-frame imagery to represent molecules and physical forces with “haptic” (tactile) feedback mediated through special hand-grip devices to arrange the virtual molecules into a minimum binding energy configuration. Scientists using this system felt their way around the represented forces like flight trainees learning the instruments in a Link cockpit, “grasping” the physical situations depicted in the virtual world and hypothesizing new drugs based on their manipulations. During the 1990s, Brooks’s laboratory extended the use of virtual reality to radiology and ultrasound imaging.
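The coupling Brooks exploited (the force fed back to the hand grip is the negative gradient of an interaction energy) can be sketched in a few lines. The Lennard-Jones potential and the numerical gradient below are stand-ins chosen for illustration; they are not Project GROPE's actual molecular-mechanics model.

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Pairwise interaction energy between two atoms a distance r apart."""
    s6 = (sigma / r) ** 6
    return 4.0 * epsilon * (s6 * s6 - s6)

def haptic_force(r, h=1e-6):
    """Force fed back to the hand grip: the negative numerical gradient of the energy."""
    return -(lennard_jones(r + h) - lennard_jones(r - h)) / (2 * h)

# Pushing the virtual molecule closer than the energy minimum (about 1.12 sigma)
# produces a repulsive force the user can literally feel; farther away, attraction.
for r in (1.0, 1.12, 1.5, 2.0):
    print(f"r = {r:4.2f}  energy = {lennard_jones(r):+.3f}  force = {haptic_force(r):+.3f}")
```

Feeling where that force vanishes is, in miniature, what "arranging the virtual molecules into a minimum binding energy configuration" meant for GROPE's users.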
Virtual reality was extended to surgery through the technology of telepresence, the use of robotic devices controlled remotely through mediated sensory feedback to perform a task. The foundation for virtual surgery was the expansion during the 1970s and ’80s of microsurgery and other less invasive forms of surgery. By the late 1980s, microcameras attached to endoscopic devices relayed images that could be shared among a group of surgeons looking at one or more monitors, often in diverse locations. In the early 1990s, a DARPA initiative funded research to develop telepresence workstations for surgical procedures. This was Sutherland’s “window into a virtual world,” with the added dimension of a level of sensory feedback that could match a surgeon’s fine motor control and hand-eye coordination. The first telesurgery equipment was developed at SRI International in 1993; the first robotic surgery was performed in 1998 at the Broussais Hospital in Paris.
Entertainment
As virtual worlds became more detailed and immersive, people began to spend time in these spaces for entertainment, aesthetic inspiration, and socializing. Research that conceived of virtual places as fantasy spaces, focusing on the activity of the subject rather than replication of some real environment, was particularly conducive to entertainment. Beginning in 1969, Myron Krueger of the University of Wisconsin created a series of projects on the nature of human creativity in virtual environments, which he later called artificial reality. Much of Krueger’s work, especially his VIDEOPLACE system, processed interactions between a participant’s digitized image and computer-generated graphical objects. VIDEOPLACE could analyze and process the user’s actions in the real world and translate them into interactions with the system’s virtual objects in various preprogrammed ways. Different modes of interaction with names like “finger painting” and “digital drawing” suggest the aesthetic dimension of this system. VIDEOPLACE differed in several aspects from training and research simulations. In particular, the system reversed the emphasis from the user perceiving the computer’s generated world to the computer perceiving the user’s actions and converting these actions into compositions of objects and space within the virtual world. With the emphasis shifted to responsiveness and interaction, Krueger found that fidelity of representation became less important than the interactions between participants and the rapidity of response to images or other forms of sensory input.
The ability to manipulate virtual objects and not just see them is central to the presentation of compelling virtual worlds—hence the iconic significance of the data glove in the emergence of VR in commerce and popular culture. Data gloves relay a user’s hand and finger movements to a VR system, which then translates the wearer’s gestures into manipulations of virtual objects. The first data glove, developed in 1977 at the University of Illinois for a project funded by the National Endowment for the Arts, was called the Sayre Glove after one of the team members. In 1982 Thomas Zimmerman invented the first optical glove, and in 1983 Gary Grimes at Bell Laboratories constructed the Digital Data Entry Glove, the first glove with sufficient flexibility and tactile and inertial sensors to monitor hand position for a variety of applications, such as providing an alternative to keyboard input for data entry.
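Conceptually, a data glove is a small signal-processing pipeline: calibrate raw flex-sensor readings, convert them to joint angles, and classify a gesture such as a grab. The sketch below uses assumed sensor ranges and a deliberately crude classifier; it is a generic illustration, not the calibration or protocol of any of the gloves named above.

```python
# Raw flex-sensor readings for five fingers, e.g. 0 (straight) to 1023 (fully bent).
RAW_STRAIGHT, RAW_BENT = 120, 900   # assumed per-glove calibration values

def finger_angles(raw_readings, max_angle=90.0):
    """Map raw flex readings to approximate joint angles in degrees."""
    angles = []
    for raw in raw_readings:
        t = (raw - RAW_STRAIGHT) / (RAW_BENT - RAW_STRAIGHT)
        angles.append(max(0.0, min(1.0, t)) * max_angle)
    return angles

def is_grab(angles, threshold=60.0):
    """A crude gesture classifier: all fingers bent past the threshold means 'grab'."""
    return all(a > threshold for a in angles)

readings = [850, 880, 870, 860, 700]          # a mostly closed fist
angles = finger_angles(readings)
print(angles, "grab" if is_grab(angles) else "open")
```

A VR system polls this pipeline continuously, so a detected "grab" near a virtual object can be translated into picking the object up.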
Zimmerman’s glove would have the greatest impact. He had been thinking for years about constructing an interface device for musicians based on the common practice of playing “air guitar”—in particular, a glove capable of tracking hand and finger movements could be used to control instruments such as electronic synthesizers. He patented an optical flex-sensing device (which used light-conducting fibres) in 1982, one year after Grimes patented his glove-based computer interface device. By then, Zimmerman was working at the Atari Research Center in Sunnyvale, California, along with Scott Fisher, Brenda Laurel, and other VR researchers who would be active during the 1980s and beyond. Jaron Lanier, another researcher at Atari, shared Zimmerman’s interest in electronic music. Beginning in 1983, they worked together on improving the design of the data glove, and in 1985 they left Atari to start up VPL Research; its first commercial product was the VPL DataGlove.
By 1985, Fisher had also left Atari to join NASA’s Ames Research Center at Moffett Field, California, as founding director of the Virtual Environment Workstation (VIEW) project. The VIEW project put together a package of objectives that summarized previous work on artificial environments, ranging from creation of multisensory and immersive “virtual environment workstations” to telepresence and teleoperation applications. Influenced by a range of prior projects that included Sensorama, flight simulators, and arcade rides, and surprised by the expense of the air force’s Darth Vader helmets, Fisher’s group focused on building low-cost, personal simulation environments. While the objective of NASA was to develop telerobotics for automated space stations in future planetary exploration, the group also considered the workstation’s use for entertainment, scientific, and educational purposes. The VIEW workstation, called the Virtual Visual Environment Display when completed in 1985, established a standard suite of VR technology that included a stereoscopic head-coupled display, head tracker, speech recognition, computer-generated imagery, data glove, and 3-D audio technology.
The VPL DataGlove was brought to market in 1987, and in October of that year it appeared on the cover of Scientific American (see photograph). VPL also spawned a full-body, motion-tracking system called the DataSuit, a head-mounted display called the EyePhone, and a shared VR system for two people called RB2 (“Reality Built for Two”). VPL declared June 7, 1989, “Virtual Reality Day.” On that day, both VPL and Autodesk publicly demonstrated the first commercial VR systems. The Autodesk VR CAD (computer-aided design) system was based on VPL’s RB2 technology but was scaled down for operation on personal computers. The marketing splash introduced Lanier’s new term virtual reality as a realization of “cyberspace,” a concept introduced in science fiction writer William Gibson’s Neuromancer in 1984. Lanier, the dreadlocked chief executive officer of VPL, became the public celebrity of the new VR industry, while announcements by Autodesk and VPL let loose a torrent of enthusiasm, speculation, and marketing hype. Soon it seemed that VR was everywhere, from the Mattel/Nintendo PowerGlove (1989) to the HMD in the movie The Lawnmower Man (1992), the Nintendo Virtual Boy game system (1995), and the television series VR5 (1995).
Numerous VR companies were founded in the early 1990s, most of them in Silicon Valley, but by mid-decade most of the energy unleashed by the VPL and Autodesk marketing campaigns had dissipated. The VR configuration that took shape over a span of projects leading from Sutherland to Lanier—HMD, data gloves, multimodal sensory input, and so forth—failed to have a broad appeal as quickly as the enthusiasts had predicted. Instead, the most visible and successfully marketed products were “location-based entertainment” systems rather than personal VR systems. These VR arcades and simulators, designed by teams from the game, movie, simulation, and theme park industries, combined the attributes of video games, amusement park rides, and highly immersive storytelling. Perhaps the most important of the early projects was Disneyland’s Star Tours, an immersive flight simulator ride based on the Star Wars movie series and designed in collaboration with producer George Lucas’s Industrial Light & Magic. Disney had long built themed rides utilizing advanced technology, such as animatronic characters—notably in Pirates of the Caribbean, an attraction originally installed at Disneyland in 1967. Star Tours utilized simulated motion and special-effects technology, mixing techniques learned from Hollywood films and military flight simulators with strong story lines and architectural elements that shaped the viewers’ experience from the moment they entered the waiting line for the attraction. After the opening of Star Tours in 1987, Walt Disney Imagineering embarked on a series of projects to apply interactive technology and immersive environments to ride systems, including 3-D motion-picture photography used in Honey, I Shrunk the Audience (1995), the DisneyQuest “indoor interactive theme park” (1998), and the multiplayer-gaming virtual world, Toontown Online (2001).
In 1990, Virtual World Entertainment opened the first BattleTech emporium in Chicago. Modeled loosely on the U.S. military’s SIMNET system of networked training simulators, BattleTech centres put players in individual “pods,” essentially cockpits that served as immersive, interactive consoles for both narrative and competitive game experiences. All the vehicles represented in the game were controlled by other players, each in his own pod and linked to a high-speed network set up for a simultaneous multiplayer experience. The player’s immersion in the virtual world of the competition resulted from a combination of elements, including a carefully constructed story line, the physical architecture of the arcade space and pod, and the networked virtual environment. During the 1990s, BattleTech centres were constructed in other cities around the world, and the BattleTech franchise also expanded to home electronic games, books, toys, and television.
While the Disney and Virtual World Entertainment projects were the best-known instances of location-based VR entertainments, other important projects included Iwerks Entertainment’s Turbo Tour and Turboride 3-D motion simulator theatres, first installed in San Francisco in 1992; motion-picture producer Steven Spielberg’s Gameworks arcades, the first of which opened in 1997 as a joint project of Universal Studios, Sega Corporation, and Dreamworks SKG; many individual VR arcade rides, beginning with Sega Arcade’s R360 gyroscope flight simulator, released in 1991; and, finally, Visions of Reality’s VR arcades, the spectacular failure of which contributed to the bursting of the investment bubble for VR ventures in the mid-1990s.
Living in virtual worlds
By the beginning of 1993, VPL had closed its doors and pundits were beginning to write of the demise of virtual reality. Despite the collapse of efforts to market VR workstations in the configuration stabilized at VPL and NASA, virtual world, augmented reality, and telepresence technologies were successfully launched throughout the 1990s and into the 21st century as platforms for creative work, research spaces, games, training environments, and social spaces. Military and medical needs also continued to drive these technologies through the 1990s, often in partnership with academic institutions or entertainment companies. With the rise of the Internet, attention shifted to the application of networking technology to these projects, bringing a vital social dimension to virtual worlds. People were learning to live in virtual spaces.
The designers of NASA’s Visual Environment Display workstation cited the goal of putting viewers inside an image; this meant figuratively putting users inside a computer by literally putting them inside an assemblage of input and output devices. By the early 1990s, Mark Weiser at Xerox PARC had begun to articulate a research program that instead sought to introduce computers into the human world. In an article titled “The Computer for the 21st Century,” published in Scientific American (1991), Weiser introduced the concept of ubiquitous computing. Arguing that “the most profound technologies are those that disappear” by weaving “themselves into the fabric of everyday life until they are indistinguishable from it,” he proposed that future computing devices would outnumber people—embedded in real environments, worn on bodies, and communicating with each other through personal virtual agents. These computers would be so natural that human users would not need to think about them, thus inaugurating an era of “calm technology.” If Weiser’s ubiquitous computing is thought of as complementary rather than opposed to VR, one can see traces of his ideas in a variety of post-VR systems.
A large group of systems involved projecting images in physical spaces more natural than a VR workstation. In 1992 researchers from the University of Illinois at Chicago presented the first Cave Automatic Virtual Environment (CAVE). CAVE was a VR theatre, a cube with 10-foot-square walls onto which images were projected so that users were surrounded by sights and sounds. One or more people wearing lightweight stereoscopic glasses walked freely in the room, their head and eye movements tracked to adjust the imagery, and they interacted with 3-D virtual objects by manipulating a wand-like device with three buttons. The natural field of vision of anyone in a CAVE was filled with imagery, adding to the sense of immersion, but the environment allowed greater freedom of movement than VR workstations, and several people could share the space and discuss what they saw.
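What distinguishes a CAVE wall from an ordinary screen is that the projection frustum is recomputed every frame from the tracked head position relative to the fixed wall, so the perspective stays correct as the viewer walks around. The sketch below computes such an off-axis (asymmetric) frustum for a single wall; the coordinate conventions and dimensions are assumptions for illustration, not the CAVE's published implementation.

```python
def off_axis_frustum(eye, wall_left, wall_right, wall_bottom, wall_top, wall_z, near):
    """Asymmetric frustum bounds at the near plane for a wall at z = wall_z,
    seen from a tracked eye position (all coordinates in the wall's frame)."""
    dist = wall_z - eye[2]                 # distance from the eye to the wall plane
    scale = near / dist                    # similar-triangles scaling to the near plane
    left   = (wall_left   - eye[0]) * scale
    right  = (wall_right  - eye[0]) * scale
    bottom = (wall_bottom - eye[1]) * scale
    top    = (wall_top    - eye[1]) * scale
    return left, right, bottom, top        # feed these to a glFrustum-style call

# A 10-foot (3.05 m) wall 1.5 m in front of a viewer standing off-centre to the left.
print(off_axis_frustum(eye=(-0.8, 0.1, 0.0),
                       wall_left=-1.525, wall_right=1.525,
                       wall_bottom=0.0, wall_top=3.05,
                       wall_z=1.5, near=0.1))
```

Doing this separately for each eye, each wall, and each tracked viewer position is what lets several people stand inside the cube and still see a coherent 3-D scene.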
Other examples of more natural virtual spaces included the Virtual Reality Responsive Workbench, developed in the mid-1990s by the U.S. Naval Research Laboratory and collaborating institutions. This system projected stereoscopic 3-D images onto a horizontal tabletop display viewed through shutter glasses. With data gloves and a stylus, researchers could interact with the displayed image, which might represent data or a human body for scientific or medical applications. The shift to projected VR environments in artistic and scientific work put aside the bulky VR helmets of the 1980s in favour of lightweight eyeglasses, wearable sensors, and greater freedom of movement.
Another important application of VR during the 1990s was social interaction in virtual worlds. Military simulation and multiplayer networked gaming led the way. Indeed, the first concerted efforts by the military to tap the potential of computer-based war gaming and simulation had taken shape in the mid-1970s. During the 1980s, the increasing expense of traditional (live) exercises focused attention on the resource efficiency of computer-based simulations. The most important networked virtual environment to come out of this era was the DARPA-funded Simulator Networking (SIMNET) project, begun in 1983 under the direction of Jack Thorpe. SIMNET was a network of simulators (armoured vehicles and helicopters, initially) that were linked together for collective training. It differed from previous stand-alone simulator systems in two important respects. First, because the training objectives included command and control, the design focused on effect rather than physical fidelity; psychological or operational aspects of battle, for example, required only selective verisimilitude in cabinet design or computer-generated imagery. Second, by linking together simulators, SIMNET created a network not just of physical connections but also of social interactions between players. Aspects of the virtual world emerged from social interactions between participants that had not been explicitly programmed into the computer-generated environment. These interactions between participants were usually of greater relevance to collective training than anything an individual simulator station could provide. In gaming terms, player-versus-player interactions became as important as player-versus-environment interactions.
SIMNET was followed by a series of increasingly sophisticated networked simulations and projects.
Important moments included The Battle of 73 Easting (1992), a fully 3-D simulation based on SIMNET of a key armoured battle in the Persian Gulf War; the approval of a standard protocol for Distributed Interactive Simulation in 1993; and the U.S. Army’s Synthetic Theater of War demonstration project (1997), a large-scale distributed simulation of a complete theatre battle capable of involving thousands of participants.
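The networking idea running from SIMNET through Distributed Interactive Simulation is that each simulator owns its own vehicle, periodically broadcasts a compact entity-state update, and dead-reckons everyone else's vehicles between updates. The record layout and port below are simplified assumptions for illustration, not the actual SIMNET or DIS protocol data units.

```python
import struct, socket

# A simplified entity-state record: id, position (x, y, z), and velocity (vx, vy, vz).
ENTITY_STATE = struct.Struct("!I6d")

def pack_state(entity_id, pos, vel):
    return ENTITY_STATE.pack(entity_id, *pos, *vel)

def dead_reckon(pos, vel, dt):
    """Extrapolate another player's vehicle between updates."""
    return tuple(p + v * dt for p, v in zip(pos, vel))

# Broadcasting one update over UDP (address and port are illustrative).
msg = pack_state(42, (100.0, 250.0, 0.0), (5.0, 0.0, 0.0))
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# sock.sendto(msg, ("255.255.255.255", 3000))   # enable on a real network

# A receiver that got this update 0.5 seconds ago would draw the vehicle here:
entity_id, x, y, z, vx, vy, vz = ENTITY_STATE.unpack(msg)
print(entity_id, dead_reckon((x, y, z), (vx, vy, vz), dt=0.5))
```

Because only small state records cross the network, hundreds of simulators can share one virtual battlefield without any central machine holding the whole world.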
The other important source of populated virtual worlds was computer games. Early games such as Spacewar! (1962) and Adventure (c. 1975; see Zork) were played via time-shared computers, then over modems, and eventually on networks. Some were based on multiplayer role-playing in the virtual worlds depicted in the game, such as Mines of Moria (c. 1974) from the University of Illinois’s PLATO system and the original “multiuser dungeon,” or MUD, developed by Richard Bartle and Roy Trubshaw at the University of Essex, England, in 1979, which combined Adventure-like exploration of virtual spaces with social interaction. MUDs were shared environments that supported social interaction and performance as well as competitive play among a community of players, many of whom stayed with the game for years. Hundreds of themed multiplayer MUDs, MOOs (object-oriented MUDs), and bulletin-board-system games, or BBS games, provided persistent virtual spaces through the 1980s and ’90s. By the mid-1990s, advances in networking technology and graphics combined to open the door to graphical MUDs and “massively multiplayer” games, such as Ultima Online, EverQuest, and Asheron’s Call, set in virtual worlds populated by thousands of players at a time.
Competitive networked games also provided virtual spaces for interaction between players. In 1993 id Software introduced DOOM, which defined the game genre known as the first-person shooter and established competitive multiplayer gaming as the leading-edge category of games on personal computers. The programming team, led by John Carmack, used highly optimized software rendering to enable rapid movement through an open virtual space as seen from the perspective of each player. DOOM’s fast peer-to-peer networking was perfect for multiplayer gaming, and id’s John Romero devised the “death match” as a mode of fast, violent, and competitive gameplay. The U.S. military also adapted the first-person shooter for training purposes, beginning with a modified version of DOOM, known as Marine Doom, used by the Marine Corps and leading to the adoption of the Unreal game engine for the U.S. Army’s official game, America’s Army (2002), developed by the Modeling, Simulation, and Virtual Environments Institute of the Naval Postgraduate School in Monterey, California. First-person shooters, squad-based tactical games, and real-time strategy games are now routinely developed in parallel military and commercial versions, and these immersive, interactive, real-time training simulations have become a form of mainstream entertainment.
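Peer-to-peer play of this era typically relied on a lockstep scheme: each machine sends only its input commands for every game tic, and all machines advance the same deterministic simulation, so their worlds stay synchronized without exchanging full game state. The miniature example below illustrates that idea generically; it is not id Software's actual netcode.

```python
def simulate_tic(state, all_inputs):
    """Advance the shared, deterministic game state by one tic.
    Here 'state' is just each player's x position and an input is a step left or right."""
    return {player: x + all_inputs.get(player, 0) for player, x in state.items()}

# Each peer records only its own inputs per tic and exchanges them with the others.
inputs_per_tic = [
    {"p1": +1, "p2": 0},    # tic 0: player 1 moves right, player 2 stands still
    {"p1": +1, "p2": -1},   # tic 1
    {"p1": 0,  "p2": -1},   # tic 2
]

# Every machine starts from the same state and applies the same inputs in order,
# so the simulations remain identical without ever sending positions over the wire.
state = {"p1": 0, "p2": 10}
for tic, inputs in enumerate(inputs_per_tic):
    state = simulate_tic(state, inputs)
    print(f"tic {tic}: {state}")
```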