Information related to avatars, virtual humans, and similar topics is available at a number of sites, including:
Ananova

The world's first virtual newsreader on the internet has been launched in London. Computer-generated Ananova is programmed to deliver news 24 hours a day.

Here are some newspaper reports.

W Interactive
W Interactive provides interactive virtual characters for the Internet. They talk, smile, deliver messages, and chat; they speak several languages and can be made to look like any person. You can choose from a gallery of ready-made characters or send in photos of a person. W Interactive Virtual Characters are based on the company's patent-pending WebFace technology.

Lifelike Computer Characters (Microsoft)
Research efforts at Microsoft related to lifelike computer characters.

CMU Oz Project
The Oz Project at CMU is developing technology and art to help artists create high quality interactive drama, based in part on AI technologies. This especially means building believable agents in dramatically interesting micro-worlds.

Multimodal Speech Synthesis (KTH)
"Our approach to audio-visual speech synthesis is based on parametric descriptions of both the acoustic and visual speech modalities, in a text-to-speech framework. The visual speech synthesis uses 3D polygon models, that are parametrically articulated and deformed. Currently, we are working with two different parametric models for visual synthesis : "Holger", which is an extended version of a face model developed by F. Parke (1982), and "Olga", which was developed in the Olga-project. The auditory synthesis is based on a source-filter formant-based generation model. Parameter trajectories for both modalities are calculated by a text-to-speech rule system. In the near future, we are hoping to improve naturalness and intelligibility of the visual synthesis with the help of data obtained by optical analysis of a real speaker's articulation."

The Teleface Project (KTH)
"The Teleface project at the Department of Speech, Music and Hearing, KTH, aims at evaluating the possibilities of using synthetic visual speech in tools for hearing-impaired people. The project will include an effort to implement a demonstrator prototype of a telephone communication aid for the hard of hearing. This device will generate a synthetic face that articulates in synchrony with the telephone speech using only the information contained in the telephone speech signal."

LIG Computer Graphics Lab, EPFL
The Computer Graphics Lab (LIG) at the Swiss Federal Institute of Technology (EPFL) in Lausanne was founded in July 1988 by its director, Professor Daniel Thalmann. The laboratory is mainly involved in Computer Animation and Virtual Reality. Together with MIRALab (University of Geneva), LIG is especially well-known for the creation and animation of virtual actors like synthetic Marilyn Monroe.

MIRALab
A world-leading lab in virtual reality, computer animation, and telepresence.
MIRALab was founded in 1989 as a research lab at the Center of Computer Science (CUI) of the University of Geneva by Prof. Nadia Magnenat Thalmann. The main activities of the lab are in the areas of virtual reality and computer animation. The lab has attracted worldwide attention for its work on the modeling and animation of virtual humans.

Special Report: The First Virtual Humans Conference - Evolution in Cyberspace
Web Techniques online magazine (Nov. 1996, V. 1, #8) article, by Sue Wilcox, about the first Virtual Humans Conference, which was held in 1996 and dealt with the topic of "virtual humans": avatars, models, and integrated digital figures of human beings in virtual worlds and computer animations.

Best Behaviors (Digital Magic - Aug. 96)
Computer Graphics World online magazine (August 1996) article, by Barbara Robertson, about virtual humans.

NSF Challenge Grant : Creating Conversational Agents for Language Training: Technologies for the Next Generation of Interactive Systems
"The challenge addressed by this proposal is the creation of a realistic conversational agent for the domain of language training, and for spoken language training of the hearing impaired in particular. The conversational agent consists of four key technologies, two for output and two for input:
1. a 3D model of a talking face with accurate movement of the articulators (lips, tongue, etc.) and a variety of facial gestures and expressions,
2. natural text-to-speech based on concatenative synthesis, which can quickly learn new voices and is capable of the kind of hyperarticulation used to provide feedback in language training,
3. visual speech recognition using an unobtrusive desktop camera to aid speech recognition and to provide information about the articulators of the student, and
4. auditory speech recognition so the system can understand what is said and also detect mispronunciations.
Research advances are required in all areas to improve the accuracy and to tailor the technology to conversational agents in general and language training in particular. Participatory design experiments will be conducted with schools in the Portland area. The Tucker-Maxon Oral School will use the conversational agents to teach hearing-impaired students to use and understand auditory and visual speech."
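
Items 3 and 4 in the list above imply combining evidence from the visual and acoustic streams. A common way to do this is late (decision-level) fusion of per-class scores; the sketch below is a generic illustration under that assumption, not the project's actual method, and the class names, scores, and weighting are invented.

```python
import numpy as np

def fuse(audio_logp, visual_logp, audio_weight=0.7):
    """Weighted sum of per-class log-probabilities from the two streams.
    The weight reflects how much more the acoustic channel is trusted."""
    return audio_weight * audio_logp + (1.0 - audio_weight) * visual_logp

classes = ["ba", "da", "ga"]
audio_logp  = np.log(np.array([0.5, 0.3, 0.2]))   # acoustic recognizer scores
visual_logp = np.log(np.array([0.2, 0.2, 0.6]))   # lip-reading recognizer scores
fused = fuse(audio_logp, visual_logp)
print(classes[int(np.argmax(fused))])             # jointly most likely class
```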

Intelligent Animated Agents for Interactive Language Training
"This report describes a [...] project [...] to develop interactive learning tools for language training with profoundly deaf children. The tools combine four key technologies: speech recognition, developed at the Oregon Graduate Institute; speech synthesis, developed at the University of Edinburgh and modified at OGI; facial animation, developed at University of California, Santa Cruz; and face tracking and speech reading, developed at Carnegie Mellon University. These technologies are being combined to create an intelligent conversational agent; a three-dimensional face that produces and understands auditory and visual speech. The agent has been incorporated into the CSLU Toolkit, a software environment for developing and researching spoken language systems. We describe our experiences in bringing interactive learning tools to classrooms at the Tucker-Maxon Oral School in Portland, Oregon, and the technological advances that are required for this project to succeed. "

Boston Dynamics Inc. (Marc Raibert)
"Marc Raibert, founder of Boston Dynamics, was formerly Professor of Electrical Engineering and Computer Science at MIT. In previous work, he developed laboratory robots that used control systems for balance and to coordinate their motions. These robots had legs on which they ran, jumped, traveled on simple paths, ran fast (13 mph), climbed a simple stairway, and did simple gymnastic maneuvers. Raibert's approach to automated computer characters is to adapt control systems from robotics, and to combine them with physics-based simulation, to allow the creatures to move with physical realism, without an animator specifying all the details. Boston Dynamics creates automated computer characters and engineering simulations for things that move."

Jack: The Human Modeling and Simulation System
"Jack is a software package developed at the Center for Human Modeling and Simulation at the University of Pennsylvania, and is available from Transom Technologies Inc.

Jack provides a 3D interactive environment for controlling articulated figures. It features a detailed human model and includes realistic behavioral controls, anthropometric scaling, task animation and eJack is a software package developed at the Center for Human Modeling and Simulation at the University of Pennsylvania , and is available from Transom Technologies Inc.
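
At the core of any articulated-figure system like Jack is a joint hierarchy evaluated by forward kinematics. The minimal 2D sketch below shows that idea only; the segment names and lengths are illustrative, and Jack itself uses a full 3D human model with anthropometric scaling.

```python
import math

def forward_kinematics(segments):
    """segments: list of (length, joint_angle_rad) ordered root to tip.
    Returns the 2D position of each joint, with each child frame
    inheriting the accumulated rotation of its parent."""
    x = y = theta = 0.0
    points = [(x, y)]
    for length, angle in segments:
        theta += angle
        x += length * math.cos(theta)
        y += length * math.sin(theta)
        points.append((x, y))
    return points

# A crude two-link "arm": upper arm raised 45 degrees, elbow flexed -60.
arm = [(0.30, math.radians(45)), (0.28, math.radians(-60))]
for name, (px, py) in zip(["shoulder", "elbow", "wrist"], forward_kinematics(arm)):
    print(f"{name}: ({px:.2f}, {py:.2f})")
```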