
Real-Time Multimodal Human-Avatar Interaction

(This work was done while Yun Fu was a visiting student at Motorola Labs, IL, USA.)

Abstract

We present a novel Real-Time Multimodal Human-Avatar Interaction (RTM-HAI) framework with vision-based Remote Animation Control (RAC). The framework is designed for both mobile and desktop avatar-based human-machine or human-human visual communications in real-world scenarios. Using 3-D components stored in the Java Mobile 3D Graphics (M3G) file format, avatar models can be flexibly constructed and customized on the fly on any mobile device or system that supports the M3G standard. For the RAC head tracker, we propose a 2-D real-time face detection/tracking strategy built around an interactive loop, in which detection and tracking complement each other for efficient and reliable face localization that tolerates extreme user movement. With the face location robustly tracked, the RAC head tracker selects a main user and estimates the user's head roll, tilt, yaw, scale, and horizontal and vertical motion to generate avatar animation parameters. The animation parameters can be used either locally or remotely and can be transmitted over the network through a socket connection. In addition, the framework integrates audio-visual analysis and synthesis modules to realize multi-channel runtime animation, visual text-to-speech (TTS), and real-time viseme detection and rendering. We consider the framework an effective design for future industrial products such as humanoid kiosks and human-to-human mobile communication.
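
To make the detection/tracking loop concrete, the sketch below shows the general strategy in Java: a (slower) detector initializes or recovers the face box, and a (faster) tracker carries it frame to frame, falling back to detection whenever tracking fails. This is a minimal illustration, not the paper's implementation; Frame, Rect, FaceDetector, and FaceTracker are hypothetical interfaces, and the failure test is simplified to a null check.

```java
import java.util.Iterator;

// Minimal sketch of an interactive detection/tracking loop (not the paper's
// exact algorithm): detection (re)initializes the tracker, tracking carries
// the face box between frames, and a tracking failure triggers re-detection,
// which is what lets the loop tolerate extreme user movement.
public class InteractiveFaceLoop {
    interface Frame {}
    interface Rect {}
    interface FaceDetector { Rect detect(Frame f); }   // null if no face found
    interface FaceTracker {
        void init(Frame f, Rect face);                 // seed tracker from a detection
        Rect track(Frame f);                           // null on tracking failure
    }

    public void run(FaceDetector detector, FaceTracker tracker, Iterator<Frame> video) {
        Rect face = null;
        while (video.hasNext()) {
            Frame frame = video.next();
            if (face == null) {
                face = detector.detect(frame);         // slow path: full-frame detection
                if (face != null) {
                    tracker.init(frame, face);
                }
            } else {
                face = tracker.track(frame);           // fast path: frame-to-frame tracking
            }
            // When face stays null, the next iteration re-runs detection.
        }
    }
}
```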

Framework
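
As a rough sketch of the M3G-based construction, the code below loads an avatar scene graph with the standard JSR 184 API (javax.microedition.m3g) and applies the six head-pose parameters to a head node. The resource name, the head node's user ID, and the assumption that the file's first root object is the World are all hypothetical placeholders; the actual RTM-HAI model layout may differ.

```java
import java.io.IOException;
import javax.microedition.m3g.Loader;
import javax.microedition.m3g.Node;
import javax.microedition.m3g.Object3D;
import javax.microedition.m3g.World;

// Hedged sketch: load an avatar from a .m3g resource (JSR 184) and drive a
// head node with the six RAC parameters. Resource name and user ID are
// hypothetical placeholders.
public class AvatarModel {
    private static final int HEAD_NODE_ID = 1;            // hypothetical user ID in the .m3g file

    public static World load() throws IOException {
        Object3D[] roots = Loader.load("/avatar_head.m3g"); // hypothetical resource
        return (World) roots[0];                          // assume the first root is the World
    }

    /** Apply roll/tilt/yaw (degrees), translation, and uniform scale to the head. */
    public static void applyPose(World world, float roll, float tilt, float yaw,
                                 float tx, float ty, float scale) {
        Node head = (Node) world.find(HEAD_NODE_ID);
        head.setOrientation(yaw, 0f, 1f, 0f);             // yaw about the vertical axis
        head.postRotate(tilt, 1f, 0f, 0f);                // then tilt about the horizontal axis
        head.postRotate(roll, 0f, 0f, 1f);                // then in-plane roll
        head.setTranslation(tx, ty, 0f);
        head.setScale(scale, scale, scale);
    }
}
```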

Remote Animation Control
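
The animation parameters produced by the head tracker can be consumed locally or streamed to a remote renderer. Below is a minimal sketch of the sending side over a plain TCP socket, assuming a packet layout of six big-endian floats per frame; the host, port, and wire format are illustrative assumptions, not the framework's actual protocol.

```java
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.Socket;

// Hedged sketch of streaming head-pose animation parameters over TCP.
// The six-float frame layout is an assumption, not RTM-HAI's wire format.
public class RacParameterSender implements AutoCloseable {
    private final Socket socket;
    private final DataOutputStream out;

    public RacParameterSender(String host, int port) throws IOException {
        socket = new Socket(host, port);
        socket.setTcpNoDelay(true);              // low latency matters more than batching
        out = new DataOutputStream(socket.getOutputStream());
    }

    /** Send one frame: roll, tilt, yaw (degrees), scale, and x/y motion. */
    public void sendFrame(float roll, float tilt, float yaw,
                          float scale, float x, float y) throws IOException {
        out.writeFloat(roll);
        out.writeFloat(tilt);
        out.writeFloat(yaw);
        out.writeFloat(scale);
        out.writeFloat(x);
        out.writeFloat(y);
        out.flush();                             // push the frame immediately
    }

    public void close() throws IOException {
        out.close();
        socket.close();
    }
}
```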

Avatar-based Human-Machine or Human-Human Visual Communications
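
For the visual TTS and viseme rendering, one standard M3G technique is morph-target blending: each viseme is stored as a morph target on a MorphingMesh, and displaying a viseme amounts to setting its blend weight. The sketch below assumes such a setup; the viseme inventory and the mesh structure are hypothetical, and the paper's audio-visual synthesis pipeline is not reproduced here.

```java
import java.util.Arrays;
import javax.microedition.m3g.MorphingMesh;

// Hedged sketch: render visemes by morph-target blending on a JSR 184
// MorphingMesh, one target per viseme. The viseme set is hypothetical.
public class VisemeRenderer {
    private final MorphingMesh mouth;
    private final float[] weights;

    public VisemeRenderer(MorphingMesh mouth) {
        this.mouth = mouth;
        this.weights = new float[mouth.getMorphTargetCount()];
    }

    /** Show one viseme at full weight, zeroing the others. */
    public void show(int viseme) {
        Arrays.fill(weights, 0f);
        weights[viseme] = 1f;
        mouth.setWeights(weights);               // blend toward the selected target
    }
}
```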

Demos

Avatar TTS: [Henry] [Lily]
Robust Head Tracker: [HeadTracking]
RTM-HAI: [RTM-HAI]

Note: You need to download and install "ffdshow" to view the videos. Turn on your speakers or put on earphones to hear the TTS in some of the videos. Some videos are compressed with MPEG-1 or MPEG-4, so your media player must support these codecs. All the videos here work and were recorded in real time.

References

[1] Yun Fu, Renxiang Li, Thomas S. Huang, and Mike Danielsen, “Real-Time Multimodal Human-Avatar Interaction,” IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 2007.

[2] Yun Fu, Renxiang Li, Thomas S. Huang, and Mike Danielsen, “Real-Time Humanoid Avatar For Multimodal Human-Machine Interaction,” in Proc. IEEE International Conference on Multimedia & Expo (IEEE ICME’07), pp. 991-994, 2007.

Last Update: 09-06-2007. Copyright 2004-2008, Raymond Yun Fu, All Rights Reserved.

Copyright Notice: This publication material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.