John Lin
 
 

 

 
Research
 
Research Interests
  • Vision-based HCI, visual tracking, gesture analysis
  • Graphics, texture synthesis, multiresolution meshes and morphing
  • Statistical learning, pattern recognition
  • Learning human motion, realistic animation
  • High dimensional data analysis
  • Real-time visual tracking systems
Research Projects
3D Model-Based Hand Tracking
  • Model-based approach
  • Modeling articulated motion constraints
  • Tracking articulated motion
  • Tracking rigid motion
  • HCI applications

Model-Based Approach

The model-based approach to hand tracking uses a 3D hand model to estimate the current hand configuration. The idea is to compare the image features produced by projecting the model with those extracted from real hand images; the hand state is recovered from the model configuration that generates the best match. With a well-initialized hand model, this approach can produce a very accurate estimate. In our experiments, we have tried both a 2D patch model and a 3D cylinder model, each defined in 3D space. [More]
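
As a rough illustration of this generate-and-compare idea (a minimal sketch, not the actual implementation), the Python fragment below scores candidate hand configurations by projecting the model and comparing its features with the observed ones; project_model_features is a hypothetical placeholder for the model projection.

    import numpy as np

    def estimate_hand_state(candidates, observed_features, project_model_features):
        # candidates: list of model parameter vectors (hand configurations)
        # observed_features: features extracted from the real hand image
        # project_model_features: renders a candidate and returns its image features
        best_state, best_error = None, np.inf
        for state in candidates:
            synthetic = project_model_features(state)               # model projection
            error = np.linalg.norm(synthetic - observed_features)   # feature mismatch
            if error < best_error:
                best_state, best_error = state, error
        return best_state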

 

Learning Articulated Motion Constraints

For articulated hand tracking, one of the main problems associated with the model-based approach is the high number of degrees of freedom (DOF) involved in specifying a hand configuration. The hand shape is described by its joint angles, which amount to roughly 21 parameters, so estimating the correct hand motion and configuration is equivalent to a search in this high-dimensional space, which is intractable with current computing technology. Fortunately, natural hand motion is highly constrained, and previous work has shown that incorporating constraints obtained from biomedical studies can greatly reduce the computational complexity. However, many constraints are hard to learn or to represent in closed form; we therefore propose to learn the constraint model using both semi-parametric and non-parametric approaches. From our initial empirical observations, we noticed that the trajectories of hand motion when moving between basis states are roughly linear. (HUMO00) [More]
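
One simple way to exploit such constraints, sketched here only for illustration (this is generic PCA on joint-angle data, not the method of HUMO00), is to fit a low-dimensional linear subspace to recorded hand configurations and search in the reduced space; the reduced dimension k below is illustrative.

    import numpy as np

    def learn_linear_constraints(joint_angles, k=7):
        # joint_angles: (N, 21) array of recorded hand configurations.
        # Returns the mean pose and the top-k principal directions, so a
        # 21-DOF pose is approximated by only k coefficients.
        mean = joint_angles.mean(axis=0)
        _, _, vt = np.linalg.svd(joint_angles - mean, full_matrices=False)
        basis = vt[:k]                      # (k, 21) principal directions
        return mean, basis

    def to_full_pose(coeffs, mean, basis):
        # Map a point in the reduced space back to 21 joint angles.
        return mean + coeffs @ basis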

 

Tracking Articulated Motion

The motivation for learning the motion constraints is to reduce the DOF involved in tracking finger motion. However, learning a compact parameterization of the constraint model is only half of the work. To successfully track finger motion, we must address two key issues:

  1. The representation of the feasible configuration space
  2. An efficient tracking algorithm associated with this space structure

For the semi-parametric representation, we used the observed linear motion trajectories as an auxiliary importance function and implemented a sequential importance sampling filter, which can successfully track many common motions. For the non-parametric configuration-space representation, we proposed a stochastic Nelder-Mead simplex search, a general tracking algorithm that combines the top-down statistical approach with the bottom-up direct search approach. (ICCV01, FG04) [More]
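
A minimal sketch of one sequential importance sampling step in the reduced configuration space is shown below; the propagate and likelihood functions are hypothetical placeholders, not the models used in ICCV01/FG04.

    import numpy as np

    def sis_step(particles, weights, propagate, likelihood):
        # particles: (N, d) samples in the reduced configuration space
        # weights:   (N,) importance weights from the previous frame
        # propagate: draws each particle forward (e.g. along the learned
        #            linear trajectories plus noise)
        # likelihood: scores a particle against the current image
        particles = propagate(particles)
        weights = weights * np.array([likelihood(p) for p in particles])
        weights /= weights.sum()
        # Resample when the effective sample size becomes too small.
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
            idx = np.random.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx]
            weights = np.full(len(weights), 1.0 / len(weights))
        return particles, weights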

 

Tracking Rigid Hand Motion

The complete hand motion is determined by its

  • global motion parameters: {R, t} in 3D space (6 DOF)
  • local articulation parameters: joint angles (21 DOF)

Treating the hand as a rigid object, we can apply many existing algorithms to recover the global motion parameters of an articulated object. We have implemented both the top-down statistical approach, using stochastic Nelder-Mead simplex search, and the bottom-up direct search approach, using algorithms such as ICP. The global and local parameters are estimated separately and then combined iteratively until the final estimate converges. For tracking specific gestures, we can also apply the learning approach to reduce the problem complexity by learning a more compact parameterization of the feasible space. (WMVC02, FG04) [More]
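
For the bottom-up rigid part, a single alignment step in the style of ICP can be sketched as follows, assuming point correspondences have already been established; this is the generic SVD-based rigid fit, not the exact code from WMVC02/FG04.

    import numpy as np

    def rigid_fit(src, dst):
        # src, dst: (N, 3) corresponding 3D points (model vs. observed).
        # Returns the rotation R and translation t minimizing ||R src + t - dst||.
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        h = (src - src_c).T @ (dst - dst_c)
        u, _, vt = np.linalg.svd(h)
        r = vt.T @ u.T
        if np.linalg.det(r) < 0:            # guard against reflections
            vt[-1] *= -1
            r = vt.T @ u.T
        t = dst_c - r @ src_c
        return r, t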

 

HCI Applications


 

Intern Projects - Real-Time Tracking Systems
  • Improved the performance of a real-time AdaBoost face detector with particle filtering
    (Mitsubishi Electric Research Lab intern project, 2001)
  • Implemented a real-time hand detection and tracking system
    (IBM T.J. Watson Research intern project, 2002)
 
Graphics Course Projects
Real-Time Rendering Systems

View-Dependent Terrain

A system is built for real-time interactive viewing of a complex terrain data set. A view-dependent refinement scheme based on Lindstrom's work is implemented to generate the polygons to render. The data used for this project is a Digital Elevation Map (DEM) of the Grand Canyon [4097 x 2049] produced by the USGS; the source for this data set is the Georgia Tech Large Models Archive.
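
The heart of such a refinement scheme is a per-vertex screen-space error test; a simplified sketch is given below, with an illustrative pixel tolerance rather than the exact metric used in the project.

    import numpy as np

    def vertex_active(vertex, geometric_error, eye, fov_y, viewport_h, max_pixel_error=1.0):
        # Activate a vertex when its geometric error, projected to the screen,
        # would exceed the allowed pixel tolerance.
        distance = np.linalg.norm(np.asarray(vertex) - np.asarray(eye))
        # Approximate projected size of one world unit at this distance.
        pixels_per_unit = viewport_h / (2.0 * distance * np.tan(fov_y / 2.0))
        return geometric_error * pixels_per_unit > max_pixel_error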

Sphere Hierarchies and Splatting

For this project, a QSplat-like system is built. A hierarchical spatial data structure is implemented and applied to real-time splat rendering. Both axis-aligned and oriented partitions are explored. Additionally, view-dependent refinement is implemented for the rendering. The accompanying image shows the bounding spheres of each vertex at the most refined level.
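
A minimal sketch of the view-dependent traversal in a QSplat-like renderer: recurse into a node while its bounding sphere projects to more than about a pixel, otherwise draw it as a splat. The projected_radius and draw_splat helpers are hypothetical.

    def render_node(node, projected_radius, draw_splat, threshold=1.0):
        # node.center / node.radius / node.children: bounding-sphere hierarchy
        size = projected_radius(node.center, node.radius)   # on-screen size, in pixels
        if size <= threshold or not node.children:
            draw_splat(node)        # leaf or small enough: draw a single splat
        else:
            for child in node.children:
                render_node(child, projected_radius, draw_splat, threshold)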

Video Textures

Based on the work of Arno Schödl et al., a new video sequence can be synthesized from a recorded video database.

This 6-second sequence of never-ending page-flipping action is generated from a data set of 20 frames.
(MPG, 885k)
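
The transitions in such a synthesis are chosen where consecutive frames are similar; below is a minimal, illustrative sketch of the frame-to-frame distance matrix, not the exact cost used by Schödl et al.

    import numpy as np

    def transition_matrix(frames):
        # frames: (N, H, W) array of grayscale video frames.
        # D[i, j] is the L2 distance between frames i and j; a cut from frame i
        # to frame j is visually smooth when D[i + 1, j] is small.
        n = len(frames)
        flat = frames.reshape(n, -1).astype(np.float64)
        return np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=2)
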
Ray Tracing

Ray tracer

A ray tracer is implemented that supports refraction, algebraic surfaces, and constructive solid geometry.
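
For reference, the refraction direction at a dielectric interface follows the standard vector form of Snell's law; the sketch below is generic, not the project's exact code.

    import numpy as np

    def refract(direction, normal, eta):
        # direction: unit incoming ray direction; normal: unit surface normal
        # pointing against the ray; eta: ratio of refractive indices n1 / n2.
        # Returns the refracted direction, or None on total internal reflection.
        cos_i = -np.dot(direction, normal)
        sin2_t = eta * eta * (1.0 - cos_i * cos_i)
        if sin2_t > 1.0:
            return None                       # total internal reflection
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * direction + (eta * cos_i - cos_t) * normal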


Procedural shading

This scene is constructed with procedural shading, using various combinations of the Perlin noise function.
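
Such combinations are typically sums of noise octaves; a minimal sketch follows, with noise standing in as a hypothetical Perlin noise function of three coordinates.

    def fbm(noise, p, octaves=5, lacunarity=2.0, gain=0.5):
        # Fractional Brownian motion: sum of Perlin noise octaves with
        # increasing frequency and decreasing amplitude.
        value, amplitude, frequency = 0.0, 1.0, 1.0
        for _ in range(octaves):
            value += amplitude * noise(p[0] * frequency, p[1] * frequency, p[2] * frequency)
            amplitude *= gain
            frequency *= lacunarity
        return value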


Particle system

A simple particle system simulation.
(AVI, 638k)
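
A particle system of this kind amounts to integrating simple dynamics each frame; below is a minimal forward-Euler sketch with illustrative parameters.

    import numpy as np

    def step_particles(pos, vel, life, dt, gravity=np.array([0.0, -9.8, 0.0])):
        # pos, vel: (N, 3) particle positions and velocities; life: (N,) seconds left.
        vel = vel + gravity * dt          # apply forces
        pos = pos + vel * dt              # forward-Euler integration
        life = life - dt
        alive = life > 0.0                # drop expired particles
        return pos[alive], vel[alive], life[alive]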

 

