IDFL


Lester C. Loschky and George McConkie

E-mail: gmcconk@uiuc.edu
Gaze Contingent Displays: Maximizing Display Bandwidth Efficiency

Abstract
One way to economize on bandwidth in single-user head-mounted displays is to put high-resolution information only where the user is currently looking. This paper describes a series of 6 studies investigating spatial, resolutional, and temporal parameters affecting perception and performance in such eye-contingent multi-resolutional displays. Based on the results of these studies, suggestions are made for the design of eye-contingent multi-resolutional displays.
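
The basic technique these studies examine can be made concrete with a minimal sketch (illustrative only, not the display software used in the experiments; all function and parameter names below are hypothetical): a blurred copy of an image serves as the low-resolution background, and the original high-resolution pixels are shown only inside a circular window centered on the current gaze position.

    # Minimal two-level gaze-contingent multi-resolution display (illustrative sketch).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def render_gcmrd_frame(high_res, gaze_xy, window_radius_px, blur_sigma=4.0):
        """Composite one gaze-contingent frame from a high-resolution source image.

        high_res         -- H x W x C float array (full-resolution source image)
        gaze_xy          -- (x, y) gaze position in pixel coordinates
        window_radius_px -- radius of the high-resolution window, in pixels
        blur_sigma       -- Gaussian blur applied outside the window (degradation level)
        """
        # Low-resolution background: blur each color channel of the source image.
        low_res = np.stack(
            [gaussian_filter(high_res[..., c], blur_sigma) for c in range(high_res.shape[-1])],
            axis=-1,
        )
        # Boolean mask that is True inside the circular high-resolution window.
        h, w = high_res.shape[:2]
        ys, xs = np.mgrid[0:h, 0:w]
        gx, gy = gaze_xy
        inside = (xs - gx) ** 2 + (ys - gy) ** 2 <= window_radius_px ** 2
        # Hard composite; a real display recomputes this on every eye-position sample.
        return np.where(inside[..., None], high_res, low_res)

The window radius, the degradation level outside the window, and the delay between an eye movement and the display update roughly correspond to the spatial, resolutional, and temporal parameters varied in the studies; in a networked or rendered display, only the windowed region needs full-resolution data, which is the source of the bandwidth savings.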





IDFL
Lester C. Loschky and George W. McConkie


E-mail: gmcconk@uiuc.edu
User Performance with Gaze Contingent Multiresolutional Displays

Abstract
One way to economize on bandwidth in single-user head-mounted displays is to put high-resolution information only where the user is currently looking. This paper summarizes results from a series of 6 studies investigating spatial, resolutional, and temporal parameters affecting perception and performance in such eye-contingent multi-resolutional displays. Based on the results of these studies, suggestions are made for the design of eye-contingent multi-resolutional displays.





IDFL
Lester C. Loschky, George W. McConkie, Jian Yang and Michael E. Miller

E-mail: gmcconk@uiuc.edu
Perceptual Effects of a Gaze-Contingent Multi-Resolution Display Based on a Model of Visual Sensitivity

Abstract
Many interactive single-user image display applications have prohibitively large bandwidth requirements. However, bandwidth can be greatly reduced by using gaze-contingent multi-resolution displays (GCMRDs), which present high resolution only at the center of vision, based on eye position. A study is described in which photographic GCMRD images were filtered as a function of contrast, spatial frequency, and retinal eccentricity, on the basis of a model of visual sensitivity. This model had previously been tested only with sinusoidal grating patches. The current study measured viewers' image quality judgments and their eye movement parameters, and found that photographic images filtered at a level predicted to be at or below perceptual threshold produced results statistically indistinguishable from those of a full high-resolution display.
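
As a rough illustration of this kind of eccentricity-dependent filtering, the sketch below computes a per-pixel spatial-frequency cutoff from a commonly used contrast-threshold formula of the form CT(f, e) = CT0 * exp(alpha * f * (e + e2) / e2), as used by Geisler and Perry (1998). This is an illustrative stand-in with typical published constants, not necessarily the visual-sensitivity model evaluated in the study, and the function names are hypothetical.

    # Eccentricity-dependent resolution map from a contrast-threshold model (illustrative sketch).
    import numpy as np

    CT0 = 1.0 / 64.0   # minimum contrast threshold (illustrative constant)
    ALPHA = 0.106      # spatial-frequency decay constant (illustrative constant)
    E2 = 2.3           # half-resolution eccentricity in degrees (illustrative constant)

    def cutoff_frequency(eccentricity_deg, max_contrast=1.0):
        """Highest spatial frequency (cycles/deg) whose predicted threshold stays below max_contrast."""
        # Solve CT0 * exp(ALPHA * f * (e + E2) / E2) = max_contrast for f.
        return E2 * np.log(max_contrast / CT0) / (ALPHA * (eccentricity_deg + E2))

    def resolution_map(width_px, height_px, gaze_xy, px_per_deg):
        """Per-pixel cutoff frequency for an image viewed with gaze at gaze_xy."""
        ys, xs = np.mgrid[0:height_px, 0:width_px]
        gx, gy = gaze_xy
        ecc_deg = np.hypot(xs - gx, ys - gy) / px_per_deg  # retinal eccentricity in degrees
        return cutoff_frequency(ecc_deg)

Such a map would drive a space-variant low-pass filter in which each pixel retains only spatial frequencies below its local cutoff; the finding reported above is that filtering at or below the predicted threshold yields image quality judgments and eye movement measures statistically indistinguishable from a full high-resolution display.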





IDFL
George McConkie and Lester Loschky

E-mail: gmcconk@uiuc.edu
Human Performance with a Gaze-Linked Multi-Resolutional Display

Abstract
One method of reducing bandwidth requirements for displays is to present high-resolution information only at the location to which the observer's gaze is directed. Two studies are reported that investigate the size of the high-resolution 'window' required for such displays, and the degree to which information outside this window can be degraded, without affecting human performance.





IDFL
George McConkie and Darrell S. Rudmann

E-mail: gmcconk@uiuc.edu
Acquiring Spatial Knowledge from Varying Fields of View

Abstract
One effect of digitizing the Army is that commanders and their staff often view a large battlespace through a computer monitor that shows only part of the space at once. A study was conducted to examine the effect of field of view, or viewport, size on a person's ability to develop knowledge of a terrain. Smaller viewports increase error in finding previously seen objects and in remembering where they are located, but do not affect simple memory for those objects.





IDFL
George W. McConkie and Lester C. Loschky


E-mail: gmcconk@uiuc.edu
Attending to Objects in a Complex Display

Abstract
In the large virtual reality environments being developed for the military, personnel are faced with complex, dynamic displays containing many objects and regions. Observers must form a mental representation of this space, remembering the relative positions of important objects, in order to be able to locate information quickly when needed. They then must monitor changes in this configuration in order to track the evolution of a battle. We are studying the perceptual processes involved in accomplishing these tasks.





IDFL
E. M. Reingold, D. M. Stampe, L. C. Loschky, and G. W. McConkie


E-mail: gmcconk@uiuc.edu
Variable Resolution Gaze-Contingent Display Applications: An Integrative Review

Abstract
Gaze-contingent multiresolutional displays place high-resolution information only in the area to which the user's gaze is directed. This portion of the display is referred to as the 'area of interest' (AOI). Image resolution and details outside the AOI are reduced, lowering the resource requirements in demanding display and imaging applications such as flight simulators, teleoperation and remote vision, teleconferencing, telemedicine, and medical training simulations. In this review we provide an integrative survey of the current literature on gaze-contingent multiresolutional displays across a variety of applications. We also summarize psychophysical research exploring relevant display parameters.





IDFL
Darrell S. Rudmann and George McConkie

E-mail: gmcconk@uiuc.edu
Eye Movements in Human-Computer Interaction

Abstract
The potential benefits of incorporating eye movements into the interaction between humans and computers are numerous. For example, knowing the location of a user's gaze may help a computer to interpret the user's request, aid natural language processing, speed up interaction by allowing the eyes to serve as a pointing device, and possibly enable a computer to ascertain some cognitive states of the user, such as confusion or fatigue. This paper details the problems encountered in previous attempts to use eye movements in human-computer interaction and evaluates current technology for its ability to overcome these limitations. An assessment of the accuracy and reliability of the ISCAN eye-tracking system and the Ascension pcBird is provided for two-dimensional displays. Recommendations are made for the design of eye-controlled display systems based on these technologies.





IDFL
Darrell S. Rudmann and George McConkie

E-mail: gmcconk@uiuc.edu
Acquiring Spatial Knowledge from Varying Field of View Sizes

Abstract
Computer displays, whether standard computer monitors or head-mounted virtual reality equipment, can present only a limited field of view of large-scale spaces to an observer, requiring the observer to recall the locations of objects that are not in view in order to form a mental representation of the space. Two studies examined the influence of the size of the field of view of a large-scale displayed space on the viewer's ability to form and make use of a mental representation of the space. Experiment 1 found that more restrictive, smaller fields of view produced less accurate memory for object locations, a deficiency of which participants were aware, and that smaller fields of view made objects more difficult to find. However, Experiment 2 found these differences to be due primarily to performance costs incurred while using smaller fields of view, rather than to poorer spatial memory. Implications for VE systems design are discussed.