The paper “Designing Effective Gaze Mechanisms for Virtual Agents,” available at
https://graphics.cs.wisc.edu/Papers/2012/APMG12/APMG12.pdf,
was published through the University of Wisconsin-Madison. Sean Andrist and
Tomislav Pejsa were two second-year graduate students advised by
Professors Bilge Mutlu and Michael Gleicher. Dr. Mutlu directs the
human-computer interaction lab, while Dr. Gleicher directs the graphics lab;
Dr. Gleicher was previously a researcher at the Autodesk Vision Technology
Center and at Apple Computer’s Advanced Technology Group.

Their model comprises
six main components: target, agent, and environmental parameters; head latency;
velocity profiles for head and eye motion; the oculomotor range (OMR), which
keeps the eyes from rotating past their physical limits in their sockets; head
alignment preference; and the vestibulo-ocular reflex (VOR), which counter-rotates
the eyes so that gaze stays locked on the target while the head continues to
move. Their focus was on varying the head alignment
preference. If it is set to zero percent, the head stops moving once the
eyes reach the target; if it is set to one hundred
percent, the head keeps moving until it points in the same direction
as the eyes. To make sure their model was accurate, they ran an experiment
to validate it by checking its communicative accuracy and perceived naturalness.
The experiment confirmed that the model communicates gaze targets about as
accurately as human gaze does.
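To make these mechanics concrete, here is a minimal one-dimensional sketch of how the components fit together. It is my own illustration, not the authors' implementation: OMR_LIMIT, eye_speed, head_speed, and head_latency are placeholder values, and constant velocities stand in for the paper's velocity profiles.

```python
import math

OMR_LIMIT = 45.0  # oculomotor range: max in-socket eye rotation (degrees)

def simulate_gaze_shift(target, head_alignment, head_latency=0.05,
                        eye_speed=300.0, head_speed=100.0, dt=0.01):
    """Simulate a 1-D gaze shift toward `target` (degrees from straight ahead).

    head_alignment=0.0: the head stops once the eyes land on the target;
    head_alignment=1.0: the head keeps turning until it points at the target.
    Returns a list of (time, eye_in_head, head) samples.
    """
    eye = head = t = 0.0
    # The head must cover at least the part of the shift the eyes cannot
    # reach within the OMR; the alignment preference scales the remainder.
    min_head = max(0.0, abs(target) - OMR_LIMIT)
    head_goal = math.copysign(max(min_head, head_alignment * abs(target)),
                              target)
    samples = []
    while not (abs(head + eye - target) < 1e-3
               and abs(head - head_goal) < 1e-3):
        # Head latency: the head starts moving only after the eyes do.
        if t >= head_latency and abs(head - head_goal) >= 1e-3:
            step = min(head_speed * dt, abs(head_goal - head))
            head += math.copysign(step, head_goal - head)
        if abs(head + eye - target) >= 1e-3:
            # The eyes race ahead of the head, clamped to the OMR.
            step = min(eye_speed * dt, abs(target - (head + eye)))
            eye += math.copysign(step, target - (head + eye))
            eye = max(-OMR_LIMIT, min(OMR_LIMIT, eye))
        else:
            # Vestibulo-ocular reflex: counter-rotate the eyes so gaze
            # holds the target while the head finishes aligning.
            eye = target - head
        samples.append((t, eye, head))
        t += dt
    return samples

# e.g. a 60-degree shift with a 50% head alignment preference
trace = simulate_gaze_shift(60.0, head_alignment=0.5)
```

Varying head_alignment between 0.0 and 1.0 reproduces the behavior described above: the eyes always land on the target, and only the head's final orientation changes.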
They then used this model to develop an experiment, implemented
through a custom framework built on top of the Unity game
engine. The subjective and objective results support their hypotheses and are
displayed in the paper's graphs.
When looking at related work, both the
psychology and CHI literatures can be referenced. Some of the model's
assumptions were based on older psychology papers, while some newer
publications align more closely with what this particular experiment is trying
to accomplish. What makes this study novel is that it identifies specific
parameters that can be mapped to specific outcomes; it essentially manipulates
low-level gaze variables to produce high-level effects. These are some of the
corresponding papers:
In Psychology:
- Effects of eye contact, posture and vocal inflection upon credibility and comprehension
- Communicative effects of gaze behavior
- Effect of teacher’s gaze on children’s story recall
- Facilitative effects of gaze upon learning
- Does your gaze direction and head orientation shift my visual attention?
In CHI:
- Experimenting with the gaze of a conversational agent
- The impact of eye gaze on communication using humanoid avatars
- A storytelling robot: Modeling and evaluation of human-like gaze behavior
- Modeling gaze behavior for conversational agents
- Automated eye motion using texture synthesis
- Animating gaze shifts for virtual characters based on head movement propensity
In this particular experiment, the group divided the whole system into smaller parts, as shown by the paper's many graphs. They evaluated these parts through both subjective and objective measures.
Overall, I think this project
revealed an important piece of information that other researchers will be able
to use. However, it covers just one small part of gaze generation. Since the
authors focused only on head alignment, further research could explore the
target, agent, and environmental parameters; head latency; and the velocity
profiles for head and eye motion. I believe the evaluation they performed was
effective because it focused on specific factors rather than a general
consensus, which is why the system was divided into many parts. Even though
programming a particular gaze pattern for an agent is not a novel idea, the
way they singled out one factor was helpful. This was an interesting study to
read, especially since I looked at something similar when I worked under Dr. Murphy.