Computer science professor explores, explains possibilities
in eye-tracking technology
By Mary Kincy
In 2003, Dr. Oleg Komogortsev — then a graduate student at Kent State University — wanted very much to play the beta version of what would become one of the world’s most popular online role-playing games, World of Warcraft.
Access to the game was highly restricted, however, with invitations to the beta test site selling online for as much as $17,000. So Komogortsev put his computer science background and his interest in eye-tracking technology to work, sending an e-mail to the game’s developers in which he pledged to explore the possibility of applying eye-tracking to online role play.
The proposal earned Komogortsev an invitation to the game.
Nearly a decade later, Komogortsev’s interest in eye-tracking technology, especially as it applies to the ability to manipulate computer interfaces with one’s eyes, continues at Texas State, where he is an assistant professor in the Department of Computer Science.
The office where Komogortsev’s current research, much of it student-aided, takes place is unassuming. But a rectangular black box near a computer monitor can capture images of a pair of eyes watching a blue ball bounce across a black background on the screen, analyzing the relative positions of the user’s pupil and cornea at each of a set number of calibration points. A simple interface then uses that information to let the user open and select from a handful of folders of images stored on the desktop.
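In broad strokes, that calibration step amounts to fitting a mapping from pupil-cornea offset vectors to screen coordinates. The sketch below is illustrative only, not the lab’s actual method: it assumes a simple affine model fit by least squares, with hypothetical function names and made-up calibration data.

```python
import numpy as np

def fit_gaze_mapping(pupil_vectors, screen_points):
    """Fit an affine map from pupil-cornea offset vectors to screen
    coordinates via least squares, one row per calibration point.
    (Hypothetical example; real trackers use richer models.)"""
    pv = np.asarray(pupil_vectors, dtype=float)
    sp = np.asarray(screen_points, dtype=float)
    # Augment with a bias column so the model is: screen = [px, py, 1] @ A
    X = np.hstack([pv, np.ones((len(pv), 1))])
    coeffs, *_ = np.linalg.lstsq(X, sp, rcond=None)
    return coeffs  # shape (3, 2): 2x2 linear part plus offset row

def gaze_to_screen(coeffs, pupil_vector):
    """Map a new pupil-cornea vector to an estimated screen position."""
    px, py = pupil_vector
    return np.array([px, py, 1.0]) @ coeffs
```

With a handful of calibration points (the bouncing ball pausing at known screen locations), the fitted map can then turn each new eye image into an on-screen gaze estimate.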
Beyond the novelty, practical applications of eye-tracking in computer navigation remain scant, even though the first major paper linking eye-tracking to computer use is roughly two decades old — and even though the technology could help people who are unable to operate a mouse by hand.
Komogortsev explains why: “If you’re talking about eye-tracking as an input modality, the problem is that the mouse works extremely well,” he says.
Also, the eye has its limitations, at least within the confines of the current modes of computer navigation, which require a distinct trigger — in most cases, a mouse click — to select an item on the screen.
“We are very good at getting information with our eyes, but we are not very consciously aware of our eye movements,” Komogortsev says. “They actually occur very frequently and involuntarily, so our brain gets interested in something and our brain tells the eye to go to that position.” He explains further: “The human visual system is very complex. We’re used to getting information from the eyes, but not using them to select something.”
Much of the difficulty, Komogortsev says, lies in determining what facet of the gaze will be the trigger that equates to the click of a mouse. A blink is not ideal, because it quickly tires the eyes.
“You can say that, basically, the eye-tracking community is waiting for a cure application to this problem,” Komogortsev says, adding a prescription for future development: “We need to think very differently about interface design. And therefore it puts additional burden on those people who make interfaces to make a decision about how this can work.”
He sees hope for the future, though — and it’s no further away than the nearest iPad.
“We transitioned from mouse to touch screen because somebody found a very good application — let’s take for example iPhone, iPad — where you can touch,” Komogortsev says. “Everything is much more intuitive by touching rather than having an additional device such as a mouse. As human beings we are very used to touching things. It’s very natural.”
So, too, is opening our eyes.