When Google revealed Project Gameface, the company proudly introduced a hands-free, AI-powered gaming mouse that, according to its announcement, "allows humans to control a computer's cursor through head movements and facial gestures." While this may not be the first AI-based gaming tool, it was certainly one of the first to put AI into the hands of gamers rather than their developers.
The project was inspired by Lance Carr, a quadriplegic video game streamer who uses a head-tracking mouse as part of his gaming setup. After his existing hardware was lost in a fire, Google stepped in and developed an open-source, highly configurable, low-cost alternative to expensive replacement hardware, powered by machine learning. While AI in general is proving controversial, we wanted to find out whether AI, put to good use, could be the future of gaming accessibility.
To understand how they work in Gameface, it's important to define AI and machine learning. When we use the terms "AI" and "machine learning," we're referring to two related but distinct concepts.
“AI is a concept,” Laurence Moroney, AI Advocacy Lead at Google and one of the brains behind Gameface, tells WIRED. “Machine learning is a technique that you use to implement this concept.”
Machine learning falls under the umbrella of AI, along with implementations such as large language models. However, while well-known applications like OpenAI's ChatGPT and Stability AI's Stable Diffusion are generative, machine learning excels at learning and adapting without explicit guidance, drawing inferences from recognizable patterns.
Moroney explains how this is applied to Gameface through a number of machine learning models. "The first was to be able to tell where a face is in an image," he says. "The second was, once you had an image of a face, to understand where key points (eyes, nose, ears, etc.) are."
Then another model can map and decode gestures from these points and match them to mouse input.
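As a rough illustration of that final step, here is a minimal, hypothetical sketch of how landmark points might be translated into mouse actions. The landmark names, thresholds, and gesture set are invented for illustration; they are not Gameface's actual models or configuration.

```python
# Hypothetical sketch: turning facial landmark points (already produced by
# earlier detection models) into a mouse event. All names and thresholds
# here are illustrative assumptions, not Gameface's real implementation.

from dataclasses import dataclass

@dataclass
class Landmarks:
    nose: tuple          # normalized (x, y) position in [0, 1]
    mouth_top: tuple     # top of the lips
    mouth_bottom: tuple  # bottom of the lips

def to_mouse_event(prev: Landmarks, curr: Landmarks,
                   sensitivity: float = 1000.0,
                   open_mouth_threshold: float = 0.05) -> dict:
    """Map a frame-to-frame change in landmarks to a mouse event."""
    # Head movement: nose displacement between frames drives the cursor delta.
    dx = (curr.nose[0] - prev.nose[0]) * sensitivity
    dy = (curr.nose[1] - prev.nose[1]) * sensitivity
    # Gesture: a lip gap above the threshold is read as "mouth open" -> click.
    mouth_gap = curr.mouth_bottom[1] - curr.mouth_top[1]
    return {"dx": dx, "dy": dy, "click": mouth_gap > open_mouth_threshold}

# Example: the head moves slightly to the right while the mouth opens.
before = Landmarks(nose=(0.50, 0.50), mouth_top=(0.50, 0.60), mouth_bottom=(0.50, 0.62))
after = Landmarks(nose=(0.52, 0.50), mouth_top=(0.50, 0.60), mouth_bottom=(0.50, 0.67))
event = to_mouse_event(before, after)
print(event)  # cursor nudged right, click triggered
```

In a real pipeline the `Landmarks` values would come from the face-detection and keypoint models Moroney describes, and the sensitivity and gesture thresholds would be user-configurable, which is where much of Gameface's flexibility lies.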
It’s an explicitly assistive implementation of AI, unlike those often touted as eliminating the need for human input altogether. In fact, Moroney suggests that AI is best used in this way, to “augment our ability to do things that weren’t previously feasible.”
This sentiment extends beyond Gameface’s potential to make games more accessible. AI, according to Moroney, can have a big impact not only on accessibility for gamers but also on the way developers create accessibility solutions.
“Anything that allows developers to solve classes of problems that were previously unfeasible, or to be orders of magnitude more effective,” he says, “can only be beneficial in the area of accessibility, or any other area.”
This is something developers are already beginning to understand. Artem Koblov, creative director of Perelesoq, tells WIRED that he wants “more resources for solving routine tasks, and not for creative inventions.”
In this way, AI can help with time-consuming technical processes. Applied well, it could create a leaner, more efficient development lifecycle, both helping to implement accessibility solutions mechanically and giving developers more time to think about them.
“As a developer, you want to have as many tools as possible to make your job easier,” says Conor Bradley, creative director of Soft Leaf Studios. He notes advances in current accessibility AI implementations, including “real-time text-to-speech and speech-to-text generation, as well as speech and image recognition.” And he sees potential for future developments. “I can imagine that over time, more and more games will use these powerful AI tools to make our games more accessible.”