This AI model mimics how humans type on smartphones — typos included

May 12, 2021

Now there's an AI that can text like you do. (Unsplash/Isabell Winter)

A new artificial intelligence model realistically predicts how humans type on touchscreens — even making typos and correcting them — in a simulation that could inform typing aids for people with motor impairments or other disabilities.

In the study, researchers created a unified theory of typing on smartphones and tablets that takes human error into account. The full paper will be presented on Wednesday at the ACM CHI Conference on Human Factors in Computing Systems.

"The researchers made me type this," reads a line of text in a video that illustrates the AI model. "Look ma, no hands," reads another. In the simulation, the model types on the 5.1-inch screen of a Samsung Galaxy S6, closely imitating the behavior of a human typist, who naturally errs and is uncertain about the next keystroke.

"The power of this approach is that these models don't get fatigued, complain that we are using them, or want money from participating in experiments. We are able to run hundreds of simulations for different 'what-if' scenarios," said Jussi P. Jokinen, a researcher at Aalto University and the paper's lead author, in an interview with The Academic Times.

Jokinen's interest in human-computer interaction was first sparked by "Star Trek." "I was always fascinated with AI, from Data to the androids, and asking what the conditions are that make intelligence possible," he explained. After working on a project to develop better touchscreens for the elderly, Jokinen wondered why people don't all type at the same speed. He was also interested in how individuals adapt to a new interface despite the inherent limits and biases of human cognition.

"We haven't evolved to type on touchscreens, but still we are able to adapt to these completely new ways of communicating, not just in lifetimes but in months or even days," he said. Previous research has shown that AI can significantly influence human thought and behavior, manipulating how humans vote and who they date. "It's an interesting co-evolution [with] smartphones and the quick information ecosystem in which we live," said Jokinen.

At the end of 2020, two out of three people around the world were smartphone users — that's 5.22 billion in total. Though smartphone ownership varies globally, there is an undeniable upward trend in highly developed countries. For example, 85% of Americans own a smartphone, compared to 35% just ten years ago.

Yet no existing user interface model can accurately predict how humans type. Previous models based on simplified formulas and rules have failed to predict errors or set aside time for proofreading. These behaviors are ingrained in touchscreen typing, as human typists are generally error-prone yet capable of rapidly correcting mistakes on the fly.

Existing models miss the dynamic feature of autocorrect as well. Most smartphone users enter text with some version of intelligent correction, which frees them to type faster and reduces the time they need to proofread. Although autocorrect programs clearly influence user behavior, their influence could not be properly studied — until now. "AI is really powerful at finding these patterns and learning to exploit patterns of decision-making or visual patterns," Jokinen said.

"The architecture itself employs only psychologically valid ideas of how information is processed by the human mind, for instance, what area around the eye is visible to the cognitive system," he added. 

The team's model can map finger and eye movements, proofread text and make errors at rates comparable to humans'. For example, human text entry on a touchscreen has an error rate ranging from 7% to 10.8%, which the model reproduces. Words per minute and the time between two consecutive keystrokes were among the other variables tested. Of the ten variables measured, eight were rated "good," falling within the range of human typists; the remaining two were rated "acceptable," as they were outliers but still realistic compared with human data.
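This kind of grading can be sketched in a few lines of code. The snippet below is not the authors' code; it is a minimal illustration of comparing a simulated typist's metrics against published human ranges. Only the 7%–10.8% error rate comes from the article; the other ranges, the variable names and the tolerance value are hypothetical placeholders.

```python
# Human reference ranges: metric name -> (low, high).
# Only error_rate is taken from the article; the rest are hypothetical.
HUMAN_RANGES = {
    "error_rate": (0.07, 0.108),          # reported human error rate
    "words_per_minute": (20.0, 40.0),     # hypothetical
    "inter_key_interval_s": (0.2, 0.6),   # hypothetical
}

def grade_metric(name, value, tolerance=0.25):
    """Return 'good' if the simulated value falls inside the human range,
    'acceptable' if it lies within a fractional `tolerance` of the range
    (an outlier that is still realistic), and 'poor' otherwise."""
    low, high = HUMAN_RANGES[name]
    if low <= value <= high:
        return "good"
    slack = tolerance * (high - low)
    if (low - slack) <= value <= (high + slack):
        return "acceptable"
    return "poor"
```

A simulated error rate of 9% would be graded "good" under this scheme, while a value just outside the human range would still count as "acceptable."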

"The way these components were brought together captures human cognition," said Jokinen. Human data for the present research comes from an earlier study that looked into how cellphone users divide their attention between the keyboard and the displayed text. In that study, 30 participants transcribed sentences on a Samsung Galaxy S6, whose layout separates the displayed text at the top of the screen from the keyboard at the bottom, making it easier to distinguish finger movements from eye movements.

The model focuses on typing with one finger. In the study, the authors noted it can be expanded to typing with two thumbs, a more popular and faster way to enter text on touchscreens. In theory, it can also predict and model the typing patterns of someone with Parkinson's or multiple sclerosis, whose fingers are affected by tremors. In this way, the AI model might better accommodate those with motor impairments and make technology more inclusive.

Emotion and motivation in intelligence are other strands of research for Jokinen. He is developing models to demonstrate "emotivational" behavior in humans, or how humans adapt to a new environment or interface after receiving feedback, whether it be positive or negative. "If we encounter something that's really frustrating, humans have a process of trying to learn to do it better. That's evidence of our minds adapting," he said.

"Computer intelligence causes fear in some ways," added Jokinen. "With my kids, the question is: when will they get their first smartphone, and will it be required for them to participate in a social life? Of course, information is good for useful purposes … I would be really proud if my work allowed me to be a better typist."

The study, "Touchscreen typing as optimal supervisory control," presented May 12 in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, was authored by Jussi P.P. Jokinen, Aditya Acharya, Mohammad Uzair and Antti Oulasvirta, Aalto University; and Xinhui Jiang, Kochi University of Technology.
