Traditionally, speech-language pathologists have relied on a patient’s sense of hearing to improve speech sounds. A team of researchers from UT Dallas is hoping to change that by creating a new high-tech tool that will allow patients to use their sense of sight to visualize the movements of the mouth during speech.

The Visual Speech Project is made up of researchers from the School of Behavioral and Brain Sciences, Erik Jonsson School of Engineering and Computer Science, and School of Arts and Humanities.

The team is working to create a realistic computer animation of a patient’s tongue and lip movements during speech production. The animation will allow patients to compare their own movements to those of an animated model, which in turn will help the patients see the changes they must make in order to produce a sound correctly.

“Speech movements of the tongue are hidden by the cheeks and lips and therefore difficult for a patient to truly visualize,” said Dr. Jennell Vick, postdoctoral fellow at the Callier Center for Communication Disorders. “Although our current technology shows these movements using dots in a three-dimensional grid, it’s not a very natural picture of what is actually happening in the mouth. The animation will allow both the patient and clinician to exploit the sense of sight.”

Dr. Thomas Campbell, professor and executive director of the Callier Center, and Dr. Rob Rennaker, associate professor in neural engineering, identified the problem as one that researchers from UT Dallas could solve as an integrated team across various schools.

Researchers at the Callier Center and in the Department of Computer Science are collecting baseline data on adult talkers with and without cerebral palsy. The researchers are able to measure both disordered and typical speech movements by placing small sensors on the participants’ tongues. The sensors’ movements are tracked in real time.

The data is then transferred to colleagues who translate and process it into a format the animators can use.
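The article does not describe the data formats involved, but conceptually this step converts time-stamped 3D sensor positions into animation keyframes. The sketch below illustrates that idea only; the sensor names, sampling assumptions and output format are all hypothetical, not the project’s actual pipeline.

```python
import json
from dataclasses import dataclass


@dataclass
class SensorSample:
    """One time-stamped 3D position from a tongue or lip sensor (hypothetical format)."""
    sensor_id: str
    t: float   # seconds
    x: float   # millimetres
    y: float
    z: float


def samples_to_keyframes(samples, fps=60):
    """Group raw sensor samples into per-frame keyframes for the animators.

    Assumes samples arrive roughly in time order; each output frame holds the
    latest known position of every sensor at that frame's time.
    """
    frames = {}
    latest = {}
    for s in sorted(samples, key=lambda s: s.t):
        latest[s.sensor_id] = (s.x, s.y, s.z)
        frame_index = int(round(s.t * fps))
        frames[frame_index] = dict(latest)  # snapshot of all sensor positions
    return [{"frame": i, "positions": frames[i]} for i in sorted(frames)]


if __name__ == "__main__":
    demo = [
        SensorSample("tongue_tip", 0.00, 1.0, 2.0, 0.5),
        SensorSample("tongue_tip", 0.02, 1.2, 2.1, 0.6),
        SensorSample("lower_lip", 0.02, 0.0, -1.0, 0.0),
    ]
    print(json.dumps(samples_to_keyframes(demo), indent=2))
```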

“We still have a lot of testing to do, especially when it comes to collecting the data and transforming it into animation in real time,” said Vick. “We also need to identify and test the different tongue and lip movements that are common with a variety of disorders.”

An expected benefit of using animation over the current technology is the ability to exaggerate the speech movements in order to make the differences more obvious to the patient. As a result, the patient will be able to pinpoint the exact placement of the tongue and lips in order to make the correct sounds.
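How the exaggeration would be implemented is not described in the article. One straightforward possibility, sketched here purely as an assumption, is to scale each sensor’s displacement from a neutral resting position by a gain factor before rendering:

```python
def exaggerate(position, neutral, gain=1.5):
    """Scale a sensor's displacement from its neutral (resting) position by `gain`.

    A gain of 1.0 reproduces the measured movement; values above 1.0 enlarge it
    so differences between the patient and the model are easier to see.
    """
    return tuple(n + gain * (p - n) for p, n in zip(position, neutral))


# Example: a tongue-tip position 2 mm above its resting height is drawn 3 mm above it.
print(exaggerate((1.0, 2.0, 0.5), neutral=(1.0, 0.0, 0.5), gain=1.5))
# -> (1.0, 3.0, 0.5)
```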

“This technology has the potential to improve the quality of life for a wide range of patients, including stroke victims, children with speech disorders and individuals learning a second language,” said Rennaker. “In recognition of the potential clinical impact, and as a model for collaboration across schools and centers at UT Dallas, Dr. Bruce Gnade, vice president for research, has provided the resources to make this project viable and competitive at the national level.”

In addition to Campbell, Rennaker and Vick, the Visual Speech Project team includes Dr. Balakrishnan Prabhakaran, professor in ECS; Eric Farrar, assistant professor in ATEC; and Dr. Bill Katz, professor at the Callier Center.