0443 GMT September 16, 2019
These are just two of the questions Patricia Keating and Jody Kreiman, linguistics researchers at UCLA, are trying to answer in their most recent study, according to UPI.
Voice is not constant. A person's voice changes with mood, emotional state and other physiological factors. Despite this variability, the human brain can recognize individual voices.
Research suggests human listeners — and their brains — organize voice variability into a prototype for each speaker, a sort of average representation of what each person sounds like. This powerful organizational ability allows listeners to distinguish and recognize single syllables from different speakers.
Yet, scientists have struggled to establish exactly which acoustic qualities are most important in differentiating one prototype from another.
"Voice quality is going to wander," Keating said in a news release. "We are looking at the point when you stop sounding like yourself and start sounding like someone else."
As part of their search, Keating and Kreiman recorded the voices of 50 women, all native English speakers, reading five sentences twice on three different days. They analyzed each speaker's voice, measuring fundamental frequency, harmonic frequencies and noise levels. Each speaker's collection of sentences provided a quantitative average and range for the three acoustic factors.
The collection of data produced a sort of acoustic profile for each speaker's voice.
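The profile-building step described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual pipeline: the feature values and the `acoustic_profile` function are invented for the example, and the summary (per-feature mean and range) simply mirrors the averages and ranges mentioned in the article.

```python
import numpy as np

# Hypothetical per-sentence measurements for one speaker.
# Each row is one sentence recording; columns are fundamental frequency (Hz),
# harmonic amplitude (dB), and noise level (dB). Values are illustrative only.
sentences = np.array([
    [210.0, 62.0, 18.0],
    [198.0, 60.5, 20.0],
    [205.0, 61.2, 19.5],
])

def acoustic_profile(measurements):
    """Summarize a speaker's recordings as a per-feature mean and range."""
    mean = measurements.mean(axis=0)
    low = measurements.min(axis=0)
    high = measurements.max(axis=0)
    return {"mean": mean, "range": high - low}

profile = acoustic_profile(sentences)
```

The resulting dictionary is one simple stand-in for the "acoustic profile" the article describes: a central value plus a spread for each acoustic factor.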
Researchers say future studies could use the profiles to build and test models designed to recognize the speaker of a randomly selected sentence recording. Current voice recognition models require vocal recordings of at least a minute in length.
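One minimal way to test such a model, assuming prototypes like those above, is nearest-prototype matching: assign a new recording to whichever speaker's average profile it sits closest to. The speaker names, feature vectors, and `identify` function below are all hypothetical, and a real system would normalize features measured in different units before comparing distances.

```python
import numpy as np

# Hypothetical speaker prototypes: name -> mean feature vector
# (fundamental frequency in Hz, harmonic amplitude in dB, noise level in dB).
prototypes = {
    "speaker_a": np.array([204.0, 61.0, 19.0]),
    "speaker_b": np.array([172.0, 58.0, 24.0]),
}

def identify(sample, prototypes):
    """Return the speaker whose prototype is closest to the sample
    by Euclidean distance."""
    return min(prototypes, key=lambda s: np.linalg.norm(sample - prototypes[s]))

# A short sample whose features fall near speaker_a's prototype.
guess = identify(np.array([200.0, 60.0, 20.0]), prototypes)
```

Even this toy classifier works from a single short sample, which is the gap the article highlights: current recognition models need at least a minute of speech.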
What enables humans to recognize single syllables, and how can researchers bridge the gap between computer algorithms and the human brain? Keating and Kreiman hope to explore these questions and others in follow-up studies.