Using automatic speech recognition technology, the Synface software — short for synthetic face — displays an animated head “speaking” the words being said over the telephone.

The software “listens” to what is being said, then displays it in real time as a “virtual face” on a laptop screen.

The initiative is a joint effort between University College London (UCL) and research groups in the Netherlands, Sweden and the UK, including the charity the Royal National Institute for Deaf People (RNID).

Research into the concept began three and a half years ago.

RNID head of product development Neil Thomas told CNN the software enabled the listener to lip-read what was being said, just as they would in face-to-face conversation.

“Most people, particularly those who are hard of hearing, lip-read to communicate. When you’re on the telephone this becomes difficult because you can’t see the person who is speaking to you.”

Prototypes of the software are currently in field trials in the UK, Sweden and the Netherlands. RNID is overseeing trials in the UK, and Thomas said results showed 100 percent support for the concept.

He said those who have tested it found that the technology made them more confident about making phone calls.

“This technology helps confirm what they thought they were hearing. When a person loses their hearing one of the things that suffers is their confidence in making telephone calls.”

He said some development was still needed, including improving the accuracy of the speech recognition, before it would be suitable for everyday use.

He did not know how much it would cost once commercially available, but said those involved in creating it were keen to keep the price down.

There is a delay of 200 milliseconds between the person on the other end of the phone speaking and the receiver hearing the words.

This gives the software time to “listen” and display the face on the screen, though the delay is not noticeable and does not interfere with the flow of conversation, Thomas said.
