New York: In a major breakthrough in the field of speech recognition, Microsoft researchers have created a technology that accurately recognises the words in a conversation like humans do.
The team from Microsoft Artificial Intelligence and Research reported a speech recognition system that makes the same or fewer errors than professional transcriptionists.
The researchers reported a word error rate (WER) of 5.9 percent, down from the 6.3 percent WER the team reported just last month.
The 5.9 percent error rate is about equal to that of people who were asked to transcribe the same conversation, and it's the lowest ever recorded against the industry standard "Switchboard" speech recognition task.
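The article does not define word error rate, but the standard formulation counts the minimum number of word substitutions, deletions and insertions needed to turn the reference transcript into the system's output, divided by the number of words in the reference. A minimal Python sketch of that calculation, using invented example sentences, is below.

```python
# Illustrative sketch: word error rate (WER) as the word-level edit
# (Levenshtein) distance between a reference transcript and a hypothesis,
# divided by the number of reference words. Example sentences are invented.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:
                dp[i][j] = 1 + min(dp[i - 1][j - 1],  # substitution
                                   dp[i - 1][j],      # deletion
                                   dp[i][j - 1])      # insertion
    return dp[len(ref)][len(hyp)] / len(ref)

if __name__ == "__main__":
    ref = "the weather was nice on the drive home"
    hyp = "the weather is nice on the ride home"
    # Two substitutions out of eight reference words -> 25.0%
    print(f"WER: {word_error_rate(ref, hyp):.1%}")
```

By this measure, a 5.9 percent WER means roughly one error for every seventeen words of conversational speech.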
"We've reached human parity. This is an historic achievement," said Xuedong Huang, the company's chief speech scientist in a Microsoft blog post.
The milestone means that, for the first time, a computer can recognise the words in a conversation as well as a person would.
In doing so, the team has beaten a goal it set less than a year ago, and greatly exceeded everyone else's expectations as well.
"Even five years ago, I wouldn't have thought we could have achieved this. I just wouldn't have thought it would be possible," said Harry Shum, executive vice president who heads the Microsoft Artificial Intelligence and Research group.
The research milestone comes after decades of research in speech recognition, beginning in the early 1970s with DARPA, the US agency tasked with making technology breakthroughs in the interest of national security.
"This accomplishment is the culmination of over 20 years of effort," said Geoffrey Zweig, who manages the Speech & Dialog research group.
The milestone will have broad implications for consumer and business products that can be significantly augmented by speech recognition. That includes consumer entertainment devices like the Xbox, accessibility tools such as instant speech-to-text transcription and personal digital assistants such as Cortana.
"This will make Cortana (Microsoft personal assistant) more powerful, making a truly intelligent assistant possible," Shum said.
To reach the human parity milestone, the team used Microsoft's Computational Network Toolkit (CNTK), a home-grown system for deep learning that the research team has made available on GitHub via an open source license.
CNTK's ability to quickly run deep learning algorithms across multiple computers equipped with specialised chips called graphics processing units vastly improved the speed at which the team was able to do research and, ultimately, reach human parity.
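The article does not show any CNTK code; purely as a hypothetical illustration of what its Python API looks like, the sketch below defines and trains a tiny feed-forward classifier on random data. The feature dimension, layer sizes and data are invented and bear no resemblance to Microsoft's actual Switchboard acoustic models, which were far larger recurrent and convolutional networks trained on GPU clusters.

```python
# Hypothetical minimal sketch of the CNTK 2.x Python API (pip install cntk).
# All dimensions and data here are invented for illustration only.
import numpy as np
import cntk as C

feature_dim, num_classes = 40, 10            # e.g. 40 acoustic features per frame
feature = C.input_variable(feature_dim)
label = C.input_variable(num_classes)

# A tiny feed-forward network standing in for an acoustic model
model = C.layers.Sequential([
    C.layers.Dense(128, activation=C.relu),
    C.layers.Dense(num_classes)
])(feature)

loss = C.cross_entropy_with_softmax(model, label)
error = C.classification_error(model, label)
learner = C.sgd(model.parameters,
                C.learning_rate_schedule(0.1, C.UnitType.minibatch))
trainer = C.Trainer(model, (loss, error), [learner])

# Train on one random minibatch just to show the call pattern
x = np.random.rand(64, feature_dim).astype(np.float32)
y = np.eye(num_classes, dtype=np.float32)[np.random.randint(num_classes, size=64)]
trainer.train_minibatch({feature: x, label: y})
print("minibatch loss:", trainer.previous_minibatch_loss_average)
```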
Moving forward, the researchers are working on ways to make sure that speech recognition works well in more real-life settings.
That includes places where there is a lot of background noise, such as at a party or while driving on the highway.
In the longer term, researchers will focus on ways to teach computers not just to transcribe the acoustic signals that come out of people's mouths, but also to understand the words they are saying.
"The next frontier is to move from recognition to understanding," Zweig said.