Microsoft Acquires Semantic Machines to Make Spoken AI More Human-Like
Microsoft recently became the first to add "full-duplex" voice capability to a conversational AI system, letting users carry on a natural conversation with its chatbot XiaoIce and AI-powered assistant Cortana.

While Google has been working on a life-like version of its spoken Artificial Intelligence (AI) technology, Microsoft has joined the race by acquiring US-based AI developer Semantic Machines to bring its own voice technology closer to how humans speak. "With this acquisition, Microsoft plans to establish a conversational AI centre of excellence in Berkeley, California, to experiment and integrate 'natural language processing (NLP) technology' in its products like Cortana," David Ku, Vice President and Chief Technology Officer of AI and Research at Microsoft, wrote in a blog post.


"For rich and effective communication, intelligent assistants need to be able to have a natural dialogue instead of just responding to commands," said Ku.


"Full-duplex" is a technique for communicating in both directions simultaneously, much like a telephone call, with the AI-based technology conversing on one side. Semantic Machines' core product, its "conversation engine," derives its responses from natural voice or text input and then builds a self-updating learning framework for managing dialogue context and user goals.
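Semantic Machines has not published how its conversation engine works, but the idea of carrying dialogue context and user goals across turns, rather than treating each utterance as a one-shot command, can be illustrated with a toy sketch. Every class, rule, and response string below is hypothetical, chosen only to show the contrast the article describes:

```python
class ConversationEngine:
    """Toy dialogue-state tracker: accumulates context across turns
    instead of interpreting each utterance in isolation."""

    def __init__(self):
        self.context = {}  # slots filled over the course of the dialogue

    def handle(self, utterance: str) -> str:
        text = utterance.lower()
        # Infer the user's goal from the current turn, if stated.
        if "restaurant" in text:
            self.context["goal"] = "book_table"
        elif "salon" in text:
            self.context["goal"] = "book_appointment"
        # A bare number in a follow-up turn ("make it 4") is resolved
        # against the goal remembered from an earlier turn.
        for word in text.split():
            if word.isdigit():
                self.context["party_size"] = int(word)
        if self.context.get("goal") == "book_table" and "party_size" in self.context:
            return f"Booking a table for {self.context['party_size']}."
        if "goal" in self.context:
            return "For how many people?"
        return "How can I help?"


engine = ConversationEngine()
engine.handle("Book me a table at a restaurant")  # engine asks a follow-up question
engine.handle("Make it 4")                        # earlier context resolves the fragment
```

A command-only system would fail on the second turn, since "Make it 4" is meaningless without the context of the first.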

"Today's commercial natural language systems like Siri, Cortana, and Google Now only understand commands, not conversations," Semantic Machines said in a post. "With our conversational AI, we aim to develop technology that goes beyond understanding commands, to understanding conversations," the company added. Earlier in May, Google CEO Sundar Pichai introduced "Duplex" at Google I/O and demonstrated how the AI system could book an appointment at a salon and a table at a restaurant, with the Google Assistant sounding like a human.

It used Google DeepMind's "WaveNet" audio-generation technique and other advances in NLP to replicate human speech patterns.
