Intel is working on its own version of Siri, but it's steering clear of the cloud, a decision it believes will make the product much better.
Voice recognition itself has come a long way from the days of using something like IBM's ViaVoice; we can now whip out our little pocket computers and they'll purchase a pile of video games for us if we tell them to. If you bought into the Xbox One, you can sit on your couch, both hands wrist-deep in Cheetos dust, and tell your console to turn on tonight's NBA game or load up season three of Archer on Netflix without turning your gamepad orange. When voice recognition works, it's amazingly practical. However, when it doesn't work, as it often doesn't, you're left screaming at an inanimate object, wasting more time than it would've taken to just push the buttons yourself. Siri works as well as consumer-level voice control ever has. Intel, however, realizes that Siri could work better, and the fix doesn't involve expanding the recognition software's vocabulary.
With Jarvis, a new headset under development that might be named after Iron Man's butler, Intel plans to remove the cloud from the equation, cutting down the time it takes for the voice recognition software to make sense of your garbled commands. Major voice recognition platforms work by taking a compressed recording of your voice command and shipping it off to a central server. The computers there translate the recording into text or an action, then send the result back to your device. Obviously, this can be slow depending on outside factors that aren't entirely controllable, such as the speed of the current data connection. By doing all of that processing on the device itself, Intel can cut out the time it would take to ship the voice command off to those servers and wait for a reply.
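To make the difference concrete, here is a minimal sketch in Python of the two pipelines. The function names, timings, and decoded text are all made up for illustration; none of this comes from Intel. The point is simply that the cloud path pays for two network hops that the local path never makes.

```python
import time

# Hypothetical stand-ins for the real work; names and delays are illustrative only.

def record_and_compress():
    """Capture the spoken command and compress it for transport."""
    return b"compressed audio bytes"

def upload(audio, network_delay_s=0.3):
    """Ship the compressed audio to a central server; cost depends on the connection."""
    time.sleep(network_delay_s)
    return audio

def server_decode(audio):
    """Server-side speech-to-text on powerful remote hardware."""
    return "turn on tonight's nba game"

def download(text, network_delay_s=0.3):
    """Return the decoded command to the device."""
    time.sleep(network_delay_s)
    return text

def local_decode(audio):
    """On-device speech-to-text: more modest hardware, but zero network hops."""
    return "turn on tonight's nba game"

def cloud_pipeline():
    audio = record_and_compress()
    return download(server_decode(upload(audio)))

def local_pipeline():
    return local_decode(record_and_compress())

# Compare how long each approach takes for the same command.
for name, pipeline in (("cloud", cloud_pipeline), ("local", local_pipeline)):
    start = time.perf_counter()
    command = pipeline()
    print(f"{name}: {command!r} in {time.perf_counter() - start:.2f}s")
```

On a slow or congested connection the upload and download hops dominate the total time, which is exactly the latency Intel is trying to remove.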
Partnered with an unnamed third party, Intel has created a wearable device that processes voice commands without shipping anything off to those servers. At the moment, Jarvis takes the form of a headset that sits inside your ear and connects wirelessly to your phone. Because everything happens locally, Jarvis does the unthinkable: it works even when there is no data connection available, something everyone who commutes on a subway dreams of.
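The offline claim falls straight out of the design. Here is another small sketch, again with hypothetical stand-ins rather than Intel's actual software: the cloud-dependent recognizer simply fails when there's no connection, while the local one doesn't care.

```python
class NoConnectionError(Exception):
    """Raised when the device has no data connection."""

def cloud_recognize(audio, connected):
    # The cloud path cannot work at all without a data connection.
    if not connected:
        raise NoConnectionError("can't reach the recognition servers")
    return "load season three of archer"

def local_recognize(audio, connected):
    # The local path never touches the network, so connectivity is irrelevant.
    return "load season three of archer"

audio = b"compressed audio bytes"
for connected in (True, False):
    for name, recognize in (("cloud", cloud_recognize), ("local", local_recognize)):
        try:
            print(f"connected={connected}, {name}: {recognize(audio, connected)}")
        except NoConnectionError as err:
            print(f"connected={connected}, {name}: failed ({err})")
```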
The more tech-hardened crowd will tell you that even when voice recognition works flawlessly, it still isn't as convenient as pushing a few buttons with your fingers. That's because most voice recognition doesn't respond as quickly as a button press, even though sound travels faster than your finger does. The goal, then, is for voice recognition to act the instant you finish speaking, and that responsiveness is what Intel hopes to deliver with its local solution.
For now, Intel is working on selling the tech to mobile phone manufacturers, which is where the technology fits best anyway, since the hardware in a brand-new laptop is already powerful enough to handle local recognition on its own. Of course, once the speed issue is settled, developers still have to create software that never misunderstands what we say and that can handle colloquial phrasing.
Source: extremetech.com