As co-organizer with AVIOS, I prepared the program for the Mobile Voice Conference, held in San Francisco, March 12-14. In its third year, the conference gained momentum from the launch of Apple’s Siri, which was mentioned frequently in talks. The conference theme covered developments in the use of speech recognition and other speech technologies on mobile phones, where the small form factor is motivating increasing use of voice interaction. It also addressed the implications of the growing acceptance of speech technology in other areas (e.g., customer service), driven in part by its acceptance on mobile devices.
It is impossible to summarize a full conference briefly, so this note conveys my impressions of its key messages. Talk titles and speaker names are still available at www.mobilevoiceconference.com.
While the speech recognition in Apple’s Siri voice assistant has received the most attention, a key innovation in Siri is that it handles a user request as completely as possible. Siri shortcuts many navigation and data-entry steps through tight integration with apps on the iPhone and with selected websites. Apple has the most elegant and integrated implementation of this approach, in part because it ties Siri to a specific hardware platform and to applications shipped with that platform, but other vendors such as Microsoft, Google, Nuance, and Vlingo offer “voice assistant” models that move in the same direction. If a text option were available (typing what you would otherwise say when speaking isn’t an option), the assistant model could become the dominant user-interface style, at least on mobile devices. (See “Where Apple missed the mark with Siri.”)
The evolution to an assistant model is inevitable, and it could revolutionize mobile marketing and search. The change isn’t incremental; it’s a fundamental shift in control of user access to information and resources.