QtSpeechRecognition API for Qt Using Pocketsphinx

October 18th, 2015

It is really great to see the wide variety of APIs arising around Pocketsphinx. One recent addition is the QtSpeechRecognition API, implemented by Code-Q for assistive applications. This undertaking is quite ambitious; the main features include:

  • Speech recognition engines are loaded as plug-ins.
  • Engine is controlled asynchronously, causing only minimal load to the
    application thread.
  • Built-in task queue makes plug-in development easier and forces
    unified behavior between engine integrations.
  • Engine integration handles the audio recording, making it easy to use
    from the application.
  • Application can create multiple grammars and switch between them.
  • Setting mute temporarily disables speech recognition, allowing
    co-operation with audio output (speech prompts or audio cues).
  • Includes integration to the PocketSphinx engine (latest codebase) as a
    reference.

You can discuss features and find more details in the corresponding thread on the Qt mailing list. The sources are under review in the qtspeech project, branch wip/speech-recognition.

The implementation already includes some pretty interesting features; for example, it intelligently saves and restores the CMN state for more robust recognition. Let us see how it goes.

New language model binary format

July 2nd, 2015

Expectations for the vocabulary size in LVCSR have grown dramatically in recent years. A vocabulary of 150 thousand words is a must for modern speech recognizers, while 10 years ago most systems operated with only 60 thousand words. In morphologically rich languages like Russian a vocabulary of this size is critical for a good OOV rate, but even in English it is important because of the variety of topics one can expect as input. With such a large vocabulary, n-gram language models must store millions of n-grams and their weights, which requires a memory-efficient data structure that allows fast queries. N-gram language models are also widely used in NLP and machine translation, so this topic has received a lot of attention in recent years. Several language modeling toolkits like SRILM, IRSTLM, MITLM and BerkeleyLM implement special data structures to hold the language model.

CMUSphinx decoders use their own n-gram model data structure that supports files in the ARPA and DMP formats. While it has some fancy techniques like trie organization of n-grams, simple weight quantization and sorted n-gram arrays, it has a serious shortcoming: the word ID is limited to the uint16 type, so the maximum vocabulary is 65k words. Simply widening the ID type would seriously increase the size of currently used language models. Moreover, the current implementation is limited to a maximum n-gram order of 3. So it was decided to implement a new state-of-the-art data structure. The KenLM reverse trie data structure was selected as the base for the CMUSphinx implementation. "Reverse" means that the last word of an n-gram is looked up first.

For example, when the trigram "is one of" is queried, the last word "of" is looked up first at the unigram level, then "one" among its successors, and finally "is". Each node contains a "next pointer" to the beginning of its successor list, and a separate array is maintained for each n-gram order. Each node also contains quantized weights: a probability and a backoff. Nodes are stored in a bit array, i.e. the minimum number of bits required to store next pointers and word IDs is used.

These ideas were carefully implemented in both sphinxbase and sphinx4, and now one can use language models with an unlimited vocabulary size, with improved memory consumption and query speed. It is still possible to use ARPA and DMP models, but to improve loading time, convert your model into the trie binary format using sphinx_lm_convert:

sphinx_lm_convert -i your_model.lm -o your_model.lm.bin
sphinx_lm_convert -i your_model.lm.dmp -o your_model.lm.bin

It is worth mentioning that while it is still possible to read the DMP format, the ability to generate DMP files has been removed.

A new generic 70k language model has also landed in trunk. Check out its performance and report how it works on your tasks. It is expected to somewhat boost recognition accuracy by decreasing the OOV rate; for example, in a lecture transcription task the OOV rate decreased by a factor of 2.

We are looking forward to your feedback: opinions, suggestions, bug reports. Say thanks to Vyacheslav Klimkov for the amazing implementation!

Virtual Assistants in Games

March 28th, 2015

There is a lot of discussion today about where the very hot virtual assistant market will head. There are assistants to ask about the weather, assistants to ask about sports games, and running assistants. Home assistants help you turn off the TV and monitor the temperature. None of those seems too attractive. "Okay, Google, why doesn't Siri talk to me anymore?"

One interesting application of speech recognition technology is games. It is much more fun to run through dark dungeons casting light with something like "Ekto Lumeh" and calling for dragons.

Unlike real-world users, players in games find it much more natural to speak with virtual characters, and they even forgive some recognition mistakes. So it is definitely something that could become popular in the near future. In that sense it is interesting to look at In Verbis Virtus, a game created by the talented Italian studio Indomitus Games.

It is fun that the game is implemented using CMUSphinx; you can read about the implementation details here.