Robust Recognition of Cellular Telephone Speech by Adaptive Vector Quantization

Title: Robust Recognition of Cellular Telephone Speech by Adaptive Vector Quantization
Authors:
Rajasekaran, Raja
Sonmez, Kemal M.
Baras, John S.

Performance degradation resulting from acoustical environment mismatch remains an important practical problem in speech recognition. The problem is especially significant in applications over telecommunication channels, particularly with the growing use of personal communications systems such as cellular phones, which invariably present challenging acoustical conditions. In this work, we introduce a vector quantization (VQ) based compensation technique that both makes use of a priori information about likely acoustical environments and adapts to the test environment to improve recognition. The technique is progressive and requires neither simultaneously recorded speech from the training and testing environments nor EM-type batch iterations. Instead of relying on simultaneously recorded data, the integrity of the updated VQ codebooks with respect to acoustical classes is maintained by endowing the codebooks with the topology of a reference environment. We report results on the McCaw Cellular Corpus, where the technique reduces the word error rate for continuous ten-digit recognition of cellular hands-free microphone speech with land-line-trained models from 23.8% to 13.6%, and the speaker-dependent voice-calling sentence error rate from 16.5% to 10.6%.
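The core idea of a progressive, codebook-preserving adaptation can be sketched as follows. This is an illustrative toy only, not the paper's algorithm: it uses a simple sequential nearest-centroid update (the function name `adapt_codebook`, the learning rate, and the Euclidean distance choice are all assumptions for illustration). Each incoming test-environment feature frame nudges only its nearest codeword, so codeword identities with respect to acoustical classes are preserved while the codebook gradually tracks the new environment, with no batch EM pass and no stereo (simultaneously recorded) data.

```python
import numpy as np

def adapt_codebook(codebook, frames, lr=0.05):
    """Incrementally shift VQ centroids toward test-environment features.

    Illustrative sketch only (not the paper's method): each frame moves its
    nearest codeword a small step toward itself, sequential-k-means style,
    so codeword identities are kept while the codebook adapts progressively.
    """
    cb = codebook.astype(float).copy()
    for x in frames:
        # find the nearest codeword under Euclidean distance
        i = np.argmin(np.linalg.norm(cb - x, axis=1))
        # progressive update: move only that codeword toward the new frame
        cb[i] += lr * (x - cb[i])
    return cb
```

Because each frame touches only one codeword, codewords far from the observed test data stay in place, which loosely mirrors the paper's concern with maintaining codebook integrity across acoustical classes during adaptation.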