Abstract
In my talk I will present some initial results on applying speech
recognition technologies in the context of the OptiFox project, which
is aimed at developing an intelligent, self-learning system for tuning
cochlear implants. In particular, I will present the same approach to
the problem as I applied about 15 years ago, when building a system
that could `understand' cows' vocalizations (moos). Surprisingly, so
far this simple approach seems more promising than the standard
approach based on spectral or cepstral signal analysis, dynamic time
warping, hidden Markov models, and language models. Some cognitive
aspects of the problem (the role of language, auditory memory, and
context) will also be discussed.
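
For readers unfamiliar with the standard pipeline mentioned above, a
minimal sketch of its front end is given below: a real cepstrum computed
per signal frame and a basic dynamic time warping distance between two
feature sequences. This is only an illustration of the textbook
technique, assuming a NumPy-based implementation; the function names,
sample rate, and frame size are illustrative choices of mine and are not
part of the OptiFox system or of the talk itself.

    import numpy as np

    def cepstrum(frame, n_coeffs=13):
        """Real cepstrum of one frame: inverse FFT of the log magnitude spectrum."""
        spectrum = np.abs(np.fft.rfft(frame)) + 1e-10  # small offset avoids log(0)
        return np.fft.irfft(np.log(spectrum))[:n_coeffs]

    def dtw_distance(a, b):
        """Classic O(len(a)*len(b)) dynamic time warping between two feature sequences."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame-to-frame distance
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    # Toy usage: two random "utterances" cut into 20 ms frames at 16 kHz.
    rng = np.random.default_rng(0)
    sr, frame_len = 16000, 320
    sig1 = rng.standard_normal(sr)
    sig2 = rng.standard_normal(int(1.2 * sr))
    feats1 = np.array([cepstrum(sig1[i:i + frame_len])
                       for i in range(0, len(sig1) - frame_len, frame_len)])
    feats2 = np.array([cepstrum(sig2[i:i + frame_len])
                       for i in range(0, len(sig2) - frame_len, frame_len)])
    print(dtw_distance(feats1, feats2))

In a real recognizer the cepstral features would typically be replaced by
mel-frequency cepstral coefficients and the template matching by hidden
Markov models; the sketch only shows the shape of the standard approach
that the talk contrasts with the simpler one.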