
A new lightweight CNN model for Automatic Speech Command Recognition on Microcontrollers

Abstract

Automatic Speech Command Recognition (ASCR, hereafter ASR) on IoT devices is gaining traction because of increasing interest in touch-free applications. This article introduces a new lightweight convolutional neural network (CNN) for ASR on microcontrollers. The proposed model is comparable to current state-of-the-art networks while using fewer than 63k parameters. It achieves an accuracy of 96.13% on the Google Speech Commands V2 dataset. A comparison with previously published models on the same dataset is also presented.

1. Introduction

Today, ASR is typically delivered through voice assistants such as Google Speech, Siri, and Alexa, which operate in a client-server mode because the underlying neural networks are computationally expensive. For devices without internet access, speech recognition therefore becomes non-viable, since the device itself cannot run such heavy networks. In this paper, we present a very lightweight neural network suitable for low-power devices such as microcontrollers.

The network should satisfy the following constraints to run on microcontrollers:

  1. Memory Footprint: a very small memory footprint, in the tens of kilobytes (a rough footprint estimate follows this list)
  2. Low Compute Power: a limited processing budget (MCPS) on cores clocked from a few tens of MHz to below 200 MHz
  3. Offline: all processing should be done locally, without cloud connectivity
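
As a rough illustration of the first constraint (the 8-bit weight storage assumed here is not stated in the article): a model with just under 63k parameters stored as 8-bit weights needs about 63,000 × 8 bits ≈ 504 kbit ≈ 63 KB of read-only storage, which is consistent with the roughly 490 Kbit model size reported for our model in Table 2.0 and explains why the parameter budget is kept this low.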

The remainder of this paper is organized as follows: Section 2 describes the model architecture, Section 3 presents the experimental results, and Section 4 concludes.

2. Model Architecture

The model is built with Keras using a TensorFlow backend. Each input audio file contains a single word, so the task can be treated as a classification problem. Each file is a single-channel (mono) WAV, sampled at 16,000 Hz, and serves as the input to the network. 40-band Mel-Frequency Cepstral Coefficient (MFCC) features are extracted from the audio sample and fed into a custom convolutional neural network (CNN) that produces the classification result. A minimal feature-extraction and model sketch is shown below.
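
As an illustration only (the article does not publish the exact IGN-CNN layer configuration), the sketch below extracts 40-band MFCC features with librosa and feeds them to a small Keras CNN classifier over the 12 GSC classes; the librosa front end, layer sizes, and file path are our assumptions, not the published architecture.

```python
# Sketch only: 40-band MFCC front end plus a small Keras CNN classifier.
# Layer sizes and the librosa front end are illustrative assumptions, not the
# published IGN-CNN configuration.
import numpy as np
import librosa
import tensorflow as tf

NUM_CLASSES = 12      # 10 keywords + _unknown_ + _silence_
SAMPLE_RATE = 16000   # all GSC clips are 1 s at 16 kHz
N_MFCC = 40           # 40 MFCC features per frame, as described in Section 2

def extract_mfcc(wav_path):
    """Load a mono wav, pad/trim to 1 s, and return its MFCCs as (time, 40, 1)."""
    audio, _ = librosa.load(wav_path, sr=SAMPLE_RATE, mono=True)
    audio = librosa.util.fix_length(audio, size=SAMPLE_RATE)
    mfcc = librosa.feature.mfcc(y=audio, sr=SAMPLE_RATE, n_mfcc=N_MFCC)
    return mfcc.T[..., np.newaxis]

def build_small_cnn(input_shape):
    """A deliberately small 2-D CNN over the MFCC 'image'."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(16, (3, 3), padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D((2, 2)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

features = extract_mfcc("speech_commands/yes/example.wav")  # hypothetical path
model = build_small_cnn(features.shape)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```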

       Fig 1.0 High-Level Model Architecture

3. Experimental Results

3.1 Model Integration

For all experiments, the GitHub repository [3] was used as the reference implementation. To keep the experiments uniform, all aspects of the repository were kept the same except for the substitution of our custom model.

3.2 Experimental Setup

For the experiments, the dataset used is the Google Speech Commands (GSC) 12-class set with the following keywords: ‘_unknown_’, ‘left’, ‘on’, ‘stop’, ‘right’, ‘off’, ‘down’, ‘up’, ‘no’, ‘go’, ‘yes’, ‘_silence_’ [1]. All keyword clips are sampled at 16 kHz and are 1 s long.

The GSC V2 comprises 36 folders, with the dataset split into train, validation, and test sets according to predefined percentages: 10% of the files form the test set, 10% the validation set, and the remaining 80% the training set. Keywords not in the list above are grouped into the unknown class. The composition of the train and test sets is shown in Table 1.0 below; a sketch of the deterministic split used by the reference code appears after the table.

Class      Train Count   Test Count
on         3086          396
right      3019          396
stop       3111          411
up         2948          425
down       3134          406
no         3130          405
go         3106          402
left       3037          412
yes        3228          419
off        2970          402
unknown    6154          816
silence    3077          408

Table 1.0: Train and test data counts per class
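
The article only states the 80/10/10 percentages; the sketch below shows the standard deterministic, filename-hash-based assignment used by the stock Google Speech Commands tooling, which we assume the reference code [3] follows.

```python
# Sketch of the standard deterministic GSC split (assumed, per the stock
# Speech Commands tooling): each wav is hashed to a stable train/validation/test bucket.
import hashlib
import os
import re

MAX_NUM_WAVS_PER_CLASS = 2 ** 27 - 1  # constant used by the stock GSC tooling

def which_set(filename, validation_pct=10.0, testing_pct=10.0):
    """Return 'training', 'validation', or 'testing' for a given wav filename."""
    base = os.path.basename(filename)
    speaker_id = re.sub(r"_nohash_.*$", "", base)  # keep all clips of one speaker together
    digest = hashlib.sha1(speaker_id.encode("utf-8")).hexdigest()
    pct = (int(digest, 16) % (MAX_NUM_WAVS_PER_CLASS + 1)) * (100.0 / MAX_NUM_WAVS_PER_CLASS)
    if pct < validation_pct:
        return "validation"
    if pct < validation_pct + testing_pct:
        return "testing"
    return "training"

print(which_set("speech_commands/yes/0a7c2a8d_nohash_0.wav"))  # hypothetical file
```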

The background noise class present in the Google Speech Commands dataset is not treated as a class during training; instead, it is mixed with the speech signals to create augmented data. The silence class is generated by multiplying a randomly chosen file by zero, and the number of silence examples is set to 10% of the total file count of a randomly chosen folder. All metrics and methods follow standard practice [1, 2]; a sketch of this augmentation step is shown below. Table 2.0, which follows, is compiled with reference to [1].
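
The sketch below illustrates this step as we understand it (the exact procedure is an assumption): silence examples are zeroed-out one-second clips, and background noise recordings are mixed into keyword clips at a low volume.

```python
# Sketch (assumed details): _silence_ generation and background-noise mixing
# for data augmentation on 1 s, 16 kHz clips.
import numpy as np

SAMPLE_RATE = 16000
CLIP_LEN = SAMPLE_RATE  # 1-second clips

def make_silence(clip):
    """A silence example: a randomly chosen clip multiplied by zero."""
    return np.zeros_like(clip)

def mix_background_noise(clip, noise, volume=0.1, rng=np.random.default_rng()):
    """Overlay a random slice of a _background_noise_ recording onto a keyword clip."""
    start = rng.integers(0, len(noise) - CLIP_LEN)
    noisy = clip + volume * noise[start:start + CLIP_LEN]
    return np.clip(noisy, -1.0, 1.0)
```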

Model                  Accuracy (%)   Model Size (Kbits)
DNN                    90.6           3576
CNN+strd               95.6           4232
CNN                    96.0           4848
GRU(S)                 96.3           4744
CRNN(S)                96.5           3736
SVDF                   96.9           2832
DSCNN                  96.9           3920
TinySpeech-A           94.3           127
TinySpeech-B           91.3           53
LMU1                   96.9           1683
LMU2                   95.9           361
LMU3                   95.0           105
LMU4                   92.7           49
IGN-CNN (our model)    96.13          490

Table 2.0: Accuracy and model size for different networks on the GSC V2 dataset

                 Fig 2.0 Scatterplot comparison of various networks: Accuracy vs Model size

4. Conclusion

A new, lightweight CNN-based model for ASR, optimized for embedded microcontroller devices, was developed. We benchmarked the model against comparable models on the Google Speech Commands V2 dataset; its accuracy and total model footprint are comparable to the prevalent state-of-the-art models. The model architecture has been deployed on multiple variants of low-cost microcontrollers from leading semiconductor manufacturers.
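
The article does not describe the deployment toolchain; as one plausible path (our assumption), the sketch below converts a trained Keras model into an 8-bit quantized TensorFlow Lite flatbuffer, the usual route to TFLite Micro on Cortex-M-class microcontrollers. The names model and calibration_features are placeholders.

```python
# Sketch (assumed deployment path): full-integer post-training quantization to a
# .tflite flatbuffer suitable for TFLite Micro. `model` and `calibration_features`
# are placeholders for the trained Keras model and a set of real MFCC inputs.
import numpy as np
import tensorflow as tf

def representative_data():
    for features in calibration_features:  # a few hundred real MFCC tensors
        yield [features[np.newaxis].astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("ign_cnn_int8.tflite", "wb") as f:
    f.write(converter.convert())
```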

5. References

  1. Peter Blouw, Gurshaant Malik, Benjamin Morcos, Aaron R. Voelker, and Chris Eliasmith, “Hardware Aware Training for Efficient Keyword Spotting on General Purpose and Specialized Hardware”, https://arxiv.org/pdf/2009.04465.pdf
  2. Oleg Rybakov, Natasha Kononenko, Niranjan Subrahmanya, Mirko Visontai, Stella Laurenzo, “Streaming keyword spotting on mobile devices”, https://arxiv.org/pdf/2005.06720.pdf
  3. Reference code, https://github.com/google-research/google-research/tree/master/kws_streaming
