Derived from the Micro speech example for TensorFlow Lite for Microcontrollers.
- toolchain (IDE):
- toolchain (alternative: command-line tools):
- Fast Models:
  - FVP model for Corstone-300 MPS3 with VSI (example for FVP with user Python script)
  - FVP model for Corstone-300 MPS3 (example for FVP)
- Python 3 (example for FVP with user Python script)
- additional CMSIS-Packs (not indexed and therefore not available via Pack Installer):
Project directory: ./Platform_FVP_Corstone_SSE-300_Ethos-U55/
The example has the following targets:
Example
: FVP with Python interface (VSI), with audio test data provided through a user Python script

- Runs on the special Corstone SSE-300 Ethos-U55 FVP with a Python interface providing the Virtual Streaming Interface (VSI).
- The model executable and binaries must be installed in order to run this example.
  Expected installation directory on Windows: `C:\Program Files\Arm\VHT\models\Win64_VC2017`
  If installed in a different directory, this needs to be reflected:
  - in the uVision project (Options for Target - Debug - Settings for Models Armv8-M Debugger) when running with MDK, or
  - in the script `run_example.cmd` when running standalone.
- Audio test data is provided by the Python script `./VSI/audio/python/arm_vsi0.py` from the WAVE file `test.wav`, which contains the keywords 'Yes' and 'No' alternating three times.
- Build the example with MDK using the uVision project `microspeech.uvprojx` (target `Example`) or with CMSIS-Build using the `microspeech.Example.cprj` project.
- Run the example with MDK, or standalone with the script `run_example.cmd`.
- When running the example, the audio data is processed and any detected keywords are output to the terminal:

      Heard yes (152) @1100ms
      Heard no (141) @5500ms
      Heard yes (147) @9100ms
      Heard no (148) @13600ms
      Heard yes (147) @17100ms
      Heard no (148) @21600ms
Example Test
: internal test for Example

Audio Provider Mock
: public FVP with audio test data embedded in code

- Runs on the public Corstone SSE-300 Ethos-U55 FVP.
- Audio test data is embedded in the test code and contains the keywords 'Yes' and 'No' alternating indefinitely.
- Build the example with MDK using the uVision project `microspeech.uvprojx` (target `Audio Provider Mock`) or with CMSIS-Build using the `microspeech.Audio_Provider_Mock.cprj` project.
- Run the example with MDK or standalone.
- When running the example, the audio data is processed and any detected keywords are output to the terminal:

      Heard silence (149) @400ms
      Heard yes (158) @1200ms
      Heard no (143) @5600ms
      Heard yes (149) @9100ms
      Heard no (142) @13600ms
      Heard yes (149) @17100ms
      Heard no (142) @21600ms
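The mock provider's endlessly alternating test data can be pictured as cycling over the embedded clips. A minimal Python sketch (the actual provider is C/C++ test code, and the names here are illustrative):

```python
from itertools import cycle, islice

def mock_keyword_stream(clips=("yes", "no")):
    """Cycle through the embedded test clips indefinitely,
    mirroring how the mocked audio provider alternates 'Yes'/'No'.
    Illustrative sketch; not the actual test code."""
    return cycle(clips)

# take the first five keywords of the endless stream
first_five = list(islice(mock_keyword_stream(), 5))
```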
Audio Provider Mock Test
: internal test for Audio Provider Mock
Project directory: ./Platform_MIMXRT1064-EVK/
This example shows how to run a voice recognition model that can recognize two keywords, "yes" and "no", from the board's built-in microphone. Recognized keywords are written to the terminal.
- Build the example with MDK using the uVision project `microspeech.uvprojx` or with CMSIS-Build using the `microspeech.MIMXRT1064-EVK.cprj` project.
- Program and run the example with MDK, or use drag-and-drop programming through the DAPLink drive.
- Open the DAPLink virtual COM port in a terminal (baud rate = 115200) and monitor recognized keywords.
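When capturing the terminal output, the detection lines follow a simple pattern and can be post-processed. The helper below is an illustrative convenience, not part of the example:

```python
import re

# matches lines such as "Heard yes (152) @1100ms"
DETECTION = re.compile(r"Heard (\w+) \((\d+)\) @(\d+)ms")

def parse_detections(log):
    """Return (keyword, score, time_ms) tuples for each detection in a log."""
    return [(m.group(1), int(m.group(2)), int(m.group(3)))
            for m in DETECTION.finditer(log)]
```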