Results and Discussion


Currently, I have "deployed" the model by loading it in a Jupyter notebook and analyzing data streaming row by row from the serial port. While this does work, I believe this setup introduces a slight delay. The model itself runs quite fast on my computer, so most of the delay likely comes from rendering the results through Python. In the current setup, data is streamed from the serial port, a scaler is applied, the model analyzes the resulting data and returns its predicted category, and then, to display the information, each result is printed line by line in the Jupyter notebook in the browser. That last step could be significantly improved by removing Jupyter from the loop: its GUI is nice for writing documented code, but it adds overhead that a plain script would avoid.
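As a rough illustration, the loop described above can be sketched like this. This is a minimal, hypothetical version, not my exact notebook code: the port name, baud rate, feature count, and the scaler mean/scale values are all placeholders, and `model` stands in for whatever loaded model object is doing the classification.

```python
def parse_row(raw: bytes, n_features: int) -> list:
    """Decode one comma-separated line read from the serial port."""
    values = [float(v) for v in raw.decode("ascii").strip().split(",")]
    if len(values) != n_features:
        raise ValueError(f"expected {n_features} values, got {len(values)}")
    return values


def apply_scaler(row, mean, scale):
    """Replicate a StandardScaler transform for a single row:
    (x - mean) / scale, element by element."""
    return [(x - m) / s for x, m, s in zip(row, mean, scale)]


def stream_predictions(model, port="/dev/ttyACM0", baud=115200):
    """Read rows from serial, scale them, and print the model's
    prediction for each one. Port, baud, and scaler values are
    placeholders -- substitute the ones saved from training."""
    import serial  # pyserial

    mean = [0.0, 0.0, 0.0]   # placeholder: fitted scaler mean
    scale = [1.0, 1.0, 1.0]  # placeholder: fitted scaler scale
    with serial.Serial(port, baud, timeout=1) as ser:
        for raw in ser:
            features = apply_scaler(parse_row(raw, 3), mean, scale)
            label = model.predict([features])
            print(label)  # the notebook print is the suspected slow step
```

Moving this loop into a plain Python script (or logging to a file instead of printing) would test whether the notebook's browser rendering really is the bottleneck.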

Lessons Learned

  • Microcontroller selection: While I do like the newer chip (the RP2040), the existing TensorFlow Lite examples do not yet support that platform, and I couldn't find good enough reference documentation (or enough free time) to work out porting the libraries from the SAMD21 to an alternate chip. (Full disclosure: the documentation does say it "should be compatible" with the RP2040... but that has not been my experience so far. With more work I am sure I can resolve it 😌)