Feasibility of Lightweight LLMs or Chatbots on PicoCalc

I’ll start by saying that I have about 6 years of Python/C++ experience and have built neural networks from the ground up in both. Although I have limited experience tinkering with the Pico series, I’m curious to know whether this is possible.

I’m exploring the idea of running an extremely lightweight language model or chatbot on the PicoCalc. Obviously, running a full LLM like GPT-2 is completely off the table due to memory and compute limits, but I’m considering two realistic alternatives: a hardcoded rule-based bot, and a Markov chain-based text generator. The rule-based bot would use predefined responses triggered by keyword or phrase matches (e.g., a simple finite state machine). It’s predictable, fast, and can run entirely in MicroPython, but it lacks flexibility or real language understanding.
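The rule-based approach could be sketched like this in plain (Micro)Python. All keywords and replies below are illustrative placeholders, not a real ruleset:

```python
# Minimal keyword-triggered responder: each rule is a set of trigger
# words plus a canned reply; the first rule that matches any word wins.
RULES = [
    ({"hello", "hi", "hey"}, "Hello! Ask me about the weather or the time."),
    ({"weather"}, "I can't see outside, but I hope it's sunny."),
    ({"bye", "goodbye"}, "Goodbye!"),
]
DEFAULT = "Sorry, I don't understand that yet."

def respond(text):
    words = set(text.lower().split())
    for keywords, reply in RULES:
        if words & keywords:  # any overlap between input and triggers
            return reply
    return DEFAULT
```

A fuller version could layer a finite state machine on top (e.g. track a "topic" state so follow-up questions are interpreted in context), but the core dispatch stays this simple.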

The other approach is a Markov chain response engine, trained offline on a small dataset and stored on the Pico’s SD card. It would use n-grams (like bigrams or trigrams) to generate word sequences based on statistical patterns. While it can’t understand context, it can create more natural-feeling and varied responses than the rule-based bot. This would require more memory and some clever SD streaming, but it’s still feasible in MicroPython with optimization. Curious to hear if others have tried similar tricks, or pushed language generation on microcontrollers in creative ways.
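For the Markov chain idea, a toy in-RAM bigram version might look like the following; an on-device build would train offline and stream the table from the SD card rather than hold it all in memory:

```python
import random

def train_bigrams(text):
    """Build a bigram table: word -> list of observed next words."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, max_words=10, seed=None):
    """Random-walk the table from a start word until a dead end."""
    rng = random.Random(seed)
    out = [start]
    word = start
    for _ in range(max_words - 1):
        nxt = model.get(word)
        if not nxt:
            break
        word = rng.choice(nxt)
        out.append(word)
    return " ".join(out)
```

Keeping repeated next-words in the list (rather than deduplicating) makes `random.choice` naturally weight transitions by frequency.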

What are some problems I might run into, aside from the hardware limitations?

3 Likes

I had an idea for something similar to make the picocalc-ulator more “autonomous” in problem solving. Probably in C/C++ for the best performance…

For example, one command suite to describe a context, then another to request data and run simulations. “transistor T1 connected to X and Y, power supply 1.2 V, what is the value of the resistance for Vout = Z, plot the value of the current as a function of… etc.”
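One way to picture such a command could be a tiny solver behind it. As a hypothetical example (the function name and the assumption of a simple voltage divider are mine, not from the post), solving for the bottom resistor given a target Vout:

```python
def divider_r2(vin, vout, r1):
    """Solve Vout = Vin * R2 / (R1 + R2) for R2 in a voltage divider."""
    if not 0 < vout < vin:
        raise ValueError("need 0 < Vout < Vin")
    return r1 * vout / (vin - vout)
```

A "context" command would set `vin` and the topology, and a "request" command would call solvers like this and plot the results.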

1 Like

You could probably run a 0.1b LLM on the Luckfox Lyra with quantization. Maybe with some clever training data generation, you could make it more or less behave believably for a narrow task.

On the pico, you would have to resort to rule-based dialog. Maybe ALICE-based…

2 Likes

Or you could fit another board with an NPU and RAM, like the AX630C. They don't take up too much room :D and you get between 3 and 12 TOPS depending on quantization, which is enough to run a 0.5B to 1.5B parameter model.

3 Likes

I’d never heard of the Lyra chip until I bought the PicoCalc. Does it work out of the box, or are there additional modifications I’d have to make to the calc?

I was looking into the Coral USB accelerator and the M.2 one, but as far as I know there’s no proper way to use them with the Pico?

The AX630C, like the one in the M5Stack LLM module, only needs 5 volts and serial RX/TX at 115200 baud; the LLM setup can be offloaded to a small ESP32-S3 Stamp.
Just a thought :smiley:

1 Like

No modifications are needed just to get it running with a tiny Linux distribution. That said, if you want sound, that requires an internal soldering job, and if you want networking, you'll need a supported USB Wi-Fi dongle and an M1.25 USB adapter cable. If you look for the “Luckfox Lyra on PicoCalc” thread in this forum, it will tell you everything you need to know.

2 Likes