Session 1: Understanding Memory Behavior in MicroPython
Synopsis
Explores heap usage, garbage collection, fragmentation risks, and why memory discipline is essential for long-running asynchronous systems.
Session Content
Duration: ~45 minutes
Audience: Python developers with basic programming knowledge
Platform: Raspberry Pi Pico 2 W
IDE: Thonny
Language: MicroPython
Session Goals
By the end of this session, learners will be able to:
- Explain how memory is managed on the Raspberry Pi Pico 2 W in MicroPython
- Identify common causes of memory pressure and fragmentation
- Use MicroPython tools to inspect available memory
- Write memory-efficient code for embedded applications
- Apply practical techniques to reduce RAM usage in sensor and IoT programs
Prerequisites
- Raspberry Pi Pico 2 W with MicroPython installed
- USB cable
- Thonny IDE installed on a computer
- Basic Python syntax knowledge
- A few simple files on the Pico, if available
Development Environment Setup
1. Install and open Thonny
- Download and install Thonny
- Open Thonny and go to Tools > Options > Interpreter
- Select MicroPython (Raspberry Pi Pico)
2. Connect the Pico 2 W
- Plug the Pico 2 W into your computer via USB
- If prompted, select the correct port
- Confirm that the shell shows the MicroPython REPL prompt:
>>>
3. Verify MicroPython is running
Enter:
import sys
print(sys.implementation)
Expected output:
(name='micropython', version=(1, 24, 0), ...)
1. Theory: How Memory Works in MicroPython
MicroPython on embedded devices
MicroPython runs on microcontrollers with limited RAM compared to desktop computers. On the Pico 2 W, you must be careful about:
- Total available RAM
- Heap usage for Python objects
- Stack usage for function calls and temporary values
- Fragmentation caused by repeated allocation and deallocation
Key concepts
Heap
The heap is the memory area used for Python objects such as:
- strings
- lists
- dictionaries
- class instances
- bytearrays
- dynamically created objects
Stack
The stack is used for:
- function call frames
- local temporary data
- recursion-related overhead
Garbage collection
MicroPython uses garbage collection to free memory from objects that are no longer referenced.
You can trigger it manually:
import gc
gc.collect()
Why memory behavior matters
Embedded programs often run for long periods. Poor memory behavior can lead to:
- slow performance
- allocation failures
- crashes
- unstable sensor/communication loops
2. Inspecting Memory in MicroPython
Check free memory
Use gc.mem_free() to see how much heap memory is currently available.
import gc
gc.collect()
print("Free memory:", gc.mem_free(), "bytes")
print("Allocated memory:", gc.mem_alloc(), "bytes")
Example output:
Free memory: 183456 bytes
Allocated memory: 28224 bytes
What these values mean
- Free memory: available heap not currently used
- Allocated memory: currently in use by Python objects
3. Memory-Saving Best Practices
Prefer bytearray for raw data
If you need buffers for hardware or network operations, use bytearray rather than building many small Python objects.
buffer = bytearray(32)
print(buffer)
Avoid repeated string concatenation in loops
This creates many temporary objects.
Instead of:

text = ""
for i in range(10):
    text += str(i) + ","

Prefer:

parts = []
for i in range(10):
    parts.append(str(i))
text = ",".join(parts)
Reuse objects where possible
If a buffer or list can be reused, avoid recreating it each loop.
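A minimal sketch of this idea: a scratch buffer allocated once before the loop and refilled in place on every pass, so no new object is created per iteration. The names `scratch` and `process_pass` are illustrative, not MicroPython APIs.

```python
# Reuse one preallocated buffer instead of allocating a new one per loop pass.
# scratch and process_pass are illustrative names for this sketch.

scratch = bytearray(16)  # allocated once, before the loop

def process_pass(pass_number, buf):
    """Refill the same buffer in place with simulated readings."""
    for i in range(len(buf)):
        buf[i] = (pass_number + i) % 256
    return sum(buf)

totals = []
for n in range(3):
    totals.append(process_pass(n, scratch))  # no new buffer created here

print(totals)  # → [120, 136, 152]
```

Each pass reuses the same 16-byte buffer, so heap usage stays flat no matter how many passes run.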
Delete references when no longer needed
When large objects are no longer needed:
del my_big_list
import gc
gc.collect()
Use compact data structures
- Use tuples for fixed collections
- Use lists only when you need mutability
- Avoid large dictionaries unless necessary
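One way to apply these points, sketched with invented names: keep fixed configuration in a tuple (it never changes after startup) and keep mutable state in one small list that is updated in place.

```python
# A fixed configuration that never changes after startup: a tuple fits well.
# Field order stands in for dict keys; all names and values are illustrative.
SENSOR_CONFIG = ("bme280", 0x76, 1000)   # (name, i2c_address, poll_ms)

name, addr, poll_ms = SENSOR_CONFIG      # unpack by position

# Mutable running state, kept small: one short list reused in place.
latest = [0, 0, 0]

def record(temp, hum, pressure):
    # Update the existing list rather than building a new one each reading.
    latest[0], latest[1], latest[2] = temp, hum, pressure

record(21, 45, 1012)
print(name, hex(addr), latest)
```

The tuple costs less than a dict with string keys, and updating `latest` in place avoids a fresh allocation per reading.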
4. Hands-On Exercise 1: Measuring Memory Before and After Allocation
Objective
Observe how memory changes as Python objects are created and deleted.
Code
Create a new file in Thonny and upload/run it on the Pico:
# memory_probe.py
# Demonstrates memory usage changes in MicroPython
import gc

def show_memory(label):
    """Print current heap status with a label."""
    gc.collect()
    print(label)
    print("  Free :", gc.mem_free(), "bytes")
    print("  Used :", gc.mem_alloc(), "bytes")
    print()

show_memory("Initial memory state")

# Create several objects
numbers = list(range(1000))
texts = ["sensor_" + str(i) for i in range(100)]
show_memory("After creating a list and text data")

# Remove references
del numbers
del texts
show_memory("After deleting objects and collecting garbage")
Expected output
Initial memory state
Free : 183456 bytes
Used : 28224 bytes
After creating a list and text data
Free : 171200 bytes
Used : 40480 bytes
After deleting objects and collecting garbage
Free : 183440 bytes
Used : 28240 bytes
Discussion points
- Why does free memory drop after creating objects?
- Why might memory not return exactly to the initial value?
- What does garbage collection do?
5. Hands-On Exercise 2: Comparing Memory-Friendly and Memory-Heavy Code
Objective
Compare two approaches to building data strings.
Code
# compare_string_building.py
# Shows the memory impact of string concatenation vs join()
import gc

def show_memory(label):
    gc.collect()
    print(label, "free =", gc.mem_free(), "used =", gc.mem_alloc())

def build_with_concat():
    text = ""
    for i in range(200):
        text += str(i) + ","
    return text

def build_with_join():
    parts = []
    for i in range(200):
        parts.append(str(i))
    return ",".join(parts)

show_memory("Before concat")
a = build_with_concat()
show_memory("After concat")
del a
gc.collect()
show_memory("After deleting concat result")

b = build_with_join()
show_memory("After join")
del b
gc.collect()
show_memory("After deleting join result")
Example output
Before concat free = 183000 used = 28680
After concat free = 178500 used = 33180
After deleting concat result free = 183000 used = 28680
After join free = 180200 used = 31480
After deleting join result free = 183000 used = 28680
Key takeaway
Building large strings repeatedly can create many temporary objects. In MicroPython, this can increase memory pressure and fragmentation.
6. Hands-On Exercise 3: Simulating a Sensor Buffer Efficiently
Objective
Use a preallocated buffer to store values efficiently.
Code
# buffer_demo.py
# Preallocate a buffer and reuse it
import gc

def show_memory(label):
    gc.collect()
    print(label, "free =", gc.mem_free(), "used =", gc.mem_alloc())

# Preallocate space for 64 sensor readings
readings = bytearray(64)

# Fill buffer with simulated data
for i in range(len(readings)):
    readings[i] = i % 256

show_memory("After creating and filling bytearray")
print("First 10 bytes:", list(readings[:10]))

del readings
gc.collect()
show_memory("After deleting bytearray")
Example output
After creating and filling bytearray free = 182800 used = 28880
First 10 bytes: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
After deleting bytearray free = 183456 used = 28224
Why this matters
A bytearray is ideal for:
- sensor buffers
- I2C data exchange
- UART packets
- network payloads
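Related to this, a memoryview over a bytearray lets different parts of the code work on slices of one buffer without copying. A brief sketch; the 2-byte-header-plus-payload packet layout here is invented for illustration:

```python
# Slice one preallocated packet buffer without copying, using memoryview.
# The "2-byte header + payload" layout is invented for this example.
packet = bytearray(8)
view = memoryview(packet)

header = view[:2]      # no copy: both slices share packet's storage
payload = view[2:]

header[0] = 0x01       # writing through a view updates packet itself
header[1] = len(payload)
for i in range(len(payload)):
    payload[i] = i

print(list(packet))    # → [1, 6, 0, 1, 2, 3, 4, 5]
```

Plain slicing (`packet[2:]`) would allocate a new bytes object each time; the memoryview slices allocate nothing beyond the view itself.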
7. Practical Guidance for Writing Memory-Conscious MicroPython Code
Do
- Collect garbage before measuring memory
- Reuse buffers and objects
- Prefer fixed-size data structures
- Keep long-running loops lightweight
- Avoid unnecessary temporary values
Don’t
- Build huge strings in tight loops
- Create many short-lived objects repeatedly
- Keep references to data you no longer need
- Use recursion for deeply nested logic
- Store large logs in RAM
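On the recursion point above: deep call chains consume stack space the Pico cannot spare, while an explicit work list bounds that cost on the heap instead. A hedged sketch; the nested-dict walk is just an example shape:

```python
# Walk a nested structure iteratively instead of recursively, so depth
# costs entries in a small heap list rather than call-stack frames.
# The nested dict below is only an example shape.
tree = {"a": 1, "b": {"c": 2, "d": {"e": 3}}}

def sum_leaves(node):
    total = 0
    pending = [node]              # explicit work list replaces recursion
    while pending:
        current = pending.pop()
        if isinstance(current, dict):
            pending.extend(current.values())
        else:
            total += current
    return total

print(sum_leaves(tree))           # → 6
```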
8. Common Memory Pitfalls in Pico Projects
Sensor logging
Problem: storing every reading in a list forever
Better:
- keep only recent values
- stream data out over serial or Wi-Fi
- use a ring buffer
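The ring-buffer idea can be sketched as a fixed-size bytearray with a wrapping write index; the class and method names here are illustrative:

```python
# A minimal ring buffer over a preallocated bytearray: old readings are
# overwritten once capacity is reached, so memory use stays constant no
# matter how long the logger runs. Names are illustrative.
class RingBuffer:
    def __init__(self, size):
        self.data = bytearray(size)   # allocated once
        self.size = size
        self.index = 0                # next write position
        self.count = 0                # valid entries (<= size)

    def append(self, value):
        self.data[self.index] = value % 256
        self.index = (self.index + 1) % self.size
        if self.count < self.size:
            self.count += 1

    def latest(self, n):
        """Return the last n values, oldest first."""
        n = min(n, self.count)
        start = (self.index - n) % self.size
        return [self.data[(start + i) % self.size] for i in range(n)]

rb = RingBuffer(4)
for reading in [10, 20, 30, 40, 50]:  # the fifth write overwrites the first
    rb.append(reading)
print(rb.latest(4))                    # → [20, 30, 40, 50]
```

After the initial allocation, `append` never allocates, which is exactly the behavior a long-running sensor loop needs.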
Wi-Fi and MQTT payloads
Problem: repeated string construction for JSON
Better:
- use small payloads
- reuse dicts if possible
- build compact messages carefully
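One way to follow the dict-reuse advice, sketched with json.dumps (MicroPython ships a compatible json module; the topic fields and values below are invented):

```python
import json

# Reuse one payload dict across publishes instead of building a new dict
# (and many temporary strings) for every message.
# Field names and values are invented for illustration.
payload = {"t": 0, "h": 0}

def build_message(temp, hum):
    payload["t"] = temp         # overwrite fields in place
    payload["h"] = hum
    return json.dumps(payload)  # one compact string per message

msg = build_message(21, 45)
print(msg)
```

Only the final JSON string is allocated per message; the dict and its keys persist across calls.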
OLED/UI updates
Problem: recreating text on every refresh
Better:
- update only changed parts
- keep display buffers small
9. Hands-On Challenge: Memory-Aware Status Logger
Objective
Create a small program that periodically reports free memory without creating unnecessary objects.
Code
# status_logger.py
# Periodically prints memory status in a memory-conscious way
import gc
import time

def report_status():
    gc.collect()
    free_mem = gc.mem_free()
    used_mem = gc.mem_alloc()
    print("Memory status -> free:", free_mem, "used:", used_mem)

# Run 5 status checks with a pause
for _ in range(5):
    report_status()
    time.sleep(2)
Example output
Memory status -> free: 183120 used: 28560
Memory status -> free: 183120 used: 28560
Memory status -> free: 183120 used: 28560
Memory status -> free: 183120 used: 28560
Memory status -> free: 183120 used: 28560
Extension ideas
- Print a warning if free memory drops below a threshold
- Add a counter for loop iterations
- Store only the latest memory reading instead of a growing history
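A sketch of the first extension idea, a low-memory warning threshold. Note that gc.mem_free() exists only in MicroPython, so this sketch falls back to a placeholder value when exercised on desktop Python, and the 20000-byte threshold is an arbitrary example:

```python
import gc

LOW_MEMORY_THRESHOLD = 20000  # bytes; arbitrary example value

def free_heap():
    # gc.mem_free() is MicroPython-only; return a placeholder elsewhere
    # so the logic can also be tested on desktop Python.
    gc.collect()
    return gc.mem_free() if hasattr(gc, "mem_free") else 100000

def check_memory():
    free = free_heap()
    if free < LOW_MEMORY_THRESHOLD:
        print("WARNING: low memory, free =", free)
        return False
    print("Memory OK, free =", free)
    return True

status_ok = check_memory()
```

Returning a boolean lets the main loop react (for example, by dropping non-essential work) instead of only printing.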
10. Wrap-Up and Key Takeaways
Summary
- MicroPython runs in a constrained memory environment
- The heap holds Python objects and can fragment over time
- Garbage collection helps reclaim unused memory
- Efficient coding practices are essential for reliable Pico 2 W applications
- bytearray, object reuse, and careful string handling help conserve RAM
Reflection questions
- Which of your Python habits are less suitable on a microcontroller?
- Where in an IoT project would memory use grow unexpectedly?
- How can you design code to minimize allocation during runtime?
11. Suggested Homework
- Write a program that prints memory usage every 10 seconds.
- Modify it to keep a fixed-size history of the last 10 readings.
- Compare memory use between:
- a list of strings
- a list of integers
- a bytearray
- Record your observations in a short table.
12. Reference Links
- MicroPython Wiki: https://github.com/micropython/micropython/wiki
- MicroPython Quick Reference: https://docs.micropython.org/en/latest/rp2/quickref.html
- Raspberry Pi MicroPython Documentation: https://www.raspberrypi.com/documentation/microcontrollers/micropython.html
- Raspberry Pi Pico Series Documentation: https://www.raspberrypi.com/documentation/microcontrollers/pico-series.html#pico2