.. _test_developers:

Test Suite Developer Guide
==========================

This guide provides comprehensive information for developers writing and maintaining MicroPython tests. For a quick start on running tests, see :ref:`test_quickstart`.

Test Suite Organization
-----------------------

The ``tests/`` directory structure:

- ``basics/``: Core Python language features
- ``extmod/``: Extended modules (ujson, ure, etc.)
- ``float/``: Floating-point arithmetic tests
- ``micropython/``: MicroPython-specific features
- ``import/``: Import mechanism tests
- ``io/``: Input/Output operations
- ``stress/``: Resource limit tests (memory, recursion)
- ``thread/``: Threading module tests
- ``cmdline/``: Command-line interface and REPL tests
- ``ports/<port_name>/``: Port-specific tests
- ``feature_check/``: Target capability detection scripts
- ``multi_bluetooth/``, ``multi_network/``: Multi-instance tests
- ``perf_bench/``: Performance benchmarks
- ``internal_bench/``: Low-level internal benchmarks

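Individual tests or whole directories can be passed to ``run-tests.py`` directly. For example, assuming a built unix port and a shell in the ``tests/`` directory:

.. code-block:: bash

   # Run a single test
   ./run-tests.py basics/builtin_abs.py

   # Run everything in one directory (shell glob)
   ./run-tests.py basics/*.py
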

Writing Standard Tests
----------------------

Test Types
~~~~~~~~~~

MicroPython supports three testing approaches:

1. **CPython Comparison Tests** (preferred for standard Python features):

   - Tests run on both CPython and MicroPython
   - Outputs must match exactly
   - Used for standard Python behavior

2. **Expected Output Tests** (``.exp`` files):

   - For MicroPython-specific features
   - Compare output against ``<testname>.py.exp``
   - Use when CPython behavior differs or the feature doesn't exist

3. **unittest-based Tests** (preferred for MicroPython-specific features; see the sketch after this list):

   - Require the ``unittest`` module on the target
   - Better error messages and structure
   - Use for hardware testing and MicroPython-specific behavior

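A minimal sketch of the third style, assuming the ``unittest`` module is available on the target (e.g. installed from micropython-lib):

.. code-block:: python

   # A minimal unittest-based test sketch
   import unittest

   class TestFeatureX(unittest.TestCase):
       def test_abs(self):
           # Assertions give descriptive failure messages
           self.assertEqual(abs(-1), 1)

   if __name__ == "__main__":
       unittest.main()
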

Writing Test Files
~~~~~~~~~~~~~~~~~~

**Basic test structure:**

.. code-block:: python

   # tests/basics/my_feature.py
   # Test description comment

   # Use print() for output - this is what gets compared
   print("Testing feature X")
   result = some_function()
   print(result)

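For an expected-output test, the matching ``.exp`` file sits next to the test script and contains exactly the expected text. A hypothetical companion for the script above (assuming ``result`` evaluates to ``42``):

.. code-block:: text

   Testing feature X
   42
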

**Conditional skipping:**

.. code-block:: python

   import sys

   # Skip if feature not available
   if not hasattr(sys, 'required_feature'):
       print('SKIP')
       sys.exit()

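The same pattern works for optional modules; for example, ``btree`` is one module that may be compiled out of a given build:

.. code-block:: python

   import sys

   # Skip if an optional module is not compiled into this build
   try:
       import btree
   except ImportError:
       print('SKIP')
       sys.exit()
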

**Platform-specific considerations:**

.. code-block:: python

   import sys

   # Handle endianness differences
   if sys.byteorder == 'little':
       expected = b'\x01\x02'
   else:
       expected = b'\x02\x01'

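Alternatively, avoid the byte-order dependency altogether by forcing a fixed endianness with ``struct``:

.. code-block:: python

   import struct

   # '<' forces little-endian packing on every platform, so the
   # printed bytes are identical regardless of target byte order
   print(struct.pack('<H', 0x0201))  # b'\x01\x02' everywhere
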

Advanced run-tests.py Usage
---------------------------

Test Filtering and Selection
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

   # Filter by regular expression
   ./run-tests.py -e viper    # Exclude tests matching 'viper'
   ./run-tests.py -i float    # Include only tests matching 'float'

   # Run with specific code emitter
   ./run-tests.py --emit native   # Test native code emitter
   ./run-tests.py --emit viper    # Test viper code emitter

   # Compile to .mpy first
   ./run-tests.py --via-mpy

   # Parallel execution (PC targets only)
   ./run-tests.py -j 4    # Run 4 tests in parallel

How Test Discovery Works
~~~~~~~~~~~~~~~~~~~~~~~~

``run-tests.py`` uses a sophisticated test discovery and filtering system:

1. **Feature Detection**: Runs scripts in ``feature_check/`` to determine:

   - Architecture and platform
   - Available modules and features
   - Code emitter support (native, viper)
   - Float precision
   - Endianness

2. **Automatic Skipping**: Tests are skipped based on:

   **Filename patterns:**

   - ``native_*``, ``viper_*`` - Skip without native code emitter support
   - ``*_endian`` - Skip when host/target have different byte order
   - ``int_big*``, ``*_intbig`` - Skip without arbitrary-precision integers
   - ``bytearray*``, ``*_bytearray`` - Skip without bytearray support
   - ``set_*``, ``frozenset*``, ``*_set`` - Skip without set type support
   - ``*slice*`` - Skip without slice support (a specific list of tests is also checked)
   - ``async_*``, ``asyncio_*`` - Skip without async/await support
   - ``const*`` - Skip without the const keyword (MicroPython extension)
   - ``*reverse_op*`` - Skip without ``__rOP__`` special methods
   - ``io_*`` - Skip when the io module doesn't exist
   - ``string_fstring*`` - Skip without f-string support
   - ``asm*`` - Skip without inline assembly for the target architecture

   **Other skip conditions:**

   - Platform skip lists in ``run-tests.py``
   - Missing required features
   - Explicit ``SKIP`` output from the test
   - Command-line filters

3. **Special Test Handling**: Some tests need special treatment:

   - Command-line options via a comment directive: ``# cmdline: -X heapsize=16k`` (see the sketch after this list)
   - Tests listed in the ``special_tests`` dictionary
   - Tests requiring specific setup or teardown

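As a sketch of the ``# cmdline:`` directive in use (the test name and allocation pattern here are hypothetical):

.. code-block:: python

   # tests/stress/list_grow.py (hypothetical)
   # cmdline: -X heapsize=16k

   # With a small fixed heap, growing a list must eventually fail in a
   # predictable way, which keeps the output deterministic
   data = []
   try:
       while True:
           data.append(bytearray(1024))
   except MemoryError:
       print('MemoryError')
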

Writing Multi-Instance Tests
----------------------------

Multi-instance tests coordinate multiple MicroPython instances for testing communication protocols.

Test Structure
~~~~~~~~~~~~~~

.. code-block:: python

   # tests/multi_network/tcp_echo.py

   def instance0():
       # Server instance
       multitest.globals(IP=multitest.get_network_ip())
       multitest.next()

       import socket
       s = socket.socket()
       s.bind(('0.0.0.0', 8000))
       s.listen(1)
       multitest.broadcast('server ready')

       conn, addr = s.accept()
       data = conn.recv(1024)
       conn.send(data)  # Echo back
       conn.close()
       s.close()

   def instance1():
       # Client instance
       multitest.next()
       multitest.wait('server ready')

       import socket
       s = socket.socket()
       s.connect((IP, 8000))
       s.send(b'Hello')
       print(s.recv(1024))
       s.close()

Coordination Methods
~~~~~~~~~~~~~~~~~~~~

The ``multitest`` helper provides:

- ``next()``: Synchronize instances at stages
- ``broadcast(msg)``: Send a message to all instances
- ``wait(msg)``: Wait for a specific broadcast
- ``globals(**kwargs)``: Share variables between instances
- ``get_network_ip()``: Get the instance's IP address
- ``expect_reboot(resume_func, delay_ms)``: Handle device reboots
- ``skip()``: Skip the test from any instance (see the sketch after this list)

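For example, ``skip()`` lets either instance abort the whole test when a prerequisite is missing (a hypothetical sketch):

.. code-block:: python

   def instance0():
       # Abort the whole test if this instance lacks TLS support
       try:
           import ssl
       except ImportError:
           multitest.skip()
       multitest.next()
       print('ssl available')

   def instance1():
       multitest.next()
       print('peer ready')
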

Running Multi-Instance Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

   # Different instance combinations
   ./run-multitests.py -i micropython -i cpython test.py
   ./run-multitests.py -i pyb:a0 -i pyb:a1 test.py

   # Test permutations (swap instance assignments)
   ./run-multitests.py -p 2 -i inst1 -i inst2 test.py

Native Module Tests
-------------------

These tests exercise dynamic native modules (``.mpy`` files containing machine code).

How It Works
~~~~~~~~~~~~

1. Pre-compiled ``.mpy`` files live in ``examples/natmod/``
2. The script injects the module into the target's RAM via VFS
3. The test runs against the loaded module
4. The architecture is auto-detected or specified with ``-a``

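The ``.mpy`` files must exist before the tests run; they are built from the example sources with each module's Makefile, assuming the relevant cross-toolchain is installed:

.. code-block:: bash

   # Build the btree native module for x86-64 (path relative to tests/)
   make -C ../examples/natmod/btree ARCH=x64
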

Running Native Module Tests
~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

   # Auto-detect architecture
   ./run-natmodtests.py extmod/btree_bdb.py

   # Specify architecture
   ./run-natmodtests.py -a armv7em extmod/re_basic.py

   # Run on pyboard
   ./run-natmodtests.py -p -d /dev/ttyACM0 extmod/btree_bdb.py

Internal Benchmarks
-------------------

Low-level benchmarks for VM operations and C code performance.

Writing Internal Benchmarks
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tests output structured metrics:

.. code-block:: python

   # tests/internal_bench/loop_simple.py
   import time

   start = time.ticks_us()
   for i in range(1000):
       pass
   elapsed = time.ticks_diff(time.ticks_us(), start)

   print(f"core : loop : simple_loop_1000 : {elapsed} : us")

Results are validated against ``internalbench_results.py``.

Running and Updating Benchmarks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

   # Run against existing baselines
   ./run-internalbench.py --target unix internal_bench/*.py

   # Generate new baselines
   ./run-internalbench.py --reference myboard internal_bench/*.py

Performance Benchmarks
----------------------

Advanced Usage
~~~~~~~~~~~~~~

.. code-block:: bash

   # Run specific benchmarks with custom parameters
   ./run-perfbench.py -p 168 100 -a 10 perf_bench/bm_float.py

   # Compare different aspects
   ./run-perfbench.py -t baseline.txt new.txt    # Compare times
   ./run-perfbench.py -s baseline.txt new.txt    # Compare scores

Understanding Results
~~~~~~~~~~~~~~~~~~~~~

- **Error percentages**: High values indicate noisy, run-to-run variable measurements
- **N parameter**: Affects test duration and score normalization
- **M parameter**: Affects memory-intensive test behavior

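As a rough guide, the two positional parameters tune the benchmarks to the target: N scales with CPU frequency (MHz) and M with heap size (KiB), so scores are only comparable between runs that used the same values. For example:

.. code-block:: bash

   # Pyboard-class target: ~168 MHz CPU, ~100 KiB heap (as above)
   ./run-perfbench.py -p 168 100

   # PC build: scale N and M up to get comparable run times
   ./run-perfbench.py 1000 1000
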

Best Practices
--------------

1. **Test Isolation**: Each test should be independent
2. **Deterministic Output**: Avoid timing-dependent output (see the example after this list)
3. **Resource Awareness**: Consider memory constraints on embedded targets
4. **Clear Failure Messages**: Make failures easy to diagnose
5. **Documentation**: Comment complex test logic
6. **Cross-platform**: Test on multiple architectures when possible

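As an example of point 2, container iteration order can differ between CPython and MicroPython, so sort before printing:

.. code-block:: python

   # Dict iteration order differs between CPython and MicroPython,
   # so sort the keys to keep the output deterministic
   d = {'one': 1, 'two': 2, 'three': 3}
   for key in sorted(d):
       print(key, d[key])
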

Debugging Test Failures
-----------------------

1. **Examine outputs**:

   .. code-block:: bash

      # View a specific failure
      cat results/test_name.py.out
      cat results/test_name.py.exp

      # Diff outputs
      diff results/test_name.py.exp results/test_name.py.out

2. **Run an individual test**:

   .. code-block:: bash

      # With verbose output
      ./run-tests.py -v test_name.py

3. **Check feature support**:

   .. code-block:: bash

      # Run a feature check directly
      micropython feature_check/float.py

Contributing Tests
------------------

When submitting new tests:

1. Place the test in the appropriate directory
2. Include clear comments explaining what's tested
3. Test on multiple platforms if possible
4. Ensure deterministic output
5. Follow existing naming conventions
6. Update skip lists if the test is platform-specific