tests/run-tests.py: Add test statistics to output _result.json file. #17296

Conversation

@dpgeorge dpgeorge commented May 13, 2025

Summary

This commit adds some simple test statistics to the output _result.json file generated at the end of a test run.

This will be useful for Octoprobe (automated hardware testing framework), to report a summary of the number of tests passed/skipped/failed.
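
For illustration, here is a minimal sketch of the idea; the function and variable names are hypothetical, not the actual run-tests.py code. The runner already knows which tests passed, were skipped, or failed, so the summary can be computed from those lists and written next to the existing entries:

import json
import os

# Hypothetical helper: write the results file with a summary statistics block.
def write_results_json(result_dir, args_dict, passed, skipped, failed):
    results = {
        "args": args_dict,
        "failed_tests": sorted(failed),
        # Simple counters, as in the first version of this PR.
        "statistics": {
            "total": len(passed) + len(skipped) + len(failed),
            "pass": len(passed),
            "fail": len(failed),
            "skip": len(skipped),
        },
    }
    with open(os.path.join(result_dir, "_results.json"), "w") as f:
        json.dump(results, f, indent=2)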

Testing

Running a few tests on an esp8266:

$ ./run-tests.py -t u0 basics/0prelim.py extmod/vfs_rom.py extmod/vfs_lfs.py extmod/vfs_posix.py extmod/vfs_blockdev_invalid.py
platform=esp8266 arch=xtensa inlineasm=xtensa
pass  basics/0prelim.py 
FAIL  extmod/vfs_rom.py 
NOTE: extmod/vfs_rom.py may be a unittest that doesn't run unittest.main()
skip  extmod/vfs_lfs.py
skip  extmod/vfs_posix.py
FAIL  extmod/vfs_blockdev_invalid.py 
3 tests performed (31 individual testcases)
1 tests passed
2 tests skipped: vfs_lfs vfs_posix
2 tests failed: vfs_blockdev_invalid vfs_rom

The output _results.json is:

$ cat results/_results.json|jq
{                             
  "args": {
    "test_instance": "u0",
    "baudrate": 115200,
    "user": "micro",
    "password": "python",
    "test_dirs": null,
    "result_dir": "micropython/tests/results",
    "filters": [],
    "emit": "bytecode",
    "heapsize": null,
    "via_mpy": false,
    "mpy_cross_flags": "-march=xtensa",
    "keep_path": false,
    "jobs": 8,
    "files": [
      "basics/0prelim.py",
      "extmod/vfs_rom.py",
      "extmod/vfs_lfs.py",
      "extmod/vfs_posix.py",
      "extmod/vfs_blockdev_invalid.py"
    ],
    "print_failures": false,
    "clean_failures": false,
    "run_failures": false,
    "platform": "esp8266",
    "arch": "xtensa",
    "inlineasm_arch": "xtensa"
  },
  "failed_tests": [
    "extmod/vfs_blockdev_invalid.py",
    "extmod/vfs_rom.py"
  ],
  "statistics": {
    "total": 5,
    "pass": 1,
    "fail": 2,
    "skip": 2
  }
}

EDIT: the implementation has since been changed to output the full lists of passed, skipped and failed tests. The output above now becomes:

{                             
  "args": {
    "test_instance": "u0",
    "baudrate": 115200,
    "user": "micro",
    "password": "python",
    "test_dirs": null,
    "result_dir": "micropython/tests/results",
    "filters": [],
    "emit": "bytecode",
    "heapsize": null,
    "via_mpy": false,
    "mpy_cross_flags": "-march=xtensa",
    "keep_path": false,
    "jobs": 8,
    "files": [
      "basics/0prelim.py",
      "extmod/vfs_rom.py",
      "extmod/vfs_lfs.py",
      "extmod/vfs_posix.py",
      "extmod/vfs_blockdev_invalid.py"
    ],
    "print_failures": false,
    "clean_failures": false,
    "run_failures": false,
    "platform": "esp8266",
    "arch": "xtensa",
    "inlineasm_arch": "xtensa"
  },
  "passed_tests": [
    "basics/0prelim.py"
  ],
  "skipped_tests": [
    "extmod/vfs_lfs.py",
    "extmod/vfs_posix.py"
  ],
  "failed_tests": [
    "extmod/vfs_blockdev_invalid.py",
    "extmod/vfs_rom.py"
  ]
}
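
As a hedged sketch of how a consumer such as Octoprobe might use this file (the field names match the JSON above; the path and the rest of the code are assumed for illustration):

import json

# Illustrative only: summarise a run-tests.py results file.
with open("results/_results.json") as f:
    results = json.load(f)

passed = results.get("passed_tests", [])
skipped = results.get("skipped_tests", [])
failed = results.get("failed_tests", [])

print(f"{len(passed)} passed, {len(skipped)} skipped, {len(failed)} failed")
for name in failed:
    print("FAIL", name)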

Trade-offs and Alternatives

This is a simple implementation that overwrites the _results.json file each time. So if you run again with ./run-tests.py --run-failures the statistics will only reflect the rerun, not the original run. IMO that's acceptable (and Octoprobe doesn't use the --run-failures feature).
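
If preserving the overall statistics across a --run-failures rerun ever became important, one option (purely a sketch, not part of this PR) would be to merge the previous results with the rerun's results before overwriting the file:

# Hypothetical merge step: combine a previously loaded results dict with a
# --run-failures rerun so tests fixed on the rerun move to passed_tests.
def merge_results(previous, rerun):
    merged = dict(previous)
    rerun_passed = set(rerun.get("passed_tests", []))
    rerun_failed = set(rerun.get("failed_tests", []))
    old_failed = set(previous.get("failed_tests", []))
    merged["failed_tests"] = sorted((old_failed - rerun_passed) | rerun_failed)
    merged["passed_tests"] = sorted(set(previous.get("passed_tests", [])) | rerun_passed)
    merged["skipped_tests"] = sorted(previous.get("skipped_tests", []))
    return merged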

Eventually a similar thing needs to be added to the other test runners, e.g. run-multitests.py.

@dpgeorge dpgeorge added the "tests" (Relates to tests/ directory in source) label on May 13, 2025
@dpgeorge

@hmaerki FYI


codecov bot commented May 13, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 98.54%. Comparing base (e39243c) to head (7a55cb6).
Report is 1 commits behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #17296   +/-   ##
=======================================
  Coverage   98.54%   98.54%           
=======================================
  Files         169      169           
  Lines       21897    21897           
=======================================
  Hits        21579    21579           
  Misses        318      318           


@dpgeorge

An alternative to this would be to write the names of all the passed and skipped tests to _results.json, just like they are written to the failed_tests entry. That would make a much bigger _results.json but would include much more information.

hmaerki commented May 13, 2025

An alternative to this would be to write the names of all the passed and skipped tests to _results.json, just like they are written to the failed_tests entry. That would make a much bigger _results.json but would include much more information.

Another benefit would be that, if the reporting tool supports it, flaky tests could be traced (see the sketch below).

The file structure could be:

  "failed_tests": [
    "extmod/vfs_blockdev_invalid.py",
    "extmod/vfs_rom.py"
  ],
 "successful_tests": [
    "xy.py"
 ],
 "skipped_tests": []

This would be my favorite; however, I am also fine with just the summary above.
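
To illustrate the flaky-test idea (purely hypothetical, assuming one _results.json is kept per run): a test that appears in passed_tests in one run and in failed_tests in another is a candidate flaky test.

import json

# Illustrative only: compare several results files and report tests that
# both passed and failed across runs.
def find_flaky(result_files):
    ever_passed, ever_failed = set(), set()
    for path in result_files:
        with open(path) as f:
            results = json.load(f)
        ever_passed.update(results.get("passed_tests", []))
        ever_failed.update(results.get("failed_tests", []))
    return sorted(ever_passed & ever_failed)

print(find_flaky(["run1/_results.json", "run2/_results.json"]))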

@dpgeorge

This would be my favorite

OK, I've now updated this PR to provide a full list of passed, skipped and failed tests in the _results.json file.

hmaerki commented May 16, 2025

I reviewed the MR and it looks good to me.

The complementary octoprobe commits are here:

Test result against this PR:

NOTE: The failed/skipped test counters are still missing for

  • RUN-MULTITESTS_MULTIBLUETOOTH
  • RUN-MULTITESTS_MULTINET
  • RUN-NATMODTESTS
  • RUN-PERFBENCH

The output `_result.json` file generated by `run-tests.py` currently
contains a list of failed tests.  This commit adds to the output a list of
passed and skipped tests, and so now provides full information about which
tests were run and what their results were.

Signed-off-by: Damien George <damien@micropython.org>
@dpgeorge dpgeorge force-pushed the tests-run-tests-add-statistics-to-result-json branch from ea4d4c5 to 7a55cb6 on May 17, 2025 14:37
@dpgeorge

Thanks for testing, the Octoprobe output summary is now looking a lot better!

NOTE: The failed/skipped test counters are still missing for

Yes, that's the next thing to improve.

@dpgeorge dpgeorge merged commit 7a55cb6 into micropython:master May 17, 2025
26 checks passed
@dpgeorge dpgeorge deleted the tests-run-tests-add-statistics-to-result-json branch May 17, 2025 14:54