tests/run-tests.py: Change _results.json to have a combined result list. #17373
Conversation
@hmaerki FYI
Codecov Report: All modified and coverable lines are covered by tests ✅

```
@@           Coverage Diff           @@
##           master   #17373   +/-   ##
=======================================
  Coverage   98.54%   98.54%
=======================================
  Files         169      169
  Lines       21898    21898
=======================================
  Hits        21579    21579
  Misses        319      319
```
tests/run-tests.py (outdated)
"results": ( | ||
list([test[1], "pass", ""] for test in passed_tests) | ||
+ list([test[1], "skip", ""] for test in skipped_tests) | ||
+ list([test[1], "fail", ""] for test in failed_tests) |
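Standalone, the expression from the diff can be exercised with hypothetical data; the test lists below are invented stand-ins for `run-tests.py`'s internal state (the `test[1]` indexing suggests each entry is a pair whose second element is the test name):

```python
# Hypothetical data standing in for run-tests.py's internal test lists.
passed_tests = [("port", "basics/int_small.py"), ("port", "basics/string1.py")]
skipped_tests = [("port", "thread/thread_lock1.py")]
failed_tests = [("port", "micropython/viper_misc.py")]

# The concatenation from the diff: three per-outcome lists joined into one,
# so all passes come first, then all skips, then all fails.
results = (
    list([test[1], "pass", ""] for test in passed_tests)
    + list([test[1], "skip", ""] for test in skipped_tests)
    + list([test[1], "fail", ""] for test in failed_tests)
)
print(results[0])  # ['basics/int_small.py', 'pass', '']
```

Note that grouping by outcome like this discards any interleaving between passes and failures, which is what the review discussion below turns on.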
pytest (and other tools) use 'passed', 'skipped', 'failed'.
I was trying to keep the output concise, so it takes up less space. I think it's important not to take up resources if you don't need to.
You use 3 lists: `passed_tests`, `skipped_tests` and `failed_tests`. If it were one list, the order of test execution would be preserved, so it would become visible if one test breaks the following test. But in the context of `run-tests.py` the current implementation with 3 lists is completely fine.
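The single-list idea raised here can be sketched as follows; the helper and test names are hypothetical, and the point is only that appending as each test finishes preserves execution order across outcomes:

```python
# Hypothetical sketch: one combined list, appended to in execution order.
test_results = []

def record_result(test_name, result, reason=""):
    # Appending as each test completes keeps the run order visible,
    # including interleaving between passes, skips and failures.
    test_results.append((test_name, result, reason))

record_result("basics/int_small.py", "pass")
record_result("thread/thread_lock1.py", "skip")
record_result("basics/string1.py", "fail")
record_result("basics/string2.py", "pass")

# With three per-outcome lists, the fact that string1.py failed right
# before string2.py ran would be regrouped away and lost.
order = [name for name, _result, _reason in test_results]
print(order)
```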
Yes, I was also thinking to do this. It's probably worth doing so you can see the exact output order, as you say.
I adapted the octoprobe side and it works fine. I am fine to merge this PR as it is!
Force-pushed from e54f954 to 994ca4b (compare)
I've now updated this PR to accumulate all test results in one big list. So that means the order that the tests are run is retained in the `_results.json` output.

(Your recent adaptation in octoprobe should still work fine with this adjusted PR; the only difference is the order of tests in `_results.json`.)

@hmaerki do you like this new approach better?
I reviewed and tested this PR: https://reports.octoprobe.org/github_selfhosted_testrun_24/octoprobe_summary_report.html

@dpgeorge: Please merge into master.
The `_results.json` output of `run-tests.py` was recently changed in 7a55cb6 to add a list of passed and skipped tests. The way this was done turned out to be not general enough, because we want to add another type of result, namely tests that are skipped because they are too large.

Instead of having separate lists in `_results.json` for each kind of result (pass, fail, skip, skip too large, etc.), this commit changes the output form of `_results.json` so that it stores a single list of 3-tuples of all tests that were run:

    [(test_name, result, reason), ...]

That's more general and allows adding a reason for skipped and failed tests. At the moment this reason is just an empty string, but can be improved in the future.

Signed-off-by: Damien George <damien@micropython.org>
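The generality the commit message claims can be illustrated with a small sketch. Only the `(test_name, result, reason)` triple shape is taken from this PR; the test names, skip reasons, and grouping code below are invented for the example:

```python
# Sketch of why a (test_name, result, reason) triple is more general:
# one record type covers every outcome, and the reason slot can be
# populated later without changing the file format.
results = [
    ("basics/int_small.py", "pass", ""),
    ("extmod/asyncio_basic.py", "skip", "module not available"),  # invented reason
    ("basics/bytes_large.py", "skip", "too large"),               # invented reason
    ("basics/string1.py", "fail", ""),
]

# Per-outcome grouping (the old three-list view) is still derivable.
by_result = {}
for name, result, reason in results:
    by_result.setdefault(result, []).append(name)
print(by_result["skip"])  # ['extmod/asyncio_basic.py', 'basics/bytes_large.py']
```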
Force-pushed from 994ca4b to 4dff9cb (compare)
Thanks for review and testing! Now merged.
Summary
The `_results.json` output of `run-tests.py` was recently changed in #17296 to add a list of passed and skipped tests.

The way this was done turned out to be not general enough, because in #17361 we want to add another type of result, namely tests that are skipped because they are too large.
Instead of having separate lists in `_results.json` for each kind of result (pass, fail, skip, skip too large, etc.), this PR changes the output form of `_results.json` so that it stores a single list of 3-tuples of all tests that were run:

    [(test_name, result, reason), ...]

That's more general and allows adding a reason for skipped and failed tests. At the moment this reason is just an empty string, but can be improved in the future.
Testing

Ran `run-tests.py` on a board with passing, failing and skipping tests and verified the output. Also ran `run-tests.py --run-failures` to check that it could re-read the failed test list.

Trade-offs and Alternatives
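The re-read path exercised by `--run-failures` can be sketched as follows. The `{"results": [[name, result, reason], ...]}` shape matches the diff in this PR; the file handling and test names are illustrative, not the actual `run-tests.py` code:

```python
import json
import os
import tempfile

# Illustrative sketch: write a _results.json-style file with a combined
# result list, then load back only the failing test names, roughly what
# --run-failures needs to do.
results = [
    ["basics/int_small.py", "pass", ""],
    ["basics/string1.py", "fail", ""],
    ["micropython/viper_misc.py", "fail", ""],
]

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "_results.json")
    with open(path, "w") as f:
        json.dump({"results": results}, f)
    with open(path) as f:
        failures = [r[0] for r in json.load(f)["results"] if r[1] == "fail"]

print(failures)  # ['basics/string1.py', 'micropython/viper_misc.py']
```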
Could instead keep three separate lists (pass, fail, skip) and have them be lists of 2-tuples of the form `(test_name, reason)`. But I think the approach in this PR is more general, with just one data structure of results.

Note that the change in #17296 was very recent and not in a release, so it can be changed without backwards compatibility issues.
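The trade-off can be made concrete with a toy comparison; the data is invented, and the per-outcome lists below stand in for the rejected 2-tuple alternative:

```python
# A combined run, in execution order, as this PR stores it.
run_order = [
    ("a.py", "pass", ""),
    ("b.py", "fail", ""),
    ("c.py", "pass", ""),
]

# Rejected alternative: per-outcome lists of (test_name, reason) 2-tuples.
# Run order across outcomes is no longer recoverable from this form.
passed = [(name, reason) for name, result, reason in run_order if result == "pass"]
failed = [(name, reason) for name, result, reason in run_order if result == "fail"]

# The single list retains order, and the per-outcome views are derivable
# from it, so one data structure subsumes the three.
print([name for name, _result, _reason in run_order])  # ['a.py', 'b.py', 'c.py']
```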