Unittest: Extend CPython compatibility including discover command #488
Conversation
Force-pushed 41d3338 to f3a9092
Force-pushed f3a9092 to 8a1973d
Let's remove all version bump commits, and add a new, single bump to 0.9.0 as the last commit. To do:
@dpgeorge turns out the reason for codeformat issues is I've got an older version of black installed locally. Would you be interested in pinning the version used in CI here and making codeformat.py check the version matches? I'd be happy to raise a PR for that if so. I'm also happy to stick with the current implicit "check you've got the latest black" and include that in the "developer guide" I'm yet to write for this repo...
Force-pushed 8a1973d to 839dc05
Force-pushed 839dc05 to b4be304
Removes dependency on re-pcre, which is only available on the unix port.
Force-pushed 6e3a412 to a193a57
Can be tested with
I've just noticed an issue with subTest. See "Distinguishing test iterations using subtests" in the CPython docs; the example provided there is a useful test case for that scenario:

```python
import unittest

class NumbersTest(unittest.TestCase):
    def test_even(self):
        """
        Test that numbers between 0 and 5 are all even.
        """
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)
```

Three failures should be recorded (one for each odd number), but currently the test fails immediately at the first failing subtest.
Force-pushed 9244f4d to 35b6ca4
python-stdlib/fnmatch/fnmatch.py (outdated):

```python
import re
# import functools

def normcase(s):
```
Since this actually doesn't do anything, and is local to this file, wouldn't it make more sense to then just remove all calls to it here?
Also this might break usage / hurt CPython compatibility: I have an os.path which does implement normcase the CPython way, but this now won't be used anymore.
Fair point, I was keen to remove the hard-coded need for a dependency because installing them is pretty dicey at the moment, and os.path can't be included directly from the micropython-lib folder with the MICROPYPATH env because it's a split package with os.
For situations where it's installed manually however, it makes sense to use it - so I've updated this change with a try/except ImportError fallback.
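The try/except ImportError fallback described above could look roughly like this (a sketch, assuming the local no-op helper keeps the name normcase; on a case-sensitive filesystem both paths return the string unchanged):

```python
try:
    # Prefer the real implementation if os.path has been installed.
    from os.path import normcase
except ImportError:
    def normcase(s):
        # Fallback when os.path is unavailable: treat the filesystem
        # as case-sensitive and return the path unchanged.
        return s

result = normcase("Foo/Bar.TXT")
```

This keeps the module dependency-free by default while still using the full implementation when it is present.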
Ok that's a nice workaround
Force-pushed 35b6ca4 to b72dc02
Store traceback details for each test failure and log them to the console at the end of the test, like the CPython version of the module does.
And for clarity, rename runner function run_class() -> run_suite(). Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Mostly to work around inherited MicroPython issues with inheritance. Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Just runs "subtests" in the scope of the main TestCase. Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
For CPython compatibility. Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Also, rework result printing to be more compatible with CPython. Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Perhaps, modern CPython (3.8). Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
For CPython compatibility. Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
E.g. for doctest. Signed-off-by: Paul Sokolovsky <pfalcon@users.sourceforge.net>
Matches cpython format.
Supports setUp and tearDown functionality at Class level.
Force-pushed b72dc02 to db4c739
Great work, thank you very much!
```python
test_result.testsRun += 1
test_globals = dict(**globals())
test_globals["test_function"] = test_function
exec("test_function()", test_globals, test_globals)
```
@andrewleech the change from just calling test_function() to this breaks some tests for me (specifically: tests which call custom code implemented in C which calls mp_parse/mp_compile on a string which modifies a global). However, I'm not sure if what I'm doing is not really supported, or whether this change is incorrect. FWIW, CPython also just calls the function without exec.
Ah, this was all done so that each test is run in a clean Python environment, avoiding tests unexpectedly breaking because a previous test has modified the global environment in some way. This was particularly helpful with big test suites running lots of tests, where it can be very hard to track down an earlier test that hasn't quite reverted its changes... or, conversely, tests passing only because a previous one has inadvertently set/overridden something.
This was done to make tests simpler and more reliable... however, in hindsight I never thought to check for explicit CPython compatibility on this point.
Is test_NotChangedByOtherTest supposed to test this? Because that test succeeds here, both in Python and MicroPython, and in the latter also after removing the test_globals code. I'm not completely sure why, but it's not quite clear what is actually being tested now.
This was all done so that each test is run in a clean python environment, avoiding tests unexpectedly breaking because a previous test has modified the global environment in some way
Arguably those tests should use setup/teardown correctly; but in any case:
globals()
Return a dictionary representing the current global symbol table. This is always the dictionary of the current module (inside a function or method, this is the module where it is defined, not the module from which it is called).
so I wonder whether this actually changed anything for you, i.e. this is just the globals for the unittest module, which doesn't affect tests which modify their own global environment or globals in another module? Or am I missing something here?
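The quoted behaviour is easy to confirm: a function's globals() is the dict of the module where the function was defined, so copying the unittest module's own globals cannot isolate another module's state. A small CPython sketch (the module name demo_mod is made up for illustration):

```python
import types

# Build a throwaway module and define a function inside it.
mod = types.ModuleType("demo_mod")
exec("X = 1\ndef get_globals():\n    return globals()", mod.__dict__)

X = 999  # a different X in *this* (calling) module's namespace

g = mod.get_globals()
assert g is mod.__dict__   # globals() is the defining module's dict...
assert g["X"] == 1         # ...so it sees demo_mod's X, not the caller's
```

So wrapping test_function() in exec with a copy of the runner's globals only shadows the unittest module's namespace, not the namespace of the module under test.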
I've re-visited this code myself... and yeah, I think it's a bit broken. My "test isolation" feature is mostly based on caching / resetting sys.modules to force a clean reload of all the imports for each test. This was intended to blow away any monkey patching etc. that might have been done and not fixed by previous tests.
I think this exec() line was left over from an earlier iteration of this feature and really doesn't add value here; it's superfluous, the test case doesn't care whether it's there or not, and it's confusing to reviewers like you (and me a few months later). I think I had a crossed wire at the time, thinking globals() was shared between modules or something.
You're entirely correct that setup/teardown functions should be used to reverse anything like this in tests; I was basically trying to protect developers (e.g. me) from themselves by reducing this need... on many projects in the past, I and others have lost a lot of time chasing down intermittent test failures caused by really subtle changes in timing / order of tests, which this could certainly help with!
On a related note, the time when test_NotChangedByOtherTest failed for me yesterday is when running it via micropython ./test_unittest.py rather than micropython -m unittest.
When ./test_unittest.py is the main test, its imports / global values are the initial values that get cached in the _snapshot_modules() function, so they will be made available to each test in run_module() when discover is later run. Certainly discover was not supposed to be run here though...
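The cache-and-reset idea being discussed can be sketched in a few lines. This is not the actual _snapshot_modules implementation from the PR, just an illustration of the mechanism (function names and the fake module name are made up):

```python
import sys
import types

def snapshot_modules():
    # Record the names of all currently-imported modules.
    return set(sys.modules)

def restore_modules(snapshot):
    # Remove anything imported since the snapshot, forcing a clean
    # re-import (discarding monkey-patching etc.) for the next test file.
    for name in list(sys.modules):
        if name not in snapshot:
            del sys.modules[name]

snap = snapshot_modules()
# Stand-in for a module imported by a test file:
sys.modules["fake_test_mod"] = types.ModuleType("fake_test_mod")
restore_modules(snap)
assert "fake_test_mod" not in sys.modules
```

The caveat raised above applies: whatever was already imported when the snapshot is taken (e.g. everything pulled in by running ./test_unittest.py directly) survives every reset and leaks into subsequent tests.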
Ok, so the test_globals stuff can go. What about _snapshot_modules (also not CPython-compatible) and test_unittest_isolated (which doesn't actually test anything unittest-specific)?
I'm conflicted about the _snapshot_modules stuff because while it's not entirely consistent with CPython, I find it very useful (in terms of saving hours chasing down issues in large test suites). I wonder if there's a way I could pull that out into a separate module / hook / etc.
Thing is, sometimes it might be wanted, exactly to be able to verify inter-module dependencies (whether that's good is something else). So the only sane thing would be an argument which can be passed to run_module and via the command line?
The _snapshot_modules resets the state between separate test*.py files, not between individual functions within a test file. It would be a very fragile test suite if you wanted state from one test file to be configured / shared with other test files... surely?
But yeah, considering it's non-standard behavior maybe it's better as an extension TestRunner that can be called something more recognisable for people to use.
The order of files can be specified, and if that's what needs testing then it needs to happen one way or another :)
The PR's primary goal is to provide a discover argument matching the cpython/unittest module. It also improves other compatibility with CPython, mostly around running as a module (micropython-coverage -m unittest) and test stdout format. This is sitting on top of #487 and also includes some related module fixes which should get split out into separate PRs.
I also plan to investigate splitting discover and hopefully the rest of the new dependencies into a separate module (with auto-import on use, like in uasyncio) that can be left out of deployment to a device to reduce size if desired.