
py/parse: Add support for math module constants and float folding #16666


Merged
1 commit merged on Aug 1, 2025

Conversation

yoctopuce
Contributor

@yoctopuce yoctopuce commented Jan 28, 2025

Summary

This is the first of four pull requests providing enhancements to the MicroPython parser, mainly targeting mpy-cross, with the aim of reducing the footprint of compiled mpy files to save flash and RAM. I have previously opened a discussion in the MicroPython discussion forum and asked for comments.

The first new feature extends the use of compile-time const() expressions to some previously unhandled cases that we found useful in the MicroPython implementation of our programming API, which uses named constants to refer to specific cases, such as INVALID_MEASURE. For floating-point methods, this requires a definition such as:

_INVALID_MEASURE = const(math.nan)

or

_INVALID_MEASURE = const(-math.inf)

However, this is not supported by the current implementation, because MICROPY_COMP_MODULE_CONST and MICROPY_COMP_CONST_FOLDING are restricted to integer constants.

So we have introduced a new MICROPY_COMP_FLOAT_CONST feature which reuses the MICROPY_COMP_CONST_FOLDING code to also support folding of floating-point constants, and to include math module constants when MICROPY_COMP_MODULE_CONST is defined. This makes it possible to use compile-time math constants such as:

_DEG_TO_GRADIANT = const(math.pi/180)
_INVALID_VALUE = const(math.nan)

The commit explicitly enables this feature for mpy-cross, where it makes the most sense; otherwise it is limited to ports using MICROPY_CONFIG_ROM_LEVEL_FULL_FEATURES.
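The folding described above can be sketched with CPython's ast module. This is an illustrative model only, not the actual py/parse.c implementation; the MATH_CONSTS table and the fold_float_const helper are hypothetical names:

```python
import ast
import math

# Constants the folder recognizes as "math.<name>" attributes,
# analogous to what MICROPY_COMP_MODULE_CONST exposes for math.
MATH_CONSTS = {"pi": math.pi, "e": math.e, "inf": math.inf, "nan": math.nan}

def fold_float_const(expr):
    """Evaluate a constant float expression at 'compile time', or raise."""
    def ev(n):
        if isinstance(n, ast.Constant) and isinstance(n.value, (int, float)):
            return n.value
        if (isinstance(n, ast.Attribute) and isinstance(n.value, ast.Name)
                and n.value.id == "math" and n.attr in MATH_CONSTS):
            return MATH_CONSTS[n.attr]
        if isinstance(n, ast.UnaryOp) and isinstance(n.op, ast.USub):
            return -ev(n.operand)
        if isinstance(n, ast.BinOp) and isinstance(n.op, ast.Div):
            return ev(n.left) / ev(n.right)
        raise ValueError("expression is not a foldable constant")
    return ev(ast.parse(expr, mode="eval").body)
```

With this, `fold_float_const("math.pi/180")` or `fold_float_const("-math.inf")` produce the final float at parse time, mirroring the const() cases in the summary.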

Testing

We have verified that the new code in mpy-cross works properly on both Windows and Linux. As targets for running mpy code, we have been testing various Windows and Linux versions, as well as our custom board, which uses a Texas Instruments ARM Cortex processor very similar to the one in the cc3200 port.

MicroPython integration testing uncovered some tricky corner cases, which have been solved:

  • Constants 0.0 and -0.0 should not be merged during code emission, even though they compare equal with ==
  • The string-based encoding used for floats in the .mpy file must be done carefully, because the mp_parse_num_float() used when loading the .mpy constants has some quirks (due to the use of a float to build the mantissa from the decimal form) which can cause a loss of precision when more decimals are added. For instance, the number returned by mp_parse_num_float() when parsing 2.7182818284590451 is smaller than the one for 2.718281828459045, and therefore a less accurate representation of math.e, although it should actually be closer. But relying on 16 decimal places to represent double precision does not work in all cases either, such as properly encoding/decoding 2.0**100. So we ended up checking at compile time whether mp_parse_num_float() would give back the exact same number from the shortest representation, and adding an extra digit only if this is not the case. This empirical method works for all test cases.
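Both corner cases can be sketched in plain Python, using CPython's correctly-rounded float() as a stand-in for mp_parse_num_float(); the helper names are illustrative, not MicroPython APIs:

```python
import math

def distinct_consts(a, b):
    # 0.0 and -0.0 compare equal with ==, so the sign bit must be
    # checked separately before merging two float constants.
    return a != b or math.copysign(1.0, a) != math.copysign(1.0, b)

def encode_float(x, digits=16):
    # Try the shortest decimal form first; add one extra digit only
    # if parsing it back does not reproduce the exact same value.
    s = "%.*g" % (digits, x)
    if float(s) != x:
        s = "%.*g" % (digits + 1, x)
    return s
```

For doubles, 17 significant digits always round-trip, so the fallback digit is guaranteed to be enough; the point of trying the shortest form first is to keep the .mpy encoding compact.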

Another way to solve the mp_parse_num_float() problem would have been to avoid the mantissa overflow altogether by using a pair of floats instead of a single float, but this would have required a change to the runtime code that is not otherwise needed by this pull request, and would have caused:

  1. an increase in code size in the runtime code
  2. a possible loss of precision when using new mpy-cross binaries with an unpatched runtime

There are two qemu ports for which the integration tests show failing test cases that appear to be related to this mp_parse_num_float() problem. We could investigate these further if you can provide information on how to reproduce this test environment.

Trade-offs and Alternatives

This pull request only affects the code size of mpy-cross and of ports using MICROPY_CONFIG_ROM_LEVEL_FULL_FEATURES, for which the negative impact of increased code size is unlikely to be relevant.

Folding floating-point expressions at compile time generally reduces the memory footprint of .mpy files, by saving some opcodes and even some qstrs used to reference math constants. The saving is even greater for use cases like ours, where a global definition such as _INVALID_MEASURE = -math.inf can be replaced by a compile-time const() expression, removing all references to the qstr _INVALID_MEASURE.
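For comparison, CPython's compiler already applies an analogous fold to arithmetic on literals, but not to math-module attributes, which remain runtime name lookups; the feature proposed here extends folding to that second case. The effect on the constant table is easy to observe:

```python
import math

# A literal-only expression is folded by CPython's compiler:
# 0.5 lands directly in the code object's constant table.
folded = compile("x = 1.0 / 2", "<demo>", "exec")
assert 0.5 in folded.co_consts

# math.pi is an attribute lookup, so CPython leaves the division
# to runtime; the folded result never appears among the constants.
unfolded = compile("import math\ny = math.pi / 180", "<demo>", "exec")
assert (math.pi / 180) not in unfolded.co_consts
```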


codecov bot commented Jan 28, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 98.38%. Comparing base (f67a370) to head (69ead7d).
⚠️ Report is 1 commit behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #16666   +/-   ##
=======================================
  Coverage   98.38%   98.38%           
=======================================
  Files         171      171           
  Lines       22276    22283    +7     
=======================================
+ Hits        21917    21924    +7     
  Misses        359      359           



github-actions bot commented Jan 28, 2025

Code size report:

   bare-arm:    +0 +0.000% 
minimal x86:    +0 +0.000% 
   unix x64:  +272 +0.032% standard
      stm32:  +176 +0.045% PYBV10
     mimxrt:  +168 +0.045% TEENSY40
        rp2:  +152 +0.017% RPI_PICO_W
       samd:  +160 +0.059% ADAFRUIT_ITSYBITSY_M4_EXPRESS
  qemu rv32:  +196 +0.043% VIRT_RV32

@dpgeorge dpgeorge added the py-core Relates to py/ directory in source label Jan 28, 2025
@dpgeorge
Member

Thanks for the contribution! At first glance this looks like a good enhancement.

Please can you add tests to get 100% coverage of the new code.

@yoctopuce
Contributor Author

Sure, I will update to get full coverage.

I have also identified the issue causing the qemu arm integration test failure and will fix it.

Should I convert the pull request to draft, or is it fine to leave it as-is in the meantime?

@yoctopuce
Contributor Author

Follow-up on the qemu arm integration failure: the problem is indeed linked to the suboptimal result provided by mp_parse_num_float(). Contrary to what I initially thought, the problem is not linked to a mantissa overflow but to a round-up correction after multiplying by the exponent.

I wrote a small piece of test code that compares the value provided by mp_parse_num_float() to the value computed by casting the result of bigint arithmetic to a float, which is expected to give the closest value. The two resulting values are displayed using a format string showing extra digits to make the difference evident. As illustrated by the run below with MICROPY_FLOAT_IMPL_FLOAT, mp_parse_num_float() currently fails to provide the best floating-point number even in some very simple cases:

float('1.2e30'):
    => 1.2e+30
    vs 1.2e+30
float('1.26e30'):
    => 1.2599999020249624675e+30  FAIL
    vs 1.2599999775828091928e+30
float('1.267e30'):
    => 1.2669999610545268366e+30
    vs 1.2669999610545268366e+30
float('1.2676e30'):
    => 1.2676000416158554855e+30  FAIL
    vs 1.2675999660720408409e+30
float('1.26765e30'):
    => 1.2676499097910229878e+30  FAIL
    vs 1.2676499853488888585e+30
float('1.267650e30'):
    => 1.2676499097910229878e+30  FAIL
    vs 1.2676499853488888585e+30
float('1.2676506e30'):
    => 1.2676505898138652462e+30
    vs 1.2676505898138652462e+30
float('1.26765060e30'):
    => 1.2676505898138652462e+30
    vs 1.2676505898138652462e+30
float('1.267650600e30'):
    => 1.2676505898138652462e+30
    vs 1.2676505898138652462e+30
float('1.2676506002e30'):
    => 1.2676505898138652462e+30
    vs 1.2676505898138652462e+30
float('1.26765060022e30'):
    => 1.2676505142560004287e+30  FAIL
    vs 1.2676505898138652462e+30
float('1.267650600228e30'):
    => 1.2676505142560004287e+30  FAIL
    vs 1.2676505898138652462e+30
float('1.2676506002282e30'):
    => 1.2676505142560004287e+30  FAIL
    vs 1.2676505898138652462e+30
float('1.26765060022822e30'):
    => 1.2676505898138652462e+30
    vs 1.2676505898138652462e+30
float('1.267650600228229e30'):
    => 1.2676504386981526132e+30  FAIL
    vs 1.2676505898138652462e+30

I will see if I can fix that rounding issue properly.
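The two parsing strategies being compared can be modeled in a few lines: accumulating the decimal form in a float rounds at every step past the mantissa width, while exact integer arithmetic rounds exactly once in the final conversion. The helper names are hypothetical (the real parser handles full decimal syntax and negative exponents):

```python
def parse_naive(digits, exp10):
    # Build the mantissa in a float: each step past 2**53 rounds,
    # and every multiply by 10.0 for the exponent rounds again.
    x = 0.0
    for d in digits:
        x = x * 10.0 + d
    for _ in range(exp10):
        x *= 10.0
    return x

def parse_exact(digits, exp10):
    # Exact bigint arithmetic, then one correctly-rounded conversion.
    n = 0
    for d in digits:
        n = n * 10 + d
    return float(n * 10 ** exp10)
```

For example, `parse_exact([int(c) for c in "12676506002282294"], 14)` recovers 2.0**100 exactly, since the single int-to-float conversion is correctly rounded.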

@yoctopuce yoctopuce force-pushed the MICROPY_COMP_FLOAT_CONST branch from 6c82468 to cde2ff7 Compare January 30, 2025 18:49
@yoctopuce
Contributor Author

The code has been improved since the previous review, and coverage has been fixed.

The two outstanding failed checks are due to inaccurate float parsing when compiling with MICROPY_FLOAT_IMPL_FLOAT; they should be fixed once my other pull request, py/parsenum.c: reduce code footprint of mp_parse_num_float (#16672), is integrated.

@yoctopuce yoctopuce force-pushed the MICROPY_COMP_FLOAT_CONST branch from 67497a0 to 902db0a Compare February 18, 2025 10:06
@yoctopuce
Contributor Author

As mentioned in my previous comment, this pull request depends on the parsenum.c improvement in another pull request (#16672), so I have rebased it accordingly so that the unit test checks show the actual status once it is pulled in.

@yoctopuce yoctopuce force-pushed the MICROPY_COMP_FLOAT_CONST branch from 902db0a to 9b59aa4 Compare March 3, 2025 13:57
@yoctopuce
Contributor Author

Rebased to master now that #16672 has been merged in.

@yoctopuce yoctopuce force-pushed the MICROPY_COMP_FLOAT_CONST branch 2 times, most recently from cfab33f to fdecfc5 Compare March 5, 2025 13:04
@yoctopuce
Contributor Author

rebased to head revision

@yoctopuce yoctopuce force-pushed the MICROPY_COMP_FLOAT_CONST branch from fdecfc5 to 9c601a2 Compare May 16, 2025 14:07
@yoctopuce
Contributor Author

rebased to head revision

@dpgeorge dpgeorge added this to the release-1.26.0 milestone May 20, 2025
# They should no longer be expected to always fail:
#
# test_syntax("A = const(1 / 2)")
# test_syntax("A = const(1 ** -2)")
Member


I think it's worth putting these in a new test file tests/micropython/const_float.py, along with some other basic tests of float constant folding.

Contributor Author


I have added two test files, one for simple const floats and one for const expressions referencing math constants (if math is available).
I have also added a note pointing out that the const-folding code is mostly exercised by running the full coverage tests via mpy, which involves lots of float constant folding.

Member


I have also added a note pointing out that the const-folding code is mostly exercised by running the full coverage tests via mpy, which involves lots of float constant folding.

Does it really? Going via mpy doesn't change the parsing or compilation process. Can you point to a specific existing test that tests float folding?

Contributor Author

@yoctopuce yoctopuce May 30, 2025


diff.txt
I was originally enabling float folding only in mpy-cross, hence my comment. Now that float folding is enabled in CORE, it is actually exercised by the float test cases, regardless of whether mpy is used.

Looking at the difference in generated bytecode for the floating-point test cases demonstrates the effect of float folding. Here is an extract (the complete diff file is attached to this message if you want a deeper look):

--- no-const-float.dump	2025-05-30 09:09:53.530062733 +0200
+++ const-float.dump	2025-05-30 09:09:06.069969310 +0200
@@ -1,6 +1,6 @@
 mpy_source_file: float1.mpy
 source_file: float1.py
-obj_table: [0.12, 1.0, 1.2, 0.0, b'1.2', b'3.4', 2.0, 3.4, 1.847286994360591]
+obj_table: [0.12, 1.0, 1.2, 0.0, b'1.2', b'3.4', -1.2, 0.5, 3.4, -3.4, 1.8472869943605905]
 simple_name: <module>
   11:16       LOAD_NAME print
   23:00       LOAD_CONST_OBJ 0.12
@@ -189,17 +189,13 @@
   59          POP_TOP 
   11:16       LOAD_NAME print
   23:02       LOAD_CONST_OBJ 1.2
-  d0          UNARY_OP 0 __pos__ 
   34:01       CALL_FUNCTION 1
   59          POP_TOP 
   11:16       LOAD_NAME print
-  23:02       LOAD_CONST_OBJ 1.2
-  d1          UNARY_OP 1 __neg__ 
+  23:06       LOAD_CONST_OBJ -1.2
   34:01       CALL_FUNCTION 1
   59          POP_TOP 
-  81          LOAD_CONST_SMALL_INT 1 
-  82          LOAD_CONST_SMALL_INT 2 
-  f7          BINARY_OP 32 __truediv__ 
+  23:07       LOAD_CONST_OBJ 0.5
   16:1a       STORE_NAME x
   11:16       LOAD_NAME print
   11:1a       LOAD_NAME x
...

Another extract from float2int_doubleprec_intbig.py

   11:19       LOAD_NAME is_64bit
-  44:66       POP_JUMP_IF_FALSE 38
-  23:0a       LOAD_CONST_OBJ 1.00000005
-  d1          UNARY_OP 1 __neg__ 
-  23:01       LOAD_CONST_OBJ 2.0
-  23:0b       LOAD_CONST_OBJ 62.0
-  f9          BINARY_OP 34 __pow__ 
-  f4          BINARY_OP 29 __mul__ 
+  44:52       POP_JUMP_IF_FALSE 18
+  23:0b       LOAD_CONST_OBJ -4.611686249011688e+18
   16:27       STORE_NAME neg_bad_fp
-  23:01       LOAD_CONST_OBJ 2.0
-  23:0b       LOAD_CONST_OBJ 62.0
-  f9          BINARY_OP 34 __pow__ 
+  23:0c       LOAD_CONST_OBJ 4.611686018427388e+18
   16:28       STORE_NAME pos_bad_fp
-  23:01       LOAD_CONST_OBJ 2.0
-  23:0b       LOAD_CONST_OBJ 62.0
-  f9          BINARY_OP 34 __pow__ 
-  d1          UNARY_OP 1 __neg__ 
+  23:0d       LOAD_CONST_OBJ -4.611686018427388e+18
   16:29       STORE_NAME neg_good_fp
-  23:0c       LOAD_CONST_OBJ 0.9999999299999999
-  23:01       LOAD_CONST_OBJ 2.0
-  23:0b       LOAD_CONST_OBJ 62.0
-  f9          BINARY_OP 34 __pow__ 
-  f4          BINARY_OP 29 __mul__ 
+  23:0e       LOAD_CONST_OBJ 4.6116856956093665e+18
   16:2a       STORE_NAME pos_good_fp

Another nice one from math_fun.py

   10:19       LOAD_CONST_STRING pow
   11:19       LOAD_NAME pow
-  23:0d       LOAD_CONST_OBJ (1.0, 0.0)
-  23:0e       LOAD_CONST_OBJ (0.0, 1.0)
-  23:0f       LOAD_CONST_OBJ (2.0, 0.5)
-  23:10       LOAD_CONST_OBJ 3.0
-  d1          UNARY_OP 1 __neg__ 
-  23:11       LOAD_CONST_OBJ 5.0
-  2a:02       BUILD_TUPLE 2
-  23:10       LOAD_CONST_OBJ 3.0
-  d1          UNARY_OP 1 __neg__ 
-  23:12       LOAD_CONST_OBJ 4.0
-  d1          UNARY_OP 1 __neg__ 
-  2a:02       BUILD_TUPLE 2
-  2a:05       BUILD_TUPLE 5
+  23:17       LOAD_CONST_OBJ ((1.0, 0.0), (0.0, 1.0), (2.0, 0.5), (-3.0, 5.0), (-3.0, -4.0))
   2a:03       BUILD_TUPLE 3

By the way, this dump shows that float folding shortcuts the true intent of some of these tests, as some float operations were supposed to execute at runtime but are now executed at compile time. Should we rewrite them using a temporary variable to prevent float folding?

Member


By the way, this dump shows that float folding shortcuts the true intent of some of these tests, as some float operations were supposed to execute in runtime but are now executed at compile-time.

That's correct.

But, at the same time, they are still executed "at runtime" on the target, and by the same code as before (eg float_binary_op). The only difference is that the computation is done at compile time instead of when the bytecode is executed.

I think it's OK, because we still get the same coverage of the same functions running the same tests.

A difference will be when running a test via mpy files. Then the computation is done on the host. So these mpy tests change and now exercise mpy-cross more and also saving/loading of floats in the mpy file. That's probably a good thing, to test that more.

In the future we could add some tests for explicit float computation during bytecode execution (because maybe that somehow differs to doing it during compilation) but for now I'm happy that we still have the same coverage of functions like float_binary_op running on the target.

@yoctopuce yoctopuce force-pushed the MICROPY_COMP_FLOAT_CONST branch 3 times, most recently from c2352bb to f40be29 Compare May 26, 2025 21:56
@yoctopuce yoctopuce force-pushed the MICROPY_COMP_FLOAT_CONST branch 6 times, most recently from db08842 to 860cc5c Compare May 30, 2025 13:46
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jun 19, 2025
Following discussions in PR micropython#16666, this commit updates the
float formatting code to improve the `repr` reversibility,
i.e. the percentage of valid floating point numbers that
do parse back to the same number when formatted by `repr`.

This new code offers a choice of 3 float conversion methods,
depending on the desired tradeoff between code size and
conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate
  the exact representation, which is a bit slower
  but does not have a big impact on code size.
  It provides `repr` reversibility on >99.8% of the cases
  in double precision, and on >98.5% in single precision.
- EXACT method uses higher-precision floats during conversion,
  which provides the best results but has a higher impact on
  code size. It is faster than the APPROX method, and faster
  than the equivalent CPython implementation. It is however
  not available on all compilers when using FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion
methods on code footprint on PYBV10 (using single-precision
floats) and reversibility rate for both single-precision and
double-precision floats. The table includes current situation
as a baseline for the comparison:

          PYBV10    FLOAT   DOUBLE
current = 364136   27.57%   37.90%
basic   = 364188   91.01%   62.18%
approx  = 364396   98.50%   99.84%
exact   = 365608  100.00%  100.00%

The commit also includes two minor fixes for nanbox that were
preventing the new CI tests from running properly on that port,
and fixes a similar math.nan sign error in REPR_C
(i.e. copysign(0.0, math.nan) should return 0.0).

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jun 19, 2025
Following discussions in PR micropython#16666, this commit updates the
float formatting code to improve the `repr` reversibility,
i.e. the percentage of valid floating point numbers that
do parse back to the same number when formatted by `repr`.

This new code offers a choice of 3 float conversion methods,
depending on the desired tradeoff between code size and
conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate
  the exact representation, which is a bit slower but
  but does not have a big impact on code size.
  It provides `repr` reversibility on >99.8% of the cases
  in double precision, and on >98.5% in single precision.
- EXACT method uses higher-precision floats during conversion,
  which provides best results but, has a higher impact on code
  size. It is faster than APPROX method, and faster than
  CPython equivalent implementation. It is however not available
  on all compilers when using FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion
methods on code footprint on PYBV10 (using single-precision
floats) and reversibility rate for both single-precision and
double-precision floats. The table includes current situation
as a baseline for the comparison:

          PYBV10    FLOAT   DOUBLE
current = 364136   27.57%   37.90%
basic   = 364188   91.01%   62.18%
approx  = 364396   98.50%   99.84%
exact   = 365608  100.00%  100.00%

The commit also include two minor fix for nanbox, that were
preventing the new CI tests to run properly on that port.

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jun 19, 2025
Following discussions in PR micropython#16666, this commit updates the
float formatting code to improve the `repr` reversibility,
i.e. the percentage of valid floating point numbers that
do parse back to the same number when formatted by `repr`.

This new code offers a choice of 3 float conversion methods,
depending on the desired tradeoff between code size and
conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate
  the exact representation, which is a bit slower but
  but does not have a big impact on code size.
  It provides `repr` reversibility on >99.8% of the cases
  in double precision, and on >98.5% in single precision.
- EXACT method uses higher-precision floats during conversion,
  which provides best results but, has a higher impact on code
  size. It is faster than APPROX method, and faster than
  CPython equivalent implementation. It is however not available
  on all compilers when using FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion
methods on code footprint on PYBV10 (using single-precision
floats) and reversibility rate for both single-precision and
double-precision floats. The table includes current situation
as a baseline for the comparison:

          PYBV10    FLOAT   DOUBLE
current = 364136   27.57%   37.90%
basic   = 364188   91.01%   62.18%
approx  = 364396   98.50%   99.84%
exact   = 365608  100.00%  100.00%

The commit also include two minor fix for nanbox, that were
preventing the new CI tests to run properly on that port.

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jun 19, 2025
Following discussions in PR micropython#16666, this commit updates the
float formatting code to improve the `repr` reversibility,
i.e. the percentage of valid floating point numbers that
do parse back to the same number when formatted by `repr`.

This new code offers a choice of 3 float conversion methods,
depending on the desired tradeoff between code size and
conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate
  the exact representation, which is a bit slower but
  but does not have a big impact on code size.
  It provides `repr` reversibility on >99.8% of the cases
  in double precision, and on >98.5% in single precision.
- EXACT method uses higher-precision floats during conversion,
  which provides best results but, has a higher impact on code
  size. It is faster than APPROX method, and faster than
  CPython equivalent implementation. It is however not available
  on all compilers when using FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion
methods on code footprint on PYBV10 (using single-precision
floats) and reversibility rate for both single-precision and
double-precision floats. The table includes current situation
as a baseline for the comparison:

          PYBV10    FLOAT   DOUBLE
current = 364136   27.57%   37.90%
basic   = 364188   91.01%   62.18%
approx  = 364396   98.50%   99.84%
exact   = 365608  100.00%  100.00%

The commit also include two minor fix for nanbox, that were
preventing the new CI tests to run properly on that port.

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jun 19, 2025
Following discussions in PR micropython#16666, this commit updates the
float formatting code to improve the `repr` reversibility,
i.e. the percentage of valid floating point numbers that
do parse back to the same number when formatted by `repr`.

This new code offers a choice of 3 float conversion methods,
depending on the desired tradeoff between code size and
conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate
  the exact representation, which is a bit slower but
  but does not have a big impact on code size.
  It provides `repr` reversibility on >99.8% of the cases
  in double precision, and on >98.5% in single precision.
- EXACT method uses higher-precision floats during conversion,
  which provides best results but, has a higher impact on code
  size. It is faster than APPROX method, and faster than
  CPython equivalent implementation. It is however not available
  on all compilers when using FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion
methods on code footprint on PYBV10 (using single-precision
floats) and reversibility rate for both single-precision and
double-precision floats. The table includes current situation
as a baseline for the comparison:

          PYBV10    FLOAT   DOUBLE
current = 364136   27.57%   37.90%
basic   = 364188   91.01%   62.18%
approx  = 364396   98.50%   99.84%
exact   = 365608  100.00%  100.00%

The commit also include two minor fix for nanbox, that were
preventing the new CI tests to run properly on that port.

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jun 19, 2025
Following discussions in PR micropython#16666, this commit updates the
float formatting code to improve the `repr` reversibility,
i.e. the percentage of valid floating point numbers that
do parse back to the same number when formatted by `repr`.

This new code offers a choice of 3 float conversion methods,
depending on the desired tradeoff between code size and
conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate
  the exact representation, which is a bit slower but
  but does not have a big impact on code size.
  It provides `repr` reversibility on >99.8% of the cases
  in double precision, and on >98.5% in single precision.
- EXACT method uses higher-precision floats during conversion,
  which provides best results but, has a higher impact on code
  size. It is faster than APPROX method, and faster than
  CPython equivalent implementation. It is however not available
  on all compilers when using FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion
methods on code footprint on PYBV10 (using single-precision
floats) and reversibility rate for both single-precision and
double-precision floats. The table includes current situation
as a baseline for the comparison:

          PYBV10    FLOAT   DOUBLE
current = 364136   27.57%   37.90%
basic   = 364188   91.01%   62.18%
approx  = 364396   98.50%   99.84%
exact   = 365608  100.00%  100.00%

The commit also include two minor fix for nanbox, that were
preventing the new CI tests to run properly on that port.

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jun 19, 2025
Following discussions in PR micropython#16666, this commit updates the
float formatting code to improve the `repr` reversibility,
i.e. the percentage of valid floating point numbers that
do parse back to the same number when formatted by `repr`.

This new code offers a choice of 3 float conversion methods,
depending on the desired tradeoff between code size and
conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate
  the exact representation, which is a bit slower but
  but does not have a big impact on code size.
  It provides `repr` reversibility on >99.8% of the cases
  in double precision, and on >98.5% in single precision.
- EXACT method uses higher-precision floats during conversion,
  which provides best results but, has a higher impact on code
  size. It is faster than APPROX method, and faster than
  CPython equivalent implementation. It is however not available
  on all compilers when using FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion
methods on code footprint on PYBV10 (using single-precision
floats) and reversibility rate for both single-precision and
double-precision floats. The table includes current situation
as a baseline for the comparison:

          PYBV10    FLOAT   DOUBLE
current = 364136   27.57%   37.90%
basic   = 364188   91.01%   62.18%
approx  = 364396   98.50%   99.84%
exact   = 365608  100.00%  100.00%

The commit also include two minor fix for nanbox, that were
preventing the new CI tests to run properly on that port.
It also fix a similar math.nan sign error in REPR_C
(i.e. copysign(0.0,math.nan) should return 0.0).

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jun 19, 2025
Following discussions in PR micropython#16666, this commit updates the
float formatting code to improve the `repr` reversibility,
i.e. the percentage of valid floating point numbers that
do parse back to the same number when formatted by `repr`.

This new code offers a choice of 3 float conversion methods,
depending on the desired tradeoff between code size and
conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate
  the exact representation, which is a bit slower but
  but does not have a big impact on code size.
  It provides `repr` reversibility on >99.8% of the cases
  in double precision, and on >98.5% in single precision.
- EXACT method uses higher-precision floats during conversion,
  which provides best results but, has a higher impact on code
  size. It is faster than APPROX method, and faster than
  CPython equivalent implementation. It is however not available
  on all compilers when using FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion
methods on code footprint on PYBV10 (using single-precision
floats) and reversibility rate for both single-precision and
double-precision floats. The table includes current situation
as a baseline for the comparison:

          PYBV10    FLOAT   DOUBLE
current = 364136   27.57%   37.90%
basic   = 364188   91.01%   62.18%
approx  = 364396   98.50%   99.84%
exact   = 365608  100.00%  100.00%

The commit also include two minor fix for nanbox, that were
preventing the new CI tests to run properly on that port.

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jun 23, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 3, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 4, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 4, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 16, 2025
Following discussions in PR micropython#16666, this commit updates the
float formatting code to improve the `repr` reversibility,
i.e. the percentage of valid floating point numbers that
do parse back to the same number when formatted by `repr`.

This new code offers a choice of 3 float conversion methods,
depending on the desired tradeoff between code size and
conversion precision:
- BASIC method is the smallest code footprint
- APPROX method uses an iterative method to approximate
  the exact representation, which is a bit slower but
  does not have a big impact on code size.
  It provides `repr` reversibility on >99.8% of the cases
  in double precision, and on >98.5% in single precision
  (except with REPR_C, where reversibility is 100% as the
  last two bits are not taken into account).
- EXACT method uses higher-precision floats during conversion,
  which provides perfect results but has a higher impact on code
  size. It is faster than the APPROX method, and faster than the
  CPython equivalent implementation. It is however not available
  on all compilers when using FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion
methods on code footprint on PYBV10 (using single-precision
floats) and reversibility rate for both single-precision and
double-precision floats. The table includes current situation
as a baseline for the comparison:

          PYBV10  REPR_C   FLOAT  DOUBLE
current = 364596   12.9%   27.6%   37.9%
basic   = 364712   85.6%   60.5%   85.7%
approx  = 364964  100.0%   98.5%   99.8%
exact   = 366408  100.0%  100.0%  100.0%

Note that when using REPR_C, a few test cases do not pass
due to the missing bits in the actual value, which are now
properly reflected in the result by the format function.

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 16, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 16, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 23, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 23, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 23, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 24, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 24, 2025
yoctopuce added a commit to yoctopuce/micropython that referenced this pull request Jul 24, 2025
dpgeorge pushed a commit to yoctopuce/micropython that referenced this pull request Jul 31, 2025
Following discussions in PR micropython#16666, this commit updates the float
formatting code to improve the `repr` reversibility, i.e. the percentage of
valid floating point numbers that do parse back to the same number when
formatted by `repr` (in CPython it's 100%).

This new code offers a choice of 3 float conversion methods, depending on
the desired tradeoff between code size and conversion precision:

- BASIC method is the smallest code footprint

- APPROX method uses an iterative method to approximate the exact
  representation, which is a bit slower but does not have a big impact
  on code size.  It provides `repr` reversibility on >99.8% of the cases in
  double precision, and on >98.5% in single precision (except with REPR_C,
  where reversibility is 100% as the last two bits are not taken into
  account).

- EXACT method uses higher-precision floats during conversion, which
  provides perfect results but has a higher impact on code size.  It is
  faster than the APPROX method, and faster than the CPython equivalent
  implementation.  It is however not available on all compilers when using
  FLOAT_IMPL_DOUBLE.

Here is the table comparing the impact of the three conversion methods on
code footprint on PYBV10 (using single-precision floats) and reversibility
rate for both single-precision and double-precision floats.  The table
includes current situation as a baseline for the comparison:

              PYBV10  REPR_C   FLOAT  DOUBLE
    current = 364688   12.9%   27.6%   37.9%
    basic   = 364812   85.6%   60.5%   85.7%
    approx  = 365080  100.0%   98.5%   99.8%
    exact   = 366408  100.0%  100.0%  100.0%

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
@dpgeorge
Member

This can now be rebased on the latest master, to pick up the new float formatting code.

@yoctopuce
Contributor Author

I will take care of that right now

@yoctopuce yoctopuce force-pushed the MICROPY_COMP_FLOAT_CONST branch 2 times, most recently from 0ab6acc to 1aa03e9 Compare July 31, 2025 15:31

@dpgeorge dpgeorge left a comment

This is looking really good now with the new float format changes (that are already merged).

This PR is now relatively simple, and the code size increase has dropped a lot. Basically what's done here is (1) allow floats and ints in constant folding, and (2) use NLR to protect the binary_op, and don't fold the constant if that call fails.

Very simple and very effective!
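Point (2) above happens at the C level (an nlr_push/nlr_pop guard around the mp_binary_op call), but the strategy can be sketched in plain Python with a hypothetical helper:

```python
import operator

def try_fold(op, lhs, rhs):
    # Hypothetical sketch: attempt the binary operation at compile time.
    # If it raises, report failure so the compiler leaves the expression
    # to be evaluated at runtime instead of folding it.
    try:
        return True, op(lhs, rhs)
    except Exception:
        return False, None

# Folding succeeds for a well-behaved float expression...
folded, value = try_fold(operator.truediv, 3.141592653589793, 180.0)
# ...but 1.0 / 0.0 raises ZeroDivisionError, so folding is skipped.
skipped = try_fold(operator.truediv, 1.0, 0.0)  # (False, None)
```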

Add a new MICROPY_COMP_CONST_FLOAT feature, enabled in mpy-cross and
when compiling with MICROPY_CONFIG_ROM_LEVEL_CORE_FEATURES.  The new
feature leverages the code of MICROPY_COMP_CONST_FOLDING to support folding
of floating point constants.

If MICROPY_COMP_MODULE_CONST is defined as well, math module constants are
made available at compile time. For example:

    _DEG_TO_GRADIANT = const(math.pi / 180)
    _INVALID_VALUE = const(math.nan)

A few corner cases had to be handled:
- The float const folding code should not fold expressions resulting in
  complex results, as the mpy parser for complex immediates has
  limitations.
- The constant generation code must distinguish between -0.0 and 0.0, which
  are distinct values even though C considers them equal under ==.

This change removes previous limitations on the use of `const()`
expressions that would result in a floating point number, so the test cases
of micropython/const_error have to be updated.

Additional test cases have been added to cover the new repr() code (from a
previous commit).  A few other simple test cases have been added to handle
the use of floats in `const()` expressions, but the float folding code
itself is also tested when running general float test cases, as float
expressions often get resolved at compile-time (with this change).

Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
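The -0.0 corner case called out in the commit message is easy to demonstrate from Python itself: the two zeros compare equal, yet the sign bit survives and must be preserved by the constant generator:

```python
import math

neg_zero = -0.0
assert neg_zero == 0.0                       # equal under ==, as in C
assert math.copysign(1.0, neg_zero) == -1.0  # but the sign bit differs
assert math.copysign(1.0, 0.0) == 1.0
assert repr(neg_zero) == "-0.0"              # repr() keeps the sign
```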
@dpgeorge dpgeorge force-pushed the MICROPY_COMP_FLOAT_CONST branch from 1aa03e9 to 69ead7d Compare August 1, 2025 03:39
@dpgeorge dpgeorge merged commit 69ead7d into micropython:master Aug 1, 2025
89 of 90 checks passed
Labels
py-core Relates to py/ directory in source