
gh-55531: Implement normalize_encoding in C #136643


Open · wants to merge 7 commits into base: main
Conversation

StanFromIreland
Member

@StanFromIreland StanFromIreland commented Jul 14, 2025

Member

@picnixz picnixz left a comment


I know that it's a draft but here are already some comments that you can dismiss if you're working on them.

@StanFromIreland
Member Author

StanFromIreland commented Jul 14, 2025

I have cleaned up the changes and ensured the behavior remains the same; however, there are still a few points on which I need input from @malemburg.
(And, as Benedikt said, they should be their own issue.)

  • This function is documented as taking strings, but during the 2->3 conversion an undocumented and untested change was made which allowed it to also accept bytes. I have kept it this way (in Python, to make removal simpler), though I think this should either be documented and tested, or removed.
  • The function has been documented as ASCII-only, and for bytes this is enforced. For strings, however, it has not been enforced with an error. What should we do?
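To illustrate the second point, a strict check could reject non-ASCII names with an error up front. The sketch below is hypothetical (normalize_sketch is not the CPython implementation); it mirrors the documented behavior of collapsing runs of non-alphanumeric characters (except '.') into a single underscore, and returns -1 for non-ASCII input:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch, not the CPython code: normalize an encoding
 * name by collapsing each run of non-alphanumeric characters (except
 * '.') into a single '_', dropping leading and trailing runs.
 * Returns 0 on success, -1 if a non-ASCII byte is found.
 * `out` must hold at least strlen(name) + 1 bytes. */
static int
normalize_sketch(const char *name, char *out)
{
    size_t n = 0;
    int pending_sep = 0;
    for (const unsigned char *p = (const unsigned char *)name; *p; p++) {
        unsigned char c = *p;
        if (c > 0x7f) {
            return -1;              /* non-ASCII: reject with an error */
        }
        if ((c >= '0' && c <= '9') || (c >= 'a' && c <= 'z')
            || (c >= 'A' && c <= 'Z') || c == '.') {
            if (pending_sep && n > 0) {
                out[n++] = '_';     /* collapse the punctuation run */
            }
            out[n++] = (char)c;
            pending_sep = 0;
        }
        else {
            pending_sep = 1;        /* remember we skipped punctuation */
        }
    }
    out[n] = '\0';
    return 0;
}
```

With this variant, 'utf-8' and 'utf   8' both normalize to 'utf_8', while a name containing a non-ASCII byte fails instead of being silently passed through.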

@StanFromIreland StanFromIreland marked this pull request as ready for review July 14, 2025 12:54
@StanFromIreland StanFromIreland requested a review from picnixz July 14, 2025 12:54
Member

@ZeroIntensity ZeroIntensity left a comment


Would you mind running some microbenchmarks?

@StanFromIreland
Member Author

StanFromIreland commented Jul 15, 2025

Benchmarks:

Script:
import time
from encodings import normalize_encoding
import pyperf


def bench(loops):
    range_it = range(loops)
    t0 = time.perf_counter()

    for _ in range_it:
        normalize_encoding('utf_8')
        normalize_encoding('utf\xE9\u20AC\U0010ffff-8')
        normalize_encoding('utf   8')
        normalize_encoding('%%%~')
        normalize_encoding('UTF...8')

    return time.perf_counter() - t0


runner = pyperf.Runner()
runner.bench_time_func('normalize_encoding', bench, inner_loops=10)

Main branch:

normalize_encoding: Mean +- std dev: 173 ns +- 7 ns

This PR:

normalize_encoding: Mean +- std dev: 42.9 ns +- 1.1 ns

@malemburg
Member

Sorry for the lack of response. I'm currently at EuroPython and pretty busy with other things. I'll have a look on Saturday during the sprints.

@serhiy-storchaka
Member

There are some subtle differences between Python and C code. We first need to decide what normalization is needed in Python and C code. It seems that excessive normalization caused problems (see #88886).

@StanFromIreland
Member Author

StanFromIreland commented Jul 21, 2025

There are some subtle differences between Python and C code.

All of our tests pass, and from my further testing the behaviour also matches; can you please point out such cases?

These are long-standing issues that have had no progress for quite a while. To keep this PR simple and organized, I propose limiting it to switching the implementation over to the existing C code. The PR will become more complex, and therefore harder to review, if it has to rewrite the existing C code too. The existing issues, and the one you raised a few days ago, can then be addressed in the C implementation alone, rather than in both implementations.

@serhiy-storchaka
Member

I suggest holding this PR until we solve other issues. Otherwise it will make backporting other changes more difficult.

I have some comments about this PR, but it's too early to address them because in the end, everything could change radically.

return NULL;
}

PyUnicodeWriter *writer = PyUnicodeWriter_Create(len + 1);
Member


I know that the Unicode writer API is new and shiny, but isn't this complicated? A simple call to PyUnicode_FromStringAndSize(normalized, strlen(normalized)) would have worked as well, if I'm not mistaken 😄

Member Author


I had it that way originally, but I was told to use it by @ZeroIntensity.

Member

@ZeroIntensity ZeroIntensity Jul 22, 2025


People look to CPython for inspiration and how-tos for their own extensions; I think we should be encouraging them to use things like PyUnicodeWriter.

That said, are encoding strings particularly large? If not, I think a simple stack allocation (e.g. char normalized[16];) would be the most robust here.

Member Author

@StanFromIreland StanFromIreland Jul 22, 2025


That said, are encoding strings particularly large?

From my knowledge, they should generally not be excessively long (I would estimate an upper bound of 50 chars would be safe; 30 would probably be fine too), though there is no standard to refer to. I originally allocated the length of the input string, as it is the maximum length of the normalized string; I think that would be better than hard-coding it.

So, should I revert the commits to the original state, Marc/Peter?
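The two allocation strategies discussed above can also be combined: a small stack buffer for the common case, with a heap fallback for unusually long names. A minimal sketch under that assumption (copy_with_small_buffer is a hypothetical helper; a plain copy stands in for the real normalization step):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of a hybrid allocation strategy: write into the
 * caller-provided stack buffer when the input fits, and fall back to
 * malloc otherwise. Sets *heap_allocated so the caller knows whether
 * to free the result. Returns NULL on allocation failure. */
static char *
copy_with_small_buffer(const char *name, char *stackbuf,
                       size_t stackbuf_len, int *heap_allocated)
{
    size_t needed = strlen(name) + 1;
    char *buf = stackbuf;
    *heap_allocated = 0;
    if (needed > stackbuf_len) {
        buf = malloc(needed);       /* rare path: long encoding name */
        if (buf == NULL) {
            return NULL;
        }
        *heap_allocated = 1;
    }
    memcpy(buf, name, needed);      /* placeholder for normalization */
    return buf;
}
```

The caller frees the result only when *heap_allocated is set; typical names such as 'utf-8' never touch the allocator.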

Member


If performance is the motivation here, then I'm not a big fan of the original version. It made some needless copies and recalculations of the string size.

Member Author


diff --git a/Modules/_codecsmodule.c b/Modules/_codecsmodule.c
--- a/Modules/_codecsmodule.c	(revision 1c9e55ab8ffafd2bb0e68c688fadab90399cfc16)
+++ b/Modules/_codecsmodule.c	(date 1753180784174)
@@ -1048,30 +1048,19 @@
         return NULL;
     }
 
-    PyUnicodeWriter *writer = PyUnicodeWriter_Create(len + 1);
-    if (writer == NULL) {
-        return NULL;
-    }
-
     char *normalized = PyMem_Malloc(len + 1);
     if (normalized == NULL) {
-        PyUnicodeWriter_Discard(writer);
         return PyErr_NoMemory();
     }
 
     if (!_Py_normalize_encoding(cstr, normalized, len + 1, 0)) {
         PyMem_Free(normalized);
-        PyUnicodeWriter_Discard(writer);
         return NULL;
     }
 
-    if (PyUnicodeWriter_WriteUTF8(writer, normalized, (Py_ssize_t)strlen(normalized)) < 0) {
-        PyUnicodeWriter_Discard(writer);
-        PyMem_Free(normalized);
-        return NULL;
-    }
+    PyObject *result = PyUnicode_FromString(normalized);
     PyMem_Free(normalized);
-    return PyUnicodeWriter_Finish(writer);
+    return result;
 }
 
 /* --- Module API --------------------------------------------------------- */

int
_Py_normalize_encoding(const char *encoding,
char *lower,
size_t lower_len)
size_t lower_len,
int to_lower)
Member


Having the to_lower conditional in the tight loop is not ideal. It makes the function slower for all other uses.

It's better to copy the value into a const int apply_lower local variable and then use apply_lower in the loop. The compiler can then optimize the code accordingly.

Member


OTOH, perhaps compilers are smart enough nowadays to figure this out by themselves 😄

Member Author


Some benchmarks show that the two cases are equivalent, so I assume my compiler optimizes them to the same thing in the end. It makes the code slightly more complex, but I don't mind adding it if you insist.
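For reference, the hoisting suggested above can be sketched as follows (lower_inplace is a hypothetical stand-in, not _Py_normalize_encoding): reading the flag into a const local before the tight loop makes it easier for the compiler to hoist the branch or specialize the loop body.

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>

/* Hypothetical sketch of the suggested pattern: copy the to_lower
 * argument into a const local so the compiler can treat it as
 * loop-invariant and specialize the loop accordingly. */
static void
lower_inplace(char *s, int to_lower)
{
    const int apply_lower = to_lower;   /* hoisted out of the tight loop */
    for (char *p = s; *p; p++) {
        if (apply_lower) {
            *p = (char)tolower((unsigned char)*p);
        }
    }
}
```

Whether this helps in practice depends on the compiler; as noted above, modern compilers often perform this loop-unswitching on their own.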

@bedevere-app

bedevere-app bot commented Jul 22, 2025

A Python core developer has requested some changes be made to your pull request before we can consider merging it. If you could please address their requests along with any other requests in other reviews from core developers that would be appreciated.

Once you have made the requested changes, please leave a comment on this pull request containing the phrase I have made the requested changes; please review again. I will then notify any core developers who have left a review that you're ready for them to take another look at this pull request.

5 participants