gh-55531: Implement normalize_encoding in C (#136643)
base: main
Conversation
I know that it's a draft, but here are already some comments that you can dismiss if you're working on them.

I have cleaned up the changes and ensured the behavior remains the same; however, there are still a few points on which I need input from @malemburg.
Would you mind running some microbenchmarks?
Benchmarks: script
Main branch:
This PR:
Sorry for the lack of response. I'm currently at EuroPython and pretty busy with other things. I'll have a look on Saturday during the sprints.
There are some subtle differences between the Python and C code. We first need to decide what normalization is needed in each. It seems that excessive normalization has caused problems (see #88886).
All of our tests pass, and from my further testing the behaviour also matches; can you please point out such cases? These are long-standing issues that have seen no progress for quite a while. To keep this PR simple, organized, and focused, I propose limiting it to switching the implementation to the existing C code. This PR will become more complex, and therefore harder to review, if it has to rewrite the existing C code too. The existing issues, and yours from a few days ago, can then be addressed in the C implementation alone, rather than in both implementations.
I suggest holding this PR until we solve the other issues; otherwise it will make backporting other changes more difficult. I have some comments about this PR, but it's too early to address them, because in the end everything could change radically.
        return NULL;
    }

    PyUnicodeWriter *writer = PyUnicodeWriter_Create(len + 1);
I know that the Unicode writer API is new and shiny, but isn't this complicated? A simple call to PyUnicode_FromStringAndSize(normalized, strlen(normalized)) would have worked as well, if I'm not mistaken 😄
I had it that way originally, but I was told to use it by @ZeroIntensity.
People look to CPython for inspiration and how-tos for their own extensions; I think we should be encouraging them to use things like PyUnicodeWriter.

That said, are encoding strings particularly large? If not, I think a simple stack allocation (e.g. char normalized[16];) would be the most robust here.
That said, are encoding strings particularly large?
From my knowledge, they should generally not be excessively long (I would estimate that an upper bound of 50 chars would be safe; 30 would probably be fine too), though there is no standard to refer to. I originally allocated the length of the input string, as that is the maximum length of the normalized string; I think that would be better than hard-coding it.
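A minimal sketch may make the sizing argument concrete. The function below is a hypothetical, simplified stand-in for `_Py_normalize_encoding` (the real CPython helper differs in detail): it lowercases the input and collapses runs of non-alphanumeric characters into a single underscore. Because each input character produces at most one output character, the normalized result can never be longer than the input, so a buffer of strlen(input) + 1 bytes always suffices.

```c
#include <ctype.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for _Py_normalize_encoding: lowercase
 * the input and collapse each run of non-alphanumeric characters into a
 * single '_'. Returns 1 on success, 0 if the output buffer is too small. */
static int
normalize_encoding_sketch(const char *encoding, char *lower, size_t lower_len)
{
    if (lower_len == 0) {
        return 0;
    }
    size_t out = 0;
    int pending_sep = 0;
    for (const char *p = encoding; *p != '\0'; p++) {
        unsigned char c = (unsigned char)*p;
        if (!isalnum(c)) {
            pending_sep = 1;            /* emit at most one '_' per run */
            continue;
        }
        if (pending_sep && out > 0) {   /* no leading '_' */
            if (out + 1 >= lower_len) {
                return 0;
            }
            lower[out++] = '_';
        }
        pending_sep = 0;
        if (out + 1 >= lower_len) {
            return 0;
        }
        lower[out++] = (char)tolower(c);
    }
    lower[out] = '\0';                  /* out < lower_len is guaranteed */
    return 1;
}
```

Under this model, sizing the buffer from the input (as the PR's PyMem_Malloc(len + 1) does) is always safe, whereas a fixed-size stack array would need a fallback path for unusually long encoding spellings.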
So, should I revert the commits to the original state, Marc/Peter?
If performance is the motivation here, then I'm not a big fan of the original version. It made some needless copies and recalculations of the string size.
diff --git a/Modules/_codecsmodule.c b/Modules/_codecsmodule.c
--- a/Modules/_codecsmodule.c (revision 1c9e55ab8ffafd2bb0e68c688fadab90399cfc16)
+++ b/Modules/_codecsmodule.c (date 1753180784174)
@@ -1048,30 +1048,19 @@
         return NULL;
     }
-    PyUnicodeWriter *writer = PyUnicodeWriter_Create(len + 1);
-    if (writer == NULL) {
-        return NULL;
-    }
-
     char *normalized = PyMem_Malloc(len + 1);
     if (normalized == NULL) {
-        PyUnicodeWriter_Discard(writer);
         return PyErr_NoMemory();
     }
     if (!_Py_normalize_encoding(cstr, normalized, len + 1, 0)) {
         PyMem_Free(normalized);
-        PyUnicodeWriter_Discard(writer);
         return NULL;
     }
-    if (PyUnicodeWriter_WriteUTF8(writer, normalized, (Py_ssize_t)strlen(normalized)) < 0) {
-        PyUnicodeWriter_Discard(writer);
-        PyMem_Free(normalized);
-        return NULL;
-    }
+    PyObject *result = PyUnicode_FromString(normalized);
     PyMem_Free(normalized);
-    return PyUnicodeWriter_Finish(writer);
+    return result;
 }

 /* --- Module API --------------------------------------------------------- */
 int
 _Py_normalize_encoding(const char *encoding,
                        char *lower,
-                       size_t lower_len)
+                       size_t lower_len,
+                       int to_lower)
Having the to_lower conditional in the tight loop is not ideal; it makes the function slower for all other uses. It's better to copy the value into a const int apply_lower local variable and then use apply_lower in the loop. The compiler can then optimize the code accordingly.
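The suggested pattern could look roughly like this (hypothetical function and variable names; the real loop lives inside `_Py_normalize_encoding`). Copying the flag into a const local makes its loop-invariance explicit, so an optimizing compiler is free to unswitch the loop and drop the branch from the hot path entirely:

```c
#include <ctype.h>

/* Hypothetical sketch of the hoisting pattern: read the flag once into a
 * const local so the compiler can treat it as loop-invariant. */
static void
copy_normalized_case(const char *src, char *dst, int to_lower)
{
    const int apply_lower = to_lower;   /* loop-invariant copy of the flag */
    for (; *src != '\0'; src++, dst++) {
        unsigned char c = (unsigned char)*src;
        *dst = apply_lower ? (char)tolower(c) : (char)c;
    }
    *dst = '\0';
}
```

Modern compilers often perform this loop unswitching on their own at higher optimization levels, which would explain benchmarks showing no measurable difference between the two variants.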
OTOH, perhaps compilers are smart enough nowadays to figure this out by themselves 😄
Some benchmarks show that the two cases are equivalent, so I assume my compiler optimizes them to the same thing in the end. It makes the code slightly more complex, but I don't mind adding it if you insist.
A Python core developer has requested some changes be made to your pull request before we can consider merging it. If you could please address their requests along with any other requests in other reviews from core developers, that would be appreciated. Once you have made the requested changes, please leave a comment on this pull request containing the phrase