
gh-55531: Implement normalize_encoding in C #136643

Open · wants to merge 7 commits into main
14 changes: 2 additions & 12 deletions Lib/encodings/__init__.py
@@ -30,6 +30,7 @@

import codecs
import sys
+from _codecs import _normalize_encoding
from . import aliases

_cache = {}
@@ -55,18 +56,7 @@ def normalize_encoding(encoding):
    if isinstance(encoding, bytes):
        encoding = str(encoding, "ascii")

-    chars = []
-    punct = False
-    for c in encoding:
-        if c.isalnum() or c == '.':
-            if punct and chars:
-                chars.append('_')
-            if c.isascii():
-                chars.append(c)
-            punct = False
-        else:
-            punct = True
-    return ''.join(chars)
+    return _normalize_encoding(encoding)

def search_function(encoding):

@@ -0,0 +1,4 @@
:mod:`encodings`: Improve :func:`~encodings.normalize_encoding` performance
by implementing the function in C using the private
``_Py_normalize_encoding``, which has been modified to make lowercase
conversion optional.
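
For context, a rough sketch of the behavior the C rewrite has to preserve, based on the pure-Python implementation this PR removes (examples are illustrative, not taken from the PR's tests):

from encodings import normalize_encoding

# Runs of characters that are neither alphanumeric nor '.' collapse to a
# single '_'; non-ASCII characters are dropped; case is preserved here.
assert normalize_encoding("utf-8") == "utf_8"
assert normalize_encoding("Latin-1") == "Latin_1"
assert normalize_encoding(" ISO 8859.1 ") == "ISO_8859.1"
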
53 changes: 53 additions & 0 deletions Modules/_codecsmodule.c
@@ -1022,6 +1022,58 @@ _codecs_lookup_error_impl(PyObject *module, const char *name)
    return PyCodec_LookupError(name);
}

extern int _Py_normalize_encoding(const char *, char *, size_t, int);

/*[clinic input]
_codecs._normalize_encoding
    encoding: unicode

Normalize an encoding name *encoding*.

Used for encodings.normalize_encoding. Does not convert to lower case.
[clinic start generated code]*/

static PyObject *
_codecs__normalize_encoding_impl(PyObject *module, PyObject *encoding)
/*[clinic end generated code: output=d27465d81e361f8e input=3ff3f4d64995b988]*/
{
    Py_ssize_t len;
    const char *cstr = PyUnicode_AsUTF8AndSize(encoding, &len);
    if (cstr == NULL) {
        return NULL;
    }

    if (len > PY_SSIZE_T_MAX) {
        PyErr_SetString(PyExc_OverflowError, "encoding is too large");
        return NULL;
    }

    PyUnicodeWriter *writer = PyUnicodeWriter_Create(len + 1);

Member:
I know that the Unicode writer API is new and shiny, but does it need to be this complicated? A simple call to PyUnicode_FromStringAndSize(normalized, strlen(normalized)) would have worked as well, if I'm not mistaken 😄

Member Author:
I had it that way originally, but I was told to use it by @ZeroIntensity.

Member (@ZeroIntensity, Jul 22, 2025):
People look to CPython for inspiration/howtos on their own extensions; I think we should be encouraging them to use things like PyUnicodeWriter.

That said, are encoding strings particularly large? If not, I think a simple stack allocation (e.g. char normalized[16];) would be the most robust here.
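
A minimal sketch of that stack-buffer alternative, assuming it lives inside Modules/_codecsmodule.c; the 64-byte bound and the ValueError are assumptions, not part of this PR:

/* Hypothetical alternative: a fixed stack buffer instead of PyMem_Malloc(). */
extern int _Py_normalize_encoding(const char *, char *, size_t, int);

static PyObject *
normalize_encoding_on_stack(const char *cstr)
{
    char normalized[64];  /* assumed upper bound on encoding-name length */
    if (!_Py_normalize_encoding(cstr, normalized, sizeof(normalized), 0)) {
        /* _Py_normalize_encoding() reports overflow by returning 0 and does
           not set an exception itself, so set one here. */
        PyErr_SetString(PyExc_ValueError, "encoding name is too long");
        return NULL;
    }
    return PyUnicode_FromString(normalized);
}

The trade-off is a hard cap: names longer than the buffer would be rejected instead of normalized.
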

Member Author (@StanFromIreland, Jul 22, 2025):
> That said, are encoding strings particularly large?

From my knowledge, they should generally not be excessively long (I would estimate an upper bound of 50 chars would be safe; 30 would probably be fine too), though there is no standard to refer to. I originally allocated the length of the input string, as it is the maximum length of the normalized string; I think that would be better than hardcoding it.

So, should I revert the commits to the original state, Marc/Peter?

Member:
If performance is the motivation here, then I'm not a big fan of the original version. It made some needless copies and recalculations of the string size.

Member Author:

diff --git a/Modules/_codecsmodule.c b/Modules/_codecsmodule.c
--- a/Modules/_codecsmodule.c	(revision 1c9e55ab8ffafd2bb0e68c688fadab90399cfc16)
+++ b/Modules/_codecsmodule.c	(date 1753180784174)
@@ -1048,30 +1048,19 @@
         return NULL;
     }
 
-    PyUnicodeWriter *writer = PyUnicodeWriter_Create(len + 1);
-    if (writer == NULL) {
-        return NULL;
-    }
-
     char *normalized = PyMem_Malloc(len + 1);
     if (normalized == NULL) {
-        PyUnicodeWriter_Discard(writer);
         return PyErr_NoMemory();
     }
 
     if (!_Py_normalize_encoding(cstr, normalized, len + 1, 0)) {
         PyMem_Free(normalized);
-        PyUnicodeWriter_Discard(writer);
         return NULL;
     }
 
-    if (PyUnicodeWriter_WriteUTF8(writer, normalized, (Py_ssize_t)strlen(normalized)) < 0) {
-        PyUnicodeWriter_Discard(writer);
-        PyMem_Free(normalized);
-        return NULL;
-    }
+    PyObject *result = PyUnicode_FromString(normalized);
     PyMem_Free(normalized);
-    return PyUnicodeWriter_Finish(writer);
+    return result;
 }
 
 /* --- Module API --------------------------------------------------------- */

    if (writer == NULL) {
        return NULL;
    }

    char *normalized = PyMem_Malloc(len + 1);
    if (normalized == NULL) {
        PyUnicodeWriter_Discard(writer);
        return PyErr_NoMemory();
    }

    if (!_Py_normalize_encoding(cstr, normalized, len + 1, 0)) {
        PyMem_Free(normalized);
        PyUnicodeWriter_Discard(writer);
        return NULL;
    }

    if (PyUnicodeWriter_WriteUTF8(writer, normalized, (Py_ssize_t)strlen(normalized)) < 0) {
        PyUnicodeWriter_Discard(writer);
        PyMem_Free(normalized);
        return NULL;
    }
    PyMem_Free(normalized);
    return PyUnicodeWriter_Finish(writer);
}

/* --- Module API --------------------------------------------------------- */

static PyMethodDef _codecs_functions[] = {
@@ -1071,6 +1123,7 @@ static PyMethodDef _codecs_functions[] = {
    _CODECS_REGISTER_ERROR_METHODDEF
    _CODECS__UNREGISTER_ERROR_METHODDEF
    _CODECS_LOOKUP_ERROR_METHODDEF
+    _CODECS__NORMALIZE_ENCODING_METHODDEF
    {NULL, NULL} /* sentinel */
};

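A quick interactive check of the new private helper with this PR applied (expected results inferred from the clinic docstring and the to_lower=0 call; the leading underscore marks it as private API):

from _codecs import _normalize_encoding

# Punctuation collapses to '_' while case is preserved (no lowercasing).
assert _normalize_encoding("ISO-8859-1") == "ISO_8859_1"
assert _normalize_encoding("UTF-8") == "UTF_8"
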
66 changes: 65 additions & 1 deletion Modules/clinic/_codecsmodule.c.h

Some generated files are not rendered by default.

15 changes: 8 additions & 7 deletions Objects/unicodeobject.c
@@ -3587,13 +3587,14 @@ PyUnicode_FromEncodedObject(PyObject *obj,
    return v;
}

-/* Normalize an encoding name: similar to encodings.normalize_encoding(), but
-   also convert to lowercase. Return 1 on success, or 0 on error (encoding is
-   longer than lower_len-1). */
+/* Normalize an encoding name like encodings.normalize_encoding()
+   but allow to convert to lowercase if *to_lower* is true.
+   Return 1 on success, or 0 on error (encoding is longer than lower_len-1). */
int
_Py_normalize_encoding(const char *encoding,
                       char *lower,
-                      size_t lower_len)
+                      size_t lower_len,
+                      int to_lower)
Member:
Having the to_lower conditional in the tight loop is not ideal. It makes the function slower for all other uses.

It's better to copy the value into a const int apply_lower local variable and then use apply_lower in the loop. The compiler can then optimize the code accordingly.
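
A standalone sketch of that hoisting, with libc's isalnum()/tolower() standing in for the Py_ISALNUM/Py_TOLOWER macros (the loop shape mirrors _Py_normalize_encoding(), but this is illustrative, not the actual patch):

#include <ctype.h>
#include <stddef.h>

static int
normalize_sketch(const char *encoding, char *out, size_t out_len, int to_lower)
{
    const int apply_lower = to_lower;   /* hoisted out of the tight loop */
    char *l = out;
    char *l_end = &out[out_len - 1];
    int punct = 0;

    for (const char *e = encoding; *e != '\0'; e++) {
        unsigned char c = (unsigned char)*e;
        if (isalnum(c) || c == '.') {
            if (punct && l != out) {
                if (l == l_end) {
                    return 0;           /* normalized name does not fit */
                }
                *l++ = '_';
            }
            punct = 0;
            if (l == l_end) {
                return 0;
            }
            *l++ = apply_lower ? (char)tolower(c) : (char)c;
        }
        else {
            punct = 1;
        }
    }
    *l = '\0';
    return 1;
}

With apply_lower constant for the whole call, the compiler is free to specialize or simplify the branch, which is the effect the suggestion is after.
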

Member:
OTOH, perhaps compilers are smart enough nowadays to figure this out by themselves 😄

Member Author:
Some benchmarks show that the two cases are equivalent, so I assume my compiler optimizes it to the same thing in the end. It makes the code slightly more complex, but I don't mind adding it if you insist.

{
    const char *e;
    char *l;
@@ -3624,7 +3625,7 @@ _Py_normalize_encoding(const char *encoding,
            if (l == l_end) {
                return 0;
            }
-            *l++ = Py_TOLOWER(c);
+            *l++ = to_lower ? Py_TOLOWER(c) : c;
        }
        else {
            punct = 1;
@@ -3659,7 +3660,7 @@ PyUnicode_Decode(const char *s,
}

    /* Shortcuts for common default encodings */
-    if (_Py_normalize_encoding(encoding, buflower, sizeof(buflower))) {
+    if (_Py_normalize_encoding(encoding, buflower, sizeof(buflower), 1)) {
        char *lower = buflower;

        /* Fast paths */
@@ -3916,7 +3917,7 @@ PyUnicode_AsEncodedString(PyObject *unicode,
}

    /* Shortcuts for common default encodings */
-    if (_Py_normalize_encoding(encoding, buflower, sizeof(buflower))) {
+    if (_Py_normalize_encoding(encoding, buflower, sizeof(buflower), 1)) {
        char *lower = buflower;

        /* Fast paths */
7 changes: 4 additions & 3 deletions Python/codecs.c
@@ -90,7 +90,7 @@ PyCodec_Unregister(PyObject *search_function)
    return 0;
}

-extern int _Py_normalize_encoding(const char *, char *, size_t);
+extern int _Py_normalize_encoding(const char *, char *, size_t, int);

/* Convert a string to a normalized Python string(decoded from UTF-8): all characters are
converted to lower case, spaces and hyphens are replaced with underscores. */
@@ -108,10 +108,11 @@ PyObject *normalizestring(const char *string)
}

    encoding = PyMem_Malloc(len + 1);
-    if (encoding == NULL)
+    if (encoding == NULL) {
        return PyErr_NoMemory();
+    }

-    if (!_Py_normalize_encoding(string, encoding, len + 1))
+    if (!_Py_normalize_encoding(string, encoding, len + 1, 1))
    {
        PyErr_SetString(PyExc_RuntimeError, "_Py_normalize_encoding() failed");
        PyMem_Free(encoding);
4 changes: 2 additions & 2 deletions Python/fileutils.c
@@ -180,7 +180,7 @@ _Py_mbrtowc(wchar_t *pwc, const char *str, size_t len, mbstate_t *pmbs)

#define USE_FORCE_ASCII

-extern int _Py_normalize_encoding(const char *, char *, size_t);
+extern int _Py_normalize_encoding(const char *, char *, size_t, int);

/* Workaround FreeBSD and OpenIndiana locale encoding issue with the C locale
and POSIX locale. nl_langinfo(CODESET) announces an alias of the
@@ -231,7 +231,7 @@ check_force_ascii(void)
}

    char encoding[20];   /* longest name: "iso_646.irv_1991\0" */
-    if (!_Py_normalize_encoding(codeset, encoding, sizeof(encoding))) {
+    if (!_Py_normalize_encoding(codeset, encoding, sizeof(encoding), 1)) {
        goto error;
    }
