ENH: add a casting option 'same_value' and use it in np.astype #29129
Conversation
It is pretty cool that we can do it! That said, I would prefer some more tinkering about the approach.
Basically, I think making it work everywhere is not nearly as hard as it may seem and so I think we have to at least try.
I think the only good way to do this is that the new cast level(s) are reported by the loops (via `resolve_descriptors`) and integrated into `PyArray_MinCastSafety`. By doing so, we will achieve two things:
- Things should just work everywhere, as long as you also set the `context.flag` correctly everywhere (should also not be too much).
- I am very sure that currently things are broken for datatypes that don't implement this flag yet. And unfortunately, that is not great. If a user does `casting="same_value"`, a `"safe"` cast is fine but we need to reject `"unsafe"` and `"same_kind"` casts.
So, I think we really need something for 2, and I honestly think the easiest way to solve it is this same path that gives us full support everywhere.
(And yes, I don't love having two "same_value" levels, but I doubt it can be helped.)
(All other things are just nitpicks at this time.)
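(Aside: a minimal sketch of the kind of combination `PyArray_MinCastSafety` performs, to make the "propagate the new cast level" idea concrete. This is an illustration of the concept only, not NumPy's actual implementation; the helper name and error sentinel are made up.)

```c
/* Illustration only, not NumPy's actual PyArray_MinCastSafety: cast
 * levels are ordered no < equiv < safe < same_kind < unsafe, and the
 * combined safety of two chained casts is the least safe of the two.
 * A new "same_value" level (or flag bit) mainly has to survive this
 * combination step for can_cast() and chained casts to keep working. */
static int
min_cast_safety_sketch(int casting1, int casting2)
{
    if (casting1 < 0 || casting2 < 0) {
        return -1;                     /* an error state propagates */
    }
    /* a larger level number means "less safe", so the minimum safety
     * of the chain is the numerically larger value */
    return casting1 > casting2 ? casting1 : casting2;
}
```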
@@ -144,6 +150,11 @@ typedef struct {
#define NPY_METH_contiguous_indexed_loop 9
#define _NPY_METH_static_data 10

/*
 * Constants for same_value casting
This would be a constant for special array method return values, I think (i.e. not just applicable to casts).
Is it really worth it to just keep this inside the inner loop? Yes, you need to grab the GIL there, but on the plus side, you could actually consider reporting the error.
I thought the point of having a return value from the inner loop function is so we can use it to report errors. In general I prefer a code style that tries as much as possible to separate programming metaphors: keep the Python C-API calls separate from the "pure" C functions.
I can be convinced to add this. But it is public API and we call these loops from a lot of places, so we need machinery to make sure that all of these places give the same errors.
That may be a great refactor! E.g. we could have a single function that does `HandleArrayMethodError("name", method_result, method_flags)`.
But I guess without such a refactor it feels very bolted on to me, because whether or not we check for this return depends on where we call/expect it.
EDIT: I.e. basically, wherever we have `PyUFunc_GiveFloatingpointErrors()` we would also pass the actual return value and do the needed logic there.
    cast_info.context.flags |= NPY_SAME_VALUE_CASTING;
    PyErr_SetString(PyExc_NotImplementedError, "'same_value' casting not implemented yet");
    return -1;
}
I think we really shouldn't need any of these checks.
What you need to do is modify `PyArray_MinCastSafety()` to correctly propagate the new cast level. If we can correctly propagate cast levels, everything else will just drop out for free. Although, you also have to report the correct cast level in the cast loops (that is, the `*_resolve_descriptors` functions).
What I am not quite sure about is how same-value and same-kind interact, unfortunately.
For the user, this doesn't matter, but internally we do the opposite, we say: this cast is "SAFETY". Which is why I suggested a flag, but maybe it is just two levels: `same-value-unsafe` and `same-value-same-kind`.
Before you think what madness begot Sebastian... The two levels would not need to be exposed to Python users at all! The user would just use `same_value` (translating to `same-value-unsafe`). But a cast will be able to report `same-value-same-kind`, so we know that both `can_cast(x, y, "same_value")` and `can_cast(x, y, "same_kind")` are valid.
(I might be tempted to make this a "flag", in the sense of making sure there is one bit involved that isn't used by other cast levels.)
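(Aside: one way to picture the "one unused bit" encoding. The `NPY_CASTING` values below match NumPy's public header; the flag bit and the two combined levels are hypothetical names for the scheme being discussed.)

```c
/* The existing public cast levels (values match numpy/ndarraytypes.h): */
typedef enum {
    NPY_NO_CASTING = 0,
    NPY_EQUIV_CASTING = 1,
    NPY_SAFE_CASTING = 2,
    NPY_SAME_KIND_CASTING = 3,
    NPY_UNSAFE_CASTING = 4,
} NPY_CASTING;

/* Hypothetical: a bit that no existing level uses. */
#define _NPY_SAME_VALUE_BIT (1 << 3)

/* What the user's casting="same_value" would translate to: */
#define _NPY_SAME_VALUE_UNSAFE    (NPY_UNSAFE_CASTING | _NPY_SAME_VALUE_BIT)
/* A value-checked cast that is also acceptable under "same_kind": */
#define _NPY_SAME_VALUE_SAME_KIND (NPY_SAME_KIND_CASTING | _NPY_SAME_VALUE_BIT)

/* can_cast(x, y, "same_value") would then just test the bit, while the
 * low bits keep feeding the existing ordered safety comparisons. */
```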
I am not sure I want to expand the scope of this PR to support other things. Here are the other functions that support a `casting` kwarg via `PyArray_CastingConverter`; which do you imagine should support `same_value` in this PR?
- `datetime_as_string`
- `copyto`
- `concatenate`
- `einsum`
- `can_cast` (agreed, it should "support" `same_value` like it does `unsafe`)
- `npyiter_init`, `NpyIter_NestedIters` (??? anyhow, should NOT support it in this PR)
- `convert_ufunc_arguments`, `py_resolve_dtypes_generic` (should NOT support it in this PR)
I think I was asking for basically two things:
1. Fix the `MinCastSafety` logic and return the (unfortunately two) "same_value" casting levels from the `resolve_dtypes` functions. This:
   - Fixes user-dtype and other-dtype support.
   - Is the only non "just bolt it on" way to fix `np.can_cast` (which this will automatically fix).
2. We should pass cast safety to `PyArray_GetDTypeTransferFunction` so that we can correctly initialize the `context` there already. This would:
   - Fix any problems with chained casts (this is a lingering bug, but dunno if it is worth much in practice; casting a structured dtype with one field is probably broken with `"same_value"` without this).
   - I honestly think that if you do this, you will solve everything else. (Except I suggest not doing that magic return in this PR, because doing it right would probably touch more code than doing this right.)

Now, 1. seems just necessary to me, to be honest?
2. seems hard to avoid long-term, because we want to support the level in more places, I think, and also because of lingering small bugs.
I suspect that doing 2. is mostly easy. The one thing that is probably annoying is threading it through the internal/recursive calls to `PyArray_GetDTypeTransferFunction`.

> convert_ufunc_arguments, py_resolve_dtypes_generic (should NOT support it in this PR)

Well, while I agree it is not important for them to do so, it would also be completely fine if it drops out. I don't think there is an API reason to dislike it.
I would actually be happy to try and split out 1. into its own PR, although we would have to revert it if we don't follow up. I.e. add support for `can_cast()` and correct reporting for it, but just ignore it in practice when used?
(Just to keep this more focused, and since I think it's very unlikely we won't see this to its end.)
    npy_clear_floatstatus_barrier((char*)&src_data);
}

if (same_value_cast) {
    cast_info.context.flags |= NPY_SAME_VALUE_CASTING;
Can we try to refactor this a bit (maybe I always should have). Basically, if you do a:
git grep "\.caller ="
you find a few places where `context` is initialized (that is `cast_info.context` here). Since we need to do more complex things here now, I think it would make sense to create a `static inline NPY_init_context()` that is passed "casting".
Yes, that will require you to thread the cast level through `PyArray_GetDTypeTransferFunction` as well. But that is also a feature. While you don't use it, the `context` is already user exposed by the time we reach here.
I.e. in principle it could be used to return a specialized loop. That isn't particularly interesting for "same-value" maybe, but I don't see much reason not to right now.
Threading it through may be a bit tedious in some places (I hope not very, though), but it should be the only thing needed to make this work for all functions!
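(Aside: a sketch of what such a helper could look like. The name `NPY_init_context` comes from the comment above; the exact fields and the extra `casting` parameter are assumptions based on the `cast_info.context.flags` usage visible in the diff hunks, not NumPy's or the PR's actual code.)

```c
/* Hypothetical helper: one place that fills in a PyArrayMethod_Context,
 * including the new same-value flag, instead of every ".caller =" site
 * doing it by hand.  The `flags` field is the one this PR introduces. */
static inline void
NPY_init_context(PyArrayMethod_Context *context,
                 PyObject *caller,
                 PyArrayMethodObject *method,
                 PyArray_Descr *const *descriptors,
                 NPY_CASTING casting)
{
    context->caller = caller;
    context->method = method;
    context->descriptors = descriptors;
    context->flags = 0;
    if (casting == NPY_SAME_VALUE_CASTING) {
        /* ask the inner loop to verify that values survive the cast */
        context->flags |= NPY_SAME_VALUE_CASTING;
    }
}
```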
Could you add this option to this type alias as well (Line 969 in c6eed9a)?
_CastingKind: TypeAlias = L["no", "equiv", "safe", "same_kind", "unsafe"]
It seems I touched something in the float -> complex cast; it gives me:

Maybe the compiler just didn't decide to lift the branch for some reason? I have to say that complex -> real casts wouldn't be my biggest concern; there is this weird
…e' in raw_array_assign_scalar and raw_array_wheremasked_assign_scalar
Somehow this caused

Edit: it was due to the debug build exposing a missing initialization of
CI is passing.
Hmm. I wonder what we should do with casting float-to-int or float-to-smaller-float? I think the second example should raise, but what about the third?
I had a list of 5-6 different possible definitions, but for now skipping it. I am tempted to go with your

Unless we go all out and say that a
It may actually be worth bringing it up on the mailing list and meeting and sketching at least the main alternatives. (I don't usually expect much input, but sketching the alternatives tends to clarify things a bit either way.) I'll try to look over the recent changes here soon, but ping me here or on Slack if I forget!
I think it makes sense to do the most conservative thing and require "ability to accurately round-trip" for `same_value`.
I suppose we have at least a brief window where we could still broaden the definition slightly.
That is currently same-kind, I think. In that context I think it is best to forgive all float inaccuracies (including over- and underflows). But I don't want to get lost too much in this. Unless we are sure that we are only interested in one of these two use-cases for floats (or float to int), it seems fine to just focus on one now and add the other if we want to later. (I suppose I wouldn't mind asking around if there is a preference about the insufficient mantissa for
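(Aside: to make the "accurately round-trip" option concrete, here is what such a check would accept and reject for a few of the float cases discussed above. The macro names are made up and none of these casts are wired up to `same_value` in this PR; this only illustrates the proposed rule.)

```c
#include <stdint.h>

/* Round-trip test: cast forward, cast back, require the original value. */
#define ROUNDTRIPS_F64_TO_I8(x)   ((double)(int8_t)(x) == (x))
#define ROUNDTRIPS_F64_TO_F32(x)  ((double)(float)(x) == (x))

/* ROUNDTRIPS_F64_TO_I8(3.0)     -> true   (value preserved)
 * ROUNDTRIPS_F64_TO_I8(3.5)     -> false  (truncated to 3)
 * ROUNDTRIPS_F64_TO_F32(1e300)  -> false  (overflows to inf)
 * ROUNDTRIPS_F64_TO_F32(0.1)    -> false  (the "insufficient mantissa"
 *                                          case: 0.1 is not exactly
 *                                          representable in float32)  */
```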
I added a release note, refactored the code to use a single checking function across all the branches of the casting code, and added tests for exact round-tripping (including edge cases around inf and nan).
This PR only enables the casting in `astype`. I still need to add tests that I disallow `'same_value'` elsewhere.
Diff from mypy_primer, showing the effect of this PR on type check results on a corpus of open source code:

spark (https://github.com/apache/spark)
- python/pyspark/ml/functions.py:244: note: def [_ScalarT: generic[Any]] vstack(tup: Sequence[_SupportsArray[dtype[_ScalarT]] | _NestedSequence[_SupportsArray[dtype[_ScalarT]]]], *, dtype: None = ..., casting: Literal['no', 'equiv', 'safe', 'same_kind', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[_ScalarT]]
+ python/pyspark/ml/functions.py:244: note: def [_ScalarT: generic[Any]] vstack(tup: Sequence[_SupportsArray[dtype[_ScalarT]] | _NestedSequence[_SupportsArray[dtype[_ScalarT]]]], *, dtype: None = ..., casting: Literal['no', 'equiv', 'safe', 'same_kind', 'same_value', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[_ScalarT]]
- python/pyspark/ml/functions.py:244: note: def [_ScalarT: generic[Any]] vstack(tup: Sequence[Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str]], *, dtype: type[_ScalarT] | dtype[_ScalarT] | _SupportsDType[dtype[_ScalarT]], casting: Literal['no', 'equiv', 'safe', 'same_kind', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[_ScalarT]]
+ python/pyspark/ml/functions.py:244: note: def [_ScalarT: generic[Any]] vstack(tup: Sequence[Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str]], *, dtype: type[_ScalarT] | dtype[_ScalarT] | _SupportsDType[dtype[_ScalarT]], casting: Literal['no', 'equiv', 'safe', 'same_kind', 'same_value', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[_ScalarT]]
- python/pyspark/ml/functions.py:244: note: def vstack(tup: Sequence[Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str]], *, dtype: type[Any] | dtype[Any] | _SupportsDType[dtype[Any]] | tuple[Any, Any] | list[Any] | _DTypeDict | str | None = ..., casting: Literal['no', 'equiv', 'safe', 'same_kind', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[Any]]
+ python/pyspark/ml/functions.py:244: note: def vstack(tup: Sequence[Buffer | _SupportsArray[dtype[Any]] | _NestedSequence[_SupportsArray[dtype[Any]]] | complex | bytes | str | _NestedSequence[complex | bytes | str]], *, dtype: type[Any] | dtype[Any] | _SupportsDType[dtype[Any]] | tuple[Any, Any] | list[Any] | _DTypeDict | str | None = ..., casting: Literal['no', 'equiv', 'safe', 'same_kind', 'same_value', 'unsafe'] = ...) -> ndarray[tuple[Any, ...], dtype[Any]]
Users have asked for an option to raise an error when casting overflows. This is hard to do with the NEP-50 based casting, which only offers blanket rules for casting one dtype to another, without taking the actual values into account. This PR adds a new `same_value` option to the `casting` kwarg (implemented in `PyArray_CastingConverter`), and extends the casting loop functions to raise a `ValueError` when `same_value` casting is requested and the value would be changed by the cast. So far this is only implemented for `ndarray.astype`, i.e. `np.array([1000]).astype(np.int8, casting='same_value')` will now raise an error.

Performance: since the PR touches the assignment deep in the inner loop, I checked early on that it does not impact performance when not using `same_value` casting. The loop pseudo code now looks like this, and when compiled with `-O3` (via a gcc pragma specific to the loop), it seems the compiler is smart enough to make the `if` condition not impact performance.
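(The pseudo-code block referenced here did not survive in this extract. Below is a rough reconstruction of the shape it describes for one concrete cast; the `context.flags`/`NPY_SAME_VALUE_CASTING` usage follows the diff hunks quoted above, while the loop body and names are otherwise illustrative rather than the PR's exact code.)

```c
static int
int64_to_int8_same_value_loop(PyArrayMethod_Context *context,
                              char *const data[],
                              npy_intp const dimensions[],
                              npy_intp const strides[],
                              NpyAuxData *NPY_UNUSED(auxdata))
{
    npy_intp N = dimensions[0];
    char *src = data[0], *dst = data[1];
    /* hoisted out of the loop; with -O3 the branch below costs nothing
     * measurable when the flag is not set */
    int same_value = (context->flags & NPY_SAME_VALUE_CASTING) != 0;

    for (npy_intp i = 0; i < N; i++) {
        npy_int64 value = *(npy_int64 *)src;
        npy_int8 casted = (npy_int8)value;
        if (same_value && (npy_int64)casted != value) {
            /* the value would change: signal failure so the caller can
             * raise ValueError (as astype(..., casting='same_value') does) */
            return -1;
        }
        *(npy_int8 *)dst = casted;
        src += strides[0];
        dst += strides[1];
    }
    return 0;
}
```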
Protection: This PR only changes `ndarray.astype`. Future PRs may implement more. I tried to guard the places that use `PyArray_CastingConverter` to raise an error if `'same_value'` is passed in:

- `array_datetime_as_string` (by disallowing `'same_value'` in `can_cast_datetime64_units`)
- `array_copyto`
- `array_concatenate` (by disallowing `'same_value'` in `PyArray_ConcatenateInto`)
- `array_einsum` (by disallowing `'same_value'` in `PyArray_EinsteinSum`)
- `ufunc_generic_fastcall`

`'same_value'` is allowed in:

- `array_astype` (the whole goal of this PR)
- `array_can_cast_safely`
- `npyiter_init` (I am pretty sure that is OK?)
- `NpyIter_NestedIters` (I am pretty sure that is OK?)

Testing: I added tests for `ndarray.astype()` covering all combinations of built-in types, and also some tests for properly blocking `casting='same_value'`.

TODO:

- `astype(..., casting='same_value')` with `datetime64` or user-defined dtypes?