
ENH: stats.pearsonr: add array API support #20284


Merged: 19 commits, Mar 29, 2024

Conversation

mdhaber
Contributor

@mdhaber mdhaber commented Mar 19, 2024

Reference issue

gh-20137
Closes gh-20324

What does this implement/fix?

This explores the addition of Array API support to scipy.stats.pearsonr. Only the last commit is relevant; the others are from gh-20137.

Additional information

Need to resolve merge conflicts and skip a test on 32-bit, but otherwise, I think this is ready to go.


Old news:

Most of this will be pretty straightforward. There are some little things I'll want to address later (e.g. previously, pearsonr converted inputs to be at least float64, but I imagine we'd want to respect dtype with array API), but there is one big question for now:

Calculation of the p-value currently relies on the incomplete beta function, which is not among the special functions for which we have experimental array API support (gh-19023). Even if it were, calculation of the p-value currently relies on the distribution infrastructure. In any case, calculation of the statistic with an alternative array backend can easily be done now, but calculation of the p-value with an alternative array backend will take more time.

For vectorized calculations, I think there is still value in calculating the statistic with the alternative array backend, then converting the statistic (which has been reduced by at least one dimension) to a NumPy array for calculation of the p-value. Are there objections to this?

I know there have been objections to converting non-NumPy arrays to NumPy arrays for running compiled code, but I think this is a little different, since the statistic might not even be an array after the reducing operation, and if it is, it's smaller than the original array.
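As a rough sketch of what that split could look like (illustrative names only, not this PR's code; NumPy stands in for the array API namespace `xp`, and the beta-distribution p-value is the one `pearsonr`'s documentation describes):

```python
import numpy as np
from scipy import stats

# NumPy stands in for the array API namespace `xp` in this sketch.
# The point is the shape of the round trip: the statistic is reduced
# in the input backend, and only the reduced result is converted to
# NumPy for the p-value.
xp = np

def pearsonr_sketch(x, y):
    xm = x - xp.mean(x, axis=-1, keepdims=True)
    ym = y - xp.mean(y, axis=-1, keepdims=True)
    # statistic: computed entirely with the input backend
    r = xp.sum(xm * ym, axis=-1) / xp.sqrt(
        xp.sum(xm**2, axis=-1) * xp.sum(ym**2, axis=-1))
    # p-value: per the pearsonr docs, r follows a beta distribution on
    # [-1, 1] under H0; hop to NumPy (small, reduced array) and back
    n = x.shape[-1]
    dist = stats.beta(n/2 - 1, n/2 - 1, loc=-1, scale=2)
    p = xp.asarray(2 * dist.sf(np.abs(np.asarray(r))))
    return r, p

rng = np.random.default_rng(12345)
x, y = rng.standard_normal((2, 5, 100))
r, p = pearsonr_sketch(x, y)  # r.shape == p.shape == (5,)
```

Note that only the `(5,)`-shaped reduced statistic crosses the backend boundary, not the `(5, 100)` inputs.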

@mdhaber mdhaber added scipy.stats array types Items related to array API support and input array validation (see gh-18286) labels Mar 19, 2024
@lucascolley
Member

lucascolley commented Mar 20, 2024

For vectorized calculations, I think there is still value in calculating the statistic with the alternative array backend, then converting the statistic (which has been reduced by at least one dimension) to a NumPy array for calculation of the p-value. Are there objections to this?

This sounds right to me. As long as we avoid converting back-and-forth multiple times, one pair of conversions is necessary (unless someone decides to write the special functions in pure Python 😅 (or the special extension gets into the standard...) and the distn. infra gets support).

@mdhaber
Contributor Author

mdhaber commented Mar 20, 2024

Yes, in the near term, the new distribution infrastructure will be able to evaluate the special functions in an array API compatible way, and I will give the resampling methods array API support soon, too. So this would just be temporary, probably for one release only, if that.

Further out, yeah, the special function array API extension (data-apis/array-api#725) would speed things up considerably. (Oops looks like you mentioned it in an update, but maybe good to have the link here for others.)

So in the meantime, is there a canonical way to do the conversion? np.asarray alone doesn't cut it, right?

@lucascolley
Member

lucascolley commented Mar 20, 2024

So in the meantime, is there a canonical way to do the conversion? np.asarray alone doesn't cut it, right?

Just use np.asarray for now is my suggestion (and xp.asarray at the end). There is no universally portable solution, but the idiomatic way going forward will be to use from_dlpack and co. That has seen updates quite recently and we don't use it anywhere yet, so it's probably easiest to update wholesale across submodules at a later date. Plus, np.asarray is sufficient for the backends with which we test in CI.
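For illustration, the two routes mentioned above look like the following (NumPy on both ends here, since no other backend is assumed installed; `np.from_dlpack` requires NumPy >= 1.22):

```python
import numpy as np

# np.asarray works for NumPy-convertible arrays (and is all the CI
# backends need); np.from_dlpack is the portable DLPack protocol
# referred to above. A NumPy producer is used purely for illustration.
x = np.arange(5.0)
via_asarray = np.asarray(x)
via_dlpack = np.from_dlpack(x)  # consumes x via x.__dlpack__()

assert np.array_equal(via_asarray, x)
assert np.array_equal(via_dlpack, x)
```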

@lucascolley lucascolley added the enhancement A new feature or improvement label Mar 20, 2024
@mdhaber mdhaber force-pushed the pearsonr_array_api branch from c5d60dd to 675dde6 on March 21, 2024 04:18
@mdhaber mdhaber changed the title WIP: stats.pearsonr: add Array API support ENH: stats.pearsonr: add Array API support Mar 21, 2024
Member

@lucascolley lucascolley left a comment

looking pretty good!

@mdhaber mdhaber force-pushed the pearsonr_array_api branch from 02242a4 to 6ddde92 on March 21, 2024 12:56
@mdhaber mdhaber marked this pull request as ready for review March 21, 2024 13:03
Member

@lucascolley lucascolley left a comment

do you want to add the CI check in this PR?

res_ci = res.confidence_interval()
ref_ci = ref.confidence_interval()
xp_assert_close(res_ci.low, xp.asarray(ref_ci.low))
xp_assert_close(res_ci.high, xp.asarray(ref_ci.high))
Contributor Author

@mdhaber mdhaber Mar 21, 2024

Could use input validation tests. Maybe should also check edge cases like length-2 input, constant input, etc... with array API but I imagine that someday we'll just want to have the option of running all tests with non-numpy arrays, no? I don't think we should duplicate existing tests for array API, right? Should I convert all the tests now?

Member

Yeah, better to convert existing tests where possible, perhaps splitting into 2 and running one with np_only.

Member

@lucascolley lucascolley Mar 21, 2024

e.g.

@skip_if_array_api(np_only=True)
@pytest.mark.parametrize("dtype", [np.float16, np.longdouble])
def test_dtypes_nonstandard(self, dtype):
    x = random(30).astype(dtype)
    out_dtypes = {np.float16: np.complex64, np.longdouble: np.clongdouble}
    x_complex = x.astype(out_dtypes[dtype])

    res_fft = fft.ifft(fft.fft(x))
    res_rfft = fft.irfft(fft.rfft(x))
    res_hfft = fft.hfft(fft.ihfft(x), x.shape[0])
    # Check both numerical results and exact dtype matches
    assert_array_almost_equal(res_fft, x_complex)
    assert_array_almost_equal(res_rfft, x)
    assert_array_almost_equal(res_hfft, x)
    assert res_fft.dtype == x_complex.dtype
    assert res_rfft.dtype == np.result_type(np.float32, x.dtype)
    assert res_hfft.dtype == np.result_type(np.float32, x.dtype)

@pytest.mark.parametrize("dtype", ["float32", "float64"])
def test_dtypes_real(self, dtype, xp):
    x = xp.asarray(random(30), dtype=getattr(xp, dtype))

    res_rfft = fft.irfft(fft.rfft(x))
    res_hfft = fft.hfft(fft.ihfft(x), x.shape[0])
    # Check both numerical results and exact dtype matches
    rtol = {"float32": 1.2e-4, "float64": 1e-8}[dtype]
    xp_assert_close(res_rfft, x, rtol=rtol, atol=0)
    xp_assert_close(res_hfft, x, rtol=rtol, atol=0)

@pytest.mark.parametrize("dtype", ["complex64", "complex128"])
def test_dtypes_complex(self, dtype, xp):
    x = xp.asarray(random(30), dtype=getattr(xp, dtype))

    res_fft = fft.ifft(fft.fft(x))
    # Check both numerical results and exact dtype matches
    rtol = {"complex64": 1.2e-4, "complex128": 1e-8}[dtype]
    xp_assert_close(res_fft, x, rtol=rtol, atol=0)

Member

Should I convert all the tests now?

If you want - it should be done eventually at least

Contributor Author

Done!

Member

Do we still need this separate test after converting the others?

Contributor Author

I thought about that. I'll think about it some more before my next push.

Contributor Author

@mdhaber mdhaber Mar 22, 2024

For old functions like this, without really thinking through the existing test suite from the bottom up, it's hard to be sure that the tests are sufficient. So I think it's valuable to have a property-based test like this. To be more thorough, I might try using hypothesis.

@mdhaber
Contributor Author

mdhaber commented Mar 21, 2024

do you want to add the CI check in this PR?

Oops, missed that.
So maybe:

        python dev.py --no-build test -b all -t scipy.stats.tests.test_stats -- --durations 3 --timeout=60

at the end of array_api.yml?

@lucascolley
Member

Yeah, looks correct 👍

@mdhaber
Contributor Author

mdhaber commented Mar 22, 2024

I had to create my own _move_axis_to_end function to get past that failure and see what else was wrong. I also added a _clip function and fixed the uses of [()] to get all the tests passing, so I went ahead and left everything in there. We can replace _move_axis_to_end and _clip when array_api_strict is updated.

Regarding [()] - These four lines used to appear all over the place in stats to ensure that we return scalars instead of 0D arrays:

if res.ndim == 0:
    return res.item()
else:
    return res

Using res[()] is not only more compact, but it also ensures that the values are NumPy scalars instead of Python scalars. This is more consistent with NumPy's behavior.

array_api_strict allows the use of [()] on 0D arrays, but raises an IndexError when the array has 1 or more dimensions, so I replaced this idiom with a ternary like res[()] if res.ndim == 0 else res. This is not so bad, but is there another option I'm missing?
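For reference, the behavior of the idiom discussed here (NumPy shown; array_api_strict behaves the same for the 0-d case):

```python
import numpy as np

res0 = np.asarray(3.5)           # 0-d result of a full reduction
res1 = np.asarray([3.5, 4.5])    # 1-d result

# res0[()] extracts a NumPy scalar from a 0-d array; the ternary
# leaves higher-dimensional results untouched.
out0 = res0[()] if res0.ndim == 0 else res0
out1 = res1[()] if res1.ndim == 0 else res1

assert isinstance(out0, np.generic)  # NumPy scalar (np.float64)
assert out1 is res1                  # unchanged
```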

@lucascolley
Member

array_api_strict allows the use of [()] on 0D arrays, but raises an IndexError when the array has 1 or more dimensions, so I replaced this idiom with a ternary like res[()] if res.ndim == 0 else res. This is not so bad, but is there another option I'm missing?

@rgommers has this point about returning scalars come up previously?

@rgommers
Member

@rgommers has this point about returning scalars come up previously?

I don't think so. This pattern is pretty specific to SciPy.

Using res[()] is not only more compact, but it also ensures that the values are NumPy scalars instead of Python scalars. This is more consistent with NumPy's behavior.

Agreed, that is better.

so I replaced this idiom with a ternary like res[()] if res.ndim == 0 else res. This is not so bad, but is there another option I'm missing?

That looks okay to me; I don't think it'll get clearer/shorter than that.

Member

@lucascolley lucascolley left a comment

I know this isn't finished yet but the array API changes look good to me (I do need to check CUDA though). If a stats maintainer could give a 👍 once it is ready that would be good.


@asmeurer out of interest, do you have a rough ETA for 2023.12 support in array-api-strict?


def _move_axis_to_end(x, source, xp):
    axes = list(range(x.ndim))
    temp = axes.pop(source)
Member

Suggested change
temp = axes.pop(source)
temp = axes.pop(source)

@mdhaber
Contributor Author

mdhaber commented Mar 25, 2024

Actually I do think I'd like to consider this done, unless there was something besides the array API test? After I do a few of these, I'll come back and replace the separate array API test with a more thorough test with hypothesis. It would probably be best to do those at the same time anyway, since I think we can share some things (e.g. strategies) between the tests.

@mdhaber
Contributor Author

mdhaber commented Mar 25, 2024

@tupui Would you be interested in checking that I didn't change anything from a stats perspective?

@lucascolley lucascolley changed the title ENH: stats.pearsonr: add Array API support ENH: stats.pearsonr: add array API support Mar 25, 2024
@lucascolley
Member

I'm having lots of problems with trying to get CUDA working again after changing some drivers while working on JAX. I'll try to get it resolved but someone else may have to check GPU.

@lucascolley
Member

Okay, I got PyTorch CUDA working, quite a few failures, see below @mdhaber. CuPy will have to wait (cupy/cupy#8260 ...) but that's alright.

traceback
______________________________________________________________________________ TestPearsonr.test_r_almost_exactly_pos1[torch] ______________________________________________________________________________
scipy/stats/tests/test_stats.py:361: in test_r_almost_exactly_pos1
    r, prob = stats.pearsonr(a, a)
        a          = tensor([0., 1., 2.], device='cuda:0')
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd2991190>
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 0.5
        alternative = 'two-sided'
        axis       = -1
        axis_int   = 0
        const_x    = tensor(False, device='cuda:0')
        const_xy   = tensor(False, device='cuda:0')
        const_y    = tensor(False, device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9dd4213490>
        dtype      = torch.float32
        method     = None
        n          = 3
        nconst_x   = tensor(False, device='cuda:0')
        nconst_xy  = tensor(False, device='cuda:0')
        nconst_y   = tensor(False, device='cuda:0')
        normxm     = tensor([1.4142], device='cuda:0')
        normym     = tensor([1.4142], device='cuda:0')
        one        = tensor(1., device='cuda:0')
        r          = tensor(1., device='cuda:0')
        threshold  = 6.4155305118844185e-06
        x          = tensor([0., 1., 2.], device='cuda:0')
        xm         = tensor([-1.,  0.,  1.], device='cuda:0')
        xmax       = tensor([1.], device='cuda:0')
        xmean      = tensor([1.], device='cuda:0')
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([0., 1., 2.], device='cuda:0')
        ym         = tensor([-1.,  0.,  1.], device='cuda:0')
        ymax       = tensor([1.], device='cuda:0')
        ymean      = tensor([1.], device='cuda:0')
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(1., device='cuda:0')
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(1., device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor(1., device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor(1., device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(1., device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor(1., device='cuda:0')
The remaining failures (TestPearsonr.test_r_almost_exactly_neg1[torch], test_basic[torch], test_constant_input[torch], and test_near_constant_input[float32-torch] and [float64-torch]) are identical: each reaches scipy/stats/_stats_py.py:4909, where pvalue = _get_pvalue(np.asarray(r), dist, alternative) tries to convert a cuda:0 tensor to NumPy and raises TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        y          = tensor([3.0000, 3.0000, 3.0000], device='cuda:0', dtype=torch.float64)
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 0.5
        alternative = 'two-sided'
        axis       = -1
        axis_int   = 0
        const_x    = tensor(False, device='cuda:0')
        const_xy   = tensor(False, device='cuda:0')
        const_y    = tensor(False, device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9dd27dba90>
        dtype      = torch.float64
        method     = None
        msg        = 'An input array is nearly constant; the computed correlation coefficient may be inaccurate.'
        n          = 3
        nconst_x   = tensor(True, device='cuda:0')
        nconst_xy  = tensor(True, device='cuda:0')
        nconst_y   = tensor(True, device='cuda:0')
        normxm     = tensor([4.4409e-16], device='cuda:0', dtype=torch.float64)
        normym     = tensor([2.1756e-15], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor(0.8165, device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([2.0000, 2.0000, 2.0000], device='cuda:0', dtype=torch.float64)
        xm         = tensor([0.0000e+00, 0.0000e+00, 4.4409e-16], device='cuda:0',
       dtype=torch.float64)
        xmax       = tensor([4.4409e-16], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([2.], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([3.0000, 3.0000, 3.0000], device='cuda:0', dtype=torch.float64)
        ym         = tensor([-8.8818e-16, -8.8818e-16,  1.7764e-15], device='cuda:0',
       dtype=torch.float64)
        ymax       = tensor([1.7764e-15], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([3.0000], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(0.8165, device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(0.8165, device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor(0.8165, device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor(0.8165, device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(0.8165, device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor(0.8165, device='cuda:0', dtype=torch.float64)
_____________________________________________________________________________ TestPearsonr.test_very_small_input_values[torch] _____________________________________________________________________________
scipy/stats/tests/test_stats.py:418: in test_very_small_input_values
    r, p = stats.pearsonr(x, y)
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd35d7ad0>
        x          = tensor([0.0044, 0.0048, 0.0039, 0.0038, 0.0034], device='cuda:0',
       dtype=torch.float64)
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = tensor([2.4800e-188, 7.4100e-181, 4.0900e-208, 2.0800e-223, 2.6600e-245],
       device='cuda:0', dtype=torch.float64)
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 1.5
        alternative = 'two-sided'
        axis       = -1
        axis_int   = 0
        const_x    = tensor(False, device='cuda:0')
        const_xy   = tensor(False, device='cuda:0')
        const_y    = tensor(False, device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9dd2f72a50>
        dtype      = torch.float64
        method     = None
        n          = 5
        nconst_x   = tensor(False, device='cuda:0')
        nconst_xy  = tensor(False, device='cuda:0')
        nconst_y   = tensor(False, device='cuda:0')
        normxm     = tensor([0.0011], device='cuda:0', dtype=torch.float64)
        normym     = tensor([6.6277e-181], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor(0.7273, device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([0.0044, 0.0048, 0.0039, 0.0038, 0.0034], device='cuda:0',
       dtype=torch.float64)
        xm         = tensor([ 0.0004,  0.0007, -0.0002, -0.0003, -0.0007], device='cuda:0',
       dtype=torch.float64)
        xmax       = tensor([0.0007], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([0.0041], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([2.4800e-188, 7.4100e-181, 4.0900e-208, 2.0800e-223, 2.6600e-245],
       device='cuda:0', dtype=torch.float64)
        ym         = tensor([-1.4820e-181,  5.9280e-181, -1.4820e-181, -1.4820e-181, -1.4820e-181],
       device='cuda:0', dtype=torch.float64)
        ymax       = tensor([5.9280e-181], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([1.4820e-181], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(0.7273, device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(0.7273, device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor(0.7273, device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor(0.7273, device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(0.7273, device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor(0.7273, device='cuda:0', dtype=torch.float64)
_____________________________________________________________________________ TestPearsonr.test_very_large_input_values[torch] _____________________________________________________________________________
scipy/stats/tests/test_stats.py:432: in test_very_large_input_values
    r, p = stats.pearsonr(x, y)
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd35d5090>
        x          = tensor([0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+90, 1.0000e+90, 1.0000e+90,
        1.0000e+90], device='cuda:0', dtype=torch.float64)
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = tensor([0.0000e+00, 1.0000e+90, 2.0000e+90, 3.0000e+90, 4.0000e+90, 5.0000e+90,
        6.0000e+90], device='cuda:0', dtype=torch.float64)
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 2.5
        alternative = 'two-sided'
        axis       = -1
        axis_int   = 0
        const_x    = tensor(False, device='cuda:0')
        const_xy   = tensor(False, device='cuda:0')
        const_y    = tensor(False, device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9dd32fdc50>
        dtype      = torch.float64
        method     = None
        n          = 7
        nconst_x   = tensor(False, device='cuda:0')
        nconst_xy  = tensor(False, device='cuda:0')
        nconst_y   = tensor(False, device='cuda:0')
        normxm     = tensor([1.3093e+90], device='cuda:0', dtype=torch.float64)
        normym     = tensor([5.2915e+90], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor(0.8660, device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([0.0000e+00, 0.0000e+00, 0.0000e+00, 1.0000e+90, 1.0000e+90, 1.0000e+90,
        1.0000e+90], device='cuda:0', dtype=torch.float64)
        xm         = tensor([-5.7143e+89, -5.7143e+89, -5.7143e+89,  4.2857e+89,  4.2857e+89,
         4.2857e+89,  4.2857e+89], device='cuda:0', dtype=torch.float64)
        xmax       = tensor([5.7143e+89], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([5.7143e+89], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([0.0000e+00, 1.0000e+90, 2.0000e+90, 3.0000e+90, 4.0000e+90, 5.0000e+90,
        6.0000e+90], device='cuda:0', dtype=torch.float64)
        ym         = tensor([-3.0000e+90, -2.0000e+90, -1.0000e+90,  4.5231e+74,  1.0000e+90,
         2.0000e+90,  3.0000e+90], device='cuda:0', dtype=torch.float64)
        ymax       = tensor([3.0000e+90], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([3.0000e+90], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(0.8660, device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(0.8660, device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor(0.8660, device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor(0.8660, device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(0.8660, device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor(0.8660, device='cuda:0', dtype=torch.float64)
__________________________________________________________________________ TestPearsonr.test_extremely_large_input_values[torch] ___________________________________________________________________________
scipy/stats/tests/test_stats.py:445: in test_extremely_large_input_values
    r, p = stats.pearsonr(x, y)
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd293dc50>
        x          = tensor([2.3000e+200, 4.5000e+200, 6.7000e+200, 8.0000e+200], device='cuda:0',
       dtype=torch.float64)
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = tensor([1.2000e+199, 5.5000e+200, 3.3000e+201, 1.0000e+200], device='cuda:0',
       dtype=torch.float64)
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 1.0
        alternative = 'two-sided'
        axis       = -1
        axis_int   = 0
        const_x    = tensor(False, device='cuda:0')
        const_xy   = tensor(False, device='cuda:0')
        const_y    = tensor(False, device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9dd39a4750>
        dtype      = torch.float64
        method     = None
        n          = 4
        nconst_x   = tensor(False, device='cuda:0')
        nconst_xy  = tensor(False, device='cuda:0')
        nconst_y   = tensor(False, device='cuda:0')
        normxm     = tensor([4.3437e+200], device='cuda:0', dtype=torch.float64)
        normym     = tensor([2.6978e+201], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor(0.3513, device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([2.3000e+200, 4.5000e+200, 6.7000e+200, 8.0000e+200], device='cuda:0',
       dtype=torch.float64)
        xm         = tensor([-3.0750e+200, -8.7500e+199,  1.3250e+200,  2.6250e+200],
       device='cuda:0', dtype=torch.float64)
        xmax       = tensor([3.0750e+200], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([5.3750e+200], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([1.2000e+199, 5.5000e+200, 3.3000e+201, 1.0000e+200], device='cuda:0',
       dtype=torch.float64)
        ym         = tensor([-9.7850e+200, -4.4050e+200,  2.3095e+201, -8.9050e+200],
       device='cuda:0', dtype=torch.float64)
        ymax       = tensor([2.3095e+201], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([9.9050e+200], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(0.3513, device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(0.3513, device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor(0.3513, device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor(0.3513, device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(0.3513, device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor(0.3513, device='cuda:0', dtype=torch.float64)
_______________________________________________________________________ TestPearsonr.test_negative_correlation_pvalue_gh17795[torch] _______________________________________________________________________
scipy/stats/tests/test_stats.py:507: in test_negative_correlation_pvalue_gh17795
    test_greater = stats.pearsonr(x, y, alternative='greater')
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd292c590>
        x          = tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.], device='cuda:0')
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = tensor([-0., -1., -2., -3., -4., -5., -6., -7., -8., -9.], device='cuda:0')
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 4.0
        alternative = 'greater'
        axis       = -1
        axis_int   = 0
        const_x    = tensor(False, device='cuda:0')
        const_xy   = tensor(False, device='cuda:0')
        const_y    = tensor(False, device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9d9faa9250>
        dtype      = torch.float32
        method     = None
        n          = 10
        nconst_x   = tensor(False, device='cuda:0')
        nconst_xy  = tensor(False, device='cuda:0')
        nconst_y   = tensor(False, device='cuda:0')
        normxm     = tensor([9.0830], device='cuda:0')
        normym     = tensor([9.0830], device='cuda:0')
        one        = tensor(1., device='cuda:0')
        r          = tensor(-1., device='cuda:0')
        threshold  = 6.4155305118844185e-06
        x          = tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.], device='cuda:0')
        xm         = tensor([-4.5000, -3.5000, -2.5000, -1.5000, -0.5000,  0.5000,  1.5000,  2.5000,
         3.5000,  4.5000], device='cuda:0')
        xmax       = tensor([4.5000], device='cuda:0')
        xmean      = tensor([4.5000], device='cuda:0')
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([-0., -1., -2., -3., -4., -5., -6., -7., -8., -9.], device='cuda:0')
        ym         = tensor([ 4.5000,  3.5000,  2.5000,  1.5000,  0.5000, -0.5000, -1.5000, -2.5000,
        -3.5000, -4.5000], device='cuda:0')
        ymax       = tensor([4.5000], device='cuda:0')
        ymean      = tensor([-4.5000], device='cuda:0')
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(-1., device='cuda:0')
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(-1., device='cuda:0'),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor(-1., device='cuda:0')]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor(-1., device='cuda:0'),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(-1., device='cuda:0'),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor(-1., device='cuda:0')
_________________________________________________________________________ TestPearsonr.test_length3_r_exactly_negative_one[torch] __________________________________________________________________________
scipy/stats/tests/test_stats.py:515: in test_length3_r_exactly_negative_one
    res = stats.pearsonr(x, y)
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd292d790>
        x          = tensor([1, 2, 3], device='cuda:0')
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = tensor([  5,  -4, -13], device='cuda:0')
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 0.5
        alternative = 'two-sided'
        axis       = -1
        axis_int   = 0
        const_x    = tensor(False, device='cuda:0')
        const_xy   = tensor(False, device='cuda:0')
        const_y    = tensor(False, device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9dd2da0e10>
        dtype      = torch.float64
        method     = None
        n          = 3
        nconst_x   = tensor(False, device='cuda:0')
        nconst_xy  = tensor(False, device='cuda:0')
        nconst_y   = tensor(False, device='cuda:0')
        normxm     = tensor([1.4142], device='cuda:0', dtype=torch.float64)
        normym     = tensor([12.7279], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor(-1.0000, device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([1., 2., 3.], device='cuda:0', dtype=torch.float64)
        xm         = tensor([-1.,  0.,  1.], device='cuda:0', dtype=torch.float64)
        xmax       = tensor([1.], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([2.], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([  5.,  -4., -13.], device='cuda:0', dtype=torch.float64)
        ym         = tensor([ 9.,  0., -9.], device='cuda:0', dtype=torch.float64)
        ymax       = tensor([9.], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([-4.], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(-1.0000, device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(-1.0000, device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor(-1.0000, device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor(-1.0000, device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(-1.0000, device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor(-1.0000, device='cuda:0', dtype=torch.float64)
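Every traceback above reduces to the same root cause: `pearsonr` computes the statistic `r` on the GPU, then `_stats_py.py:4909` calls `_get_pvalue(np.asarray(r), dist, alternative)`, and torch's `Tensor.__array__` refuses to convert a CUDA tensor to NumPy without an explicit copy to host memory. A minimal sketch of a device-safe conversion is below; `to_host_numpy` is an illustrative name for this comment, not SciPy's actual API or the fix adopted in this PR.

```python
import numpy as np

def to_host_numpy(x):
    """Coerce an array-like to a NumPy array, moving off-device data
    (e.g. a torch CUDA tensor) to host memory first.

    Hypothetical helper for illustration only. torch tensors expose
    .cpu(), which is exactly what the TypeError above asks for; plain
    NumPy inputs pass through np.asarray unchanged.
    """
    if hasattr(x, "cpu"):  # duck-typed check for torch-style tensors
        x = x.cpu()
    return np.asarray(x)

# Plain NumPy input is unaffected:
print(to_host_numpy(np.float64(0.5774)))
```

Note this only makes the host transfer explicit; it does not avoid the device-to-host copy itself, which is the trade-off discussed in the PR description (the statistic is already reduced by at least one dimension, so the copied array is small).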
________________________________________________________________________________ TestPearsonr.test_nd_special_cases[torch] _________________________________________________________________________________
scipy/stats/tests/test_stats.py:649: in test_nd_special_cases
    res = stats.pearsonr(x, y0, axis=1)
        message    = 'An input array is constant'
        rng        = Generator(PCG64) at 0x7A9DD3674900
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd290bb10>
        x          = tensor([[1.0000, 1.0000, 1.0000, 1.0000, 1.0000],
        [0.8902, 0.6780, 0.2770, 0.2558, 0.0457],
        [0.7438, 0.3129, 0.3581, 0.2467, 0.0846]], device='cuda:0',
       dtype=torch.float64)
        x0         = tensor([[0.5498, 0.4304, 0.5973, 0.0502, 0.9180],
        [0.8902, 0.6780, 0.2770, 0.2558, 0.0457],
        [0.7438, 0.3129, 0.3581, 0.2467, 0.0846]], device='cuda:0',
       dtype=torch.float64)
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y0         = tensor([[0.4358, 0.5452, 0.8832, 0.0956, 0.3132],
        [0.9488, 0.9578, 0.5985, 0.7457, 0.0528],
        [0.5912, 0.9622, 0.0335, 0.1115, 0.6736]], device='cuda:0',
       dtype=torch.float64)
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 1.5
        alternative = 'two-sided'
        axis       = -1
        axis_int   = 1
        const_x    = tensor([ True, False, False], device='cuda:0')
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
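Every parametrized torch-CUDA failure in this log reduces to the same line: `pearsonr` calls `np.asarray(r)` unconditionally at `scipy/stats/_stats_py.py:4909`, and NumPy cannot view a CUDA tensor, so torch raises before the p-value is computed. A minimal device-aware conversion might look like the sketch below (the `to_numpy` helper and the `hasattr` check are illustrative assumptions, not SciPy's actual code, which may route through `array_api_compat` helpers instead):

```python
import numpy as np

def to_numpy(x):
    # Copy a backend array to host memory before handing it to NumPy.
    # CUDA tensors raise TypeError inside np.asarray (as in the log
    # above), so move them to the CPU first. The helper name and the
    # hasattr check are illustrative assumptions, not SciPy's code.
    if hasattr(x, "cpu"):      # e.g. a torch.Tensor on any device
        x = x.cpu()
    return np.asarray(x)

# NumPy inputs pass through unchanged; torch tensors are copied to host.
r_host = to_numpy(np.array([np.nan, 0.8490, 0.0233]))
```

Since `r` has already been reduced along `axis`, this host copy is small compared with the original inputs, which is the trade-off discussed in the PR description.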
________________________________________________________________________________ TestPearsonr.test_array_api[less-0-torch] _________________________________________________________________________________
scipy/stats/tests/test_stats.py:681: in test_array_api
    res = stats.pearsonr(xp.asarray(x), xp.asarray(y),
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
________________________________________________________________________________ TestPearsonr.test_array_api[less-1-torch] _________________________________________________________________________________
scipy/stats/tests/test_stats.py:681: in test_array_api
    res = stats.pearsonr(xp.asarray(x), xp.asarray(y),
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
_______________________________________________________________________________ TestPearsonr.test_array_api[less-None-torch] _______________________________________________________________________________
scipy/stats/tests/test_stats.py:681: in test_array_api
    res = stats.pearsonr(xp.asarray(x), xp.asarray(y),
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
_______________________________________________________________________________ TestPearsonr.test_array_api[greater-0-torch] _______________________________________________________________________________
scipy/stats/tests/test_stats.py:681: in test_array_api
    res = stats.pearsonr(xp.asarray(x), xp.asarray(y),
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
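For reference, the flow proposed in the PR description (compute the statistic with the native backend, then copy only the reduced result to the host for the p-value machinery) can be sketched with NumPy standing in for an arbitrary array-API namespace; `xp` and the variable names are illustrative, not SciPy's implementation:

```python
import numpy as np

# `xp` stands in for any array-API namespace (torch, cupy, ...);
# NumPy is used here so the sketch runs anywhere.
xp = np

x = xp.asarray([1.0, 2.0, 3.0, 4.0, 5.0])
y = xp.asarray([1.1, 1.9, 3.2, 3.8, 5.3])

# Pearson r computed entirely with the backend's own operations.
xm = x - xp.mean(x)
ym = y - xp.mean(y)
r = xp.sum(xm * ym) / (xp.linalg.norm(xm) * xp.linalg.norm(ym))

# The statistic is reduced by at least one dimension relative to the
# inputs, so a host copy is cheap; a torch CUDA tensor would need
# r.cpu() here before np.asarray.
r_np = np.asarray(r)
```

Only this final conversion touches NumPy, which is why the unconditional `np.asarray(r)` at `_stats_py.py:4909` is the single line all of the CUDA failures above trip over.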
_______________________________________________________________________________ TestPearsonr.test_array_api[greater-1-torch] _______________________________________________________________________________
scipy/stats/tests/test_stats.py:681: in test_array_api
    res = stats.pearsonr(xp.asarray(x), xp.asarray(y),
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 4.5
        alternative = 'greater'
        axis       = -1
        axis_int   = 1
        const_x    = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        const_xy   = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        const_y    = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9d9cf35810>
        dtype      = torch.float64
        method     = None
        n          = 11
        nconst_x   = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        nconst_xy  = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        nconst_y   = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        normxm     = tensor([[2.3134],
        [3.3703],
        [3.0174],
        [3.5588],
        [2.7893],
        [4.7450],
        [4.1314],
        [3.7639],
        [3.7632],
        [3.2933]], device='cuda:0', dtype=torch.float64)
        normym     = tensor([[2.7549],
        [2.9301],
        [3.5768],
        [2.8263],
        [2.5557],
        [4.3706],
        [2.5169],
        [2.6224],
        [2.7146],
        [2.5738]], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor([ 0.5595, -0.3820, -0.0221, -0.5843,  0.0358, -0.1136,  0.4268, -0.0320,
         0.1027, -0.2719], device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([[-6.3320e-01,  8.4700e-01,  6.5977e-01,  7.9387e-01,  1.2209e+00,
          1.9121e-01, -7.5429e-01,  2.7702e-...e+00, -7.5885e-01,  1.1220e+00,  2.1777e+00,  1.1399e-01,
         -1.2639e+00]], device='cuda:0', dtype=torch.float64)
        xm         = tensor([[-0.9913,  0.4889,  0.3016,  0.4357,  0.8628, -0.1669, -1.1124, -0.0811,
         -0.3164,  1.1742, -0.5951],
...0.7944, -0.3736,  0.8646, -1.1434,  0.7375,
          1.7932, -0.2706, -1.6484]], device='cuda:0', dtype=torch.float64)
        xmax       = tensor([[1.1742],
        [1.7951],
        [1.6296],
        [2.3410],
        [2.1546],
        [3.0498],
        [2.5522],
        [1.8421],
        [1.8344],
        [1.7932]], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([[ 0.3581],
        [-0.0078],
        [-0.3707],
        [-0.1180],
        [ 0.1290],
        [ 0.3572],
        [ 0.1914],
        [-0.5043],
        [ 0.2233],
        [ 0.3845]], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([[-0.6217,  0.0971, -0.2275,  0.8215, -0.1614,  0.3729, -0.9149, -0.9267,
          1.2593,  1.7931, -0.0237],
...0.2114, -0.0292, -1.1500,  0.1988, -0.1751,
          0.1181, -1.2614,  0.7835]], device='cuda:0', dtype=torch.float64)
        ym         = tensor([[-0.7552, -0.0363, -0.3610,  0.6881, -0.2949,  0.2394, -1.0483, -1.0602,
          1.1259,  1.6597, -0.1572],
...0.2048,  0.3869, -0.7338,  0.6150,  0.2411,
          0.5343, -0.8452,  1.1996]], device='cuda:0', dtype=torch.float64)
        ymax       = tensor([[1.6597],
        [1.5898],
        [2.0992],
        [1.4359],
        [1.4451],
        [2.7608],
        [1.6752],
        [1.2973],
        [1.9243],
        [1.7343]], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([[ 0.1335],
        [ 0.4965],
        [ 0.1229],
        [ 0.1015],
        [-0.2455],
        [ 0.3079],
        [ 0.3285],
        [ 0.3237],
        [-0.0884],
        [-0.4162]], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([ 0.5595, -0.3820, -0.0221, -0.5843,  0.0358, -0.1136,  0.4268, -0.0320,
         0.1027, -0.2719], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([ 0.5595, -0.3820, -0.0221, -0.5843,  0.0358, -0.1136,  0.4268, -0.0320,
         0.1027, -0.2719], device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor([ 0.5595, -0.3820, -0.0221, -0.5843,  0.0358, -0.1136,  0.4268, -0.0320,
         0.1027, -0.2719], device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor([ 0.5595, -0.3820, -0.0221, -0.5843,  0.0358, -0.1136,  0.4268, -0.0320,
         0.1027, -0.2719], device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([ 0.5595, -0.3820, -0.0221, -0.5843,  0.0358, -0.1136,  0.4268, -0.0320,
         0.1027, -0.2719], device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([ 0.5595, -0.3820, -0.0221, -0.5843,  0.0358, -0.1136,  0.4268, -0.0320,
         0.1027, -0.2719], device='cuda:0', dtype=torch.float64)
_____________________________________________________________________________ TestPearsonr.test_array_api[greater-None-torch] ______________________________________________________________________________
scipy/stats/tests/test_stats.py:681: in test_array_api
    res = stats.pearsonr(xp.asarray(x), xp.asarray(y),
        alternative = 'greater'
        axis       = None
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd28fe450>
        x          = array([[-0.24431625, -0.84084623, -0.07589802, -0.11144373,  0.21578661,
         1.49903863,  1.42430243,  0.92018406....90672059,  0.66825463,
        -0.46341451,  0.22889851,  2.14067528,  0.24860339, -1.42909962,
        -0.16551465]])
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = array([[ 1.55576357,  1.28305173, -0.24697905,  1.17170251,  0.80859332,
         0.19826363,  0.22612805, -0.38215114....87020054, -0.37442017,
         0.77160499,  0.71135725,  1.06468037,  0.31461502, -2.02155495,
        -0.00855653]])
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 54.0
        alternative = 'greater'
        axis       = -1
        axis_int   = -1
        const_x    = tensor(False, device='cuda:0')
        const_xy   = tensor(False, device='cuda:0')
        const_y    = tensor(False, device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9d9fa9b010>
        dtype      = torch.float64
        method     = None
        n          = 110
        nconst_x   = tensor(False, device='cuda:0')
        nconst_xy  = tensor(False, device='cuda:0')
        nconst_y   = tensor(False, device='cuda:0')
        normxm     = tensor([9.9752], device='cuda:0', dtype=torch.float64)
        normym     = tensor([9.8779], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor(0.0192, device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([-0.2443, -0.8408, -0.0759, -0.1114,  0.2158,  1.4990,  1.4243,  0.9202,
         1.2874,  1.0433,  0.6790, -0....7,  0.6683,
        -0.4634,  0.2289,  2.1407,  0.2486, -1.4291, -0.1655], device='cuda:0',
       dtype=torch.float64)
        xm         = tensor([-0.3671, -0.9637, -0.1987, -0.2343,  0.0930,  1.3762,  1.3015,  0.7974,
         1.1646,  0.9205,  0.5562, -0....5,  0.5454,
        -0.5862,  0.1061,  2.0179,  0.1258, -1.5519, -0.2883], device='cuda:0',
       dtype=torch.float64)
        xmax       = tensor([3.6606], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([0.1228], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([ 1.5558,  1.2831, -0.2470,  1.1717,  0.8086,  0.1983,  0.2261, -0.3822,
        -0.6844,  0.0658, -0.2803, -0....2, -0.3744,
         0.7716,  0.7114,  1.0647,  0.3146, -2.0216, -0.0086], device='cuda:0',
       dtype=torch.float64)
        ym         = tensor([ 1.6093,  1.3366, -0.1934,  1.2253,  0.8621,  0.2518,  0.2797, -0.3286,
        -0.6308,  0.1193, -0.2268, -0....7, -0.3209,
         0.8252,  0.7649,  1.1182,  0.3682, -1.9680,  0.0450], device='cuda:0',
       dtype=torch.float64)
        ymax       = tensor([2.1887], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([-0.0535], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(0.0192, device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(0.0192, device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor(0.0192, device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor(0.0192, device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(0.0192, device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor(0.0192, device='cuda:0', dtype=torch.float64)
______________________________________________________________________________ TestPearsonr.test_array_api[two-sided-0-torch] ______________________________________________________________________________
scipy/stats/tests/test_stats.py:681: in test_array_api
    res = stats.pearsonr(xp.asarray(x), xp.asarray(y),
        alternative = 'two-sided'
        axis       = 0
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd28feb90>
        x          = array([[-0.81837865, -0.39824587, -0.12878998,  0.99050264, -0.49923239,
        -0.51743688,  1.19669591,  0.25104772....96482996, -1.15333051,
         0.93607518,  0.78376363, -0.89119265, -0.32770967, -0.92657213,
        -0.47194279]])
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = array([[ 0.88872796,  1.15057223,  0.17377089, -1.02033723,  0.00874263,
        -0.58315336,  2.09413673, -0.3772716 ....07241644,  1.05453301,
         1.25245848, -1.07471837,  0.44136057,  0.03251674,  0.8716997 ,
        -0.64639638]])
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 4.0
        alternative = 'two-sided'
        axis       = -1
        axis_int   = 0
        const_x    = tensor([False, False, False, False, False, False, False, False, False, False,
        False], device='cuda:0')
        const_xy   = tensor([False, False, False, False, False, False, False, False, False, False,
        False], device='cuda:0')
        const_y    = tensor([False, False, False, False, False, False, False, False, False, False,
        False], device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9dd3b16910>
        dtype      = torch.float64
        method     = None
        n          = 10
        nconst_x   = tensor([False, False, False, False, False, False, False, False, False, False,
        False], device='cuda:0')
        nconst_xy  = tensor([False, False, False, False, False, False, False, False, False, False,
        False], device='cuda:0')
        nconst_y   = tensor([False, False, False, False, False, False, False, False, False, False,
        False], device='cuda:0')
        normxm     = tensor([[2.8779],
        [2.6156],
        [2.8723],
        [1.7242],
        [1.6967],
        [2.7431],
        [3.3188],
        [3.8235],
        [2.0598],
        [2.2692],
        [2.6648]], device='cuda:0', dtype=torch.float64)
        normym     = tensor([[2.0415],
        [1.6744],
        [2.7180],
        [4.0000],
        [3.2600],
        [2.5410],
        [3.0668],
        [1.8488],
        [2.0288],
        [2.7120],
        [2.0513]], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor([ 0.2569, -0.1512, -0.3756,  0.1556, -0.3156,  0.5766,  0.0933, -0.1318,
         0.2480, -0.0556, -0.1262], device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([[-0.8184, -1.1140,  0.8830, -1.6956, -0.1717,  0.3178,  0.4442, -0.4181,
         -0.3253,  1.5163],
        [...0.1001, -1.6024,  0.7419, -0.7230, -0.9215,  0.5873,
         -1.0420, -0.4719]], device='cuda:0', dtype=torch.float64)
        xm         = tensor([[-6.8021e-01, -9.7583e-01,  1.0212e+00, -1.5574e+00, -3.3536e-02,
          4.5601e-01,  5.8241e-01, -2.7994e-...        -5.4714e-01, -7.4565e-01,  7.6319e-01, -8.6615e-01, -2.9605e-01]],
       device='cuda:0', dtype=torch.float64)
        xmax       = tensor([[1.6544],
        [1.5043],
        [1.3681],
        [0.9175],
        [1.0684],
        [1.5895],
        [2.1368],
        [1.8627],
        [1.2631],
        [1.1018],
        [1.4265]], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([[-0.1382],
        [-0.1494],
        [ 0.2112],
        [ 0.4735],
        [-0.5166],
        [-0.1564],
    ...89],
        [-0.3737],
        [ 0.3873],
        [-0.1645],
        [-0.1759]], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([[ 0.8887,  0.2932, -0.4813, -1.2553, -0.4926,  0.1203,  0.8482, -0.5886,
         -0.5360,  0.0642],
        [...1.3312,  0.2100, -0.0282,  0.4930,  0.8388,  0.0129,
         -0.2765, -0.6464]], device='cuda:0', dtype=torch.float64)
        ym         = tensor([[ 1.0026,  0.4071, -0.3674, -1.1414, -0.3787,  0.2342,  0.9622, -0.4746,
         -0.4221,  0.1781],
        [...1.2919,  0.2493,  0.0111,  0.5322,  0.8781,  0.0521,
         -0.2372, -0.6071]], device='cuda:0', dtype=torch.float64)
        ymax       = tensor([[1.1414],
        [0.8536],
        [1.5278],
        [1.9165],
        [2.1754],
        [1.2461],
        [1.7633],
        [1.2179],
        [1.3204],
        [1.3186],
        [1.2919]], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([[-0.1139],
        [ 0.5515],
        [ 0.1106],
        [ 0.4347],
        [-0.3421],
        [ 0.6630],
    ...08],
        [ 0.1139],
        [ 0.6886],
        [-0.1453],
        [-0.0393]], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([ 0.2569, -0.1512, -0.3756,  0.1556, -0.3156,  0.5766,  0.0933, -0.1318,
         0.2480, -0.0556, -0.1262], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([ 0.2569, -0.1512, -0.3756,  0.1556, -0.3156,  0.5766,  0.0933, -0.1318,
         0.2480, -0.0556, -0.1262], device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor([ 0.2569, -0.1512, -0.3756,  0.1556, -0.3156,  0.5766,  0.0933, -0.1318,
         0.2480, -0.0556, -0.1262], device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor([ 0.2569, -0.1512, -0.3756,  0.1556, -0.3156,  0.5766,  0.0933, -0.1318,
         0.2480, -0.0556, -0.1262], device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([ 0.2569, -0.1512, -0.3756,  0.1556, -0.3156,  0.5766,  0.0933, -0.1318,
         0.2480, -0.0556, -0.1262], device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([ 0.2569, -0.1512, -0.3756,  0.1556, -0.3156,  0.5766,  0.0933, -0.1318,
         0.2480, -0.0556, -0.1262], device='cuda:0', dtype=torch.float64)
______________________________________________________________________________ TestPearsonr.test_array_api[two-sided-1-torch] ______________________________________________________________________________
scipy/stats/tests/test_stats.py:681: in test_array_api
    res = stats.pearsonr(xp.asarray(x), xp.asarray(y),
        alternative = 'two-sided'
        axis       = 1
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd3cb1dd0>
        x          = array([[ 1.68424768, -1.56085057, -1.0802149 ,  1.04969255, -0.60140003,
        -0.08686482, -1.61103924, -0.60061338....05141612, -0.3521313 ,
         0.15033173,  2.37895364, -0.32511494, -0.18664921, -0.15406169,
        -0.08072992]])
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = array([[-1.57439763, -0.57641921,  0.31417869,  0.89221701, -0.15176559,
        -1.54142646, -0.13442521,  1.00063607....63461387,  0.64302578,
         0.4761511 , -0.65656416,  1.02606876,  1.187437  ,  0.42534095,
        -0.04356603]])
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 4.5
        alternative = 'two-sided'
        axis       = -1
        axis_int   = 1
        const_x    = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        const_xy   = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        const_y    = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9dd3bc1c10>
        dtype      = torch.float64
        method     = None
        n          = 11
        nconst_x   = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        nconst_xy  = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        nconst_y   = tensor([False, False, False, False, False, False, False, False, False, False],
       device='cuda:0')
        normxm     = tensor([[3.2984],
        [3.0267],
        [2.8678],
        [2.3997],
        [2.7886],
        [3.9931],
        [4.0369],
        [2.5512],
        [4.3038],
        [3.3412]], device='cuda:0', dtype=torch.float64)
        normym     = tensor([[3.1455],
        [3.9349],
        [2.6807],
        [3.1961],
        [2.4867],
        [3.0249],
        [3.0043],
        [2.8816],
        [2.5340],
        [2.0497]], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor([-0.0241,  0.4226, -0.0329,  0.4160,  0.0671, -0.1617,  0.3120,  0.1541,
         0.1897, -0.2082], device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([[ 1.6842, -1.5609, -1.0802,  1.0497, -0.6014, -0.0869, -1.6110, -0.6006,
          0.0461, -0.6033,  0.5887],
...1.0514, -0.3521,  0.1503,  2.3790, -0.3251,
         -0.1866, -0.1541, -0.0807]], device='cuda:0', dtype=torch.float64)
        xm         = tensor([[ 1.9366, -1.3085, -0.8279,  1.3020, -0.3491,  0.1655, -1.3587, -0.3483,
          0.2984, -0.3510,  0.8410],
...1.3147, -0.6155, -0.1130,  2.1156, -0.5884,
         -0.4500, -0.4174, -0.3440]], device='cuda:0', dtype=torch.float64)
        xmax       = tensor([[1.9366],
        [1.7248],
        [1.9364],
        [1.1758],
        [1.3934],
        [2.6229],
        [2.6673],
        [1.6188],
        [2.9447],
        [2.1156]], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([[-0.2523],
        [ 0.5429],
        [ 0.0399],
        [-0.1504],
        [ 0.1753],
        [ 0.4399],
        [ 0.0073],
        [ 0.3410],
        [-0.0093],
        [ 0.2633]], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([[-1.5744, -0.5764,  0.3142,  0.8922, -0.1518, -1.5414, -0.1344,  1.0006,
          0.0072, -1.0858,  1.3409],
...0.6346,  0.6430,  0.4762, -0.6566,  1.0261,
          1.1874,  0.4253, -0.0436]], device='cuda:0', dtype=torch.float64)
        ym         = tensor([[-1.4372e+00, -4.3923e-01,  4.5137e-01,  1.0294e+00, -1.4578e-02,
         -1.4042e+00,  2.7623e-03,  1.1378e+...e-02, -1.1085e+00,  5.7415e-01,  7.3552e-01, -2.6581e-02,
         -4.9549e-01]], device='cuda:0', dtype=torch.float64)
        ymax       = tensor([[1.4781],
        [3.4099],
        [1.7523],
        [1.6137],
        [1.2680],
        [1.9101],
        [1.8024],
        [1.5532],
        [1.4155],
        [1.1085]], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([[-0.1372],
        [ 0.0094],
        [ 0.4703],
        [-0.3975],
        [-0.3548],
        [-0.3402],
        [ 0.0293],
        [-0.3803],
        [ 0.3129],
        [ 0.4519]], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor([-0.0241,  0.4226, -0.0329,  0.4160,  0.0671, -0.1617,  0.3120,  0.1541,
         0.1897, -0.2082], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor([-0.0241,  0.4226, -0.0329,  0.4160,  0.0671, -0.1617,  0.3120,  0.1541,
         0.1897, -0.2082], device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor([-0.0241,  0.4226, -0.0329,  0.4160,  0.0671, -0.1617,  0.3120,  0.1541,
         0.1897, -0.2082], device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor([-0.0241,  0.4226, -0.0329,  0.4160,  0.0671, -0.1617,  0.3120,  0.1541,
         0.1897, -0.2082], device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor([-0.0241,  0.4226, -0.0329,  0.4160,  0.0671, -0.1617,  0.3120,  0.1541,
         0.1897, -0.2082], device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor([-0.0241,  0.4226, -0.0329,  0.4160,  0.0671, -0.1617,  0.3120,  0.1541,
         0.1897, -0.2082], device='cuda:0', dtype=torch.float64)
____________________________________________________________________________ TestPearsonr.test_array_api[two-sided-None-torch] _____________________________________________________________________________
scipy/stats/tests/test_stats.py:681: in test_array_api
    res = stats.pearsonr(xp.asarray(x), xp.asarray(y),
        alternative = 'two-sided'
        axis       = None
        self       = <scipy.stats.tests.test_stats.TestPearsonr object at 0x7a9dd3cb0ad0>
        x          = array([[-0.22639395, -1.06346757, -0.50398259, -2.09876581,  0.8493841 ,
        -0.10586449, -0.51912194,  0.52055335....60613993,  0.76467161,
        -0.73527206, -0.52019479, -0.56367561,  0.39934576, -2.32879612,
        -1.84174229]])
        xp         = <module 'torch' from '/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/__init__.py'>
        y          = array([[-0.59088182,  0.487906  , -0.54788892, -0.05627702,  0.2122978 ,
        -0.49114164, -0.33629944,  1.34054938....54446181, -1.22667687,
         0.06588798,  0.37407459, -3.07566835,  0.78261166, -0.22503945,
         0.12370457]])
scipy/stats/_stats_py.py:4909: in pearsonr
    pvalue = _get_pvalue(np.asarray(r), dist, alternative)
        ab         = 54.0
        alternative = 'two-sided'
        axis       = -1
        axis_int   = -1
        const_x    = tensor(False, device='cuda:0')
        const_xy   = tensor(False, device='cuda:0')
        const_y    = tensor(False, device='cuda:0')
        dist       = <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x7a9dd283a790>
        dtype      = torch.float64
        method     = None
        n          = 110
        nconst_x   = tensor(False, device='cuda:0')
        nconst_xy  = tensor(False, device='cuda:0')
        nconst_y   = tensor(False, device='cuda:0')
        normxm     = tensor([10.6620], device='cuda:0', dtype=torch.float64)
        normym     = tensor([10.6182], device='cuda:0', dtype=torch.float64)
        one        = tensor(1., device='cuda:0', dtype=torch.float64)
        r          = tensor(0.0308, device='cuda:0', dtype=torch.float64)
        threshold  = 1.8189894035458565e-12
        x          = tensor([-0.2264, -1.0635, -0.5040, -2.0988,  0.8494, -0.1059, -0.5191,  0.5206,
        -1.4072, -1.5916,  0.8970, -0....1,  0.7647,
        -0.7353, -0.5202, -0.5637,  0.3993, -2.3288, -1.8417], device='cuda:0',
       dtype=torch.float64)
        xm         = tensor([-0.1605, -0.9975, -0.4381, -2.0328,  0.9153, -0.0399, -0.4532,  0.5865,
        -1.3412, -1.5257,  0.9629,  0....2,  0.8306,
        -0.6693, -0.4543, -0.4977,  0.4653, -2.2629, -1.7758], device='cuda:0',
       dtype=torch.float64)
        xmax       = tensor([2.5180], device='cuda:0', dtype=torch.float64)
        xmean      = tensor([-0.0659], device='cuda:0', dtype=torch.float64)
        xp         = <module 'scipy._lib.array_api_compat.torch' from '/home/lucas/dev/myscipy/build-install/lib/python3.11/site-packages/scipy/_lib/array_api_compat/torch/__init__.py'>
        y          = tensor([-0.5909,  0.4879, -0.5479, -0.0563,  0.2123, -0.4911, -0.3363,  1.3405,
        -0.3880, -1.4808,  1.6382, -0....5, -1.2267,
         0.0659,  0.3741, -3.0757,  0.7826, -0.2250,  0.1237], device='cuda:0',
       dtype=torch.float64)
        ym         = tensor([-0.5525,  0.5263, -0.5095, -0.0179,  0.2507, -0.4527, -0.2979,  1.3789,
        -0.3496, -1.4424,  1.6766, -0....9, -1.1883,
         0.1043,  0.4125, -3.0373,  0.8210, -0.1866,  0.1621], device='cuda:0',
       dtype=torch.float64)
        ymax       = tensor([3.5431], device='cuda:0', dtype=torch.float64)
        ymean      = tensor([-0.0384], device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1060: in __array__
    return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
        dtype      = None
        self       = tensor(0.0308, device='cuda:0', dtype=torch.float64)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/overrides.py:1604: in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
        args       = (tensor(0.0308, device='cuda:0', dtype=torch.float64),)
        kwargs     = {'dtype': None}
        mode       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        overloaded_args = [tensor(0.0308, device='cuda:0', dtype=torch.float64)]
        public_api = <function Tensor.__array__ at 0x7a9e75bc32e0>
        relevant_args = (tensor(0.0308, device='cuda:0', dtype=torch.float64),)
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/utils/_device.py:77: in __torch_function__
    return func(*args, **kwargs)
        args       = (tensor(0.0308, device='cuda:0', dtype=torch.float64),)
        func       = <function Tensor.__array__ at 0x7a9e75bc32e0>
        kwargs     = {'dtype': None}
        self       = <torch.utils._device.DeviceContext object at 0x7a9dd41db790>
        types      = (<class 'torch.Tensor'>,)
/home/lucas/mambaforge/envs/scipy-dev/lib/python3.11/site-packages/torch/_tensor.py:1062: in __array__
    return self.numpy()
E   TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
        dtype      = None
        self       = tensor(0.0308, device='cuda:0', dtype=torch.float64)
================================================================================================ XFAILURES =================================================================================================
________________________________________________________________________ Test_ttest_CI.test_confidence_interval[0.2-True-two-sided] ________________________________________________________________________
scipy/stats/tests/test_stats.py:5422: in test_confidence_interval
    pytest.xfail('Discrepancy in `main`; needs further investigation.')
E   _pytest.outcomes.XFailed: Discrepancy in `main`; needs further investigation.
        alternative = 'two-sided'
        equal_var  = True
        self       = <scipy.stats.tests.test_stats.Test_ttest_CI object at 0x7a9dd275ffd0>
        trim       = 0.2
__________________________________________________________________________ Test_ttest_CI.test_confidence_interval[0.2-True-less] ___________________________________________________________________________
scipy/stats/tests/test_stats.py:5422: in test_confidence_interval
    pytest.xfail('Discrepancy in `main`; needs further investigation.')
E   _pytest.outcomes.XFailed: Discrepancy in `main`; needs further investigation.
        alternative = 'less'
        equal_var  = True
        self       = <scipy.stats.tests.test_stats.Test_ttest_CI object at 0x7a9dd275f110>
        trim       = 0.2
_________________________________________________________________________ Test_ttest_CI.test_confidence_interval[0.2-True-greater] _________________________________________________________________________
scipy/stats/tests/test_stats.py:5422: in test_confidence_interval
    pytest.xfail('Discrepancy in `main`; needs further investigation.')
E   _pytest.outcomes.XFailed: Discrepancy in `main`; needs further investigation.
        alternative = 'greater'
        equal_var  = True
        self       = <scipy.stats.tests.test_stats.Test_ttest_CI object at 0x7a9dd275f3d0>
        trim       = 0.2

@mdhaber
Contributor Author

mdhaber commented Mar 26, 2024

Didn't we expect this given use of np.asarray to compute p-values?

@lucascolley
Member

Didn't we expect this given use of np.asarray to compute p-values?

Yep, I think it just means that we need to apply `@skip_xp_backends(cpu_only=True)`.
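The error in the traceback above comes from `np.asarray` triggering `Tensor.__array__` on a CUDA tensor. A minimal sketch of the host-copy pattern being discussed (the `to_numpy` helper name is hypothetical, not SciPy's actual code):

```python
import numpy as np

def to_numpy(x):
    # Hypothetical helper: torch tensors on a GPU must be copied to
    # host memory before NumPy can view their data; NumPy arrays and
    # plain sequences pass through np.asarray unchanged.
    if hasattr(x, "cpu"):  # duck-typed check for torch-like tensors
        x = x.cpu()
    return np.asarray(x)

print(to_numpy(np.array([0.0308])))  # already NumPy: returned as-is
```

This is the conversion the p-value path performs implicitly; the `TypeError` in the log is what happens when the `.cpu()` copy is skipped for a `cuda:0` tensor.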

Member

@lucascolley lucascolley left a comment

In an env with CuPy, `python dev.py test -t scipy.stats.tests.test_stats -b all` passes for me. LGTM once there is approval from the stats side!

@tylerjereddy
Contributor

I've not been following these closely, but have we changed our policy to allow array API support to be added partially to modules now?

@lucascolley
Member

I've not been following these closely, but have we changed our policy to allow array API support to be added partially to modules now?

Partial support is currently released in special, so I assume so.

Member

@tupui tupui left a comment

The stats part looks fine to me, and Lucas had a look at the array API side, so good to go. We could have moved the helpers into the global array API utils, but I don't think we will really "forget", so it's OK as is.

@tupui tupui merged commit 1066153 into scipy:main Mar 29, 2024
@lucascolley
Member

We can replace `_move_axis_to_end` and `_clip` when `array_api_strict` is updated.

Upstream issue: data-apis/array-api-strict#25
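For reference, the NumPy equivalents of the two private helpers mentioned above are `np.moveaxis` and `np.clip`; the helpers exist only because `array_api_strict` did not yet expose these operations. A sketch of the equivalent calls (not SciPy's actual code):

```python
import numpy as np

# Moving a given axis to the end (what _move_axis_to_end does) is
# np.moveaxis with destination -1:
x = np.zeros((2, 3, 4))
print(np.moveaxis(x, 0, -1).shape)  # (3, 4, 2)

# Clipping a correlation statistic into its valid range (what _clip
# does) is elementwise np.clip:
r = np.array([-1.2, 0.5, 1.7])
print(np.clip(r, -1.0, 1.0))  # values clipped into [-1, 1]
```

Once `array_api_strict` gains `moveaxis` and `clip`, the same calls can be made through the `xp` namespace and the private helpers dropped.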

@mdhaber mdhaber added this to the 1.14.0 milestone May 25, 2024
Labels
array types Items related to array API support and input array validation (see gh-18286) enhancement A new feature or improvement scipy.stats
Development

Successfully merging this pull request may close these issues.

MAINT, BUG (?): pearsonr statistic return type change
5 participants