MAINT: 1.11.4 backports #19543

Merged
9 changes: 5 additions & 4 deletions .circleci/config.yml
@@ -5,7 +5,7 @@ _defaults: &defaults
docker:
# CircleCI maintains a library of pre-built images
# documented at https://circleci.com/docs/2.0/circleci-images/
-    - image: cimg/python:3.9
+    - image: cimg/python:3.11
working_directory: ~/repo

commands:
@@ -77,6 +77,7 @@ jobs:
pip install cython
pip install numpy==1.23.5
pip install -r doc_requirements.txt
+pip install "myst-nb<1.0.0"
# `asv` pin because of slowdowns reported in gh-15568
pip install mpmath gmpy2 "asv==0.4.2" pythran ninja meson click rich-click doit pydevtool pooch
pip install pybind11
@@ -118,7 +119,7 @@ jobs:
name: build docs
no_output_timeout: 25m
command: |
-export PYTHONPATH=$PWD/build-install/lib/python3.9/site-packages
+export PYTHONPATH=$PWD/build-install/lib/python3.11/site-packages
python dev.py --no-build doc -j2
- store_artifacts:
@@ -145,7 +146,7 @@ jobs:
name: run asv
no_output_timeout: 30m
command: |
-export PYTHONPATH=$PWD/build-install/lib/python3.9/site-packages
+export PYTHONPATH=$PWD/build-install/lib/python3.11/site-packages
cd benchmarks
asv machine --machine CircleCI
export SCIPY_GLOBAL_BENCH_NUMTRIALS=1
@@ -173,7 +174,7 @@ jobs:
no_output_timeout: 25m
command: |
sudo apt-get install -y wamerican-small
-export PYTHONPATH=$PWD/build-install/lib/python3.9/site-packages
+export PYTHONPATH=$PWD/build-install/lib/python3.11/site-packages
python dev.py --no-build refguide-check
# Upload build output to scipy/devdocs repository, using SSH deploy keys.
6 changes: 6 additions & 0 deletions doc/source/conf.py
@@ -183,6 +183,12 @@
message=r'There is no current event loop',
category=DeprecationWarning,
)
# TODO: remove after gh-19228 resolved:
warnings.filterwarnings(
'ignore',
message=r'.*path is deprecated.*',
category=DeprecationWarning,
)

# -----------------------------------------------------------------------------
# HTML output
39 changes: 39 additions & 0 deletions doc/source/release/1.11.4-notes.rst
@@ -12,12 +12,51 @@ compared to 1.11.3.
Authors
=======
* Name (commits)
* Jake Bowhay (2)
* Ralf Gommers (4)
* Julien Jerphanion (2)
* Nikolay Mayorov (2)
* Melissa Weber Mendonça (1)
* Tirth Patel (1)
* Tyler Reddy (22)
* Dan Schult (3)
* Nicolas Vetsch (1) +

A total of 9 people contributed to this release.
People with a "+" by their names contributed a patch for the first time.
This list of names is automatically generated, and may not be fully complete.



Issues closed for 1.11.4
------------------------

* `#19189 <https://github.com/scipy/scipy/issues/19189>`__: Contradiction in \`pyproject.toml\` requirements?
* `#19228 <https://github.com/scipy/scipy/issues/19228>`__: Doc build fails with Python 3.11
* `#19245 <https://github.com/scipy/scipy/issues/19245>`__: BUG: upcasting of indices dtype from DIA to COO/CSR/BSR arrays
* `#19351 <https://github.com/scipy/scipy/issues/19351>`__: BUG: Regression in 1.11.3 can still fail for \`optimize.least_squares\`...
* `#19357 <https://github.com/scipy/scipy/issues/19357>`__: BUG: build failure with Xcode 15 linker
* `#19359 <https://github.com/scipy/scipy/issues/19359>`__: BUG: DiscreteAliasUrn construction fails with UNURANError for...
* `#19387 <https://github.com/scipy/scipy/issues/19387>`__: BUG: problem importing libgfortran.5.dylib on macOS Sonoma
* `#19403 <https://github.com/scipy/scipy/issues/19403>`__: BUG: scipy.sparse.lil_matrix division by complex number leads...
* `#19437 <https://github.com/scipy/scipy/issues/19437>`__: BUG: can't install scipy on mac m1 with poetry due to incompatible...
* `#19500 <https://github.com/scipy/scipy/issues/19500>`__: DOC: doc build failing
* `#19513 <https://github.com/scipy/scipy/issues/19513>`__: BUG: Python version constraints in releases causes issues for...


Pull requests for 1.11.4
------------------------

* `#19230 <https://github.com/scipy/scipy/pull/19230>`__: DOC, MAINT: workaround for py311 docs
* `#19307 <https://github.com/scipy/scipy/pull/19307>`__: set idx_dtype in sparse dia_array.tocoo
* `#19316 <https://github.com/scipy/scipy/pull/19316>`__: MAINT: Prep 1.11.4
* `#19320 <https://github.com/scipy/scipy/pull/19320>`__: BLD: fix up version parsing issue in cythonize.py for setup.py...
* `#19329 <https://github.com/scipy/scipy/pull/19329>`__: DOC: stats.chisquare: result object contains attribute 'statistic'
* `#19335 <https://github.com/scipy/scipy/pull/19335>`__: BUG: fix pow method for sparrays with power zero
* `#19364 <https://github.com/scipy/scipy/pull/19364>`__: MAINT, BUG: stats: update the UNU.RAN submodule with DAU fix
* `#19379 <https://github.com/scipy/scipy/pull/19379>`__: BUG: Restore the original behavior of 'trf' from least_squares...
* `#19400 <https://github.com/scipy/scipy/pull/19400>`__: BLD: use classic linker on macOS 14 (Sonoma), the new linker...
* `#19408 <https://github.com/scipy/scipy/pull/19408>`__: BUG: Fix typecasting problem in scipy.sparse.lil_matrix truediv
* `#19504 <https://github.com/scipy/scipy/pull/19504>`__: DOC, MAINT: Bump CircleCI Python version to 3.11
* `#19517 <https://github.com/scipy/scipy/pull/19517>`__: MAINT, REL: unpin Python 1.11.x branch
* `#19550 <https://github.com/scipy/scipy/pull/19550>`__: MAINT, BLD: poetry loongarch shims
9 changes: 6 additions & 3 deletions meson.build
@@ -73,9 +73,7 @@ endif
if host_machine.system() == 'os400'
# IBM i system, needed to avoid build errors - see gh-17193
add_project_arguments('-D__STDC_FORMAT_MACROS', language : 'cpp')
-  add_project_link_arguments('-Wl,-bnotextro', language : 'c')
-  add_project_link_arguments('-Wl,-bnotextro', language : 'cpp')
-  add_project_link_arguments('-Wl,-bnotextro', language : 'fortran')
+  add_project_link_arguments('-Wl,-bnotextro', language : ['c', 'cpp', 'fortran'])
endif

# Adding at project level causes many spurious -lgfortran flags.
@@ -85,6 +83,11 @@ if ff.has_argument('-Wno-conversion')
add_project_arguments('-Wno-conversion', language: 'fortran')
endif

if host_machine.system() == 'darwin' and cc.has_link_argument('-Wl,-ld_classic')
# New linker introduced in macOS 14 not working yet, see gh-19357 and gh-19387
add_project_link_arguments('-Wl,-ld_classic', language : ['c', 'cpp', 'fortran'])
endif

# Intel compilers default to fast-math, so disable it if we detect Intel
# compilers. A word of warning: this may not work with the conda-forge
# compilers, because those have the annoying habit of including lots of flags
2 changes: 1 addition & 1 deletion scipy/_lib/unuran
6 changes: 4 additions & 2 deletions scipy/optimize/_lsq/least_squares.py
@@ -268,7 +268,9 @@ def least_squares(
arguments, as shown at the end of the Examples section.
x0 : array_like with shape (n,) or float
Initial guess on independent variables. If float, it will be treated
-        as a 1-D array with one element.
+        as a 1-D array with one element. When `method` is 'trf', the initial
+        guess might be slightly adjusted to lie sufficiently within the given
+        `bounds`.
jac : {'2-point', '3-point', 'cs', callable}, optional
Method of computing the Jacobian matrix (an m-by-n matrix, where
element (i, j) is the partial derivative of f[i] with respect to
@@ -822,7 +824,7 @@ def least_squares(
ftol, xtol, gtol = check_tolerance(ftol, xtol, gtol, method)

if method == 'trf':
-        x0 = make_strictly_feasible(x0, lb, ub, rstep=0)
+        x0 = make_strictly_feasible(x0, lb, ub)

def fun_wrapped(x):
return np.atleast_1d(fun(x, *args, **kwargs))
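For context (not part of the diff): the `make_strictly_feasible` change restores the pre-1.11.3 'trf' behavior, where an initial guess at or very close to a bound may be nudged strictly inside it, with convergence then judged by the usual tolerances. A minimal sketch of the gh-18793/gh-19351 scenario, with values mirroring the updated test:

```python
import numpy as np
from scipy.optimize import least_squares

# Scalar residual with its minimum at 1e-12, just above the lower bound 0.
def chi2(x):
    return (x - 1e-12) ** 2

# With method='trf' (the default), x0 may be adjusted slightly to lie
# strictly inside the bounds before the iterations start.
res = least_squares(chi2, x0=1.1e-12, gtol=1e-15, bounds=(0, np.inf))
```

Per the test added below, the solver terminates via the gradient condition rather than being forced back to the exact starting point.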
21 changes: 17 additions & 4 deletions scipy/optimize/tests/test_least_squares.py
@@ -10,7 +10,7 @@

from scipy.optimize import least_squares, Bounds
from scipy.optimize._lsq.least_squares import IMPLEMENTED_LOSSES
-from scipy.optimize._lsq.common import EPS, make_strictly_feasible
+from scipy.optimize._lsq.common import EPS, make_strictly_feasible, CL_scaling_vector


def fun_trivial(x, a=0):
@@ -811,17 +811,30 @@ def err(p, x, y):
assert_allclose(res.x, np.array([0.4082241, 0.15530563]), atol=5e-5)


-def test_gh_18793():
+def test_gh_18793_and_19351():
answer = 1e-12
initial_guess = 1.1e-12

def chi2(x):
return (x-answer)**2

-    res = least_squares(chi2, x0=initial_guess, bounds=(0, np.inf))
+    gtol = 1e-15
+    res = least_squares(chi2, x0=initial_guess, gtol=1e-15, bounds=(0, np.inf))
# Original motivation: gh-18793
# if we choose an initial condition that is close to the solution
# we shouldn't return an answer that is further away from the solution
assert_allclose(res.x, answer, atol=initial_guess-answer)

# Update: gh-19351
# However this requirement does not go well with 'trf' algorithm logic.
# Some regressions were reported after the presumed fix.
# The returned solution is good as long as it satisfies the convergence
# conditions.
# Specifically in this case the scaled gradient will be sufficiently low.

scaling, _ = CL_scaling_vector(res.x, res.grad,
np.atleast_1d(0), np.atleast_1d(np.inf))
assert res.status == 1 # Converged by gradient
assert np.linalg.norm(res.grad * scaling, ord=np.inf) < gtol


def test_gh_19103():
15 changes: 14 additions & 1 deletion scipy/sparse/_data.py
@@ -106,12 +106,25 @@ def power(self, n, dtype=None):

Parameters
----------
-        n : n is a scalar
+        n : scalar
+            n is a non-zero scalar (nonzero avoids dense ones creation)
+            If zero power is desired, special case it to use `np.ones`

dtype : If dtype is not specified, the current dtype will be preserved.

Raises
------
NotImplementedError : if n is a zero scalar
If zero power is desired, special case it to use
`np.ones(A.shape, dtype=A.dtype)`
"""
if not isscalarlike(n):
raise NotImplementedError("input is not scalar")
if not n:
raise NotImplementedError(
"zero power is not supported as it would densify the matrix.\n"
"Use `np.ones(A.shape, dtype=A.dtype)` for this case."
)

data = self._deduped_data()
if dtype is not None:
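A quick illustration of the new zero-power guard (not part of the diff; a sketch assuming SciPy ≥ 1.11.4, shown here with `csr_array`):

```python
import numpy as np
from scipy.sparse import csr_array

A = csr_array(np.array([[1.0, 0.0], [0.0, 2.0]]))

# Element-wise power keeps the result sparse for nonzero exponents.
sq = (A ** 2).toarray()

# A zero exponent would produce a dense all-ones result, so the fixed
# versions raise NotImplementedError; build the dense result explicitly
# instead, as the new error message suggests:
ones = np.ones(A.shape, dtype=A.dtype)
```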
7 changes: 6 additions & 1 deletion scipy/sparse/_dia.py
@@ -157,7 +157,7 @@ def __init__(self, arg1, shape=None, dtype=None, copy=False):
raise ValueError('offset array contains duplicate values')

def __repr__(self):
-        format = _formats[self.getformat()][1]
+        format = _formats[self.format][1]
return "<%dx%d sparse matrix of type '%s'\n" \
"\twith %d stored elements (%d diagonals) in %s format>" % \
(self.shape + (self.dtype.type, self.nnz, self.data.shape[0],
@@ -402,6 +402,11 @@ def tocoo(self, copy=False):
mask &= (self.data != 0)
row = row[mask]
col = np.tile(offset_inds, num_offsets)[mask.ravel()]
idx_dtype = self._get_index_dtype(
arrays=(self.offsets,), maxval=max(self.shape)
)
row = row.astype(idx_dtype, copy=False)
col = col.astype(idx_dtype, copy=False)
data = self.data[mask]
# Note: this cannot set has_canonical_format=True, because despite the
# lack of duplicates, we do not generate sorted indices.
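The gh-19245 fix can be sanity-checked as below (not part of the diff; a sketch assuming SciPy ≥ 1.11.4, mirroring the new test): with int32 offsets, conversion no longer upcasts the index dtype to int64.

```python
import numpy as np
from scipy import sparse

data = np.array([[1, 2, 3, 4]]).repeat(3, axis=0)
offsets = np.array([0, -1, 2], dtype=np.int32)
dia = sparse.dia_array((data, offsets), shape=(4, 4))

# With the fix, the COO index dtype follows the int32 offsets.
coo = dia.tocoo()
```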
1 change: 1 addition & 0 deletions scipy/sparse/_lil.py
@@ -348,6 +348,7 @@ def _mul_scalar(self, other):
def __truediv__(self, other): # self / other
if isscalarlike(other):
new = self.copy()
new.dtype = np.result_type(self, other)
# Divide every element by this scalar
for j, rowvals in enumerate(new.data):
new.data[j] = [val/other for val in rowvals]
20 changes: 10 additions & 10 deletions scipy/sparse/tests/test_array_api.py
@@ -138,10 +138,16 @@ def test_matmul(A):
assert np.all((A @ A.T).todense() == A.dot(A.T).todense())


-@parametrize_square_sparrays
-def test_pow(B):
-    assert (B**0)._is_array, "Expected array, got matrix"
-    assert (B**2)._is_array, "Expected array, got matrix"
+@parametrize_sparrays
+def test_power_operator(A):
+    assert isinstance((A**2), scipy.sparse.sparray), "Expected array, got matrix"
+
+    # https://github.com/scipy/scipy/issues/15948
+    npt.assert_equal((A**2).todense(), (A.todense())**2)
+
+    # power of zero is all ones (dense) so helpful msg exception
+    with pytest.raises(NotImplementedError, match="zero power"):
+        A**0


@parametrize_sparrays
@@ -344,12 +350,6 @@ def test_spilu():
npt.assert_allclose(LU.solve(np.array([1, 2, 3, 4])), [1, 0, 0, 0])


-@parametrize_sparrays
-def test_power_operator(A):
-    # https://github.com/scipy/scipy/issues/15948
-    npt.assert_equal((A**2).todense(), (A.todense())**2)


@pytest.mark.parametrize(
"cls,indices_attrs",
[
26 changes: 24 additions & 2 deletions scipy/sparse/tests/test_base.py
@@ -3842,7 +3842,7 @@ def test_has_sorted_indices(self):
indptr = np.array([0, 2])
M = csr_matrix((data, sorted_inds, indptr)).copy()
assert_equal(True, M.has_sorted_indices)
-        assert type(M.has_sorted_indices) == bool
+        assert isinstance(M.has_sorted_indices, bool)

M = csr_matrix((data, unsorted_inds, indptr)).copy()
assert_equal(False, M.has_sorted_indices)
@@ -3874,7 +3874,7 @@ def test_has_canonical_format(self):

M = csr_matrix((data, indices, indptr)).copy()
assert_equal(False, M.has_canonical_format)
-        assert type(M.has_canonical_format) == bool
+        assert isinstance(M.has_canonical_format, bool)

# set by deduplicating
M.sum_duplicates()
@@ -4204,6 +4204,14 @@ def test_scalar_mul(self):
x = x*0
assert_equal(x[0, 0], 0)

def test_truediv_scalar(self):
A = self.spcreator((3, 2))
A[0, 1] = -10
A[2, 0] = 20

assert_array_equal((A / 1j).toarray(), A.toarray() / 1j)
assert_array_equal((A / 9).toarray(), A.toarray() / 9)

def test_inplace_ops(self):
A = lil_matrix([[0, 2, 3], [4, 0, 6]])
B = lil_matrix([[0, 1, 0], [0, 2, 3]])
@@ -4465,6 +4473,19 @@ def test_tocoo_gh10050(self):
inds_are_sorted = np.all(np.diff(flat_inds) > 0)
assert m.has_canonical_format == inds_are_sorted

def test_tocoo_tocsr_tocsc_gh19245(self):
# test index_dtype with tocoo, tocsr, tocsc
data = np.array([[1, 2, 3, 4]]).repeat(3, axis=0)
offsets = np.array([0, -1, 2], dtype=np.int32)
dia = sparse.dia_array((data, offsets), shape=(4, 4))

coo = dia.tocoo()
assert coo.col.dtype == np.int32
csr = dia.tocsr()
assert csr.indices.dtype == np.int32
csc = dia.tocsc()
assert csc.indices.dtype == np.int32


TestDIA.init_class()

@@ -4918,6 +4939,7 @@ def cases_64bit():
'test_large_dimensions_reshape': 'test actually requires 64-bit to work',
'test_constructor_smallcol': 'test verifies int32 indexes',
'test_constructor_largecol': 'test verifies int64 indexes',
'test_tocoo_tocsr_tocsc_gh19245': 'test verifies int32 indexes',
}

for cls in TEST_CLASSES:
15 changes: 11 additions & 4 deletions scipy/stats/tests/test_sampling.py
@@ -292,24 +292,24 @@ def test_with_scipy_distribution():
check_discr_samples(rng, pv, dist.stats())


-def check_cont_samples(rng, dist, mv_ex):
+def check_cont_samples(rng, dist, mv_ex, rtol=1e-7, atol=1e-1):
rvs = rng.rvs(100000)
mv = rvs.mean(), rvs.var()
# test the moments only if the variance is finite
if np.isfinite(mv_ex[1]):
-        assert_allclose(mv, mv_ex, rtol=1e-7, atol=1e-1)
+        assert_allclose(mv, mv_ex, rtol=rtol, atol=atol)
# Cramer Von Mises test for goodness-of-fit
rvs = rng.rvs(500)
dist.cdf = np.vectorize(dist.cdf)
pval = cramervonmises(rvs, dist.cdf).pvalue
assert pval > 0.1


-def check_discr_samples(rng, pv, mv_ex):
+def check_discr_samples(rng, pv, mv_ex, rtol=1e-3, atol=1e-1):
rvs = rng.rvs(100000)
# test if the first few moments match
mv = rvs.mean(), rvs.var()
-    assert_allclose(mv, mv_ex, rtol=1e-3, atol=1e-1)
+    assert_allclose(mv, mv_ex, rtol=rtol, atol=atol)
# normalize
pv = pv / pv.sum()
# chi-squared test for goodness-of-fit
@@ -737,6 +737,13 @@ def pmf(self, x):
with pytest.raises(ValueError, match=msg):
DiscreteAliasUrn(dist)

def test_gh19359(self):
pv = special.softmax(np.ones((1533,)))
rng = DiscreteAliasUrn(pv, random_state=42)
# check the correctness
check_discr_samples(rng, pv, (1532 / 2, (1532**2 - 1) / 12),
rtol=5e-3)


class TestNumericalInversePolynomial:
# Simple Custom Distribution