MAINT: SciPy 1.11.3 backports/prep #19279
Conversation
The algorithm is already equipped with a handful of checks for problem infeasibility, but the tests added here show two different examples that were not caught by any of those checks. It is likely possible to dive in and determine infeasibility in `_lapjvsp` itself, since in both examples the symptom is the algorithm hitting an infinite loop (a different infinite loop in each case); here, however, we simply add the catch-all fix of checking that the problem is feasible before even trying to solve it. This does come with a performance hit, but at least the Hopcroft-Karp implementation we're using is fast.
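The catch-all check described above can be sketched roughly as follows; `is_feasible` is a hypothetical helper (not the actual SciPy code) that treats finite cost entries as edges of a bipartite graph and asks SciPy's Hopcroft-Karp implementation whether a perfect matching exists:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def is_feasible(cost):
    # Edges exist wherever the cost is finite; a full assignment exists
    # iff this bipartite graph admits a perfect matching.
    biadjacency = csr_matrix(np.isfinite(cost).astype(np.int8))
    matching = maximum_bipartite_matching(biadjacency, perm_type='column')
    # Unmatched vertices are reported as -1.
    return bool((matching != -1).all())

# Every row can be matched to a distinct column: feasible.
solvable = np.array([[1.0, 2.0], [3.0, 4.0]])
# Both rows can only use column 0: no full assignment exists.
blocked = np.array([[1.0, np.inf], [2.0, np.inf]])
```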
Co-authored-by: Jake Bowhay <60778417+j-bowhay@users.noreply.github.com>
Closes scipygh-18716 Co-authored-by: Julien Jerphanion <git@jjerphan.xyz>
Closes scipy#15093 The scipy docs define Bounds as inclusive but trust-constr treats bounds as exclusive. According to Nocedal, Jorge, and Stephen J. Wright. "Numerical optimization" Second Edition (2006), p. 393, interior point methods necessarily treat bounds as exclusive. To work around this limitation of interior point methods and create consistent behavior for Bounds, this PR modifies trust-constr so that Bounds passed to trust-constr are expanded by epsilon.
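A minimal sketch of the epsilon expansion described above; the helper name and padding scheme are illustrative, not the exact code in the PR:

```python
import numpy as np

def relax_bounds(lb, ub, eps=np.finfo(float).eps):
    """Expand inclusive bounds outward by a tiny relative amount so an
    interior-point method that treats bounds as exclusive can still
    return solutions lying exactly on the stated bounds.
    (Hypothetical helper for illustration.)"""
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    lb_out = lb - eps * np.maximum(1.0, np.abs(lb))
    ub_out = ub + eps * np.maximum(1.0, np.abs(ub))
    return lb_out, ub_out

lb2, ub2 = relax_bounds([0.0, -5.0], [1.0, 5.0])
```

The relative padding keeps the expansion negligible at any scale while guaranteeing the relaxed interval strictly contains the original one.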
Switched from using assert_ to using assert.
PR scipy#18896 removed make_strictly_feasible because it makes solutions worse when they are close to the boundary. However, the downstream algorithm that eventually solves the optimization problem has trouble with starting points that lie on the boundaries of a bounded optimization problem. This either results in division-by-zero warnings (see scipy#19102) or in failure to find the correct solution at all, which is added as a regression test in this commit. This PR brings back make_strictly_feasible, but with the smallest step from the boundary that floating point allows, by setting rstep=0 on make_strictly_feasible, which then uses np.nextafter to find such a point, thereby respecting the strictly-feasible-point condition while minimizing boundary effects. See strictly feasible points in interior-point methods for more context: https://nmayorov.wordpress.com/2015/06/19/trust-region-reflective-algorithm/
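The `np.nextafter` nudge can be illustrated with a small sketch; this is a simplified stand-in for SciPy's `make_strictly_feasible`, not the real function:

```python
import numpy as np

def nudge_into_interior(x0, lb, ub):
    """Move any component of x0 that sits on a bound to the nearest
    representable float strictly inside the feasible region, the
    smallest step floating point allows. (Illustrative helper.)"""
    x = np.asarray(x0, dtype=float).copy()
    lb = np.broadcast_to(np.asarray(lb, dtype=float), x.shape)
    ub = np.broadcast_to(np.asarray(ub, dtype=float), x.shape)
    on_lb = x <= lb
    on_ub = x >= ub
    # nextafter(a, b) is the next float after a in the direction of b.
    x[on_lb] = np.nextafter(lb, ub)[on_lb]
    x[on_ub] = np.nextafter(ub, lb)[on_ub]
    return x

x = nudge_into_interior([0.0, 1.0, 0.5], [0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```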
to allow for reproducible builds even when (unneeded) .pyx files are installed and shipped in OS packages Fixes: scipy#19101 This patch was done while working on reproducible builds for openSUSE.
…ad_test. The parameter `cval` in the private helper function `_pad_test` is not used, so remove it. This change is inspired by a bug in Cython 3.0.1 (see SciPy issue scipygh-19132), and this change is a work-around for that bug, but since the parameter is unused and in a private function, there is no need to revert this change.
* BLD: revert to using published wheels [wheel build]
…9156) * BUG: Support sparse arrays in scipy.sparse.csgraph.laplacian * TST: Added unit test for laplacian
* TST: for issues scipy#17289 scipy#15273 * BUG/ENH: fix scipy#17289 scipy#15273 * TST: use correct assert * BENCH: Added a benchmark for signal.freqz * TST/BENCH: improved coverage scipy#17289 scipy#15273 * BENCH: removed vectorized input scipy#17289 scipy#15273 * STY: add missing whitespace [skip ci] --------- Co-authored-by: Jake Bowhay <60778417+j-bowhay@users.noreply.github.com>
* update the SciPy 1.11.3 release notes * update our wheel build config per feedback in scipygh-19274 [skip cirrus]
Can we get gh-18782 and gh-19280 in here?
* fix a doc build failure related to `numpy.ma.row_stack`; should use `vstack` now re: NumPy namespace changes that are happening
* `node_list` was already made private on `main` in `_traversal.pyx`, so just apply that fix here as well, to fix the linter error * pin `pythran` in gcc-8 GHA job, was causing problem in build with isolation disabled because `0.14.0` is now available
…ipy#19200) This was failing on macOS x86-64 in CI with:

```
E Arrays are not almost equal to 6 decimals
E
E Mismatched elements: 1 / 25 (4%)
E Max absolute difference: 1.52291797e-06
E Max relative difference: 8.81522156e-07
E x: array([[ 2.108927e+00, -1.943767e+00,  7.026353e-02, -4.717525e-02,
E          5.251554e-02],
E        [-1.943767e+00,  2.048149e+00, -6.160049e-02,  4.626880e-02,...
E y: array([[ 2.108927e+00, -1.943769e+00,  7.026355e-02, -4.717527e-02,
E          5.251558e-02],
E        [-1.943769e+00,  2.048151e+00, -6.160052e-02,  4.626883e-02,...
```

So it's a very small atol violation (1.52e-6, where decimal=6 implies 1.5e-6). Rather than bump to `decimal=5`, I decided to change the test to `assert_allclose` so the bump could be smaller. [skip ci]
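For reference, the relationship between `decimal` and an absolute tolerance can be demonstrated directly; the `atol` below is illustrative, not necessarily the exact value chosen in the test:

```python
from numpy.testing import assert_allclose, assert_array_almost_equal

# assert_array_almost_equal(decimal=6) requires |actual - desired| to be
# below 1.5 * 10**-6, so the observed difference of 1.52291797e-06
# just barely fails that check.
diff = 1.52291797e-06

try:
    assert_array_almost_equal(1.0 + diff, 1.0, decimal=6)
    raised = False
except AssertionError:
    raised = True

# assert_allclose lets the tolerance be bumped only slightly, instead of
# dropping a whole decimal of precision with decimal=5.
assert_allclose(1.0 + diff, 1.0, atol=2e-6)
```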
* update SciPy `1.11.3` release notes after more changes/backports [skip cirrus]
* pin `meson-python` version in gcc-8 GHA job, because it has build isolation disabled [skip cirrus]
Added a few more fixes based on CI failures, and a few more backport requests (only 1 from Matt was merged at the time of writing), and updated the original comment accordingly.
Oops. I just merged gh-19280. Would you like me to submit a PR against this branch adding that?
Nah, I can cherry-pick, I'm sure there will be more of that to do still.
gh-19199 also merged. Thanks @tirthasheshpatel!
* update SciPy `1.11.3` release notes following more backports * `test_int_x0_gh19280()` required a small shim to deal with module import differences vs. `main` [skip cirrus]
I think I'm up-to-date with all requests so far, and I've updated the original PR comment above accordingly. I had to slightly modify Matt's test. I've been keeping the release notes updated as I go, which has the drawback of polluting the commit history a bit, but it's not a huge deal.
I checked that locally. I did note two things to consider, which I've added to the checklist above. I also added a checklist item. I'll leave this PR open for a day or two more before starting to probe the wheel builds (i.e., using the cirrus credits Ralf added), to give folks a chance to review/suggest more backports, etc.
Seems like a good idea to me, and can't really hurt.
Wheel builds run the full test suite, so it isn't necessary - but if you prefer to add one, that's fine too I think.
#19286 would be good to have for 1.11.3 as well. :)
@tylerjereddy here are my proposed updates to |
I can just cherry-pick it over if you keep that branch alive, I've got a list of 3 other backports to include already in any case. |
* MAINT: linux amendments * MAINT: osx amendments * MAINT: win licence amendments * MAINT: licence typo
This macro must not be used in the global scope, but rather in a separate namespace (see, e.g., `boost_math/example/policy_eg_9.cpp`). It caused compilation errors with Clang on Windows in PR 246 on the conda-forge scipy feedstock. This bit of code also did not seem necessary, given that these compile flags in `stats/_boost/meson.build` should already achieve the same effect:

```
'-DBOOST_MATH_EVALUATION_ERROR_POLICY=user_error',
'-DBOOST_MATH_OVERFLOW_ERROR_POLICY=user_error',
```

That the custom error handling in `func_defs.hpp` is still used can be verified, for example after modifying the `user_evaluation_error` handler, by running:

```
python dev.py test -t scipy.stats.tests.test_distributions -- -k test_boost_eval_issue_14606
```
Follow-up to scipygh-8918; the linked CPython code in the discussion on the issue that that PR closes has the same `<float.h>` include. This upstreams a patch from PR 246 to https://github.com/conda-forge/scipy-feedstock Co-authored-by: Uwe L. Korn <uwe.korn@quantco.com>
Fix scipygh-18689 scipygh-18922. Python 3.12 removes distutils, and scipy's dev.py uses distutils for the meson-based build in Debian's python. With Python 3.12, Debian has patched sysconfig and no longer requires a dependence on distutils.
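For context, the stdlib `sysconfig` module offers replacements for the queries build tooling used to get from `distutils`; the specific lookups below are illustrative of the kind of migration involved, not dev.py's actual code:

```python
# Python 3.12 removed distutils (PEP 632). sysconfig provides the
# equivalent build-configuration queries from the standard library.
import sysconfig

# Suffix of compiled extension modules,
# e.g. ".cpython-312-x86_64-linux-gnu.so" on Linux or ".pyd" on Windows.
ext_suffix = sysconfig.get_config_var("EXT_SUFFIX")

# Install location for platform-specific (compiled) packages.
platlib = sysconfig.get_path("platlib")
```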
* update the SciPy 1.11.3 release notes following another series of backports [skip cirrus]
I've updated the PR/original comment above--there are 18 backports listed up there and that matches 18 merged backport-labelled PRs on GitHub at the time of writing. If the regular CI remains green, and I don't see any more backport requests, ~20 is probably enough to think about releasing, so I'd probably try wheel builds on this PR/branch, and if they succeed, then proceed with the release process in the next day or two.
* empty commit to test wheel builds for the SciPy `1.11.3` release process before the backport PR is merged [wheel build]
The wheel build/test jobs all passed. One thing I don't like is that there still seem to be some complaints in the logs. I'll proceed with a merge here, and probably get the rel process started. I did one final read through the diff--it was mostly what I expected; perhaps my version pins in CI could be reverted with the latest upstream releases.
Merged commit 329b980 into scipy:maintenance/1.11.x
Prep for SciPy `1.11.3` now that NumPy `1.26.0` stable is out and Python `3.12` stable is imminent per gh-19274. `1.11.2` does support Python `3.12` already, but was built with a beta version of NumPy in that scenario.

Backports included so far:
* `TestODR.test_implicit` test failure with tolerance bump #19200
* `_LA` attribute when covariance is singular #19199
* `_fpumode.c`, fixes Clang on Windows issue #19290

Possible TODOs:
* with `python dev.py lint` I noticed: `scipy/scipy/sparse/csgraph/_traversal.pyx:187:5: 'node_list' defined but unused (try prefixing with underscore?)`
* with `python dev.py doc -j 32` I noticed a single warning: `scipy/stats/_mstats_basic.py:docstring of scipy.stats._mstats_basic.argstoarray:22: WARNING: py:obj reference target not found: numpy.ma.row_stack`
* `pip install -v .` on a Python `3.12` release candidate locally on this branch before probing wheel builds
* `dev.py` works with `3.12` on this branch?