Prepare release 7.4.1 #11377

Merged 3 commits on Sep 2, 2023
34 changes: 24 additions & 10 deletions .github/workflows/deploy.yml
@@ -1,45 +1,59 @@
name: deploy

on:
push:
tags:
# These tags are protected, see:
# https://github.com/pytest-dev/pytest/settings/tag_protection
- "[0-9]+.[0-9]+.[0-9]+"
- "[0-9]+.[0-9]+.[0-9]+rc[0-9]+"
workflow_dispatch:
inputs:
version:
description: 'Release version'
required: true
default: '1.2.3'


# Set permissions at the job level.
permissions: {}

jobs:
build:
package:
runs-on: ubuntu-latest
env:
SETUPTOOLS_SCM_PRETEND_VERSION: ${{ github.event.inputs.version }}
timeout-minutes: 10

steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
persist-credentials: false

- name: Build and Check Package
uses: hynek/build-and-inspect-python-package@v1.5

deploy:
if: github.repository == 'pytest-dev/pytest'
needs: [build]
needs: [package]
runs-on: ubuntu-latest
environment: deploy
timeout-minutes: 30
permissions:
id-token: write
steps:
- uses: actions/checkout@v3
- name: Download Package
uses: actions/download-artifact@v3
with:
name: Packages
path: dist

- name: Publish package to PyPI
uses: pypa/gh-action-pypi-publish@v1.8.5

- name: Push tag
run: |
git config user.name "pytest bot"
git config user.email "pytestbot@gmail.com"
git tag --annotate --message=v${{ github.event.inputs.version }} v${{ github.event.inputs.version }} ${{ github.sha }}
git push origin v${{ github.event.inputs.version }}

release-notes:

# todo: generate the content in the build job
@@ -55,11 +69,11 @@ jobs:
with:
fetch-depth: 0
persist-credentials: false

- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: "3.7"

python-version: "3.10"

- name: Install tox
run: |
36 changes: 22 additions & 14 deletions .github/workflows/test.yml
@@ -27,7 +27,19 @@ concurrency:
permissions: {}

jobs:
package:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
persist-credentials: false
- name: Build and Check Package
uses: hynek/build-and-inspect-python-package@v1.5

build:
needs: [package]

runs-on: ${{ matrix.os }}
timeout-minutes: 45
permissions:
@@ -60,7 +72,6 @@ jobs:
"macos-py310",
"macos-py312",

"docs",
"doctesting",
"plugins",
]
@@ -159,10 +170,6 @@ jobs:
os: ubuntu-latest
tox_env: "plugins"

- name: "docs"
python: "3.7"
os: ubuntu-latest
tox_env: "docs"
- name: "doctesting"
python: "3.7"
os: ubuntu-latest
@@ -175,6 +182,12 @@
fetch-depth: 0
persist-credentials: false

- name: Download Package
uses: actions/download-artifact@v3
with:
name: Packages
path: dist

- name: Set up Python ${{ matrix.python }}
uses: actions/setup-python@v4
with:
@@ -188,11 +201,13 @@

- name: Test without coverage
if: "! matrix.use_coverage"
run: "tox -e ${{ matrix.tox_env }}"
shell: bash
run: tox run -e ${{ matrix.tox_env }} --installpkg `find dist/*.tar.gz`

- name: Test with coverage
if: "matrix.use_coverage"
run: "tox -e ${{ matrix.tox_env }}-coverage"
shell: bash
run: tox run -e ${{ matrix.tox_env }}-coverage --installpkg `find dist/*.tar.gz`

- name: Generate coverage report
if: "matrix.use_coverage"
@@ -206,10 +221,3 @@ jobs:
fail_ci_if_error: true
files: ./coverage.xml
verbose: true

check-package:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Build and Check Package
uses: hynek/build-and-inspect-python-package@v1.5
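
The test workflow now builds the distribution once in the new ``package`` job (replacing the separate ``check-package`` job shown just above) and installs that artifact in every test job through tox's ``--installpkg`` flag, rather than rebuilding from the checkout each time. A rough local equivalent of the same idea, with an example archive name, would be::

    # Build the sdist/wheel once, then point tox at the built sdist
    # instead of letting it build from the source tree.
    python -m build
    tox run -e py310 --installpkg dist/pytest-7.4.1.tar.gz
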
11 changes: 4 additions & 7 deletions RELEASING.rst
@@ -133,14 +133,11 @@ Releasing

Both automatic and manual processes described above follow the same steps from this point onward.

#. After all tests pass and the PR has been approved, tag the release commit
in the ``release-MAJOR.MINOR.PATCH`` branch and push it. This will publish to PyPI::
#. After all tests pass and the PR has been approved, trigger the ``deploy`` job
in https://github.com/pytest-dev/pytest/actions/workflows/deploy.yml.

git fetch upstream
git tag MAJOR.MINOR.PATCH upstream/release-MAJOR.MINOR.PATCH
git push upstream MAJOR.MINOR.PATCH

Wait for the deploy to complete, then make sure it is `available on PyPI <https://pypi.org/project/pytest>`_.
This job will require approval from ``pytest-dev/core``, after which it will publish to PyPI
and tag the repository.

#. Merge the PR. **Make sure it's not squash-merged**, so that the tagged commit ends up in the main branch.

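The updated release step replaces manual tagging with a manual trigger of the ``deploy`` workflow. As a sketch only (not part of this PR), the same trigger could also be issued from the GitHub CLI, assuming ``gh`` is installed and the caller has permission to dispatch workflows in the repository::

    gh workflow run deploy.yml --repo pytest-dev/pytest -f version=7.4.1

The run then waits for approval from ``pytest-dev/core`` before publishing to PyPI and tagging, as described above.
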
2 changes: 0 additions & 2 deletions changelog/10337.bugfix.rst

This file was deleted.

1 change: 0 additions & 1 deletion changelog/10702.bugfix.rst

This file was deleted.

2 changes: 0 additions & 2 deletions changelog/10811.bugfix.rst

This file was deleted.

1 change: 1 addition & 0 deletions doc/en/announce/index.rst
@@ -6,6 +6,7 @@ Release announcements
:maxdepth: 2


release-7.4.1
release-7.4.0
release-7.3.2
release-7.3.1
20 changes: 20 additions & 0 deletions doc/en/announce/release-7.4.1.rst
@@ -0,0 +1,20 @@
pytest-7.4.1
=======================================

pytest 7.4.1 has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

pip install --upgrade pytest

The full changelog is available at https://docs.pytest.org/en/stable/changelog.html.

Thanks to all of the contributors to this release:

* Bruno Oliveira
* Florian Bruhin
* Ran Benita


Happy testing,
The pytest Development Team
2 changes: 1 addition & 1 deletion doc/en/builtin.rst
@@ -22,7 +22,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
cachedir: .pytest_cache
rootdir: /home/sweet/project
collected 0 items
cache -- .../_pytest/cacheprovider.py:528
cache -- .../_pytest/cacheprovider.py:532
Return a cache object that can persist state between testing sessions.

cache.get(key, default)
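
The hunk above only shifts a line number in the ``cache`` fixture reference; for context, a minimal sketch of the documented ``cache.get``/``cache.set`` API (the key name here is purely illustrative)::

    def test_uses_cache(cache):
        # Values persist between test sessions under the given key.
        value = cache.get("example/computed", None)
        if value is None:
            value = 42
            cache.set("example/computed", value)
        assert value == 42
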
17 changes: 17 additions & 0 deletions doc/en/changelog.rst
@@ -28,6 +28,23 @@ with advance notice in the **Deprecations** section of releases.

.. towncrier release notes start

pytest 7.4.1 (2023-09-02)
=========================

Bug Fixes
---------

- `#10337 <https://github.com/pytest-dev/pytest/issues/10337>`_: Fixed bug where fake intermediate modules generated by ``--import-mode=importlib`` would not include the
child modules as attributes of the parent modules.


- `#10702 <https://github.com/pytest-dev/pytest/issues/10702>`_: Fixed error assertion handling in :func:`pytest.approx` when ``None`` is an expected or received value when comparing dictionaries.


- `#10811 <https://github.com/pytest-dev/pytest/issues/10811>`_: Fixed issue when using ``--import-mode=importlib`` together with ``--doctest-modules`` that caused modules
to be imported more than once, causing problems with modules that have import side effects.


pytest 7.4.0 (2023-06-23)
=========================

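To illustrate the ``pytest.approx`` entry (#10702) above: a failing dictionary comparison involving ``None``, like the sketch below, used to raise an error while the assertion report was being built; with 7.4.1 it produces a regular failure message. The values are illustrative only::

    import pytest

    def test_approx_dict_with_none():
        # "b" is None on one side, so the comparison fails; pytest 7.4.1
        # reports this as a normal assertion error rather than erroring out
        # while formatting the comparison.
        assert {"a": 1.0, "b": None} == pytest.approx({"a": 1.0, "b": 2.0})
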
20 changes: 10 additions & 10 deletions doc/en/example/reportingdemo.rst
@@ -554,13 +554,13 @@ Here is a nice run of several failures and how ``pytest`` presents things:
E AssertionError: assert False
E + where False = <built-in method startswith of str object at 0xdeadbeef0027>('456')
E + where <built-in method startswith of str object at 0xdeadbeef0027> = '123'.startswith
E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef0029>()
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef002a>()
E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef0006>()
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef0029>()

failure_demo.py:235: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef002b>
self = <failure_demo.TestMoreErrors object at 0xdeadbeef002a>

def test_global_func(self):
> assert isinstance(globf(42), float)
@@ -571,18 +571,18 @@ Here is a nice run of several failures and how ``pytest`` presents things:
failure_demo.py:238: AssertionError
_______________________ TestMoreErrors.test_instance _______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef002c>
self = <failure_demo.TestMoreErrors object at 0xdeadbeef002b>

def test_instance(self):
self.x = 6 * 7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef002c>.x
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef002b>.x

failure_demo.py:242: AssertionError
_______________________ TestMoreErrors.test_compare ________________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef002d>
self = <failure_demo.TestMoreErrors object at 0xdeadbeef002c>

def test_compare(self):
> assert globf(10) < 5
@@ -592,7 +592,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
failure_demo.py:245: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef002e>
self = <failure_demo.TestMoreErrors object at 0xdeadbeef002d>

def test_try_finally(self):
x = 1
@@ -603,7 +603,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
failure_demo.py:250: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef002f>
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef002e>

def test_single_line(self):
class A:
@@ -618,7 +618,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
failure_demo.py:261: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef0030>
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef002f>

def test_multiline(self):
class A:
@@ -637,7 +637,7 @@ Here is a nice run of several failures and how ``pytest`` presents things:
failure_demo.py:268: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef0031>
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef0030>

def test_custom_repr(self):
class JSON:
2 changes: 1 addition & 1 deletion doc/en/getting-started.rst
@@ -22,7 +22,7 @@ Install ``pytest``
.. code-block:: bash

$ pytest --version
pytest 7.4.0
pytest 7.4.1

.. _`simpletest`:

4 changes: 2 additions & 2 deletions doc/en/how-to/tmp_path.rst
@@ -51,8 +51,8 @@ Running this would result in a passed test except for the last
d = tmp_path / "sub"
d.mkdir()
p = d / "hello.txt"
p.write_text(CONTENT)
assert p.read_text() == CONTENT
p.write_text(CONTENT, encoding="utf-8")
assert p.read_text(encoding="utf-8") == CONTENT
assert len(list(tmp_path.iterdir())) == 1
> assert 0
E assert 0
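
The ``tmp_path`` example now passes an explicit encoding to ``write_text``/``read_text``. Pulled out of the larger documentation example, the pattern is roughly::

    def test_create_file(tmp_path):
        p = tmp_path / "hello.txt"
        # An explicit encoding keeps the round-trip stable across platforms.
        p.write_text("Hello, world!", encoding="utf-8")
        assert p.read_text(encoding="utf-8") == "Hello, world!"
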
11 changes: 6 additions & 5 deletions doc/en/reference/reference.rst
@@ -1887,11 +1887,12 @@ All the command-line flags can be obtained by running ``pytest --help``::
tests. Optional argument: glob (default: '*').
--cache-clear Remove all cache contents at start of test run
--lfnf={all,none}, --last-failed-no-failures={all,none}
With ``--lf``, determines whether to execute tests when there
are no previously (known) failures or when no
cached ``lastfailed`` data was found.
``all`` (the default) runs the full test suite again.
``none`` just emits a message about no known failures and exits successfully.
With ``--lf``, determines whether to execute tests
when there are no previously (known) failures or
when no cached ``lastfailed`` data was found.
``all`` (the default) runs the full test suite
again. ``none`` just emits a message about no known
failures and exits successfully.
--sw, --stepwise Exit on test failure and continue from last failing
test next time
--sw-skip, --stepwise-skip
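
The reflowed help text above describes ``--last-failed-no-failures`` (``--lfnf``); a typical invocation combining it with ``--lf`` is::

    # Re-run only previously failing tests; if no failures are recorded,
    # emit a message and exit successfully instead of running the full suite.
    pytest --lf --last-failed-no-failures=none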