REGR: fix read_parquet with column of large strings (avoid overflow from concat) #55691

Merged
Changes from 1 commit
1 change: 1 addition & 0 deletions doc/source/whatsnew/v2.1.2.rst
@@ -19,6 +19,7 @@ Fixed regressions
 - Fixed regression in :meth:`DataFrameGroupBy.agg` and :meth:`SeriesGroupBy.agg` where if the option ``compute.use_numba`` was set to True, groupby methods not supported by the numba engine would raise a ``TypeError`` (:issue:`55520`)
 - Fixed performance regression with wide DataFrames, typically involving methods where all columns were accessed individually (:issue:`55256`, :issue:`55245`)
 - Fixed regression in :func:`merge_asof` raising ``TypeError`` for ``by`` with datetime and timedelta dtypes (:issue:`55453`)
+- Fixed regression in :func:`read_parquet` when reading a file with a string column consisting of more than 2 GB of string data and using the ``"string"`` dtype (:issue:`55606`)

 .. ---------------------------------------------------------------------------
 .. _whatsnew_212.bug_fixes:
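
For context, the regression comes down to pyarrow refusing to concatenate string chunks whose combined character data exceeds the 32-bit offset limit of its default string type. A rough sketch of that failure mode is shown below; the sizes are illustrative rather than taken from the PR, it needs a few GB of RAM, and the exact exception message may vary across pyarrow versions.

    import pyarrow as pa

    # Each chunk (~1.2 GiB of characters) fits in a 32-bit-offset string array
    # on its own, but concatenating two of them (~2.4 GiB) overflows the int32
    # offsets that pa.string() uses.
    big = "x" * (1200 * 1024 * 1024)
    chunk = pa.array([big])  # inferred type: pa.string()
    try:
        pa.concat_arrays([chunk, chunk])
    except pa.ArrowInvalid as exc:
        print(exc)  # e.g. "offset overflow while concatenating arrays"
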
10 changes: 8 additions & 2 deletions pandas/core/arrays/string_.py
@@ -228,11 +228,17 @@ def __from_arrow__(
             # pyarrow.ChunkedArray
             chunks = array.chunks

+        results = []
+        for arr in chunks:
+            arr = arr.to_numpy(zero_copy_only=False)
+            arr = ensure_string_array(arr, na_value=libmissing.NA)
+            results.append(arr)
+
         if len(chunks) == 0:
             arr = np.array([], dtype=object)
         else:
-            arr = pyarrow.concat_arrays(chunks).to_numpy(zero_copy_only=False)
-            arr = ensure_string_array(arr, na_value=libmissing.NA)
+            arr = np.concatenate(results)

         # Bypass validation inside StringArray constructor, see GH#47781
         new_string_array = StringArray.__new__(StringArray)
         NDArrayBacked.__init__(
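
Outside of the pandas internals, the pattern the fix relies on can be sketched as a standalone helper. The function below is hypothetical: ensure_string_array and libmissing.NA are pandas-internal, so this version simply returns an object-dtype array with whatever null representation Array.to_numpy produces.

    import numpy as np
    import pyarrow as pa

    def chunked_strings_to_numpy(arr: pa.ChunkedArray) -> np.ndarray:
        # Convert chunk by chunk so pyarrow never has to build a single string
        # array whose offsets exceed the int32 (2 GiB) limit; numpy object
        # arrays have no such limit, so concatenating on the numpy side is safe.
        results = [chunk.to_numpy(zero_copy_only=False) for chunk in arr.chunks]
        if not results:
            return np.array([], dtype=object)
        return np.concatenate(results)
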
11 changes: 11 additions & 0 deletions pandas/tests/io/test_parquet.py
@@ -1141,6 +1141,17 @@ def test_infer_string_large_string_type(self, tmp_path, pa):
         )
         tm.assert_frame_equal(result, expected)

+    @pytest.mark.slow
+    def test_string_column_above_2GB(self, tmp_path, pa):
+        # https://github.com/pandas-dev/pandas/issues/55606
+        # above 2GB of string data
+        v1 = b"x" * 100000000
+        v2 = b"x" * 147483646
+        df = pd.DataFrame({"strings": [v1] * 20 + [v2] + ["x"] * 20}, dtype="string")
+        df.to_parquet(tmp_path / "test.parquet")
+        result = read_parquet(tmp_path / "test.parquet")
+        assert result["strings"].dtype == "string"

Member Author:
This test is quite slow (around 20s for me) and uses a lot of memory (> 5 GB), so I am not sure we should add it ... (our "slow" tests are still run by default, so this would be annoying when running the tests locally)

Member:
I'm in favor of not adding this test given the potential CI load. Maybe add an ASV benchmark instead, since this is "performance" related too given that the trigger is memory-related, if you think that makes sense.

At a minimum, it would be good to add a comment in pandas/core/arrays/string_.py explaining why the modification was made.

Member Author:
Added a comment about it, and "removed" the test: I left the code here, to make it easier to run that test in the future by just uncommenting (or if we enable some high_memory mark that would be disabled by default)

Adding an ASV benchmark sounds useful, but it wouldn't catch a regression like this one, since for the ASV we would also use a smaller dataset. So I'm leaving that out of this PR.
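
For illustration, an opt-in "high_memory" mark that is skipped unless explicitly requested can be wired up with standard pytest hooks roughly as sketched below. This is a hypothetical sketch, not part of this PR: the hook names are pytest's own, while the marker name and the --run-high-memory flag are invented here.

    # conftest.py -- hypothetical sketch of an opt-in "high_memory" marker
    import pytest

    def pytest_addoption(parser):
        # invented flag name; high_memory tests only run when it is passed
        parser.addoption(
            "--run-high-memory",
            action="store_true",
            default=False,
            help="run tests marked as high_memory",
        )

    def pytest_configure(config):
        config.addinivalue_line("markers", "high_memory: test needs several GB of RAM")

    def pytest_collection_modifyitems(config, items):
        if config.getoption("--run-high-memory"):
            return
        skip_marker = pytest.mark.skip(reason="needs --run-high-memory to run")
        for item in items:
            if "high_memory" in item.keywords:
                item.add_marker(skip_marker)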



 class TestParquetFastParquet(Base):
     def test_basic(self, fp, df_full):
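
As an aside, an ASV-style benchmark along the lines the reviewer suggested could look roughly like the sketch below. It is hypothetical and not part of this PR: the class and method names are invented, and the data volume is deliberately far below 2 GB, which is exactly why, as discussed above, such a benchmark would not have caught this particular regression.

    # Hypothetical asv benchmark sketch; follows the usual asv convention of a
    # class with setup() plus time_*/peakmem_* methods.
    import pandas as pd

    class ReadParquetStringColumn:
        def setup(self):
            # ~10 MB of string data -- cheap enough for CI, unlike the 2 GB reproducer
            self.path = "string_col.parquet"
            values = ["x" * 1_000] * 10_000
            df = pd.DataFrame({"strings": pd.array(values, dtype="string")})
            df.to_parquet(self.path)

        def time_read_parquet_string_dtype(self):
            pd.read_parquet(self.path)

        def peakmem_read_parquet_string_dtype(self):
            pd.read_parquet(self.path)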