
Use Vec instead of Slice in ColumnReader #5177

Closed
tustvold opened this issue Dec 6, 2023 · 6 comments
Assignees
Labels
enhancement Any new improvement worthy of an entry in the changelog parquet Changes to the parquet crate parquet-derive

Comments

@tustvold
Contributor

tustvold commented Dec 6, 2023

Is your feature request related to a problem or challenge? Please describe what you are trying to do.

Currently ColumnValueDecoderImpl, and by extension ColumnReader, accept slices of [T::T] where T: DataType.

This was preserved by #1041 which extracted generics to allow using owned buffer constructions instead for the arrow read path, whilst preserving the existing API for non-arrow readers.

However, preserving this API has a couple of fairly substantial drawbacks:

  • A lot of the test coverage in the parquet crate uses the arrow APIs which use different implementations of ColumnValueDecoder
  • The finite capacity of the output buffers introduces challenges related to record truncation - GenericColumnReader::read_records Yields Truncated Records #5150
  • The generics are pretty arcane and require some gymnastics to allow for slices that don't have a size separate from their capacity
  • Buffers must be pre-allocated and zeroed ahead of time, which is not only an unnecessary overhead, but for lists will likely necessitate re-allocation once the correct number of values is ascertained
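The pre-allocation overhead in the last bullet can be sketched with a small, self-contained example. The `read_into_slice` and `read_into_vec` functions below are hypothetical stand-ins, not the parquet crate's actual API:

```rust
// Slice-based output: the caller must allocate and zero the full capacity up
// front, even if far fewer values end up being read.
fn read_into_slice(out: &mut [i32]) -> usize {
    let values = [1, 2, 3]; // stand-in for decoded values
    let n = values.len().min(out.len());
    out[..n].copy_from_slice(&values[..n]);
    n // values actually written; the rest of `out` is wasted zeroes
}

// Vec-based output: the reader appends exactly as many values as it decodes,
// so no zeroing and no up-front capacity guess are needed.
fn read_into_vec(out: &mut Vec<i32>) -> usize {
    let values = [1, 2, 3];
    out.extend_from_slice(&values);
    values.len()
}

fn main() {
    let mut slice_buf = vec![0i32; 1024]; // zeroed ahead of time
    let read = read_into_slice(&mut slice_buf);
    assert_eq!(read, 3);

    let mut vec_buf = Vec::new(); // no pre-allocation required
    read_into_vec(&mut vec_buf);
    assert_eq!(vec_buf, vec![1, 2, 3]);
    println!("slice read {read} values; vec holds {} values", vec_buf.len());
}
```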

Describe the solution you'd like

I would like to update ColumnValueDecoderImpl to accept Vec<T> instead of [T::T]. This would not only simplify RecordReader and improve its performance for nested data, but would also eliminate issues like #5150.
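As a rough sketch of why a growable output sidesteps the truncation problem in #5150 (hypothetical `read_records_vec` function with toy repetition-level handling, not the actual GenericColumnReader logic): with a fixed slice, a record that spans the buffer boundary has to be cut short, whereas a Vec can keep growing until the record ends.

```rust
// Each value carries a repetition level; level 0 starts a new record.
// Reading "up to n records" into a Vec can append past any nominal value
// count to finish the current record, instead of truncating it.
fn read_records_vec(levels: &[i16], values: &[i32], want: usize, out: &mut Vec<i32>) -> usize {
    let mut records = 0;
    let mut i = 0;
    while i < levels.len() {
        if levels[i] == 0 {
            if records == want {
                break; // stop only at a record boundary
            }
            records += 1;
        }
        out.push(values[i]); // Vec grows as needed; no capacity cutoff
        i += 1;
    }
    records
}

fn main() {
    // Two records: [10, 11, 12] and [20]; rep level 0 marks record starts.
    let levels = [0, 1, 1, 0];
    let values = [10, 11, 12, 20];
    let mut out = Vec::new();
    let records = read_records_vec(&levels, &values, 1, &mut out);
    // The whole first record is returned even though it holds 3 values.
    assert_eq!(records, 1);
    assert_eq!(out, vec![10, 11, 12]);
}
```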

Describe alternatives you've considered

Additional context

@tustvold tustvold added the enhancement Any new improvement worthy of a entry in the changelog label Dec 6, 2023
@tustvold tustvold self-assigned this Dec 6, 2023
tustvold added a commit to tustvold/arrow-rs that referenced this issue Dec 6, 2023
tustvold added a commit to tustvold/arrow-rs that referenced this issue Dec 8, 2023
tustvold added a commit to tustvold/arrow-rs that referenced this issue Dec 8, 2023
tustvold added a commit that referenced this issue Dec 15, 2023
* Use Vec in ColumnReader (#5177)

* Update parquet_derive
@tustvold tustvold added the parquet Changes to the parquet crate label Jan 5, 2024
@tustvold
Contributor Author

tustvold commented Jan 5, 2024

label_issue.py automatically added labels {'parquet'} from #5193

@tustvold
Contributor Author

tustvold commented Jan 5, 2024

label_issue.py automatically added labels {'parquet-derive'} from #5193

@pacman82

Hello everyone,

Maintainer of odbc2parquet here. While there are certainly good reasons for this request, I mourn the loss of the old API. It allowed me to directly fill the buffers which were already bound to an existing cursor via the ODBC C-API. I cannot use a Vec, since reallocating or resizing it would invalidate the pointers. As of now, I see no way to use the newest parquet version other than to introduce an extra allocation.

I don't have any measurements around this, but it hurts a little.

Just some feedback. Keep up the good work.
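To illustrate the pointer concern (a minimal sketch, not odbc2parquet code): a raw pointer taken from a Vec, as one would hand to a C API when binding a buffer, can dangle after the Vec grows, because growth may reallocate and move the backing storage.

```rust
fn main() {
    let mut buf: Vec<i64> = Vec::with_capacity(4);
    buf.extend_from_slice(&[1, 2, 3, 4]);
    // Imagine this pointer had been bound to an ODBC cursor.
    let bound = buf.as_ptr();
    // Growing past capacity may reallocate and move the buffer...
    for i in 0..1024 {
        buf.push(i);
    }
    // ...in which case `bound` now dangles; the addresses usually differ,
    // though an allocator is free to extend the block in place.
    println!("buffer moved: {}", bound != buf.as_ptr());
    assert_eq!(buf.len(), 1028);
}
```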

@pacman82

I should add that I only have to pessimize insertion, and only if the parquet and ODBC types are binary-identical for required columns. So it's not the most important thing in the world. Still, it was nice that in this case no additional copy was needed.

@tustvold
Contributor Author

tustvold commented Jan 12, 2024

allowed me to directly fill the buffers which were already bound to an existing cursor via the ODBC C-API

If it makes you feel any better, this would likely be subtly incorrect for data with repetition levels 😄

I don't have any measurements around this, but it hurts a little.

I would be interested in any numbers where this represents a regression, mainly because #5193 represented a non-trivial performance uplift in many cases. I'd be willing to consider alternative suggestions to fixing #5150 if this represents a major regression for your workloads.

@pacman82

If it makes you feel any better, this would likely be subtly incorrect for data with repetition levels 😄

I only had flat, table-like data, so I might have been lucky in the past.

I would be interested in any numbers where this represents a regression, mainly because #5193 represented a non-trivial performance uplift in many cases. I'd be willing to consider alternative suggestions to fixing #5150 if this represents a major regression for your workloads.

I have little reason to doubt you. Some tests are still failing, but I will just update. I would not have the time to set up a benchmark. Even if I did, there are so many variables (type of column, number of rows, etc.) that its outcome, unless mind-bogglingly significant, would not allow any conclusions. Nice to hear that you got some good speedups, though.
