
Reading Quite OK Image (QOI) files is slow compared to PNG and qoi #7922

Closed
ivanstepanovftw opened this issue Mar 31, 2024 · 17 comments · Fixed by #7937

Comments

@ivanstepanovftw

ivanstepanovftw commented Mar 31, 2024

What did you do?

image = Image.open(image_filepath)

while using PyTorch data loader with 8 workers.

What did you expect to happen?

Fast

What actually happened?

Slow

What are your OS, Python and Pillow versions?

  • OS: Linux
  • Python: 3.10.13
  • Pillow: 10.2.0

Reading PNG images with Pillow is faster.

For me, the current workaround is to use https://github.com/kodonnell/qoi from PyPI (the qoi package). It was faster when I switched to it. To compare:

image = qoi.read(image_filepath)

Image sizes are regular, like 800x480, 1280x600.

@radarhere
Member

Hi. I think it may not be the most useful metric to say 'QOI reading should at least be as fast as PNG reading', since if we manage to make QOI reading faster, then someone could say 'PNG reading should be at least as fast as QOI reading', and it is theoretically a neverending need to be faster.

Rather than comparing it to PNG reading, could you give us an idea for how much faster you would like QOI reading to be than it is now? If we were to make it twice as fast, I would consider that to be a massive win, but it may not be as much as you are hoping for.

Also, if you could attach a specific QOI image that we can use when testing speed, that would allow us to better cater to your situation.

@ivanstepanovftw
Author

it is theoretically a neverending need to be faster.

The performance of QOI encoding and decoding is O(n), where n is the total number of pixels. I have given you an example above using the qoi Python wrapper for the original qoi library, which is faster than Pillow's decoding in my tests. Is something else performed while reading and decoding the compressed QOI?

Here is the reproducer for you:

import timeit

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import qoi
import seaborn as sns
import torch
from PIL import Image

# Convert PNG/JPG image to numpy array
img_path = "/home/i/Downloads/PNG_transparency_demonstration_1.png"  # original image
qoi_path = "/home/i/Downloads/PNG_transparency_demonstration_1.qoi"  # saved qoi file
img = Image.open(img_path)
rgb_array = np.array(img)
_ = qoi.write(qoi_path, rgb_array)

# Define functions to time reading QOI file using Pillow and qoi
def read_png_with_pillow_into_tensor():
    img = Image.open(img_path)
    return torch.tensor(np.asarray(img))

def read_qoi_with_pillow_into_tensor():
    img = Image.open(qoi_path)
    return torch.tensor(np.asarray(img))

def read_qoi_with_qoi_into_tensor():
    img = qoi.read(qoi_path)
    return torch.tensor(img)

# Time the functions
number = 50
read_png_with_pillow_into_tensor_times = timeit.repeat(read_png_with_pillow_into_tensor, number=number)
print(f"read_png_with_pillow_into_tensor_times: mean={np.mean(read_png_with_pillow_into_tensor_times)}, min={np.min(read_png_with_pillow_into_tensor_times)}, count={len(read_png_with_pillow_into_tensor_times)}")
read_qoi_with_pillow_into_tensor_times = timeit.repeat(read_qoi_with_pillow_into_tensor, number=number)
print(f"read_qoi_with_pillow_into_tensor_times: mean={np.mean(read_qoi_with_pillow_into_tensor_times)}, min={np.min(read_qoi_with_pillow_into_tensor_times)}, count={len(read_qoi_with_pillow_into_tensor_times)}")
read_qoi_with_qoi_into_tensor_times = timeit.repeat(read_qoi_with_qoi_into_tensor, number=number)
print(f"read_qoi_with_qoi_into_tensor_times: mean={np.mean(read_qoi_with_qoi_into_tensor_times)}, min={np.min(read_qoi_with_qoi_into_tensor_times)}, count={len(read_qoi_with_qoi_into_tensor_times)}")

data = {
    "Method": ["read_png_with_pillow_into_tensor_times"] * len(read_png_with_pillow_into_tensor_times)
              + ["read_qoi_with_pillow_into_tensor_times"] * len(read_qoi_with_pillow_into_tensor_times)
              + ["read_qoi_with_qoi_into_tensor_times"] * len(read_qoi_with_qoi_into_tensor_times),
    "Time": read_png_with_pillow_into_tensor_times + read_qoi_with_pillow_into_tensor_times + read_qoi_with_qoi_into_tensor_times
}

print(f"{data=}")

df = pd.DataFrame(data)

# Plotting
plt.figure(figsize=(10, 6), dpi=180)
sns.violinplot(x="Method", y="Time", data=df)
plt.title("Image Reading Times Comparison")
plt.ylabel("Time (seconds)")
plt.xlabel("Method")
plt.show()

Output:

read_png_with_pillow_into_tensor_times: mean=0.38110024845227597, min=0.37416256219148636, count=5
read_qoi_with_pillow_into_tensor_times: mean=6.902672660909593, min=6.831185481045395, count=5
read_qoi_with_qoi_into_tensor_times: mean=0.06399821890518069, min=0.06361464504152536, count=5
data={'Method': ['read_png_with_pillow_into_tensor_times', 'read_png_with_pillow_into_tensor_times', 'read_png_with_pillow_into_tensor_times', 'read_png_with_pillow_into_tensor_times', 'read_png_with_pillow_into_tensor_times', 'read_qoi_with_pillow_into_tensor_times', 'read_qoi_with_pillow_into_tensor_times', 'read_qoi_with_pillow_into_tensor_times', 'read_qoi_with_pillow_into_tensor_times', 'read_qoi_with_pillow_into_tensor_times', 'read_qoi_with_qoi_into_tensor_times', 'read_qoi_with_qoi_into_tensor_times', 'read_qoi_with_qoi_into_tensor_times', 'read_qoi_with_qoi_into_tensor_times', 'read_qoi_with_qoi_into_tensor_times'], 'Time': [0.3935986291617155, 0.3756745522841811, 0.38304916163906455, 0.3790163369849324, 0.37416256219148636, 7.0226039863191545, 6.841565617360175, 6.831185481045395, 6.875711226835847, 6.942296992987394, 0.06361464504152536, 0.06427709199488163, 0.06379142077639699, 0.06462411116808653, 0.06368382554501295]}

[image: violin plot comparing the three reading methods]

@ivanstepanovftw
Author

You can use any image, for example, https://en.wikipedia.org/wiki/PNG#/media/File:PNG_transparency_demonstration_1.png
Please update the hardcoded path in the reproducer accordingly.

@hugovk
Member

hugovk commented Mar 31, 2024

For me, the current workaround is to use kodonnell/qoi from PyPI (the qoi package). It was faster when I switched to it.

Using a specialised library is a perfectly valid option :)


Thank you for the script! I get similar results:

Figure_1

But if I comment out the return lines so we only time opening the images:

def read_png_with_pillow_into_tensor():
    img = Image.open(img_path)
    # return torch.tensor(np.asarray(img))

def read_qoi_with_pillow_into_tensor():
    img = Image.open(qoi_path)
    # return torch.tensor(np.asarray(img))

def read_qoi_with_qoi_into_tensor():
    img = qoi.read(qoi_path)
    # return torch.tensor(img)

Pillow is much faster at opening than QOI:

Figure_2

Reinstating the np.asarray for Pillow (but not QOI):

def read_png_with_pillow_into_tensor():
    img = Image.open(img_path)
    np.asarray(img)
    # return torch.tensor(np.asarray(img))

def read_qoi_with_pillow_into_tensor():
    img = Image.open(qoi_path)
    np.asarray(img)
    # return torch.tensor(np.asarray(img))

def read_qoi_with_qoi_into_tensor():
    img = qoi.read(qoi_path)
    # return torch.tensor(img)

We see the slowness is really in the np.asarray call when Pillow reads the .qoi:

Figure_3

And for some reason PyTorch doesn't seem to need the QOI image converted to a NumPy array first.

@nulano
Contributor

nulano commented Mar 31, 2024

@hugovk It could still be that it's the Pillow decoder causing the slowness - can you test Image.open(...).load()?

(I can't test myself as I'm travelling for a few days)

@hugovk
Member

hugovk commented Mar 31, 2024

Yep, there it is:

def read_png_with_pillow_into_tensor():
    img = Image.open(img_path).load()
    # return torch.tensor(np.asarray(img))

def read_qoi_with_pillow_into_tensor():
    img = Image.open(qoi_path).load()
    # return torch.tensor(np.asarray(img))

def read_qoi_with_qoi_into_tensor():
    img = qoi.read(qoi_path)
    # return torch.tensor(img)

Figure_4

@Yay295
Contributor

Yay295 commented Mar 31, 2024

The Pillow QOI decoder is entirely Python, so that's probably why it's slower.

https://github.com/python-pillow/Pillow/blob/main/src/PIL/QoiImagePlugin.py
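For reference, this is roughly what the per-pixel opcode dispatch required by the QOI specification looks like in pure Python (an illustrative sketch, not Pillow's actual code). Every pixel passes through Python-level branching, arithmetic and tuple construction, which is exactly the kind of tight loop where a C implementation is an order of magnitude faster:

```python
def decode_qoi_pixels(data: bytes, n_pixels: int) -> list[tuple[int, int, int, int]]:
    """Decode the QOI chunk stream (the bytes after the 14-byte header)."""
    px = (0, 0, 0, 255)                 # spec: previous pixel starts as opaque black
    index = [(0, 0, 0, 0)] * 64         # running 64-entry color index
    out: list[tuple[int, int, int, int]] = []
    pos = 0
    while len(out) < n_pixels:
        b1 = data[pos]; pos += 1
        if b1 == 0xFE:                  # QOI_OP_RGB: full RGB, alpha unchanged
            px = (data[pos], data[pos + 1], data[pos + 2], px[3]); pos += 3
        elif b1 == 0xFF:                # QOI_OP_RGBA: full RGBA
            px = (data[pos], data[pos + 1], data[pos + 2], data[pos + 3]); pos += 4
        elif b1 >> 6 == 0b00:           # QOI_OP_INDEX: look up previously seen pixel
            px = index[b1 & 0x3F]
        elif b1 >> 6 == 0b01:           # QOI_OP_DIFF: 2-bit channel deltas, bias 2
            r, g, b, a = px
            px = ((r + ((b1 >> 4) & 3) - 2) & 0xFF,
                  (g + ((b1 >> 2) & 3) - 2) & 0xFF,
                  (b + (b1 & 3) - 2) & 0xFF, a)
        elif b1 >> 6 == 0b10:           # QOI_OP_LUMA: 6-bit green delta, bias 32
            b2 = data[pos]; pos += 1
            dg = (b1 & 0x3F) - 32
            r, g, b, a = px
            px = ((r + dg + ((b2 >> 4) & 0x0F) - 8) & 0xFF,
                  (g + dg) & 0xFF,
                  (b + dg + (b2 & 0x0F) - 8) & 0xFF, a)
        else:                           # QOI_OP_RUN: repeat previous pixel, bias -1
            run = (b1 & 0x3F) + 1
            out.extend([px] * (run - 1))    # the shared append below adds the last copy
        index[(px[0] * 3 + px[1] * 5 + px[2] * 7 + px[3] * 11) % 64] = px
        out.append(px)
    return out
```

The reference C decoder performs the same dispatch, but without interpreter overhead per opcode, which accounts for most of the gap measured above.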

@radarhere
Member

I created #7925 to speed up QOI loading somewhat.

I think our QoiDecoder is a straightforward implementation of the specification, and I doubt we're going to achieve the 10x speedup being requested here by improving the logic.

Switching to a C decoder to speed things up is a possibility, but... do we want to?

#1938 (comment)

The decoders are all in C, a fast but unsafe language. Python is significantly slower, but safe. This is a tradeoff that we should be able to make. (in either direction)

@ivanstepanovftw
Author

The reason QOI became popular at all is that it is simple, stable, fast and energy efficient. I do not want someone to stumble upon this issue and blame QOI as useless compared to PNG, without knowing that QOI support in Pillow is implemented entirely in Python.

@wiredfool
Member

And as a counterpoint, RLE encoding in C is the historical source of many OOB read and writes.

@aclark4life
Member

Switching to a C decoder to speed things up is a possibility, but... do we want to?

No, at least not in response to this issue, as you seemed to satisfy the original request by @Max1Truc in #6852, thank you for that! If @ivanstepanovftw would like to send a PR for review, that's a different discussion. 🤔

I do not want someone to stumble upon this issue and blame QOI as useless compared to PNG, without knowing that QOI support in Pillow is implemented entirely in Python.

As it so happens, this is the first I've heard of QOI and of QOI support being available in Pillow. Pillow was first released in 1995 and QOI was first released in 2021, 26 years later. As such, I don't think it's reasonable to expect a general-purpose image library like Pillow to excel at processing images in a format invented 26 years after the library was first released.

That it works at all is quite impressive and as @radarhere said about comparing QOI to PNG, I don't believe that protecting QOI's reputation of being "simple, stable, fast and energy efficient" is going to be a high priority for Pillow, and I don't expect QOI's reputation to be impacted as a result.

That said, Pillow's processing of QOI files in Python, not C, could probably be clarified here: https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html#qoi.

@ivanstepanovftw
Author

@aclark4life, never late to learn about new technologies! ✨

I don't want to accept that the high priority for Pillow is to be a 29-year-old library.
As everywhere, there surely should be planning and discussion before implementing image decoding entirely in Python.

I am against closing this as resolved by accepting pull request #7937, because QOI decoding is still slow and the performance issue will not go away.

@hugovk
Member

hugovk commented Apr 3, 2024

Performance is important. Security is important. C can be memory-unsafe, and as mentioned it takes up a lot of our time dealing with CVEs and security fixes, and there are not many people working on this project.

Using a specialised library like https://github.com/kodonnell/qoi is a perfectly valid option. Experts in the QOI format can spend more time maintaining it and the QOI library it wraps.

It might not be a direct Pillow plugin, but I'd welcome the community to turn that into a plugin or create a new one (docs).

@ivanstepanovftw
Author

I will not contribute any code to Pillow while you are telling me that a pull request would not be accepted because C is unsafe.

@ivanstepanovftw
Author

ivanstepanovftw commented Apr 3, 2024

There is a QOI implementation in memory-safe, secure and fast Rust: https://github.com/aldanor/qoi-rust

Not that safe, TBH. Search for repo:aldanor/qoi-rust unsafe.

And another implementation in Rust, https://github.com/ChevyRay/qoi_rs, but I also see the unsafe directive there and I am not sure why it is needed. Search for repo:ChevyRay/qoi_rs unsafe.

@hugovk
Member

hugovk commented Apr 3, 2024

I will not contribute any code for Pillow while you telling me that pull request would not be accepted because C is unsafe.

Sure, I'm not suggesting a PR to Pillow, rather a third-party plugin along the lines of these:

@wiredfool
Member

@ivanstepanovftw Read the room -- We're less than a week out of the initial discovery of the xz thing, and a significant issue there was pressure on the maintainers. Image decoders are a known danger zone. (see all the iMessage issues). You've gotten the attention of all the maintainers here, the ones who would have to respond to oss-fuzz, or some less principled attacker finding a buffer overflow in the code. This has happened enough in various RLE type encodings that this is definitely not the first rodeo.
