Inference bottleneck #248
Comments
The truth is that, yes, the CPU backend isn't as optimized as it could be; perhaps the culprit is the im2col kernel, since it overuses memory accesses. In all ML software, the main bottleneck is always the matrix multiplications. Diffusers and other inference engines have years of development ahead of ggml, with code optimized in every way; performance work in ML is largely directed at the PyTorch library.
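
For context on the im2col point, below is a minimal single-channel sketch of the kind of transform involved. This is not ggml's actual kernel, just an illustration of why such a routine is dominated by memory traffic: every input element is copied into up to `kh*kw` columns so the convolution can then be done as one big matrix multiplication.

```c
#include <stddef.h>

/* Hypothetical, simplified single-channel im2col (NOT ggml's real kernel):
 * for an H x W input and a kh x kw kernel with stride 1 and no padding,
 * every input pixel is re-read up to kh*kw times, so the routine spends
 * its time on redundant loads/stores rather than arithmetic. */
static void im2col_naive(const float *src, float *dst,
                         int H, int W, int kh, int kw) {
    const int out_h = H - kh + 1;
    const int out_w = W - kw + 1;
    /* dst layout: one row of length kh*kw per output position */
    for (int oy = 0; oy < out_h; ++oy) {
        for (int ox = 0; ox < out_w; ++ox) {
            float *col = dst + (size_t)(oy * out_w + ox) * kh * kw;
            for (int ky = 0; ky < kh; ++ky) {
                for (int kx = 0; kx < kw; ++kx) {
                    /* each src element lands in many different columns */
                    col[ky * kw + kx] = src[(oy + ky) * W + (ox + kx)];
                }
            }
        }
    }
}
```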
Where is this main bottleneck in the CPU code?
I was thinking of creating a similar issue; I hope my CUDA case isn't off-topic. My hardware:
@SA-j00u The bottleneck mainly comes from the MUL_MAT operator. You can profile your run with
@ring-c If you are using CUDA, that is normal behavior. If you offload part of the execution to the CPU, you will incur communication latency over PCIe, which is normally bad unless you need the extra memory because of a large model size.
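
For anyone who wants to confirm where the time goes, a generic wall-clock timer around the operator dispatch is usually enough. The sketch below is not ggml's built-in profiling; it is only an assumed shape, and `compute_one_node()` / `node->op` are placeholders for whatever the real compute loop uses. The idea is to accumulate time per operator type so MUL_MAT's share becomes visible.

```c
#include <stdio.h>
#include <time.h>

/* Generic per-op timing sketch (NOT ggml's built-in profiling). */
static double now_ms(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e3 + ts.tv_nsec * 1e-6;
}

#define N_OP_TYPES 128
static double op_time_ms[N_OP_TYPES];

/* In the real compute loop, wrap the dispatch roughly like this:
 *
 *     double t0 = now_ms();
 *     compute_one_node(node);              // placeholder for the real call
 *     op_time_ms[node->op] += now_ms() - t0;
 */

static void print_op_times(void) {
    for (int i = 0; i < N_OP_TYPES; ++i) {
        if (op_time_ms[i] > 0.0) {
            printf("op %3d: %10.2f ms\n", i, op_time_ms[i]);
        }
    }
}

/* Tiny standalone demo of the pattern, timing two dummy "ops". */
static void dummy_work(int n) {
    volatile double x = 0.0;
    for (int i = 0; i < n; ++i) x += i * 0.5;
}

int main(void) {
    for (int step = 0; step < 3; ++step) {
        double t0 = now_ms();
        dummy_work(1000000);                 /* pretend this is MUL_MAT */
        op_time_ms[0] += now_ms() - t0;

        t0 = now_ms();
        dummy_work(100000);                  /* pretend this is another op */
        op_time_ms[1] += now_ms() - t0;
    }
    print_op_times();
    return 0;
}
```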
@wonkyoc I don't think this is normal. There is about 50% free VRAM in this scenario, and there is not enough load on the GPU: the CPU calculations aren't fast enough to keep the GPU busy.
@FSSRepo I understand the repo is still young compared to PyTorch, and that the slow inference therefore comes from under-optimization. Yet the one thing I don't understand is that llama.cpp/whisper.cpp are quite comparable to (or even better than) PyTorch. This tells me ggml might not be the main cause; stable diffusion's distinctive inference style might be. Anyway, I will leave this issue open for a bit and close it sometime this week, since it won't be solved soon.
I found that this issue was actually created on my end, although diffusers is still better. For some reason, I used
Yep, a debug build is sloooooooow.
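
One hedged way to avoid this trap again: CMake defines `NDEBUG` for its Release-type configurations by default, so a tiny check like the one below can hint at whether the binary being benchmarked was actually built optimized (this is a heuristic based on the default build flags, not a substitute for checking the compiler invocation in the build log).

```c
#include <stdio.h>

/* Quick sanity check: NDEBUG is defined by CMake for Release-style builds
 * and left undefined for Debug builds. A heuristic, not a guarantee. */
int main(void) {
#ifdef NDEBUG
    printf("NDEBUG is defined: this looks like an optimized (Release-style) build\n");
#else
    printf("NDEBUG is NOT defined: this looks like a Debug build\n");
#endif
    return 0;
}
```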
But you say there is still a difference of ~5 seconds here vs. 1.43 seconds in diffusers? In that case this is a very interesting benchmark, and it would be good to keep this issue open to track the speed difference and give some visibility to possible improvements. Maybe just update the first post with accurate benchmarks based on a release build, or create a new issue with the accurate results.
I think part of the bottleneck is the C compiler (which is not as actively maintained as C++ compilers). I replaced all possible variable types with `register` in ggml.c. Also, `-Ofast` produces different results compared to `-O3`!
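
Regarding `-Ofast`: that flag implies `-ffast-math`, so differing numerical results compared to `-O3` are expected rather than a bug. Below is a small self-contained demonstration (not code from this repo): with `-O3` the compensated (Kahan) sum stays accurate, while `-Ofast` may legally reassociate the compensation away, changing the printed result between the two builds.

```c
#include <stdio.h>

/* Why -Ofast can change results: -ffast-math lets the compiler reassociate
 * floating-point operations, which defeats compensated (Kahan) summation
 * and can alter long accumulations such as the dot products inside MUL_MAT.
 * Build once with -O3 and once with -Ofast and compare the output. */
int main(void) {
    float sum = 0.0f, c = 0.0f;   /* Kahan-compensated accumulator */
    float plain = 0.0f;           /* naive accumulator             */
    for (int i = 0; i < 10000000; ++i) {
        float x = 1e-4f;
        /* Kahan step: under -ffast-math this may collapse to sum += x */
        float y = x - c;
        float t = sum + y;
        c = (t - sum) - y;
        sum = t;
        plain += x;
    }
    printf("kahan = %.6f, plain = %.6f\n", sum, plain);
    return 0;
}
```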
@JohnAlcatraz I just updated the first post and am reopening the issue.
What I have experienced is that the inference of the cpp implementation on CPU is far slower than the latest diffusers. In particular, the UNet sampling alone takes about 30 s, which is roughly 23x slower than diffusers.
Results (a single step with DPM++, 24 threads):

- stable-diffusion.cpp: 32.95 s
- diffusers: 1.43 s

I want to discuss this in detail. I saw the author's comment:
If this is true, does the slow inference stem from ggml? The CPU seems to be well utilized in matrix multiplication. I compared a single thread against 2/4/12/24 threads and saw that MUL_MAT scales, but MUL_MAT itself is inherently slow. This is quite surprising to me, because my expectation is that a C/C++ implementation should naturally be faster than Python.
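
To put the absolute numbers in perspective, a standalone single-threaded matmul micro-benchmark can give a rough GFLOP/s baseline for the same machine. The sketch below deliberately does not use ggml's API, and the 512-sized matrices are an arbitrary assumption; it only helps judge how far the MUL_MAT times seen in stable-diffusion.cpp are from what the hardware can do with even a naive loop.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Naive single-threaded C[M][N] = A[M][K] * B[K][N] benchmark (ikj order
 * for reasonable locality). Not ggml code; sizes are arbitrary. */
int main(void) {
    const int M = 512, K = 512, N = 512;
    float *A = malloc(sizeof(float) * M * K);
    float *B = malloc(sizeof(float) * K * N);
    float *C = calloc((size_t)M * N, sizeof(float));
    if (!A || !B || !C) return 1;
    for (int i = 0; i < M * K; ++i) A[i] = (float)(i % 7) * 0.25f;
    for (int i = 0; i < K * N; ++i) B[i] = (float)(i % 5) * 0.5f;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < M; ++i) {
        for (int k = 0; k < K; ++k) {
            const float a = A[i * K + k];
            for (int j = 0; j < N; ++j) {
                C[i * N + j] += a * B[k * N + j];
            }
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double gflops = 2.0 * M * K * N / sec / 1e9;
    printf("%dx%dx%d matmul: %.3f s, %.2f GFLOP/s (naive, single thread)\n",
           M, K, N, sec, gflops);
    free(A); free(B); free(C);
    return 0;
}
```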
My testing environment:
Update
I found that compiling without `-O3` leads to slow inference. The default build of stable-diffusion.cpp uses `-O3`, but somehow I excluded the flag when I tested. The updated result is around 5 s, which is still slower than diffusers.