
Crash on an endbr64 instruction. #24

Open
RnMss opened this issue Apr 14, 2023 · 8 comments
Labels
bug Something isn't working help wanted Extra attention is needed

Comments


RnMss commented Apr 14, 2023

My build crashes with "Illegal instruction" while running inference on a model.
I debugged it, and it seems to crash on an endbr64 instruction. I think my CPU doesn't support that instruction set.
Is there a build option to turn the instruction set off?

Version: Master, commit e84c446d9533dabef2d8d60735d5924db63362ff

Command to reproduce
python rwkv/chat_with_bot.py ../models/xxxxxxx.bin

It crashed with "Illegal Instruction"

I debugged the program:

> gdb python 
(gdb) handle SIGILL stop
(gdb) run rwkv/chat_with_bot.py ../models/xxxx.bin
...
[New Thread 0x7fff6fa49640 (LWP 738136)]
Loading 20B tokenizer
System info: AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | VSX = 0 | 
Loading RWKV model

Thread 1 "python" received signal SIGILL, Illegal instruction.
0x00007fffde693135 in ggml_init () from /*****/rwkv.cpp/librwkv.so
(gdb) disassemble
Dump of assembler code for function ggml_init:
   0x00007fffde692fd0 <+0>:	endbr64 
   0x00007fffde692fd4 <+4>:	push   %r15
   0x00007fffde692fd6 <+6>:	mov    $0x1,%eax
   0x00007fffde692fdb <+11>:	push   %r14
...
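The System info line above shows the library was built with AVX, AVX2, FMA, and F16C enabled. A quick sketch for checking whether the CPU actually advertises those features — the `cpu_flags` value below is made-up sample data standing in for the output of `grep -m1 '^flags' /proc/cpuinfo` on Linux:

```shell
# Hypothetical check: compare the features ggml was compiled with
# against the CPU's advertised flag list. cpu_flags is sample data
# here; on a real Linux machine, take it from /proc/cpuinfo instead.
cpu_flags="fpu sse sse2 ssse3 sse4_1 sse4_2 avx f16c"
missing=""
for feat in avx avx2 fma f16c; do
  case " $cpu_flags " in
    *" $feat "*) echo "$feat: supported" ;;
    *)           echo "$feat: MISSING"; missing="$missing $feat" ;;
  esac
done
# With the sample flags above, avx2 and fma come out MISSING.
```

Any feature the binary was compiled to use but the CPU lacks would raise SIGILL at the first such instruction.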
@saharNooby
Collaborator

Hi! Please try to build and run llama.cpp and see if it works.

If it crashes with a similar error too, report the problem in the llama.cpp repo. They would fix it more quickly, since their repo is more popular, and then I could port the fix here.

If it does not crash, we would need to compare the code of llama.cpp and rwkv.cpp and guess what could be causing the issue.


RnMss commented Apr 16, 2023

I tried llama.cpp, and it worked without a crash.
Tested on models: opt-1.3b and Chinese-Alpaca-LoRA-13B
llama.cpp version: master-53dbba7

@saharNooby
Collaborator

I took a look at the llama.cpp version of ggml. Unfortunately, their repo and mine have now diverged too much for any comparison to be meaningful. Sorry for asking you to test llama.cpp; I'll stop asking users to do that from now on.

As for the issue, I don't have any ideas about how to fix it.


RnMss commented Apr 17, 2023

I tried adding the compile flag -fcf-protection=none, which is said to disable CET instructions like endbr64, but it didn't help.

It doesn't make sense. I skimmed the code but didn't see anything close to that, and the disassembly looks real rather than like random data. I'm doomed.
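One hedged way to narrow this down, sketched without access to the failing machine: the crash address 0x00007fffde693135 is at offset +0x165 from the start of ggml_init (0x00007fffde692fd0), while the endbr64 in the dump sits at offset +0. A bare `disassemble` always prints from the top of the enclosing function, so the faulting instruction is further in, not necessarily the first line shown. Inspecting the program counter directly should reveal it:

```
(gdb) x/i $pc                      # print exactly the instruction that trapped
(gdb) disassemble $pc-16,$pc+16    # a little context around it
```

If the trapped instruction turns out to be an AVX2 or FMA op, that would fit the System info line earlier in the log showing those features compiled in.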

@saharNooby
Collaborator

@RnMss I've updated ggml to the latest version. Please try again; don't forget to update the git submodules (or, better, clone from scratch: git clone --recursive https://github.com/saharNooby/rwkv.cpp.git).


RnMss commented Apr 17, 2023

It still does not work on my CPU. I'll try on Windows later.

Model Tested: https://huggingface.co/BlinkDL/rwkv-4-raven/blob/main/RWKV-4-Raven-14B-v8-Eng87%25-Chn10%25-Jpn1%25-Other2%25-20230412-ctx4096.pth

@saharNooby saharNooby added bug Something isn't working help wanted Extra attention is needed labels Apr 26, 2023
@EricLeeaaaaa

Got the same problem in the docker image nvcr.io/nvidia/pytorch:23.05-py3, with tokenizers 0.13.3.

izzatzr commented Oct 9, 2023

Try recompiling the repo with the AVX instruction flags disabled in CMakeLists.txt, @RnMss. This step worked for me.
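The suggestion above could be sketched as follows; note that the RWKV_* option names are my assumption, based on similar ggml-based projects, so check the options actually declared in the repo's CMakeLists.txt for the exact spellings before relying on them:

```shell
# Hypothetical rebuild with AVX-family instructions disabled.
# Option names (RWKV_AVX, RWKV_AVX2) are assumptions -- verify them
# against the repo's CMakeLists.txt.
cmake -B build -DRWKV_AVX=OFF -DRWKV_AVX2=OFF .
cmake --build build --config Release
```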


4 participants