Segmentation fault (core dumped) when generating #1292
Comments
Google directed me to this issue. My issue may not be relevant to yours. What I noticed is that this started recently, and the only difference is a new kernel version matching yours, 6.5.0-1015-aws. It would be great if you could boot your instance with 1014 just to confirm whether it works there.
Thank you very much. I will try doing it your way.
Update:
Might be the same issue as #1319; it should be fixed in v0.2.59. I'll re-open if it's not.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Expected Behavior
Please provide a detailed written description of what you were trying to do, and what you expected llama-cpp-python to do.

I am building a chatbot project with the model mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf, using Flask for the back end and React for the front end. I expected the model to stream tokens as usual.
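For reference, the back end does roughly the following (a minimal sketch, not the exact project code; the endpoint name, request shape, model path, and load parameters are assumptions):

```python
from flask import Flask, Response, request
from llama_cpp import Llama

app = Flask(__name__)

# Load the GGUF model once at startup (path and parameters are assumptions).
llm = Llama(
    model_path="models/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf",
    n_ctx=4096,
    n_threads=16,
    n_gpu_layers=0,  # CPU-only, as described below
)

@app.route("/chat", methods=["POST"])
def chat():
    # Assumed request shape: {"messages": [{"role": "user", "content": "..."}]}
    messages = request.get_json()["messages"]

    def generate():
        # Stream tokens back to the React front end as they are produced.
        for chunk in llm.create_chat_completion(messages=messages, stream=True):
            delta = chunk["choices"][0]["delta"]
            if "content" in delta:
                yield delta["content"]

    return Response(generate(), mimetype="text/plain")
```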
Current Behavior
Please provide a detailed written description of what llama-cpp-python did, instead.

While I chat with the bot, the error sometimes occurs and the server goes down:
Environment and Context
Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except for under certain specific conditions.
$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 32
On-line CPU(s) list: 0-31
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 16
Socket(s): 1
Stepping: 1
CPU max MHz: 3000.0000
CPU min MHz: 1200.0000
BogoMIPS: 4600.03
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt
Virtualization features:
Hypervisor vendor: Xen
Virtualization type: full
Caches (sum of all):
L1d: 512 KiB (16 instances)
L1i: 512 KiB (16 instances)
L2: 4 MiB (16 instances)
L3: 45 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0-31
Vulnerabilities:
Gather data sampling: Not affected
Itlb multihit: KVM: Mitigation: VMX unsupported
L1tf: Mitigation; PTE Inversion
Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Meltdown: Mitigation; PTI
Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Retbleed: Not affected
Spec rstack overflow: Not affected
Spec store bypass: Vulnerable
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Srbds: Not affected
Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
$ uname -a
Linux ip-172-31-2-143 6.5.0-1015-aws #15~22.04.1-Ubuntu SMP Tue Feb 20 20:12:08 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Failure Information (for bugs)
Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.
I use llama-cpp-python to build a chatbot. My server is a p3.8xlarge with 245 GB of RAM and 4 T4 GPUs with 16 GB of VRAM each, but I load the model with llama-cpp-python on the CPU only (when I try to use both CPU and GPU, I hit a bug I can't work around).
I use the model mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf for inference. When I chat, the model generates normally, but after a few chat turns the server crashes with a segmentation fault.
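A rough sketch of how the model is loaded and queried turn after turn (CPU-only; the model path, context size, thread count, and test prompts here are guesses, not the project's real values) would look like this:

```python
from llama_cpp import Llama

# CPU-only load: n_gpu_layers=0 keeps every layer on the CPU.
llm = Llama(
    model_path="models/mixtral-8x7b-instruct-v0.1.Q5_K_M.gguf",
    n_ctx=4096,
    n_threads=16,
    n_gpu_layers=0,
)

history = []
for turn in range(10):  # the crash reportedly appears after a few chat turns
    history.append({"role": "user", "content": f"Test message {turn}"})
    reply = ""
    for chunk in llm.create_chat_completion(messages=history, stream=True):
        delta = chunk["choices"][0]["delta"]
        reply += delta.get("content", "")
    history.append({"role": "assistant", "content": reply})
    print(f"turn {turn}: {len(reply)} chars")
```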
Steps to Reproduce
Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.
Failure Logs