Scale buf_size linearly with n_ctx #213
Conversation
This appears to solve ggml-org#153, where the error `ggml_new_tensor_impl: not enough space in the context's memory pool` is thrown in interactive mode. At least the out-of-memory error comes from the `ctx0` used here, although I am not familiar enough with the code base to tell whether this is indeed the cause.
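For reference, the idea amounts to something like this (an illustrative sketch only, not the actual diff; it treats the fixed 512 MB scratch buffer and the default 512-token context as the baseline):

```cpp
#include <cstddef>

// Illustrative sketch of the proposed scaling (not the actual diff): grow the
// eval scratch buffer linearly with n_ctx, treating the old fixed 512 MB as
// the amount needed for the default 512-token context.
static size_t scaled_buf_size(int n_ctx) {
    const size_t base = 512u*1024*1024;   // previous fixed buffer size
    return base / 512 * (size_t) n_ctx;   // ~1 MB of scratch per context token
}
```

With `n_ctx = 2048` that works out to 2 GB, which matches the "1.5 GB more" figure mentioned below.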
This definitely increases the memory in the right place.
This does not happen outside of interactive mode, so I don't think this is the right fix. This would mean 1.5 GB more memory usage for a 2048 context size, so it is significant.
I looked at it more and I don't think this is the way to solve it. We should instead "determine the required inference memory per token": https://github.com/ggerganov/llama.cpp/blob/master/main.cpp#L891
Edit: I just saw that it is set, but only after the first run (https://github.com/ggerganov/llama.cpp/blob/master/main.cpp#L749), which means the reallocation logic has an error. @slaren the 7B model does not run into this issue.
Edit again: after testing the 30B model again, I realized that the bug only happens in an interactive session.
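For reference, the logic in question looks roughly like this (paraphrased from the linked `main.cpp` lines, so names and constants are approximate):

```cpp
// Fixed scratch buffer for the eval graph (paraphrased, approximate):
static size_t buf_size = 512u*1024*1024;
static void * buf      = malloc(buf_size);

// The reallocation only kicks in once mem_per_token is known, i.e. never
// before the first eval, and it assumes usage scales only with the batch size N.
if (mem_per_token > 0 && mem_per_token*N > buf_size) {
    buf_size = 1.1*(mem_per_token*N);   // +10% headroom for ggml object overhead
    buf      = realloc(buf, buf_size);
}

// ... build and evaluate the graph in ctx0 ...

// mem_per_token is only measured here, after the first run has already
// completed with the initial buf_size.
if (mem_per_token == 0) {
    mem_per_token = ggml_used_mem(ctx0)/N;
}
```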
I've been hitting this with the 65B model in one-shot mode with a 2048-token context, so it's not just interactive sessions that are affected. It is fairly easy to reproduce; smaller batch sizes take longer before aborting, but still hit the same issue. I suspect that the actual required buffer size is a linear function of the context length, but with a nonzero constant term. Maybe add debug code to print out the actual high watermark at the end?
I forgot to mention that I ran 30B with ctx2048 :)
Increasing batch size also makes llama.cpp run out of memory, so any solution that only considers the context size and not the batch size is likely wrong.
Ah. That explains why I was seeing larger batch sizes OOM faster.
Thinking more about this, does it really matter what the initial value of `buf_size` is? The "proper" solution would be to painstakingly analyze the code so we can accurately predict how much memory is needed in the context, but that is not going to be easy and it will break any time the code changes. Does it really matter anyway? Any memory reserved for the context won't be committed to physical memory until the pages are touched. Alternatively, we could catch the out-of-memory errors from ggml and realloc the buffer as needed.
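A rough sketch of that last option (my assumption of how it could look, not existing code; it presumes a hypothetical `build_and_eval()` that reports failure instead of asserting, which would need a small change in ggml itself):

```cpp
#include <cstdint>
#include <vector>

#include "ggml.h"

// Hypothetical helper (assumption): builds and runs the graph in ctx0 and
// returns false if the context ran out of pool memory.
bool build_and_eval(struct ggml_context * ctx0);

static std::vector<uint8_t> buf(512u*1024*1024);

bool eval_with_retry() {
    for (int attempt = 0; attempt < 4; ++attempt) {
        struct ggml_init_params params = {
            /*.mem_size   =*/ buf.size(),
            /*.mem_buffer =*/ buf.data(),
        };
        struct ggml_context * ctx0 = ggml_init(params);
        const bool ok = build_and_eval(ctx0);
        ggml_free(ctx0);
        if (ok) {
            return true;
        }
        // out of pool memory: grow the buffer by 50% and try again
        buf.resize(buf.size() + buf.size()/2);
    }
    return false;
}
```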
@slaren earlier I wrote:
> This would mean 1.5 GB more memory usage for a 2048 context size, so it is significant.
Throwing out some ideas about the actual reason behind the bug. I think it's the classic integer division gotcha: if the batch size `N` > 1, the fractional part is lost when `mem_per_token` is computed. Just after those lines I added a debug print (see the sketch below) and tracked the calls after the first one that sets the value. As expected, the actual memory usage differs from the prediction and is sometimes bigger. That's not good.
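A debug print along these lines is enough to see the mismatch (a reconstruction of the idea, not the exact snippet; `ggml_used_mem()`, `mem_per_token`, and `N` are the names already used in `main.cpp`):

```cpp
// Compare the actual context usage after this eval with what the
// mem_per_token estimate from the first run predicts for N tokens.
fprintf(stderr, "used = %zu bytes, predicted = %zu bytes (mem_per_token = %zu, N = %d)\n",
        ggml_used_mem(ctx0), mem_per_token*N, mem_per_token, N);
```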
Looking further, the usage also slowly creeps up as the prompt is being read (batch size = 4).

In hindsight that makes sense: the attention mechanism performs more operations as the context grows. Today's code makes the wrong assumption that memory usage per token stays static, while it clearly grows by 65536 in each iteration. I think the total memory usage at full context can be inferred from measurements taken over multiple evals.
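A minimal sketch of that extrapolation (my own, not code from this PR): measure `ggml_used_mem()` after evals at two different context positions and extend the observed growth linearly to the full context.

```cpp
#include <cstddef>

// Given the ggml context usage measured after evals at two different numbers
// of past tokens, estimate how much memory will be needed at the full n_ctx.
static size_t estimate_full_ctx_mem(size_t used_a, int n_past_a,
                                    size_t used_b, int n_past_b,
                                    int n_ctx) {
    // per-token growth observed between the two measurements
    const size_t per_token = (used_b - used_a) / (size_t)(n_past_b - n_past_a);
    // extrapolate from the later measurement up to the full context size
    return used_b + per_token * (size_t)(n_ctx - n_past_b);
}
```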
I haven't looked; how does LLaMA handle prompts that are smaller than context size? E.g. a 1024-token prompt with 2048-token context size. Does it just truncate to an effective context size of 1024? Or does it pad somehow? |
To add another observation, the amount of memory increase per iteration seems to scale quadratically with the batch size. For the 65B 4-bit model, the per-iteration increase with a batch size of 8 was roughly four times what it was with a batch size of 4. That could explain the previous observation that larger batch sizes OOM faster.
Right, batch processing at least must construct attention score matrices whose size grows with the square of the batch size. This is also a good avenue for optimization: 'scientific' code like this is only a good fit for GPUs, where compute units share an instruction decoder and cannot diverge. E.g. if 9 cores do 1 iteration of a loop and 1 core does 10 iterations, they all must do 10 iterations because they execute the same instructions in lockstep. Assuming the same model on a CPU is a huge waste of time and memory, as we can see.
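A back-of-the-envelope estimate shows both effects (my own sketch; it assumes the usual `n_head x N x (n_past + N)` shape of the attention score tensor and ignores every other intermediate):

```cpp
#include <cstddef>

// Rough per-layer size of one attention score tensor (e.g. KQ) for a batch of
// N new tokens with n_past tokens already in the context: linear in n_past,
// quadratic in N during prompt/batch processing.
static size_t kq_tensor_bytes(int n_head, int n_past, int N) {
    return sizeof(float) * (size_t) n_head * (size_t) N * (size_t)(n_past + N);
}
```

Doubling N roughly quadruples the N*N term while the n_past contribution only doubles, which lines up with the quadratic batch-size scaling observed above.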
ask chatGPT for a solution lol |
please try #438 and see if it fixes it. |