Refactor/online repacking #10446
base: master
Conversation
- remove from "file" tensor type - allow only with dynamic repack
ggml/src/ggml-cpu/ggml-cpu-hbm.cpp
I'm not sure this is good; I cannot test it.
It may also not work/build on the master branch.
It could probably be removed; the normal CPU buffer type calls ggml_aligned_malloc, which already uses HBM. So at the moment this buffer type serves no purpose.
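To make that point concrete, here is a minimal sketch of the idea (not the actual ggml code; the helper name is made up, while GGML_USE_CPU_HBM is the existing build option and hbw_posix_memalign comes from memkind's hbwmalloc.h): when HBM support is compiled in, the default CPU allocation path already lands in high-bandwidth memory, so a dedicated HBM buffer type adds nothing.

#include <stddef.h>
#include <stdlib.h>
#ifdef GGML_USE_CPU_HBM
#include <hbwmalloc.h>   // memkind's high-bandwidth memory allocator
#endif

// Sketch: aligned allocation that uses HBM when GGML_USE_CPU_HBM is defined,
// otherwise regular system memory.
static void * cpu_buffer_alloc_sketch(size_t size) {
    void * ptr = NULL;
#ifdef GGML_USE_CPU_HBM
    if (hbw_posix_memalign(&ptr, 64, size) != 0) {
        return NULL;
    }
#else
    if (posix_memalign(&ptr, 64, size) != 0) {
        return NULL;
    }
#endif
    return ptr;
}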
-            int64_t const matmul_num_cols = type_traits_cpu[type].ncols;
-            ggml_gemv_t const gemv = type_traits_cpu[type].gemv;
+            //int64_t const matmul_num_cols = type_traits_cpu[type].ncols;
+            //ggml_gemv_t const gemv = type_traits_cpu[type].gemv;
It looks to me like this was not written for dynamic repacking, but only for the "native" Q4_0_N_M packing.
I left it commented out; it needs some work to be usable with dynamic repacking.
// move to ggml-cpu-traits...
static const struct ggml_cpu_tensor_traits * ggml_cpu_get_tensor_traits(
        const struct ggml_tensor * src0)
{
Is it possible to have a src0->extra here that is not part of the CPU backend?
i.e., can ggml_compute_zzzz be called with a weight that belongs to another backend/device?
I don't see the point. As I already told you, the way this is intended to be handled is with ggml_backend_sched.
To be clear, I am not opposed to making changes to the design if you can come up with a better way to do things, but I am not seeing an argument in favor of this. IMO there are clear advantages to keeping backends independent of each other, and more generally, in reducing coupling between the different components to a minimum.
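For context, a minimal sketch of what ggml_backend_sched does (assuming the scheduler API roughly as it looked at the time of this PR; the exact signature of ggml_backend_sched_new may differ, and the function name below is made up): the scheduler splits a graph across backends and inserts the necessary tensor copies, so no backend ever has to interpret another backend's extras.

#include "ggml.h"
#include "ggml-backend.h"

// Sketch only: run a graph across a GPU backend and the CPU backend; the
// scheduler decides where each node runs and copies tensors between backends
// as needed.
static void compute_with_sched_sketch(struct ggml_cgraph * graph, ggml_backend_t gpu_backend) {
    ggml_backend_t cpu_backend = ggml_backend_cpu_init();
    ggml_backend_t backends[2] = { gpu_backend, cpu_backend };

    // NULL buffer types -> use each backend's default buffer type (assumption)
    ggml_backend_sched_t sched =
        ggml_backend_sched_new(backends, NULL, 2, GGML_DEFAULT_GRAPH_SIZE, false);

    ggml_backend_sched_graph_compute(sched, graph);

    ggml_backend_sched_free(sched);
    ggml_backend_free(cpu_backend);
}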
I think my question was not clear.
In ggml_compute_forward_mul_mat we test that the buffer is "aarch64" before using the extra.
Can I remove this test and be sure that, if an extra exists at this point, it was set by the CPU backend?
Is this test there to differentiate between the (future) different CPU extra buffers?
If so, and if we can have the same struct (or base class...) for all CPU buffer types, then we can remove this test.
My question was not about making this possible, but about whether we can simplify this function.
There are some reasons why we should not do this:
- The RPC backend does not support backends that use extras. This is not completely unavoidable, but currently it is done because the cost of calling init_tensor on every tensor is too high, since it requires a round trip between the server and the client, so the RPC backend skips these calls.
- The CPU backend should be able to use buffers of iGPU backends that do not modify the weights, to avoid extra copies. For example, the CPU backend can use Metal backend buffers since it uses host buffers. Making it require an extra would break that.
- And the same applies the other way around. The BLAS backend can use CPU buffers without copies because it can assume that the tensors stored in the default CPU buffer are in standard ggml layout. If we just use one buffer type and use the extras, this would no longer be possible.
So I think it is better to keep tensor conversions in different buffer types.
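To make the "use a buffer in place" constraint concrete, here is an illustrative sketch (not code from the PR; the helper name is made up, while ggml_backend_buft_is_host and ggml_backend_buffer_get_type are existing ggml-backend functions): a backend can read another backend's tensor data directly only if the buffer is a host buffer and the data is still in standard ggml layout, i.e. it has not been repacked.

#include "ggml-backend.h"

// Sketch: a backend such as BLAS can use a tensor's data without a copy only
// if the data lives in host memory and has not been converted/repacked
// (which an extra would indicate).
static bool can_use_tensor_in_place(const struct ggml_tensor * t) {
    return t->buffer != NULL
        && ggml_backend_buft_is_host(ggml_backend_buffer_get_type(t->buffer))
        && t->extra == NULL;
}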
> So I think it is better to keep tensor conversions in different buffer types.

Yes, I don't want to remove the fact that the tensor is in the "aarch64" buffer type; I just want to know if I can simplify this function like this:
static const struct ggml_cpu_tensor_traits * ggml_cpu_get_tensor_traits(const struct ggml_tensor * src0) {
    if (src0->extra != NULL) {
        return (struct ggml_cpu_tensor_traits *) src0->extra;
    }
    return NULL;
}
From what I see, it is possible here, but I may have missed something.
No, you should check the buffer type before using the extra, because other backends that use host memory may want to set an extra. The CPU backend can use all buffer types that return true to is_host, and the only requirements for a buffer to be considered a "host buffer" are that tensor->data points to an address in system memory, and that the tensor data is stored in standard ggml layout.
But you can rename the aarch64 buffer type to some generic name like "ggml_cpu_repack_buffer_type" and reuse it for the AMX or other repackings.
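A sketch of what that buffer-type check could look like (illustrative only; ggml_backend_buft_is_cpu_repack is a placeholder name for the predicate, not an actual function, and ggml_backend_buffer_get_type is the existing ggml-backend accessor):

// Sketch: only treat src0->extra as CPU tensor traits when the tensor lives in
// the CPU repack buffer type; extras set by other host-memory backends are ignored.
static const struct ggml_cpu_tensor_traits * ggml_cpu_get_tensor_traits(const struct ggml_tensor * src0) {
    if (src0->buffer != NULL
            && ggml_backend_buft_is_cpu_repack(ggml_backend_buffer_get_type(src0->buffer))
            && src0->extra != NULL) {
        return (const struct ggml_cpu_tensor_traits *) src0->extra;
    }
    return NULL;
}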
> The CPU backend should be able to use buffers of iGPU backends that do not modify the weights to avoid extra copies. For example, the CPU backend can use Metal backend buffers since it uses host buffers. Making it require an extra would break that.

Maybe that's what I missed. Is it possible that a weight is initialized by the Metal backend, which adds an extra for its own needs, and that the CPU backend then receives it?
But if that can happen, we would have two backends using the same weight on the same buffer_type, and both may want to register their own "extra"...
We may have to be more restrictive, like:
static const struct ggml_cpu_tensor_traits * ggml_cpu_get_tensor_traits(const struct ggml_tensor * src0) {
    if (src0->buffer
            && src0->buffer->usage == GGML_BACKEND_BUFFER_USAGE_WEIGHTS
            && src0->extra != NULL) {
        return (struct ggml_cpu_tensor_traits *) src0->extra;
    }
    return NULL;
}
> No, you should check the buffer type before using the extra, because other backends that use host memory may want to set an extra. The CPU backend can use all buffer types that return true to is_host, and the only requirements for a buffer to be considered a "host buffer" are that tensor->data points to an address in system memory, and the tensor data is stored in standard ggml layout. But you can rename the aarch64 buffer type to some generic name like "ggml_cpu_repack_buffer_type" and reuse it for the AMX or other repackings.
👍 OK, I see. (I did not read it before my last reply.)
- hbm
- "aarch64"
Overall looks good. I am not sure about removing support for current Q4_0_x_x models, but I guess if we are going to do it, it is better to do it sooner rather than later.
Yes, it will be the main/difficult choice:
@slaren I still need your expertise so as not to make too many mistakes. I was looking for where

llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c, Line 7497 in 9336db4

For me, it looks to be in this function:

llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c, Lines 13220 to 13223 in 9336db4

Am I right? If yes, it looks to me like the size is not calculated correctly for llamafile and Q4_0 repacking:

llama.cpp/ggml/src/ggml-cpu/ggml-cpu.c, Lines 13277 to 13284 in 9336db4

Note: I'm trying to make it more generic, to make it easier to reintegrate the AMX backend, so maybe it's not useful to fix this for now.
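For reference, the work-size estimate those line references point at works roughly as follows (an illustrative sketch, not verbatim code from the file; vec_dot_type here stands for the trait taken from src0's type): src1 is converted to src0's vec_dot_type inside the work buffer, so the estimate is one row-size of that type per src1 element. If a repacked src0 expects src1 in a different interleaved layout that needs more bytes, this estimate would undershoot, which seems to be the concern.

#include "ggml.h"

// Sketch of the MUL_MAT work-buffer estimate: if src1 is not already in
// src0's vec_dot_type, reserve enough work memory to hold src1 converted
// to that type.
static size_t mul_mat_work_size_sketch(enum ggml_type vec_dot_type, const struct ggml_tensor * src1) {
    if (src1->type == vec_dot_type) {
        return 0; // no conversion buffer needed
    }
    return ggml_row_size(vec_dot_type, ggml_nelements(src1));
}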
It's OK if we over-allocate a bit of memory for
Isn't
Yes, that is the case for Q4_0_M_N, so it is not critical for now, even if internally it is more of a Q8_0_N:
But it may not work with other/future cases.
If we remove the old API and make the CPU backend accessible only through ggml-backend, then there will be a context that can be used to store the work buffer. Then the work buffer could simply be a
So you confirm that, for now, this is where the size is calculated.
Yes, the size is calculated in the function
This is WIP, not for merge (as is).
Goal: consolidation of the CPU backend for reintegration of the AMX backend.
I still have some questions, so it is only here for comments/ideas.