Add SwiGLU support - llama3 feature branch #755
base: llama3
Conversation
  floatX* l_residual3 = acts.residual3 + l * B * T * C;
  floatX* scratch = (floatX*)acts.output; // used for non-cudnn attention, fcproj, attproj, etc.

  // now do the forward pass
  #ifdef ENABLE_CUDNN
  float* l_att = (float*)acts.att + l * B * NH * T; // cuDNN needs a smaller FP32 tensor
- matmul_forward_cublaslt(l_qkvr, l_ln1, l_qkvw, l_qkvb, B, T, C, 3*C, main_stream);
+ matmul_forward_cublaslt(l_qkvr, l_ln1, l_qkvw, l_qkvb, B, T, C, 3*C, main_stream, model->act_func);
Would it make sense to make the act_func parameter of matmul_forward_cublaslt() default to NULL, and not pass it here for all the non-GELU/SwiGLU matmul calls, since it's not actually used?
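A minimal sketch of that suggestion, assuming the call sites are compiled as C++ (C itself has no default arguments); the parameter list here is abbreviated and illustrative, not the function's actual signature:

// hypothetical declaration sketch: act_func defaults to NULL so the many
// non-GELU/SwiGLU call sites can stay unchanged
void matmul_forward_cublaslt(floatX* out, floatX* inp, floatX* weight, floatX* bias,
                             int B, int T, int C, int OC,
                             cudaStream_t stream, const char* act_func = NULL);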
  // recompute >= 1 means we recompute gelu. in this case,
  // l_fch_gelu is just a buffer, so re-compute the gelu from l_fch here
- gelu_forward(l_fch_gelu, l_fch_pre_gelu, B*T*4*C, main_stream);
+ if (strcmp(model->act_func, "gelu") == 0) {
Would it make sense to do a single activation_forward() call, and rename gelu.cuh to activation.cuh, since it now also does SwiGLU?
Assuming we want to keep this flexible at all; but experimenting with different activation functions feels like an easy thing that'd be worth doing a bit more of in the future (e.g. the loss impact and GPU throughput of ReLU and ReLU squared).
Right now we'd just pass all the arguments of swiglu_forward(), since gelu_forward()'s arguments are a subset. That won't scale if we add more complicated activation functions, but I think that's probably OK/still useful? See the sketch below.
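One possible shape for that dispatcher, as a rough sketch (the signature and names are illustrative, assuming gelu_forward()'s arguments are a subset of swiglu_forward()'s):

// hypothetical activation_forward() sketch, not the PR's actual code
void activation_forward(const char* act_func, floatX* out,
                        floatX* pre_act, floatX* gate,
                        int N, cudaStream_t stream) {
    if (strcmp(act_func, "swiglu") == 0) {
        swiglu_forward(out, pre_act, gate, N, stream);
    } else {
        gelu_forward(out, pre_act, N, stream); // gate is unused for GELU
    }
}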
@@ -1004,11 +1059,12 @@ float gpt2_calculate_grad_norm(GPT2 *model, MultiGpuConfig* multi_gpu_config) {
  // grads_memory only contains the averaged gradients at the local shards,
  // so we only calculate the grad norm at the grads_memory belonging to the local shards
  for (int i = 0; i < NUM_PARAMETER_TENSORS; i++) {
+     if (!gated_ffn && (i == 12 || i == 13)) { continue; } // skip the gated ffn weights
Is this just for the case when the activation function is not SwiGLU, so there are no gated FFN weights? If so, the comment is a bit misleading: it initially made me think we want to skip those weights for the grad norm calculation entirely, even with SwiGLU.
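If that reading is right, something more explicit might help, e.g. (just a suggested rewording):

if (!gated_ffn && (i == 12 || i == 13)) { continue; } // tensors 12/13 only exist for gated FFNs; skip them when unused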
Implemented SwiGLU, the Swish-gated GLU activation function from the "GLU Variants Improve Transformer" paper.
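For reference, the core elementwise op is silu(fc1) * fc2. A minimal CPU sketch of the idea (illustrative only, not the PR's CUDA kernel; buffer names are made up):

#include <math.h>

// out[i] = silu(a[i]) * b[i], where a/b are the two FC branch outputs
// and silu(x) = x * sigmoid(x) (Swish with beta = 1)
void swiglu_forward_cpu(float* out, const float* a, const float* b, int N) {
    for (int i = 0; i < N; i++) {
        float s = 1.0f / (1.0f + expf(-a[i]));
        out[i] = (a[i] * s) * b[i];
    }
}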
Note: there is an increase in memory footprint as a consequence of adding an additional FC layer (params/grads/optimizer states, plus 2 new activation buffers, (L+1)*B*T*4*C in total for the activations). I'm sure we can optimize this; it can be done in a subsequent PR as well.
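As a rough illustration of scale (the batch shape here is assumed for the example, not from the PR): with B=32, T=1024 and GPT-2 124M dims (L=12, C=768), that is 13*32*1024*4*768 ≈ 1.3e9 elements, i.e. on the order of 2.6 GB in bf16.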
Tests:
I ran an A/B experiment: trained a 124M GPT-2 on 10B tokens (FineWeb subset) with:
a) GELU
b) SwiGLU (note: the SwiGLU model currently has 152M params)
Results:
Conclusion: SwiGLU converges to a lower loss, so I'm confident the implementation is correct, but it's unclear whether the improvement comes from SwiGLU itself or simply from having more params (152M vs 124M).
After normalizing for the number of params (multiplying the inner FFN module dimension by 2/3) I get:
Conclusion: SwiGLU does start converging faster than GELU, but they end up at pretty much the same loss. Judging by the slope of the curves toward the end, the trend actually seems to favor GELU.
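For context on why 2/3 is the right factor (standard parameter counting, not from the PR): a GELU FFN has two weight matrices of d*4d each, so 8d^2 params, while a SwiGLU FFN with inner dimension h has three matrices (two in, one out), so 3dh params. Setting 3dh = 8d^2 gives h = (8/3)d = (2/3)*(4d).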
Given that SwiGLU complicates the code, it's unclear to me that it offers a clear advantage over GELU/ReLU/etc. In general I doubt any of these activation functions is fundamentally more powerful than ReLU; you mostly just want more params / compute. :)
Next steps (can be done in follow-up PRs):
Appendix:
In case it helps, here is a diagram I drew to make it easier for me to implement the backward pass:
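The same math in code form, as a minimal CPU sketch of the SwiGLU backward (illustrative only; names are made up, not the PR's kernel):

#include <math.h>

// forward was: out = silu(a) * b, with silu(a) = a * sigmoid(a)
// backward:    dL/db = dout * silu(a)
//              dL/da = dout * b * silu'(a),
//              where silu'(a) = s * (1 + a * (1 - s)), s = sigmoid(a)
void swiglu_backward_cpu(float* da, float* db, const float* dout,
                         const float* a, const float* b, int N) {
    for (int i = 0; i < N; i++) {
        float s = 1.0f / (1.0f + expf(-a[i]));
        db[i] = dout[i] * (a[i] * s);
        da[i] = dout[i] * b[i] * (s * (1.0f + a[i] * (1.0f - s)));
    }
}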