
[branchformer] simplified branchformer #2482

Merged: 6 commits merged into main on Apr 17, 2024

Conversation

@Mddct (Collaborator) commented Apr 15, 2024

  • sdpa
  • mqa mga
  • gradient checkpoint
  • refactor
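
A minimal sketch of what the sdpa and mqa items above refer to, assuming a PyTorch 2.x environment; the helper name `mqa_sdpa`, the shapes, and the head counts are illustrative placeholders, not wenet's actual API:

```python
# Sketch only: fused scaled_dot_product_attention (sdpa) plus multi-query-style
# attention, where a few shared key/value heads are repeated to match the
# query heads before the fused kernel is called.
import torch
import torch.nn.functional as F


def mqa_sdpa(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """q: (B, H_q, T, d); k, v: (B, H_kv, T, d) with H_kv dividing H_q."""
    n_rep = q.size(1) // k.size(1)
    if n_rep > 1:
        # give every query head a matching key/value head
        k = k.repeat_interleave(n_rep, dim=1)
        v = v.repeat_interleave(n_rep, dim=1)
    # fused kernel replacing the manual softmax(QK^T / sqrt(d)) V path
    return F.scaled_dot_product_attention(q, k, v, dropout_p=0.0)


if __name__ == "__main__":
    q = torch.randn(2, 8, 16, 64)   # 8 query heads
    k = torch.randn(2, 2, 16, 64)   # 2 shared key/value heads
    v = torch.randn(2, 2, 16, 64)
    print(mqa_sdpa(q, k, v).shape)  # torch.Size([2, 8, 16, 64])
```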

@Mddct changed the title from "[transformer] simplified branchformer" to "[branchformer] simplified branchformer" on Apr 15, 2024
@Mddct (Collaborator, Author) commented Apr 15, 2024

Hold on, this still needs verification.

@Mddct (Collaborator, Author) commented Apr 17, 2024

Gradient checkpointing seems to conflict with layer drop (it raises an error under DDP).

@Mddct (Collaborator, Author) commented Apr 17, 2024

Re-wrapped the layer dropout logic and added the corresponding caveats (a rough sketch of the idea follows the list below):

Limitations:
1. works with DDP when the layer's gradient checkpointing is disabled
2. does not work with DDP when the layer's gradient checkpointing is enabled
3. works with FSDP
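
A rough sketch, not wenet's implementation, of the kind of wrapping described above; the helper `forward_layers` and its arguments are made-up names for illustration. The DDP caveats follow from the usual interaction: a randomly skipped layer leaves its parameters without gradients for that step, and combining that with checkpointed layers is where DDP's reducer errors out, whereas FSDP (and DeepSpeed, checked below) shard and reduce differently.

```python
# Rough sketch of layer dropout (stochastic depth) combined with gradient
# checkpointing; names and arguments here are illustrative, not wenet's API.
import torch
from torch.utils.checkpoint import checkpoint


def forward_layers(layers, x, layer_drop_p=0.1, use_ckpt=False, training=True):
    for layer in layers:
        # skip the whole layer with probability layer_drop_p during training
        if training and layer_drop_p > 0 and torch.rand(()) < layer_drop_p:
            continue
        if use_ckpt and training:
            # recompute this layer's activations in backward to save memory;
            # the non-reentrant mode tends to play better with DDP/FSDP
            x = checkpoint(layer, x, use_reentrant=False)
        else:
            x = layer(x)
    return x
```

The `use_reentrant=False` mode shown here matches what the later commit "[transformer] set use_reentrant=False for gradient ckpt (wenet-e2e#2491)" in the log below switches to.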

@Mddct (Collaborator, Author) commented Apr 17, 2024

[Screenshot 2024-04-17 11:07:16]

@xingchensong (Member) commented Apr 17, 2024
> Re-wrapped the layer dropout logic and added the corresponding caveats:
> Limitations:
> 1. works with DDP when the layer's gradient checkpointing is disabled
> 2. does not work with DDP when the layer's gradient checkpointing is enabled
> 3. works with FSDP

Try DeepSpeed and see whether it works.

@Mddct (Collaborator, Author) commented Apr 17, 2024

> Re-wrapped the layer dropout logic and added the corresponding caveats:
> Limitations:
> 1. works with DDP when the layer's gradient checkpointing is disabled
> 2. does not work with DDP when the layer's gradient checkpointing is enabled
> 3. works with FSDP
>
> Try DeepSpeed and see whether it works.

ok

@Mddct (Collaborator, Author) commented Apr 17, 2024

deepspeed stage 1 works:
[Screenshot 2024-04-17 11:21:39]

stage 2 works:
[Screenshot 2024-04-17 11:25:59]

stage 3 works:
[Screenshot 2024-04-17 11:33:39]
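
For reference, a minimal sketch of what the three runs above exercise, assuming a standard DeepSpeed setup launched with the deepspeed launcher: the ZeRO stage is a single switch in the config, and the rest of the training loop stays the same. The config keys are standard DeepSpeed options; the model, batch sizes, and wiring are placeholders, not wenet's train script.

```python
# Sketch only: swap zero_optimization.stage between 1, 2 and 3 to reproduce
# the three checks above. Run under the launcher, e.g.
#   deepspeed --num_gpus=2 this_script.py
import deepspeed
import torch

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 1,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "zero_optimization": {"stage": 2},  # stages 1, 2 and 3 were each verified
}

model = torch.nn.Linear(80, 4)  # stand-in for the (e-)branchformer encoder
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```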

@Mddct mentioned this pull request on Apr 17, 2024
@Mddct requested a review from xingchensong on Apr 17, 2024 04:00
@Mddct merged commit 2b67e6c into main on Apr 17, 2024 (6 checks passed)
@Mddct deleted the Mddct-simple-branchformer branch on Apr 17, 2024 05:10
Zth9730 pushed a commit to Zth9730/wenet that referenced this pull request Aug 7, 2024
add causal model

fix typo

rm ckpt

add topk topp sampler

fix position

[train_engine] support fsdp (wenet-e2e#2412)

* [train_engine] support fsdp

* [train_engine] support fsdp

* unify scaler and amp

* fp32&&fp16 works in fsdp env

* fix fsdp in cv auto cast

* try to fix wenet.join fsdp

* implementing zero1 under fsdp is almost equivalent to deepspeed's zero1

* fix clip_and_grad_

* fix train summary

* all wenet xxxformer works (-paraformer -transducer)

* try to fix nan

* add barrier for cv

* add destroy group for end of all train

* refactor wrap methods and ckpt works

* fix ckpt

* fix cv in dtype != float32

* fix ckpt in model mode

* fix bf16 amp

* refactor scaler and autocast, fix fp32 fp16 bf16 for fsdp

* fix fp32 nullcontext to nullcontext()

* modify after review

* fix lint

* fix lint

LoRA support (wenet-e2e#2049)

* support lora for v3.0.1

* format code and update lora attention && encoder

* fix bug when lora_list is None

---------

Co-authored-by: Xingchen Song(宋星辰) <[email protected]>

[env] update python version and deepspeed version (wenet-e2e#2462)

* [env] update python version and deepspeed version

* [env] fix lint

fix rope pos embedding (wenet-e2e#2463)

* fix rope pos embedding

* fix dropout

* fix comment

[transformer] add multi warmup and learning rate for different modules (wenet-e2e#2449)

* [transformer] add multi warmup and learning rate for different modules

* fix typo

* it works in warmuplr

* fix lr in tensorboard in step mode

* fix cv log

* cv works

* refactor cv log

* add helper lrs_to_string

* fix lrstr

* fix ddp multiple lr

* fix initial step

* revert to -1

* fix sub params dup

* fix step

* fix step

* fix log

* add assert for scheduler

* add comment for log

---------

Co-authored-by: Xingchen Song(宋星辰) <[email protected]>

add generate

add todo

support sft & pretrain training forward

gemma conversion works

support init causal model

[whisper] limit language to Chinese (wenet-e2e#2470)

[train] convert tensor to scalar (wenet-e2e#2471)

[workflow] upgrade python version to 3.10 (wenet-e2e#2472)

* [workflow] upgrade python version to 3.10

* [workflow] try to pass

refactor cache behaviour in training mode (reduce compute cost and memory) (wenet-e2e#2473)

all gemma model works

fix ut

fix ut (wenet-e2e#2477)

* fix ut

* fix py version

[transformer] Make MoE runnable (wenet-e2e#2474)

[transformer] fix mqa (wenet-e2e#2478)

enable mmap in torch.load (wenet-e2e#2479)

[example] Add deepspeed configs of different stages for illustration (wenet-e2e#2485)

[example] Fix prefetch and step_save (wenet-e2e#2486)

[ctl] simplified ctl (wenet-e2e#2483)

* [ctl] simplified ctl

* [ctl] unify

[branchformer] simplified branchformer (wenet-e2e#2482)

* [transformer] simplified branchformer

* fix yaml

* support mqa gradient ckpt sdpa

* fix gradient checkpoint

* add deepspeed comment in layer dropout

* fix comment

[e_branchformer] simplified e_branchformer (wenet-e2e#2484)

* [e_branchformer] simplified ctl

* try to fix ut

* try to fix ut

* fix activation

* fix att args

* e-branchformer works

[transformer] refactor cache (wenet-e2e#2481)

* [transformer] refactor cache

* fix ut

* unify cache type in branchformer and ebranchformer

fix cache

fix gradient ckpt in branchformer/ebranchformer (wenet-e2e#2488)

fix search after refactor cache (wenet-e2e#2490)

generate works!

unify chat pattern

convert llama3 works

[transformer] set use_reentrant=False for gradient ckpt (wenet-e2e#2491)

[transformer] fix warning: ignore(True) has been deprecated (wenet-e2e#2492)

* [transformer] fix warning: ignore(True) has been deprecated

* [transformer] fix warning: ignore(True) has been deprecated

[log] avoid reduntant logging (wenet-e2e#2493)

fix w1 w2 w3 in feedforward

add 70b temporarily

mv LLM to wenet

support llm dataset

unify config

add dataset yaml in script

support llm dataset

dynamic static bucket works

[transformer] refactor mqa repeat (wenet-e2e#2497)

[transformer] fix mqa in cross att (wenet-e2e#2498)

[deepspeed] update json config (wenet-e2e#2499)

training works

pretrain works

refactor convert

fix flash att in generate

llama works

fix llama3

fix speed

try fix ut

support stop tokens in gen and support ppl

support stop tokens in gen and support ppl