
Provide preset configs in library #37

Merged 11 commits into main on Nov 11, 2023

Conversation

@jerinphilip (Owner) commented Nov 11, 2023

Currently, the Python calling code looks like:

config = Config()
model = Model(config, package)

This is not informative or self-contained enough about intent. This PR changes it to:

tiny: Config = slimt.preset.tiny()
model = Model(tiny, package)

base: Config = slimt.preset.base()
model = Model(base, package)

It's mostly cosmetic, but we can now indicate whether we're using a tiny configuration or a base configuration. This may also help get the borked deen base model working. The data members on Config are exposed read-write via Python.
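A preset, as used above, is just a function returning a pre-filled Config. The sketch below illustrates the idea with a stand-in Config; the field names and values here are illustrative assumptions, not slimt's actual data members.

```python
from dataclasses import dataclass


# Stand-in for slimt's Config. The fields below (encoder_layers,
# decoder_layers, feedforward_depth) are hypothetical placeholders
# chosen for illustration; the real Config exposes its own members.
@dataclass
class Config:
    encoder_layers: int = 6
    decoder_layers: int = 2
    feedforward_depth: int = 1536


def tiny() -> Config:
    # "tiny" preset: a smaller network. Values are placeholders.
    return Config(encoder_layers=6, decoder_layers=2, feedforward_depth=1536)


def base() -> Config:
    # "base" preset: a larger network. Values are placeholders.
    return Config(encoder_layers=6, decoder_layers=6, feedforward_depth=2048)
```

Because the presets return ordinary Config objects with read-write members, a caller can still start from `base()` and tweak a single field before constructing the model.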

Broken en-de-base
$ slimt translate -m en-de-base <<< "Hello world"
[warn] Failed to ingest expected load of Wemb_QuantMultA
[warn] Failed to ingest expected load of special:model.yml
[warn] Failed to complete expected load of decoder_l2_ffn_ffn_ln_bias
[warn] Failed to complete expected load of decoder_l2_rnn_ffn_ln_scale
[warn] Failed to complete expected load of decoder_l2_ffn_b2
[warn] Failed to complete expected load of decoder_l2_ffn_W2
[warn] Failed to complete expected load of decoder_l2_ffn_b1
[warn] Failed to complete expected load of decoder_l2_context_bo
[warn] Failed to complete expected load of decoder_l2_context_Wv_QuantMultA
[warn] Failed to complete expected load of decoder_l2_context_Wk_QuantMultA
[warn] Failed to complete expected load of decoder_l2_ffn_ffn_ln_scale
[warn] Failed to complete expected load of decoder_l2_context_Wk
[warn] Failed to complete expected load of decoder_l2_context_Wq_QuantMultA
[warn] Failed to complete expected load of decoder_l2_context_bq
[warn] Failed to complete expected load of decoder_l2_context_Wq
[warn] Failed to complete expected load of decoder_l2_context_Wo_QuantMultA
[warn] Failed to complete expected load of decoder_l2_context_Wo
[warn] Failed to complete expected load of decoder_l2_context_Wo_ln_scale
[warn] Failed to complete expected load of decoder_l2_ffn_W2_QuantMultA
[warn] Failed to complete expected load of decoder_l2_rnn_bf
[warn] Failed to complete expected load of decoder_l2_rnn_Wf
[warn] Failed to complete expected load of decoder_l2_context_bv
[warn] Failed to complete expected load of decoder_l2_rnn_W_QuantMultA
[warn] Failed to complete expected load of decoder_l2_rnn_ffn_ln_bias
[warn] Failed to complete expected load of decoder_l2_rnn_W
[warn] Failed to complete expected load of decoder_l2_ffn_W1_QuantMultA
[warn] Failed to complete expected load of decoder_l2_context_bk
[warn] Failed to complete expected load of decoder_l2_context_Wo_ln_bias
[warn] Failed to complete expected load of decoder_l2_context_Wv
[warn] Failed to complete expected load of decoder_l2_ffn_W1
[warn] Failed to complete expected load of decoder_l2_rnn_Wf_QuantMultA

We will probably need to figure out what is happening with that model; it still does not work. But at least the added configurability is a net plus.

@jerinphilip jerinphilip changed the title Isolate presets from config Provide preset configs in library Nov 11, 2023
@jerinphilip jerinphilip marked this pull request as ready for review November 11, 2023 16:43
@jerinphilip jerinphilip merged commit 09b5fed into main Nov 11, 2023
6 checks passed
@jerinphilip jerinphilip deleted the config-preset branch November 11, 2023 19:57