There are a few use cases for which a command line interface may be helpful.
As a user of the package I'd like to be able to choose whether to download model weights ahead of (run-)time, for example during CI or post-deployment; in some cases it might even make sense to bake the files into Docker images. The same functionality (preparing files ahead of time) would also be useful for testing and development of both lllms itself and its dependents. I'd love to be able to use the package itself to download its own test models. A sketch of what this could look like follows below.
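For illustration, something along these lines (the `prepare` command is hypothetical; nothing here exists yet):

```sh
# Hypothetical — there is no `prepare` command yet; name and arguments
# are up for discussion. This could run in CI, a deploy step, or a
# Dockerfile RUN instruction to fetch all weights declared in the config:
npx lllms prepare models.config.ts
```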
Other functionality I think a CLI could be the right choice for is managing what's in the cache / on disk: commands for listing, removing and inspecting files, and for printing metadata. These would be useful both for development (where I may want to share a single model cache across projects) and for updating deployed applications (where I may want to swap the models in use and delete the old ones). See the sketch below.
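Roughly (again, all subcommand names and the model id below are hypothetical):

```sh
npx lllms ls                 # list everything in the model cache
npx lllms info phi3-mini-4k  # inspect one model / print file metadata
npx lllms rm phi3-mini-4k    # delete its files from disk
```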
The third concern that seems interesting is utilizing the package as a development/fallback alternative to OAI without having to install it as a project dependency. I'd like to be able to run `npx lllms serve models.config` to develop applications offline, or to save money.

To make that possible there needs to be a definition/config somewhere that specifies which models are used. I'm thinking the simplest way would be to just export a `ModelServerOptions` object from a JS/TS config file. I'm not sure whether there should be a default file that `lllms prepare` and other commands pick up from the CWD, and if so, how it should be named. Or maybe there are other alternatives.
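To make the idea concrete, here is a minimal sketch of such a config file. Everything in it is an assumption: the file name, the `modelsPath`/`models`/`tools` fields and the tool shape are illustrative, not the actual `ModelServerOptions` API.

```ts
// models.config.ts — file name and option shape are hypothetical sketches.
import type { ModelServerOptions } from 'lllms'

const options: ModelServerOptions = {
	// assumed field: a cache directory that could be shared across projects
	modelsPath: '~/.cache/lllms',
	models: {
		'phi3-mini-4k': {
			// assumed field: weights a `prepare` command would download ahead of time
			url: 'https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf/resolve/main/Phi-3-mini-4k-instruct-q4.gguf',
			// assumed shape: a tool definition with a handler function — exactly
			// the kind of value a static JSON config could not express
			tools: {
				getCurrentTime: {
					description: 'Returns the current time as an ISO string',
					handler: async () => new Date().toISOString(),
				},
			},
		},
	},
}

export default options
```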
I feel like static config formats (a JSON file, or a section in package.json) would actually increase complexity and make code less copy-pasteable. Things like the tool definitions above could also not be supported in a static config file.

Appreciate suggestions/ideas/contributions! :)