Releases · jalammar/ecco
v0.1.2: hotfix
v0.1.1 - hotfixes
v0.1.0 - Support for T5, local models, integrated gradients, Captum integrations, and beam search decoding
Big update to Ecco! Massive contributions from @JoaoLages and @SSamDav.
- Added support for encoder-decoder models like T5
- Feature attribution now uses Captum, adding support for these new methods: IntegratedGradients, Saliency, InputXGradient, DeepLift, DeepLiftShap, GuidedBackprop, GuidedGradCam, Deconvolution, and LRP. This replaces Ecco's previous in-house implementations of Saliency and InputXGradient.
- Added support for Beam Search generation
- Added support for loading local models, which is very useful for analyzing fine-tuned models.
- Improved support for various tokenizers in visualizations; more work is still needed on this front. A usage sketch of these features follows.
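
A minimal sketch of how these features might fit together. This is not a confirmed API: the `attribution` argument, the `num_beams` kwarg for beam search, loading a checkpoint by local path, and the `primary_attributions` method name are all assumptions based on the notes above.

```python
import ecco

# Encoder-decoder support: load T5. A local path such as
# './my-finetuned-t5' should also work for fine-tuned checkpoints
# (assumption based on the notes above).
lm = ecco.from_pretrained('t5-small')

output = lm.generate(
    "translate English to French: The house is wonderful.",
    generate=10,          # number of tokens to generate
    attribution=['ig'],   # Captum-backed Integrated Gradients (assumed flag)
    num_beams=4,          # beam search via an HF-style kwarg (assumption)
)

# Visualize per-token attribution scores; the method name follows
# later Ecco releases and may differ in this version (assumption).
output.primary_attributions(attr_method='ig')
```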
v0.0.15
v0.0.14
- Adds a documentation portal
- LM now has a call() function, so MLMs like BERT can be supported without requiring text generation. Closes #18
- call() and other functions now support a batch dimension. The exception is generate(), which works on a single input sequence rather than a batch. Closes #19
- Set up groundwork towards #6. BERT is now supported for activation collection and an early version of NMF factorization (a usage sketch follows this list). EccoJS needs to clean up partial-token characters like "##". Or, better yet, eccoJS should remain dumb: we hand it already-cleaned tokens and it only worries about displaying them.
- Part of the groundwork to support additional models is the model-config.yml file, which lays out how to connect ecco.LM with the underlying language model.
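
A sketch of the new non-generative path with BERT. The `activations` flag, the input format call() accepts, and the run_nmf()/explore() names (taken from the Ecco README of later versions) are assumptions here.

```python
import ecco

# Assumed flag for capturing FFNN neuron activations.
lm = ecco.from_pretrained('bert-base-uncased', activations=True)

# call() runs a plain forward pass, no generation needed. We assume it
# accepts input tokenized with the model's own tokenizer.
inputs = lm.tokenizer(["The capital of France is Paris."], return_tensors="pt")
output = lm(inputs)

# Factorize the captured activations with NMF and visualize the factors;
# these method names may differ slightly in this release.
nmf = output.run_nmf(n_components=8)
nmf.explore()
```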
v0.0.13 Hugging Face Transformers v4 Support
- Added Hugging Face Transformers v4 support. Closes #30.
v0.0.12
- Larger GPT-2 models can now work with long sequences on the GPU without running out of memory.
- Neuron activations: added the ability to capture activations from specific layers only (see the sketch below).
Thanks to contributor @nostalgebraist
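
A sketch of layer-selective activation capture. The `activations_layer_nums` parameter name is taken from later Ecco versions and is an assumption for this release.

```python
import ecco

# Capture activations only from the first three layers to keep GPU
# memory manageable on larger GPT-2 models (parameter name is an
# assumption; see lead-in above).
lm = ecco.from_pretrained('gpt2-xl',
                          activations=True,
                          activations_layer_nums=[0, 1, 2])

output = lm.generate("The keys to the cabinet", generate=20)
```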
v0.0.10
v0.0.9-alpha
- Started working on docs
- More tests and GitHub Actions CI/CD