mention dejavu for acceleration example
yujiepan-work committed Jul 15, 2024
1 parent 49cc512 commit 9dc4b89
Showing 1 changed file with 1 addition and 1 deletion.
@@ -1,6 +1,6 @@
 ### Activation Sparsity (experimental feature)
 
-The `sparsify_activations` algorithm is a post-training method designed to introduce sparsity into the activations of a neural network. This process reduces the number of active neurons during inference by masking out neurons based on their magnitude relative to a calibrated static threshold.
+The `sparsify_activations` algorithm is a post-training method designed to introduce sparsity into the activations of a neural network. This process reduces the number of active neurons during inference by masking out neurons based on their magnitude relative to a calibrated static threshold. Typically this can help accelerate inference for Transformer-based Large Language Models on edge devices; one such example is Deja Vu ([Liu et al., 2023](https://arxiv.org/abs/2310.17157)).

The algorithm sparsifies the input of a layer by applying the following function:

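The function itself is truncated in this diff view, but the behavior described above (masking activations whose magnitude falls below a calibrated static threshold) can be sketched as follows. This is only an illustrative NumPy sketch; the function name `sparsify_activation` and its exact form here are assumptions, not nncf's actual API.

```python
import numpy as np

def sparsify_activation(x: np.ndarray, threshold: float) -> np.ndarray:
    # Sketch of magnitude-based activation sparsification (not nncf's API):
    # activations at or below the calibrated static threshold in absolute
    # value are masked to zero; larger activations pass through unchanged.
    return np.where(np.abs(x) <= threshold, 0.0, x)

activations = np.array([0.05, -0.8, 0.3, -0.02, 1.2])
# With threshold=0.1, the two small-magnitude entries are zeroed out.
print(sparsify_activation(activations, threshold=0.1))
```

In practice such a mask would be applied to a layer's input at inference time, with the threshold calibrated beforehand on representative data.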
