
Basic Attention attribution #148

Merged: 35 commits merged into main from attention-attribution on Jan 16, 2023

Conversation

@lsickert (Collaborator) commented Nov 23, 2022

Description

This PR adds the base class for attribution methods based on attention, as well as two basic attention attribution methods (aggregated attention and last-layer attention).

It also includes a small fix regarding the rounding of outputs in the CLI tableview.

It reverts the previous upgrade of PyTorch to ^1.13.0 because of an issue with installing the dependency on certain platforms such as OSX (see the related issues in PyTorch: issue1, issue2).

Related Issue

#108

Type of Change

  • 📚 Examples / docs / tutorials / dependencies update
  • 🔧 Bug fix (non-breaking change which fixes an issue)
  • 🥂 Improvement (non-breaking change which improves an existing feature)
  • 🚀 New feature (non-breaking change which adds functionality)
  • 💥 Breaking change (fix or feature that would cause existing functionality to change)
  • 🔐 Security fix

Checklist

  • I've read the CODE_OF_CONDUCT.md document.
  • I've read the CONTRIBUTING.md guide.
  • I've updated the code style using make codestyle.
  • I've written tests for all new methods and classes that I created.
  • I've written the docstring in Google format for all the methods and classes that I used.

@github-actions (bot) left a comment

Hello @lsickert, thank you for submitting a PR! We will respond as soon as possible.

@lsickert changed the title from "Basic Attention attribution" to "[WIP] Basic Attention attribution" on Nov 23, 2022
pyproject.toml (review thread, outdated, resolved)
@lsickert linked an issue on Nov 26, 2022 that may be closed by this pull request
@gsarti (Member) commented Dec 5, 2022

Note: it's good to have the summary issue linked here, but we don't want to close it just yet! :)

@lsickert added the "enhancement" (New feature or request) label on Dec 12, 2022
@lsickert (Collaborator, Author) commented Jan 2, 2023

@gsarti I was working on the decoder-only models and came across some inconsistencies between the different models. For example, both GPT and Transformer XL only include attentions as a parameter in their forward-pass output, whereas GPT2 includes both attentions and cross_attentions. For now I only use the attentions parameter to generate the attributions, since it is present in all decoder-only models, but I am not sure whether we should also use the cross_attentions in the models where they are present.

@gsarti (Member) commented Jan 2, 2023

Hi @lsickert, good question! The cross-attentions are defined for GPT2 and other decoder-only models to support their usage as components of an encoder-decoder model via the EncoderDecoderModel abstraction in 🤗 transformers. If the model is loaded as a decoder-only model, it should only have regular self-attention, so you can assume that those are the only attentions we are interested in for that case!
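
For reference, here is a minimal sketch of how that difference shows up in practice; this is standard 🤗 transformers usage, with gpt2 as an illustrative checkpoint, and the exact output fields depend on the model class:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Attention weights example", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# Self-attention weights: one tensor per layer, shape (batch, heads, seq, seq)
print(len(outputs.attentions), outputs.attentions[0].shape)

# cross_attentions is only populated when the model acts as the decoder of an
# encoder-decoder (EncoderDecoderModel); for a standalone decoder-only model
# the field is absent or None.
print(getattr(outputs, "cross_attentions", None))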

@lsickert (Collaborator, Author) commented Jan 2, 2023

> Hi @lsickert, good question! The cross-attentions are defined for GPT2 and other decoder-only models to support their usage as components of an encoder-decoder model via the EncoderDecoderModel abstraction in 🤗 transformers. If the model is loaded as a decoder-only model, it should only have regular self-attention, so you can assume that those are the only attentions we are interested in for that case!

Ah, perfect. Yes, I assumed something like that but was not entirely sure. I think the decoder-only support for the basic attention functions is now done. I still need to write tests tomorrow and finish up the docstrings and other small things, but apart from that I think the branch is ready for merging.

Review threads (outdated, resolved) on inseq/attr/feat/attention_attribution.py (3) and inseq/attr/feat/ops/basic_attention.py (6)
@gsarti (Member) commented Jan 3, 2023

Also, some usage issues I identified:

  1. Using `last_layer_attention` with encoder-decoder models produces the following error, which does not occur for `aggregated_attention`:

out = model.attribute("The cafeteria had 23 apples. They used 20 for lunch. How many apples do they have left?")

RuntimeError: stack expects each tensor to be equal size, but got [1] at entry 0 and [1, 21] at entry 1

This looks to me like the error you were getting when you just started working on the attention attribution methods.

  2. Running attribution with decoder-only models with any attention method produces the following error:

/usr/local/lib/python3.8/dist-packages/inseq/data/attribution.py in <listcomp>(.0)
    115         sources = None
    116         if attr.source_attributions is not None:
--> 117             sources = [drop_padding(attr.source[seq_id], pad_id) for seq_id in range(num_sequences)]
    118         targets = [
    119             drop_padding([a.target[seq_id][0] for a in attributions], pad_id) for seq_id in range(num_sequences)

TypeError: 'NoneType' object is not subscriptable

Is it possible that you are not setting the source attributions to None in the decoder-only case?

  3. If I use `facebook/wmt19-en-de` for a translation with `aggregated_attention` (which works for other enc-dec models), I get `forward() missing 1 required positional argument: 'input_ids'`. I believe this is due to a problem with `_extract_forward_pass_args`, which should be solved anyway when we drop the unnecessary method to conform to the approach used for step scores (see review above).

@lsickert (Collaborator, Author) commented Jan 3, 2023

> Also, some usage issues I identified:
>
> 1. Using `last_layer_attention` with encoder-decoder models produces the following error, which does not occur for `aggregated_attention`:
>
> out = model.attribute("The cafeteria had 23 apples. They used 20 for lunch. How many apples do they have left?")
>
> RuntimeError: stack expects each tensor to be equal size, but got [1] at entry 0 and [1, 21] at entry 1
>
> This looks to me like the error you were getting when you just started working on the attention attribution methods.
>
> 2. Running attribution with decoder-only models with any attention method produces the following error:
>
> /usr/local/lib/python3.8/dist-packages/inseq/data/attribution.py in <listcomp>(.0)
>     115         sources = None
>     116         if attr.source_attributions is not None:
> --> 117             sources = [drop_padding(attr.source[seq_id], pad_id) for seq_id in range(num_sequences)]
>     118         targets = [
>     119             drop_padding([a.target[seq_id][0] for a in attributions], pad_id) for seq_id in range(num_sequences)
>
> TypeError: 'NoneType' object is not subscriptable
>
> Is it possible that you are not setting the source attributions to None in the decoder-only case?
>
> 3. If I use `facebook/wmt19-en-de` for a translation with `aggregated_attention` (which works for other enc-dec models), I get `forward() missing 1 required positional argument: 'input_ids'`. I believe this is due to a problem with `_extract_forward_pass_args`, which should be solved anyway when we drop the unnecessary method to conform to the approach used for step scores (see review above).

Yes, the second point was my bad and should be fixed already; I did not notice that the changes to attention_attribution.py were not yet staged. I will take a look at the other points.

@lsickert (Collaborator, Author) commented Jan 4, 2023

The first error is fixed by making sure that the dimensionality of the tensors stays the same for all tokens. Very interesting behavior, though, since I remember specifically having to put in the torch.squeeze operation for last_layer_attention to work. Maybe something changed in torch 1.13 that made this unnecessary.
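
For illustration only (this is not the actual attribution code), the failure mode is simply that torch.stack requires identical shapes, so squeezing a singleton dimension from only some of the per-step tensors breaks the stacking:

import torch

a = torch.rand(1, 21)
b = torch.rand(1, 21)

# Works: both tensors share the same shape
torch.stack([a, b])

# Fails with "stack expects each tensor to be equal size" because one tensor
# was squeezed down to shape (21,) while the other kept shape (1, 21)
torch.stack([a.squeeze(0), b])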

@lsickert (Collaborator, Author) commented Jan 9, 2023

@gsarti I think all open points should be addressed now. Please feel free to start testing while I clean up a bit and work on updating and adding the docstrings tomorrow.

@gsarti (Member) commented Jan 10, 2023

Thank you for the update! After giving it some more thought, I decided to opt for a single centralized class for basic attention attribution. The decision was mainly driven by the wish to avoid confusion about which class to use, and it aims at enabling more flexibility in the choice of heads and layers for aggregation.

The Attention method introduced in the last commit improves upon the previous classes by enabling the choice of a single element (a single int), a range (as (start_idx, end_idx)) or a set of custom valid indices (as [idx_1, idx_2, ...]) for both attention heads and model layers. Moreover, the aggregation procedure has been centralized, and custom user-defined aggregation functions beyond the default ones are now supported.

Example of default usage:

import inseq

model = inseq.load_model("facebook/wmt19-en-de", "attention")
out = model.attribute("The developer argued with the designer because her idea cannot be implemented.")

The default behavior is set to minimize unnecessary parameter definitions: in the default case above, the result is the average across all attention heads of the final layer. Here is an example of more complex usage:

import inseq

model = inseq.load_model("facebook/wmt19-en-de", "attention")
out = model.attribute(
    "The developer argued with the designer because her idea cannot be implemented.",
    layers=(0, 5),
    heads=[0, 2, 5, 7],
    aggregate_heads_fn="max",
)

In the case above, the outcome is a matrix of maximum attention weights of heads 0, 2, 5 and 7 after averaging their weights across the first 5 layers of the model.
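
Assuming a callable can also be passed in place of the built-in aggregation names (this is a sketch of the intended flexibility rather than a confirmed API; the function name and the reduced dimension are illustrative), a custom head aggregator might look like this:

import inseq
import torch

# Hypothetical custom aggregator: reduce the head dimension with a median
# instead of the built-in "average"/"max" options. The dimension index is an
# assumption about the tensor layout passed to the aggregation function.
def median_heads(attention, dim=1):
    return attention.median(dim=dim).values

model = inseq.load_model("facebook/wmt19-en-de", "attention")
out = model.attribute(
    "The developer argued with the designer because her idea cannot be implemented.",
    heads=[0, 2, 5, 7],
    aggregate_heads_fn=median_heads,
)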

Remaining todos:

  • Document AttentionAttribution more comprehensively in the docs and docstrings.
  • Add tests for AttentionAttribution, minimally for one enc-dec and one dec-only model, testing multiple aggregation strategies if possible.

@gsarti mentioned this pull request on Jan 12, 2023
@gsarti (Member) commented Jan 14, 2023

Added some tests for attention attribution, fixed the typing issue of FullAttentionOutput (we were passing it as a tuple to _aggregate_layers but calling torch.stack before that; the latter has now been moved inside the function), and added some further checks to the aggregation. We should be good for the merge now!
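
A rough sketch of the change described above; the function name, signature and tensor layout here are illustrative rather than the actual implementation:

import torch

# Before (sketch): the caller stacked the per-layer tuple returned by the model
# and the helper was typed as receiving a single tensor.
#   stacked = torch.stack(attentions, dim=0)
#   aggregated = _aggregate_layers(stacked)

# After (sketch): the helper receives the raw tuple of per-layer attention
# tensors and stacks it internally, so the type hints match what is passed.
def aggregate_layers(attentions, aggregate_fn=torch.mean):
    stacked = torch.stack(attentions, dim=0)  # (layers, batch, heads, seq, seq)
    return aggregate_fn(stacked, dim=0)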

@lsickert (Collaborator, Author) commented

@gsarti I think we were working on the same remaining points just now.

I got a bit confused about the torch.stack call outside of the aggregation function, which is why I got the typing wrong, but I agree that it is better to call it inside the function and just pass the raw output from the model.

I am currently still finishing a test specifically for those aggregation functions (outside of the normal pipeline), but after that I would also agree that we are good to go.

@gsarti changed the title from "[WIP] Basic Attention attribution" to "Basic Attention attribution" on Jan 16, 2023
@gsarti (Member) commented Jan 16, 2023

@lsickert feel free to merge as soon as CI is passing! 🎉

@lsickert merged commit 7ed9d79 into main on Jan 16, 2023
@lsickert deleted the attention-attribution branch on January 16, 2023 at 15:04