Commit: Merge branch 'main' into dev/interface
calpt authored Jan 9, 2025
2 parents 074ca66 + 303c34b commit 5596bf1
Showing 24 changed files with 763 additions and 87 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/adapter_docs_build.yml
@@ -18,7 +18,7 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@v2
with:
-        python-version: 3.8
+        python-version: "3.10"
- name: Install
run: |
pip install setuptools==57.4.0
16 changes: 8 additions & 8 deletions .github/workflows/tests_torch.yml
@@ -32,8 +32,8 @@ jobs:
submodules: true
- uses: actions/setup-python@v2
with:
-        python-version: 3.8
-    - uses: actions/cache@v2
+        python-version: "3.10"
+    - uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
@@ -53,8 +53,8 @@ jobs:
submodules: true
- uses: actions/setup-python@v2
with:
-        python-version: 3.8
-    - uses: actions/cache@v2
+        python-version: "3.10"
+    - uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
@@ -76,8 +76,8 @@ jobs:
submodules: true
- uses: actions/setup-python@v2
with:
-        python-version: 3.8
-    - uses: actions/cache@v2
+        python-version: "3.10"
+    - uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
@@ -99,8 +99,8 @@ jobs:
submodules: true
- uses: actions/setup-python@v2
with:
-        python-version: 3.8
-    - uses: actions/cache@v2
+        python-version: "3.10"
+    - uses: actions/cache@v4
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-${{ hashFiles('setup.py') }}
2 changes: 2 additions & 0 deletions docs/adapter_composition.md
@@ -125,6 +125,8 @@ model.active_adapters = ac.Fuse("d", "e", "f")

To learn how training an _AdapterFusion_ layer works, check out [this Colab notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/03_Adapter_Fusion.ipynb) from the `adapters` repo.

To save and upload the full composition setup, including adapters and the fusion layer, in one line of code, check out the docs on [saving and loading adapter compositions](loading.md#saving-and-loading-adapter-compositions).

### Retrieving AdapterFusion attentions

Finally, it is possible to retrieve the attention scores computed by each fusion layer in a forward pass of the model.
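
As a minimal sketch of what this can look like (the `output_adapter_fusion_attentions` flag follows the `adapters` forward API; `model` and `inputs` are illustrative):

```python
outputs = model(**inputs, output_adapter_fusion_attentions=True)
# Per-layer fusion attention scores, keyed by fusion name.
attention_scores = outputs.adapter_fusion_attentions
```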
36 changes: 36 additions & 0 deletions docs/loading.md
@@ -94,3 +94,39 @@ We will go through the different arguments and their meaning one by one:
To load the adapter using a custom name, we can use the `load_as` parameter.

- Finally, `set_active` will directly activate the loaded adapter for usage in each model forward pass. Otherwise, you have to manually activate the adapter via `set_active_adapters()`.
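
A brief sketch combining both options (adapter identifier as in the surrounding examples):

```python
# Load under the custom name "sst" and activate it in one call.
adapter_name = model.load_adapter("sentiment/sst-2@ukp", load_as="sst", set_active=True)
```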

## Saving and loading adapter compositions

In addition to saving and loading individual adapters, you can also save, load and share entire [compositions of adapters](adapter_composition.md) with a single line of code.
_Adapters_ provides three methods for this purpose that work very similarly to those for single adapters:

- [`save_adapter_setup()`](adapters.ModelWithHeadsAdaptersMixin.save_adapter_setup) to save an adapter composition along with prediction heads to the local file system.
- [`load_adapter_setup()`](adapters.ModelWithHeadsAdaptersMixin.load_adapter_setup) to load a saved adapter composition from the local file system or the Model Hub.
- [`push_adapter_setup_to_hub()`](adapters.hub_mixin.PushAdapterToHubMixin.push_adapter_setup_to_hub) to upload an adapter setup along with prediction heads to the Model Hub. See our [Hugging Face Model Hub guide](huggingface_hub.md) for more.

As an example, this is how you would save and load an AdapterFusion setup of three adapters with a prediction head:

```python
from adapters import AutoAdapterModel, Fuse, SeqBnConfig

# Create an AdapterFusion
model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.load_adapter("sentiment/sst-2@ukp", config=SeqBnConfig(), with_head=False)
model.load_adapter("nli/multinli@ukp", config=SeqBnConfig(), with_head=False)
model.load_adapter("sts/qqp@ukp", config=SeqBnConfig(), with_head=False)
model.add_adapter_fusion(["sst-2", "mnli", "qqp"])
model.add_classification_head("clf_head")
adapter_setup = Fuse("sst-2", "mnli", "qqp")
head_setup = "clf_head"
model.set_active_adapters(adapter_setup)
model.active_head = head_setup

# Train AdapterFusion ...

# Save
model.save_adapter_setup("checkpoint", adapter_setup, head_setup=head_setup)

# Push to Hub
model.push_adapter_setup_to_hub("<user>/fusion_setup", adapter_setup, head_setup=head_setup)

# Re-load
# model.load_adapter_setup("checkpoint", set_active=True)
```
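
To restore the setup later into a fresh model, the commented-out re-load above might expand to the following sketch (same base model as in the example):

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
# Loads the three adapters, the fusion layer and the prediction head, and activates them.
model.load_adapter_setup("checkpoint", set_active=True)
```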
5 changes: 5 additions & 0 deletions docs/methods.md
@@ -59,6 +59,11 @@ _Papers:_
* [Adapters Strike Back](https://arxiv.org/pdf/2406.06820) (Steitz and Roth, 2024)
* [AdapterHub: A Framework for Adapting Transformers](https://arxiv.org/pdf/2007.07779.pdf) (Pfeiffer et al., 2020)

```{eval-rst}
.. note::
    The two parameters ``original_ln_before`` and ``original_ln_after`` inside bottleneck adapters control both the addition of the residual input and the application of the pretrained layer norm. If the original model does not apply a layer norm at a given position in the forward pass (e.g., after the FFN layer), the bottleneck parameter for that position only controls the addition of the residual input.
```
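
For illustration, a minimal sketch of overriding these flags on a standard bottleneck config (`SeqBnConfig` as documented above; the flag combination is an assumption for demonstration, not a recommendation):

```python
from adapters import SeqBnConfig

# Keep the residual input and pretrained LayerNorm before the adapter only.
config = SeqBnConfig(
    original_ln_before=True,   # residual + pretrained layer norm applied before the adapter
    original_ln_after=False,   # nothing re-applied after the adapter
    residual_before_ln=False,  # recommended when original_ln_after is False
)
```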

## Language Adapters - Invertible Adapters

_Configuration class_: [`SeqBnInvConfig`](adapters.SeqBnInvConfig), [`DoubleSeqBnInvConfig`](adapters.DoubleSeqBnInvConfig)
2 changes: 1 addition & 1 deletion docs/quickstart.md
@@ -105,7 +105,7 @@ model = AutoAdapterModel.from_pretrained(example_path)
model.load_adapter(example_path)
```

-Similar to how the weights of the full model are saved, the `save_adapter()` will create a file for saving the adapter weights and a file for saving the adapter configuration in the specified directory.
+Similar to how the weights of the full model are saved, [`save_adapter()`](adapters.ModelWithHeadsAdaptersMixin.save_adapter) will create a file for saving the adapter weights and a file for saving the adapter configuration in the specified directory.

Finally, if we have finished working with adapters, we can restore the base Transformer to its original form by deactivating and deleting the adapter:
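
A sketch of those two steps (method names per the `adapters` API; `adapter_name` stands for the name of the adapter loaded above):

```python
# Deactivate all adapters, then remove the adapter from the model.
model.set_active_adapters(None)
model.delete_adapter(adapter_name)
```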

2 changes: 1 addition & 1 deletion hf_transformers
Submodule hf_transformers updated 679 files
15 changes: 13 additions & 2 deletions notebooks/ViT_AdapterPlus_FineTuning.ipynb
@@ -205,7 +205,18 @@
"source": [
"### Loading the `ViT` model and the `AdapterPlusConfig`\n",
"\n",
"Here we load the `vit-base-patch16-224-in21k` model similar to the one used in the `AdapterConfig` paper. We will load the model using the `adapters` `AutoAdapterModel` and add the corresponding `AdapterPlusConfig`. To read more about the config, you can check out the docs page [here](https://docs.adapterhub.ml/methods#bottleneck-adapters) under `AdapterPlusConfig`"
"Here we load the `vit-base-patch16-224-in21k` model similar to the one used in the `AdapterConfig` paper. We will load the model using the `adapters` `AutoAdapterModel` and add the corresponding `AdapterPlusConfig`. To read more about the config, you can check out the docs page [here](https://docs.adapterhub.ml/methods#bottleneck-adapters) under `AdapterPlusConfig`.\n",
"\n",
"#### Important Note\n",
"\n",
"Please note that some configurations of the adapters parameters `original_ln_after`, `original_ln_before`, and \n",
"`residual_before_ln` may result in performance issues when training. \n",
"\n",
"In the general case:\n",
"\n",
"1) At least one of `original_ln_before` or `original_ln_after` should be set to `True` in order to ensure that the original residual\n",
" connection from pre-training is preserved. \n",
"2) If `original_ln_after` is set to `False`, `residual_before_ln` must also be set to `False` to ensure convergence during training."
]
},
{
@@ -218,7 +229,7 @@
"from adapters import AdapterPlusConfig\n",
"\n",
"model = ViTAdapterModel.from_pretrained(model_name_or_path)\n",
"config = AdapterPlusConfig(original_ln_after=True)\n",
"config = AdapterPlusConfig()\n",
"\n",
"model.add_adapter(\"adapterplus_config\", config)\n",
"model.add_image_classification_head(\"adapterplus_config\", num_labels=num_classes)\n",
6 changes: 4 additions & 2 deletions setup.py
@@ -34,6 +34,7 @@
"isort>=5.5.4",
"Jinja2==2.11.3",
"nltk",
"packaging",
"parameterized",
"pillow",
"protobuf",
@@ -60,7 +61,7 @@
"timeout-decorator",
"torch",
"torchvision",
"transformers~=4.46.3",
"transformers~=4.47.1",
]


@@ -136,11 +137,12 @@ def deps_list(*pkgs):
# when modifying the following list, make sure to update src/transformers/dependency_versions_check.py
install_requires = [
deps["transformers"],
deps["packaging"],
]

setup(
name="adapters",
version="1.0.1",
version="1.1.0.dev0",
author="The AdapterHub team and community contributors",
author_email="[email protected]",
description="A Unified Library for Parameter-Efficient and Modular Transfer Learning",
2 changes: 1 addition & 1 deletion src/adapters/__init__.py
@@ -16,7 +16,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

__version__ = "1.0.1"
__version__ = "1.1.0.dev0"

from typing import TYPE_CHECKING

43 changes: 41 additions & 2 deletions src/adapters/composition.py
@@ -1,4 +1,5 @@
import itertools
import sys
import warnings
from collections.abc import Sequence
from typing import List, Optional, Set, Tuple, Union
@@ -45,6 +46,31 @@ def parallel_channels(self):
def flatten(self) -> Set[str]:
return set(itertools.chain(*[[b] if isinstance(b, str) else b.flatten() for b in self.children]))

def _get_save_kwargs(self):
return None

def to_dict(self):
save_dict = {
"type": self.__class__.__name__,
"children": [
c.to_dict() if isinstance(c, AdapterCompositionBlock) else {"type": "single", "children": [c]}
for c in self.children
],
}
if kwargs := self._get_save_kwargs():
save_dict["kwargs"] = kwargs
return save_dict

@classmethod
def from_dict(cls, data):
children = []
for child in data["children"]:
if child["type"] == "single":
children.append(child["children"][0])
else:
children.append(cls.from_dict(child))
return getattr(sys.modules[__name__], data["type"])(*children, **data.get("kwargs", {}))


class Parallel(AdapterCompositionBlock):
def __init__(self, *parallel_adapters: List[str]):
@@ -66,26 +92,36 @@ def __init__(self, *stack_layers: List[Union[AdapterCompositionBlock, str]]):


class Fuse(AdapterCompositionBlock):
-    def __init__(self, *fuse_stacks: List[Union[AdapterCompositionBlock, str]]):
+    def __init__(self, *fuse_stacks: List[Union[AdapterCompositionBlock, str]], name: Optional[str] = None):
         super().__init__(*fuse_stacks)
+        self._name = name

# TODO-V2 pull this up to all block classes?
@property
def name(self):
return ",".join([c if isinstance(c, str) else c.last() for c in self.children])
if self._name:
return self._name
else:
return ",".join([c if isinstance(c, str) else c.last() for c in self.children])


class Split(AdapterCompositionBlock):
def __init__(self, *split_adapters: List[Union[AdapterCompositionBlock, str]], splits: Union[List[int], int]):
super().__init__(*split_adapters)
self.splits = splits if isinstance(splits, list) else [splits] * len(split_adapters)

def _get_save_kwargs(self):
return {"splits": self.splits}


class BatchSplit(AdapterCompositionBlock):
def __init__(self, *split_adapters: List[Union[AdapterCompositionBlock, str]], batch_sizes: Union[List[int], int]):
super().__init__(*split_adapters)
self.batch_sizes = batch_sizes if isinstance(batch_sizes, list) else [batch_sizes] * len(split_adapters)

def _get_save_kwargs(self):
return {"batch_sizes": self.batch_sizes}


class Average(AdapterCompositionBlock):
def __init__(
@@ -105,6 +141,9 @@ def __init__(
else:
self.weights = [1 / len(average_adapters)] * len(average_adapters)

def _get_save_kwargs(self):
return {"weights": self.weights}


# Mapping each composition block type to the allowed nested types
ALLOWED_NESTINGS = {
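A rough round-trip sketch of the `to_dict()`/`from_dict()` serialization added above (import path assumed; adapter names illustrative):

```python
from adapters.composition import AdapterCompositionBlock, Fuse, Stack

# Serialize a nested composition to a plain dict and rebuild it.
setup = Stack("lang_adapter", Fuse("sst-2", "mnli", "qqp"))
data = setup.to_dict()
# e.g. {"type": "Stack", "children": [{"type": "single", "children": ["lang_adapter"]}, ...]}

restored = AdapterCompositionBlock.from_dict(data)
assert restored.flatten() == setup.flatten()
```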
11 changes: 10 additions & 1 deletion src/adapters/configuration/adapter_config.py
@@ -374,10 +374,19 @@ class ParBnConfig(BnConfig):
class AdapterPlusConfig(BnConfig):
"""
    The AdapterPlus config architecture proposed by Jan-Martin O. Steitz and Stefan Roth. See https://arxiv.org/pdf/2406.06820
    Please note that some configurations of the adapter parameters `original_ln_after`, `original_ln_before`, and
    `residual_before_ln` may result in performance issues when training.
    In the general case:
    1) At least one of `original_ln_before` or `original_ln_after` should be set to True in order to ensure that the original residual
       connection from pre-training is preserved.
    2) If `original_ln_after` is set to `False`, `residual_before_ln` must also be set to `False` to ensure convergence during training.
"""

     original_ln_after: bool = False
-    residual_before_ln: bool = True
+    original_ln_before: bool = True
+    residual_before_ln: bool = False
stochastic_depth: float = 0.1
init_weights: str = "houlsby"
scaling: Union[float, str] = "channel"
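As a quick check, the corrected defaults satisfy both rules from the docstring (a sketch; `AdapterPlusConfig` is importable from `adapters`, as in the notebook above):

```python
from adapters import AdapterPlusConfig

config = AdapterPlusConfig()

# Rule 1: at least one original residual/layer-norm position is preserved.
assert config.original_ln_before or config.original_ln_after

# Rule 2: original_ln_after=False requires residual_before_ln=False.
if not config.original_ln_after:
    assert not config.residual_before_ln
```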
30 changes: 22 additions & 8 deletions src/adapters/configuration/model_adapters_config.py
@@ -1,7 +1,7 @@
import copy
import logging
from collections.abc import Collection, Mapping
-from typing import List, Optional, Union
+from typing import List, Optional, Tuple, Union

from .. import __version__
from ..composition import AdapterCompositionBlock
@@ -27,6 +27,7 @@ def __init__(self, **kwargs):

self.fusions: Mapping[str, str] = kwargs.pop("fusions", {})
self.fusion_config_map = kwargs.pop("fusion_config_map", {})
self.fusion_name_map = kwargs.pop("fusion_name_map", {})

# TODO-V2 Save this with config?
self.active_setup: Optional[AdapterCompositionBlock] = None
@@ -131,7 +132,7 @@ def add(self, adapter_name: str, config: Optional[Union[str, dict]] = None):
self.adapters[adapter_name] = config_name
logger.info(f"Adding adapter '{adapter_name}'.")

-    def get_fusion(self, fusion_name: Union[str, List[str]]) -> Optional[dict]:
+    def get_fusion(self, fusion_name: Union[str, List[str]]) -> Tuple[Optional[dict], Optional[list]]:
"""
Gets the config dictionary for a given AdapterFusion.
@@ -140,6 +141,7 @@ def get_fusion(self, fusion_name: Union[str, List[str]]) -> Optional[dict]:
Returns:
Optional[dict]: The AdapterFusion configuration.
Optional[list]: The names of the adapters to fuse.
"""
if isinstance(fusion_name, list):
fusion_name = ",".join(fusion_name)
@@ -149,20 +151,31 @@ def get_fusion(self, fusion_name: Union[str, List[str]]) -> Optional[dict]:
config = self.fusion_config_map.get(config_name, None)
else:
config = ADAPTERFUSION_CONFIG_MAP.get(config_name, None)

+            if fusion_name in self.fusion_name_map:
+                adapter_names = self.fusion_name_map[fusion_name]
+            else:
+                adapter_names = fusion_name.split(",")
+
+            return config, adapter_names
         else:
-            config = None
-            return config
+            return None, None

-    def add_fusion(self, fusion_name: Union[str, List[str]], config: Optional[Union[str, dict]] = None):
+    def add_fusion(
+        self, adapter_names: List[str], config: Optional[Union[str, dict]] = None, fusion_name: Optional[str] = None
+    ):
"""
Adds a new AdapterFusion.
Args:
-            fusion_name (Union[str, List[str]]): The name of the AdapterFusion or the adapters to fuse.
+            adapter_names (List[str]): The names of the adapters to fuse.
             config (Optional[Union[str, dict]], optional): AdapterFusion config. Defaults to None.
+            fusion_name (Optional[str], optional): The name of the AdapterFusion. If not specified, will default to comma-separated adapter names.
"""
-        if isinstance(fusion_name, list):
-            fusion_name = ",".join(fusion_name)
+        if fusion_name is None:
+            fusion_name = ",".join(adapter_names)
+        else:
+            self.fusion_name_map[fusion_name] = adapter_names
if fusion_name in self.fusions:
raise ValueError(f"An AdapterFusion with the name '{fusion_name}' has already been added.")
if config is None:
@@ -218,6 +231,7 @@ def to_dict(self):
output_dict["fusion_config_map"][k] = v.to_dict()
else:
output_dict["fusion_config_map"][k] = copy.deepcopy(v)
output_dict["fusion_name_map"] = copy.deepcopy(self.fusion_name_map)
return output_dict

def __eq__(self, other):
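A small sketch of the new named-fusion bookkeeping shown above (internal API; import path assumed):

```python
from adapters.configuration.model_adapters_config import ModelAdaptersConfig

cfg = ModelAdaptersConfig()

# Register a fusion under an explicit name...
cfg.add_fusion(["sst-2", "mnli", "qqp"], fusion_name="my_fusion")
config, adapter_names = cfg.get_fusion("my_fusion")
# adapter_names == ["sst-2", "mnli", "qqp"]

# ...or fall back to the comma-joined default name.
cfg.add_fusion(["a", "b"])
config, adapter_names = cfg.get_fusion("a,b")
# adapter_names == ["a", "b"]
```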