Merge master into develop (#254)
* Synchronize master and develop workflows (#236)

* Synchronize master and develop workflows

* comment

* Added OpenDR citation (#238)

* Added OpenDR citation

* Update README.md

* Fixes bibtex name (#241)

Fixes citation name

* Fix clang (#250)

* Integration of heart anomaly detection self-attention neural bag of features (#246)

* added sanbof models

* added attention models to ci test

Co-authored-by: ad-daniel <[email protected]>

* Make `test release` docker target the specific branch when the label is run manually (#252)

* Fix

* Better approach

* Fix

* Update CODEOWNERS (#253)

Co-authored-by: Nikolaos Passalis <[email protected]>
Co-authored-by: Kateryna Chumachenko <[email protected]>
Co-authored-by: Stefania Pedrazzi <[email protected]>
4 people authored May 2, 2022
1 parent 00a9fb4 commit 5f6778b
Showing 13 changed files with 227 additions and 58 deletions.
10 changes: 8 additions & 2 deletions .github/workflows/publisher.yml
Original file line number Diff line number Diff line change
@@ -42,8 +42,11 @@ jobs:
- uses: actions/checkout@v2
with:
submodules: true
- name: Get branch name
id: branch-name
uses: tj-actions/[email protected]
- name: Build Docker Image
run: docker build --tag opendr-toolkit:cpu_$OPENDR_VERSION --file Dockerfile .
run: docker build --tag opendr-toolkit:cpu_$OPENDR_VERSION --build-arg branch=${{ steps.branch-name.outputs.current_branch }} --file Dockerfile .
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
@@ -59,8 +62,11 @@ jobs:
- uses: actions/checkout@v2
with:
submodules: true
- name: Get branch name
id: branch-name
uses: tj-actions/[email protected]
- name: Build Docker Image
run: docker build --tag opendr-toolkit:cuda_$OPENDR_VERSION --file Dockerfile-cuda .
run: docker build --tag opendr-toolkit:cuda_$OPENDR_VERSION --build-arg branch=${{ steps.branch-name.outputs.current_branch }} --file Dockerfile-cuda .
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
5 changes: 4 additions & 1 deletion .github/workflows/tests_suite.yml
@@ -146,9 +146,12 @@ jobs:
- uses: actions/checkout@v2
with:
submodules: true
- name: Get branch name
id: branch-name
uses: tj-actions/[email protected]
- name: Build image
run: |
docker build --tag opendr/opendr-toolkit:cpu_test --file Dockerfile .
docker build --tag opendr/opendr-toolkit:cpu_test --build-arg branch=${{ steps.branch-name.outputs.current_branch }} --file Dockerfile .
docker save opendr/opendr-toolkit:cpu_test > cpu_test.zip
- name: Upload image artifact
uses: actions/upload-artifact@v2
38 changes: 36 additions & 2 deletions .github/workflows/tests_suite_develop.yml
@@ -1,5 +1,7 @@
name: Test Suite (develop)

# note: this workflow is only triggered by the nightly scheduled run.
# it is identical to master's workflow, but targets the develop branch.
on:
schedule:
- cron: '0 23 * * *'
@@ -28,7 +30,7 @@ jobs:
- os: ubuntu-20.04
DEPENDENCIES_INSTALLATION: "sudo apt -y install clang-format-10 cppcheck"
- os: macos-10.15
DEPENDENCIES_INSTALLATION: "brew install clang-format cppcheck"
DEPENDENCIES_INSTALLATION: "brew install clang-format@11 cppcheck; ln /usr/local/bin/clang-format-11 /usr/local/bin/clang-format"
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v2
@@ -135,6 +137,7 @@ jobs:
- name: Upload wheel as artifact
uses: actions/upload-artifact@v2
with:
name: wheel-artifact
path:
dist/*.tar.gz
build-docker:
@@ -149,10 +152,17 @@
- name: Build image
run: |
docker build --tag opendr/opendr-toolkit:cpu_test --file Dockerfile .
- name: Get branch name
id: branch-name
uses: tj-actions/[email protected]
- name: Build image
run: |
docker build --tag opendr/opendr-toolkit:cpu_test --build-arg branch=${{ steps.branch-name.outputs.current_branch }} --file Dockerfile .
docker save opendr/opendr-toolkit:cpu_test > cpu_test.zip
- name: Upload image artifact
uses: actions/upload-artifact@v2
with:
name: docker-artifact
path:
cpu_test.zip
test-wheel:
@@ -341,7 +351,31 @@ jobs:
path: artifact
- name: Test docker
run: |
docker load < ./artifact/artifact/cpu_test.zip
docker load < ./artifact/docker-artifact/cpu_test.zip
docker run --name toolkit -i opendr/opendr-toolkit:cpu_test bash
docker start toolkit
docker exec -i toolkit bash -c "source bin/activate.sh && source tests/sources/tools/control/mobile_manipulation/run_ros.sh && python -m unittest discover -s tests/sources/tools/${{ matrix.package }}"
delete-docker-artifacts:
needs: [build-docker, test-docker]
if: ${{ always() }}
strategy:
matrix:
os: [ubuntu-20.04]
runs-on: ${{ matrix.os }}
steps:
- name: Delete docker artifacts
uses: geekyeggo/delete-artifact@v1
with:
name: docker-artifact
delete-wheel-artifacts:
needs: [build-wheel, test-wheel]
if: ${{ always() }}
strategy:
matrix:
os: [ubuntu-20.04]
runs-on: ${{ matrix.os }}
steps:
- name: Delete wheel artifacts
uses: geekyeggo/delete-artifact@v1
with:
name: wheel-artifact
2 changes: 1 addition & 1 deletion CODEOWNERS
@@ -2,4 +2,4 @@
# the repo. Unless a later match takes precedence,
# @global-owner1 and @global-owner2 will be requested for
# review when someone opens a pull request.
* @passalis @omichel @ad-daniel
* @passalis @omichel @ad-daniel @stefaniapedrazzi
4 changes: 3 additions & 1 deletion Dockerfile
@@ -1,5 +1,7 @@
FROM ubuntu:20.04

ARG branch

# Install dependencies
RUN apt-get update && \
apt-get --yes install git sudo
@@ -12,7 +14,7 @@ RUN chmod +x /tini
ENTRYPOINT ["/tini", "--"]

# Clone the repo and install the toolkit
RUN git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
RUN git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr -b $branch
WORKDIR "/opendr"
RUN ./bin/install.sh
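One hedged caveat about the new `branch` argument (a suggestion, not part of this commit): `ARG branch` is declared without a default, so a plain `docker build .` that passes no `--build-arg` leaves `$branch` empty and the `-b $branch` in the clone line degenerates to a dangling `-b`, failing the build. A sketch of a defaulted variant:

```dockerfile
# Hypothetical variant: give the build argument a default so that
# `docker build .` without --build-arg still clones a valid branch.
ARG branch=master

# The clone line then works both with and without an explicit override
# (e.g. `docker build --build-arg branch=develop .`).
RUN git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr -b $branch
```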

4 changes: 3 additions & 1 deletion Dockerfile-cuda
@@ -1,5 +1,7 @@
FROM nvidia/cuda:11.2.0-cudnn8-devel-ubuntu20.04

ARG branch

# Install dependencies
RUN apt-get update && \
apt-get --yes install git sudo apt-utils
@@ -15,7 +17,7 @@ RUN sudo apt-get --yes install build-essential

# Clone the repo and install the toolkit
ENV OPENDR_DEVICE gpu
RUN git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr
RUN git clone --depth 1 --recurse-submodules -j8 https://github.com/opendr-eu/opendr -b $branch
WORKDIR "/opendr"
RUN ./bin/install.sh

12 changes: 12 additions & 0 deletions README.md
@@ -56,6 +56,18 @@ OpenDR has the following roadmap:
## How to contribute
Please follow the instructions provided in the [wiki](https://github.com/tasostefas/opendr_internal/wiki).

## How to cite us
If you use OpenDR for your research, please cite the following paper that introduces OpenDR architecture and design:
<pre>
@article{opendr2022,
title={OpenDR: An Open Toolkit for Enabling High Performance, Low Footprint Deep Learning for Robotics},
author={Passalis, Nikolaos and Pedrazzi, Stefania and Babuska, Robert and Burgard, Wolfram and Dias, Daniel and Ferro, Francesco and Gabbouj, Moncef and Green, Ole and Iosifidis, Alexandros and Kayacan, Erdal and Kober, Jens and Michel, Olivier and Nikolaidis, Nikos and Nousi, Paraskevi and Pieters, Roel and Tzelepi, Maria and Valada, Abhinav and Tefas, Anastasios},
journal={arXiv preprint arXiv:2203.00403},
year={2022}
}
</pre>



## Acknowledgments
*OpenDR project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 871449.*
4 changes: 2 additions & 2 deletions docs/reference/attention-neural-bag-of-feature-learner.md
@@ -30,9 +30,9 @@ AttentionNeuralBagOfFeatureLearner(self, in_channels, series_length, n_class, n_
- **quantization_type**: *{"nbof", "tnbof"}, default="nbof"*
Specifies the type of quantization layer.
There are two types of quantization layer: the logistic neural bag-of-feature layer ("nbof") or the temporal logistic bag-of-feature layer ("tnbof").
- **attention_type**: *{"spatial", "temporal"}, default="spatial"*
- **attention_type**: *{"spatial", "temporal", "spatialsa", "temporalsa", "spatiotemporal"}, default="spatial"*
Specifies the type of attention mechanism.
There are two types of attention: the spatial attention mechanism that focuses on the different codewords ("spatial") or the temporal attention mechanism that focuses on different temporal instances ("temporal").
There are two types of attention: the spatial attention mechanism that focuses on the different codewords ("spatial") or the temporal attention mechanism that focuses on different temporal instances ("temporal"). Additionally, there are three self-attention-based mechanisms as described [here](https://arxiv.org/abs/2201.11092).
- **lr_scheduler**: *callable, default=`opendr.perception.heart_anomaly_detection.attention_neural_bag_of_feature.attention_neural_bag_of_feature_learner.get_cosine_lr_scheduler(2e-4, 1e-5)`*
Specifies the function that computes the learning rate, given the total number of epochs `n_epoch` and the current epoch index `epoch_idx`.
That is, the optimizer uses this function to determine the learning rate at a given epoch index.
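As a sketch of how the documented options relate (helper name hypothetical, not part of the toolkit API): the five `attention_type` values split into the two classic attention variants and the three self-attention variants, mirroring the membership check in the model code below.

```python
# Hypothetical validator mirroring the attention_type options documented above.
CLASSIC = {"spatial", "temporal"}
SELF_ATTENTION = {"spatialsa", "temporalsa", "spatiotemporal"}


def uses_self_attention(attention_type: str) -> bool:
    """Return True when attention_type selects a self-attention block."""
    if attention_type not in CLASSIC | SELF_ATTENTION:
        raise ValueError(f"unknown attention_type: {attention_type!r}")
    return attention_type in SELF_ATTENTION


print(uses_self_attention("spatialsa"))  # True: self-attention variant
print(uses_self_attention("spatial"))    # False: classic attention variant
```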
@@ -5,3 +5,4 @@ This module provides the implementation of the Attention Neural Bag-of-Features
## Sources

The algorithm is implemented according to the paper [Attention-based Neural Bag-of-Features Learning For Sequence Data](https://arxiv.org/abs/2005.12250).
Additionally, three self-attention mechanisms as described in [Self-Attention Neural Bag-of-Features](https://arxiv.org/abs/2201.11092) are implemented.
@@ -23,6 +23,7 @@
import torch
import torch.nn as nn
import torch.nn.functional as F
from opendr.perception.heart_anomaly_detection.attention_neural_bag_of_feature.algorithm.samodels import SelfAttention


class ResidualBlock(nn.Module):
@@ -257,19 +258,34 @@ def __init__(self, in_channels, series_length, n_codeword, att_type, n_class, dr
# nbof block
in_channels, series_length = self.compute_intermediate_dimensions(in_channels, series_length)
self.quantization_block = NBoF(in_channels, n_codeword)

self.att_type = att_type
out_dim = n_codeword
# attention block
self.attention_block = Attention(n_codeword, series_length, att_type)
if att_type in ['spatiotemporal', 'spatialsa', 'temporalsa']:
self.attention_block = SelfAttention(n_codeword, series_length, att_type)
self.attention_block2 = SelfAttention(n_codeword, series_length, att_type)
self.attention_block3 = SelfAttention(n_codeword, series_length, att_type)
out_dim = n_codeword*3
else:
self.attention_block = Attention(n_codeword, series_length, att_type)

# classifier
self.classifier = nn.Sequential(nn.Linear(in_features=n_codeword, out_features=512),
self.classifier = nn.Sequential(nn.Linear(in_features=out_dim, out_features=512),
nn.ReLU(),
nn.Dropout(dropout),
nn.Linear(in_features=512, out_features=n_class))

def forward(self, x):
x = self.resnet_block(x)
x = self.attention_block(self.quantization_block(x)).mean(-1)
x = self.quantization_block(x)
if self.att_type in ['spatiotemporal', 'spatialsa', 'temporalsa']:
x1 = self.attention_block(x)
x2 = self.attention_block2(x)
x3 = self.attention_block3(x)
x = torch.cat([x1, x2, x3], axis=1)
else:
x = self.attention_block(x)
x = x.mean(-1)
x = self.classifier(x)
return x

@@ -298,22 +314,45 @@ def __init__(self, in_channels, series_length, n_codeword, att_type, n_class, dr
# tnbof block
in_channels, series_length = self.compute_intermediate_dimensions(in_channels, series_length)
self.quantization_block = TNBoF(in_channels, n_codeword)
out_dim = n_codeword * 2

# attention block
self.short_attention_block = Attention(n_codeword, series_length - int(series_length / 2), att_type)
self.long_attention_block = Attention(n_codeword, series_length, att_type)
self.att_type = att_type
if att_type in ['spatiotemporal', 'spatialsa', 'temporalsa']:
out_dim = out_dim * 3
self.short_attention_block = SelfAttention(n_codeword, series_length - int(series_length / 2), att_type)
self.long_attention_block = SelfAttention(n_codeword, series_length, att_type)
self.short_attention_block2 = SelfAttention(n_codeword, series_length - int(series_length / 2), att_type)
self.long_attention_block2 = SelfAttention(n_codeword, series_length, att_type)
self.short_attention_block3 = SelfAttention(n_codeword, series_length - int(series_length / 2), att_type)
self.long_attention_block3 = SelfAttention(n_codeword, series_length, att_type)
else:
self.short_attention_block = Attention(n_codeword, series_length - int(series_length / 2), att_type)
self.long_attention_block = Attention(n_codeword, series_length, att_type)

# classifier
self.classifier = nn.Sequential(nn.Linear(in_features=n_codeword * 2, out_features=512),
self.classifier = nn.Sequential(nn.Linear(in_features=out_dim, out_features=512),
nn.ReLU(),
nn.Dropout(dropout),
nn.Linear(in_features=512, out_features=n_class))

def forward(self, x):
x = self.resnet_block(x)
x_short, x_long = self.quantization_block(x)
x_short = self.short_attention_block(x_short).mean(-1)
x_long = self.long_attention_block(x_long).mean(-1)
if self.att_type in ['spatialsa', 'temporalsa', 'spatiotemporal']:
x_short1 = self.short_attention_block(x_short)
x_long1 = self.long_attention_block(x_long)
x_short2 = self.short_attention_block2(x_short)
x_long2 = self.long_attention_block2(x_long)
x_short3 = self.short_attention_block3(x_short)
x_long3 = self.long_attention_block3(x_long)
x_short = torch.cat([x_short1, x_short2, x_short3], axis=1)
x_long = torch.cat([x_long1, x_long2, x_long3], axis=1)
else:
x_short = self.short_attention_block(x_short)
x_long = self.long_attention_block(x_long)
x_short = x_short.mean(-1)
x_long = x_long.mean(-1)
x = torch.cat([x_short, x_long], dim=-1)
x = self.classifier(x)
return x
@@ -0,0 +1,63 @@
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttention(nn.Module):
def __init__(self, n_codeword, series_length, att_type):
super(SelfAttention, self).__init__()

assert att_type in ['spatialsa', 'temporalsa', 'spatiotemporal']

self.att_type = att_type
self.hidden_dim = 128

self.n_codeword = n_codeword
self.series_length = series_length

if self.att_type == 'spatiotemporal':
self.w_s = nn.Linear(n_codeword, self.hidden_dim)
self.w_t = nn.Linear(series_length, self.hidden_dim)
elif self.att_type == 'spatialsa':
self.w_1 = nn.Linear(series_length, self.hidden_dim)
self.w_2 = nn.Linear(series_length, self.hidden_dim)
elif self.att_type == 'temporalsa':
self.w_1 = nn.Linear(n_codeword, self.hidden_dim)
self.w_2 = nn.Linear(n_codeword, self.hidden_dim)
self.drop = nn.Dropout(0.2)
self.alpha = nn.Parameter(data=torch.Tensor(1), requires_grad=True)

def forward(self, x):
# dimension order of x: batch_size, in_channels, series_length

# clip the value of alpha to [0, 1]
with torch.no_grad():
self.alpha.copy_(torch.clip(self.alpha, 0.0, 1.0))

if self.att_type == 'spatiotemporal':
q = self.w_t(x)
x_s = x.transpose(-1, -2)
k = self.w_s(x_s)
qkt = q @ k.transpose(-2, -1)*(self.hidden_dim**-0.5)
mask = F.sigmoid(qkt)
x = x * self.alpha + (1.0 - self.alpha) * x * mask

elif self.att_type == 'temporalsa':
x1 = x.transpose(-1, -2)
q = self.w_1(x1)
k = self.w_2(x1)
mask = F.softmax(q @ k.transpose(-2, -1)*(self.hidden_dim**-0.5), dim=-1)
mask = self.drop(mask)
temp = mask @ x1
x1 = x1 * self.alpha + (1.0 - self.alpha) * temp
x = x1.transpose(-2, -1)

elif self.att_type == 'spatialsa':
q = self.w_1(x)
k = self.w_2(x)
mask = F.softmax(q @ k.transpose(-2, -1)*(self.hidden_dim**-0.5), dim=-1)
mask = self.drop(mask)
temp = mask @ x
x = x * self.alpha + (1.0 - self.alpha) * temp

return x
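Each branch of the forward pass above ends with the same residual blending rule: the output is the convex combination `alpha * x + (1 - alpha) * (mask * x)`, with `alpha` clipped to `[0, 1]` beforehand (the `torch.clip` call). A dependency-free sketch of just that rule, on a single channel vector (helper names hypothetical, tensors replaced by plain lists):

```python
import math


def sigmoid(v):
    # Same squashing used for the spatiotemporal mask in SelfAttention.
    return 1.0 / (1.0 + math.exp(-v))


def blend(x, mask, alpha):
    """Convex blend of the raw input and its masked version.

    alpha is clipped to [0, 1] first, so alpha=1 passes x through
    unchanged and alpha=0 applies the mask fully.
    """
    alpha = min(max(alpha, 0.0), 1.0)
    return [alpha * xi + (1.0 - alpha) * xi * mi for xi, mi in zip(x, mask)]


x = [1.0, 2.0, 3.0]
mask = [sigmoid(s) for s in [0.0, 2.0, -2.0]]  # attention scores in (0, 1)

print(blend(x, mask, alpha=1.0))  # [1.0, 2.0, 3.0]: input passes through
print(blend(x, mask, alpha=0.0))  # fully masked: each xi scaled by its mask
```

The learnable `alpha` thus lets training interpolate between a plain identity path and a fully attention-gated path.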
