
Obj2Seq: Formatting Objects as Sequences with Class Prompt for Visual Tasks

This is a PaddlePaddle implementation of Obj2Seq.

First, extract the COCO dataset and create a symlink. Change the paths below to match your environment:

# ---------- images ----------
ls /home/aistudio/data
unzip -qq /home/aistudio/data/data103218/train2017.zip -d /home/aistudio/data/data103218/
unzip -qq /home/aistudio/data/data103218/val2017.zip   -d /home/aistudio/data/data103218/

# ---------- annotations ----------
unzip -qq /home/aistudio/data/data103218/annotations_trainval2017.zip -d /home/aistudio/data/data103218/


cd Obj2Seq_paddle
mkdir -p data
ln -s /home/aistudio/data/data103218/ data/coco
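
After these steps, data/coco should contain the standard COCO 2017 layout (assuming the usual archive contents; only the instances annotations are strictly required for detection):

data/coco/
├── annotations/
│   ├── instances_train2017.json
│   ├── instances_val2017.json
│   └── person_keypoints_*.json   (used for pose estimation)
├── train2017/
└── val2017/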

Training:

python -m paddle.distributed.launch main.py --auto_resume  --cfg configs/detection_r50_seqhead.yaml   --output_dir output

Evaluation:

python -m paddle.distributed.launch main.py --auto_resume  --cfg configs/detection_r50_seqhead.yaml   --output_dir output  --eval  

To disable fleet for debugging, or to enable AMP, edit the options around line 246 of config.py:

_C.TRAIN.fleet = True
_C.TRAIN.amp = False
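
For reference, a minimal sketch of how such switches are typically wired into a Paddle training step (this is not the repository's actual training loop; model, criterion, and optimizer are placeholders):

import paddle
from paddle.distributed import fleet

def build_train_step(model, criterion, optimizer, use_fleet=True, use_amp=False):
    if use_fleet:
        # Collective (multi-GPU) training via fleet; launch with paddle.distributed.launch.
        fleet.init(is_collective=True)
        model = fleet.distributed_model(model)
        optimizer = fleet.distributed_optimizer(optimizer)
    scaler = paddle.amp.GradScaler(init_loss_scaling=2.**16) if use_amp else None

    def step(samples, targets):
        with paddle.amp.auto_cast(enable=use_amp):
            outputs = model(samples)
            loss = criterion(outputs, targets)
        if use_amp:
            scaled = scaler.scale(loss)
            scaled.backward()
            scaler.minimize(optimizer, scaled)  # unscales, checks for inf/NaN, then steps
        else:
            loss.backward()
            optimizer.step()
        optimizer.clear_grad()
        return loss

    return step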

Currently, enabling AMP on AIStudio makes the loss go to NaN, so it is not recommended.

Parts not yet aligned with the original repository:

  • During data preprocessing, RandomSizeCrop has been removed: single-GPU training works, but multi-GPU training segfaults whenever it is enabled. One fix has already been applied, yet other issues may remain, so the transform was dropped from the preprocessing pipeline.
  • The Prompt Indicator output is meant to be multiplied into the final confidence scores, but in practice the model could not be trained that way, so combine_method in the config was changed from multiply to none (see the sketch after this list).
  • In the head (models/predictors/classifiers/label_classifier.py, line 50), the bias term self.b had been commented out; this caused very low output confidence and a loss that would not decrease (now fixed).
  • Due to GPU memory limits, the 1024-unit MLP layer was reduced to 512 units and the Decoder has one fewer DecoderLayer.
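
As a rough illustration of the combine_method switch (the names below are illustrative, not the repository's exact code), the two settings amount to:

import paddle

def combine_scores(cls_scores, prompt_scores, combine_method="none"):
    # cls_scores:    [batch, num_queries, num_classes] per-query class confidence
    # prompt_scores: [batch, num_classes] class-presence scores from the Prompt Indicator
    if combine_method == "multiply":
        # Original behaviour: scale each query's confidence by its class prompt score.
        return cls_scores * prompt_scores.unsqueeze(1)
    # "none": leave the classifier confidence untouched (the setting used in this port).
    return cls_scores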

To be continued...

Introduction

This repository is an official implementation of Obj2Seq. Obj2Seq takes objects as basic units, and regards most object-level visual tasks as sequence generation problems of objects. It first recognizes objects of given categories, and then generates a sequence to describe each of these objects. Obj2Seq is able to flexibly determine input categories and the definition of output sequences to satisfy customized requirements, and can be easily extended to different visual tasks.

Obj2Seq: arXiv | GitHub | Gitee

(Figure: Obj2Seq architecture overview)

Main Results

All models are trained with a ResNet-50 backbone.

Object Detection

| Method                   | Epochs | Params (M) | $AP$ | Model |
|--------------------------|--------|------------|------|-------|
| DeformableDETR$^\dagger$ | 50     | 40         | 44.6 | model |
| Obj2Seq                  | 50     | 40         | 45.7 | model |
| + iterative box refine   | 50     | 42         | 46.7 | model |

$^\dagger$ We convert the official DeformableDETR checkpoint with this script.
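
The conversion script ships with the original repository; in general, a torch-to-Paddle checkpoint conversion follows this pattern (a hypothetical sketch, not the actual script; parameter-name remapping is omitted):

import paddle
import torch

def convert_checkpoint(pth_path, pdparams_path):
    state = torch.load(pth_path, map_location="cpu")
    state = state.get("model", state)  # DETR-style checkpoints nest weights under "model"
    converted = {}
    for name, tensor in state.items():
        array = tensor.numpy()
        # torch nn.Linear stores weights as [out, in]; paddle nn.Linear expects [in, out].
        # Skipping names containing "embed" is a rough heuristic to avoid embedding tables.
        if array.ndim == 2 and name.endswith(".weight") and "embed" not in name:
            array = array.transpose()
        converted[name] = array
    paddle.save(converted, pdparams_path)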

Human Pose Estimation

| Method   | Epochs | Params (M) | $AP_{box}$ | $AP_{kps}$ | Config/Model |
|----------|--------|------------|------------|------------|--------------|
| Baseline | 50     | 40         | 55.4       | 57.9       | model        |
| Obj2Seq  | 50     | 40         | 55.4       | 61.2       | model        |
| Obj2Seq  | 150    | 40         | 58.1       | 65.1       | model        |

You may also download these models from BaiduNetdisk.

Instructions

See GET_STARTED.md.

Citation

If you find this project useful for your research, please consider citing this paper.

@inproceedings{chen2022objseq,
  title={Obj2Seq: Formatting Objects as Sequences with Class Prompt for Visual Tasks},
  author={Zhiyang Chen and Yousong Zhu and Zhaowen Li and Fan Yang and Wei Li and Haixin Wang and Chaoyang Zhao and Liwei Wu and Rui Zhao and Jinqiao Wang and Ming Tang},
  booktitle={Advances in Neural Information Processing Systems},
  editor={Alice H. Oh and Alekh Agarwal and Danielle Belgrave and Kyunghyun Cho},
  year={2022},
  url={https://openreview.net/forum?id=cRNl08YWRKq}
}

Acknowledgement

Our repository is mainly built upon DETR, Deformable-DETR and Anchor-DETR.
