
Merge pull request #1566 from open-mmlab/dev-1.x
bump version to v1.0.0rc5
liuwenran authored Jan 4, 2023
2 parents 1735f5c + 740757f commit e7c0539
Showing 510 changed files with 19,283 additions and 5,513 deletions.
10 changes: 7 additions & 3 deletions .circleci/config.yml
Original file line number Diff line number Diff line change
@@ -23,11 +23,15 @@ workflows:
mmedit/.* lint_only false
requirements/.* lint_only false
tests/.* lint_only false
tools/.* lint_only false
configs/.* lint_only false
.circleci/.* lint_only false
tools/.* lint_only true
configs/.* lint_only true
docs/.* lint_only true
.dev_scripts/.* lint_only true
base-revision: 1.x
.github/.* lint_only true
demo/.* lint_only true
projects/.* lint_only true
base-revision: dev-1.x
# this is the path of the configuration we should trigger once
# path filtering and pipeline parameter value updates are
# complete. In this case, we are using the parent dynamic
16 changes: 4 additions & 12 deletions .circleci/test.yml
@@ -59,13 +59,11 @@ jobs:
- run:
name: Install mmediting dependencies
command: |
pip install 'opencv-python!=4.7.0.68'
pip install git+https://github.com/open-mmlab/mmengine.git@main
pip install -U openmim
mim install 'mmcv >= 2.0.0rc1'
mim install 'mmdet >= 3.0.0rc2'
pip install -r requirements/tests.txt
pip install git+https://github.com/openai/CLIP.git
pip install imageio-ffmpeg
- run:
name: Build and install
command: |
@@ -105,13 +103,11 @@ jobs:
- run:
name: Install mmedit dependencies
command: |
docker exec mmedit pip install 'opencv-python!=4.7.0.68'
docker exec mmedit pip install -e /mmengine
docker exec mmedit pip install -U openmim
docker exec mmedit mim install 'mmcv >= 2.0.0rc1'
docker exec mmedit mim install 'mmdet >= 3.0.0rc2'
docker exec mmedit pip install -r requirements/tests.txt
docker exec mmedit pip install git+https://github.com/openai/CLIP.git
docker exec mmedit pip install imageio-ffmpeg
- run:
name: Build and install
command: |
@@ -131,7 +127,6 @@ workflows:
branches:
ignore:
- dev-1.x
- test-1.x
- 1.x
pr_stage_test:
when:
@@ -144,7 +139,6 @@ workflows:
branches:
ignore:
- dev-1.x
- test-1.x
- 1.x
- build_cpu:
name: minimum_version_cpu
@@ -155,8 +149,8 @@ workflows:
- lint
- build_cpu:
name: maximum_version_cpu
torch: 1.12.1
torchvision: 0.13.1
torch: 1.13.0
torchvision: 0.14.0
python: 3.9.0
requires:
- minimum_version_cpu
@@ -187,5 +181,3 @@ workflows:
branches:
only:
- dev-1.x
- test-1.x
- 1.x
34 changes: 27 additions & 7 deletions .dev_scripts/README.md
@@ -1,15 +1,19 @@
# Scripts for developing MMEditing

- [1. Check UT](#check-ut)
- [2. Test all the models](#test-benchmark)
- [1. Check UT](#1-check-ut)
- [2. Test all the models](#2-test-all-the-models)
- [3. Train all the models](#3-train-all-the-models)
- [3.1 Train for debugging](#31-train-for-debugging)
- [3.2 Train for FP32](#32-train-for-fp32)
- [3.3 Train for FP16](#33-train-for-fp16)
- [4. Monitor your training](#4-monitor-your-training)
- [5. Train with a list of models](#5-train-with-a-list-of-models)
- [6. Train with skipping a list of models](#6-train-with-skipping-a-list-of-models)
- [7. Automatically check links](#automatically-check-links)
- [7. Train failed or canceled jobs](#7-train-failed-or-canceled-jobs)
- [8. Deterministic training](#8-deterministic-training)
- [9. Automatically check links](#9-automatically-check-links)
- [10. Calculate flops](#10-calculate-flops)
- [11. Update model index](#11-update-model-index)

## 1. Check UT

@@ -224,12 +228,28 @@ python .dev_scripts/train_benchmark.py mm_lol --job-name xzn --models pix2pix --
Use the following script to check whether the links in the documentation are valid:

```shell
python3 .github/scripts/doc_link_checker.py --target docs/zh_cn
python3 .github/scripts/doc_link_checker.py --target README_zh-CN.md
python3 .github/scripts/doc_link_checker.py --target docs/en
python3 .github/scripts/doc_link_checker.py --target README.md
python .dev_scripts/doc_link_checker.py --target docs/zh_cn
python .dev_scripts/doc_link_checker.py --target README_zh-CN.md
python .dev_scripts/doc_link_checker.py --target docs/en
python .dev_scripts/doc_link_checker.py --target README.md
```

You can set `--target` to either a file or a directory.

**Notes:** DO NOT use this in CI: issuing too many HTTP requests from CI will cause 503 errors, and the CI job will probably fail.
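Under the hood, the checker finds markdown links with a regex of the form `\[.*?\]\(.*?\)` (the pattern defined in `.dev_scripts/doc_link_checker.py`). A minimal standalone sketch of that extraction step, with an illustrative input string:

```python
import re

# Same pattern the checker uses to find markdown links: [text](target)
LINK_PATTERN = re.compile(r'\[.*?\]\(.*?\)')

def extract_link_targets(markdown_text):
    """Return the (...) targets of all markdown links in the text."""
    targets = []
    for item in LINK_PATTERN.findall(markdown_text):
        start = item.find('](')          # boundary between [text] and (target)
        end = item.find(')', start)      # closing parenthesis of the target
        targets.append(item[start + 2:end])
    return targets

print(extract_link_targets('See [docs](docs/en/index.md) and [site](https://example.com).'))
# prints: ['docs/en/index.md', 'https://example.com']
```

Each extracted target is then classified (HTTP URL, `#anchor`, or local path) and checked accordingly.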

## 10. Calculate flops

To summarize the FLOPs of different models, you can run the following command:

```bash
python .dev_scripts/benchmark_valid_flop.py --flops --flops-str
```

## 11. Update model index

To update the model index according to `README.md`, please run the following command:

```bash
python .dev_scripts/update_model_index.py
```
46 changes: 37 additions & 9 deletions .dev_scripts/doc_link_checker.py
@@ -1,28 +1,31 @@
# Copyright (c) MegFlow. All rights reserved.
# Copyright (c) OpenMMLab. All rights reserved.
# /bin/python3

import argparse
import os
import re

import requests
from tqdm import tqdm


def make_parser():
parser = argparse.ArgumentParser('Doc link checker')
parser.add_argument(
'--http', default=False, type=bool, help='check http or not ')
parser.add_argument(
'--target',
default='./docs',
type=str,
help='the directory or file to check')
parser.add_argument(
'--ignore', type=str, nargs='+', default=[], help='paths to ignore')
return parser


pattern = re.compile(r'\[.*?\]\(.*?\)')


def analyze_doc(home, path):
print('analyze {}'.format(path))
problem_list = []
code_block = 0
with open(path) as f:
@@ -51,11 +54,31 @@ def analyze_doc(home, path):
end = item.find(')')
ref = item[start + 1:end]

if ref.startswith('http') or ref.startswith('#'):
if ref.startswith('http'):
if ref.startswith(
'https://download.openmmlab.com/'
) or ref.startswith('http://download.openmmlab.com/'):
resp = requests.head(ref)
if resp.status_code == 200:
continue
else:
problem_list.append(ref)
else:
continue

if ref.startswith('#'):
continue

if ref == '<>':
continue

if '.md#' in ref:
ref = ref[ref.find('#'):]
fullpath = os.path.join(home, ref)
ref = ref[:ref.find('#')]
if ref.startswith('/'):
fullpath = os.path.join(
os.path.dirname(__file__), '../', ref[1:])
else:
fullpath = os.path.join(home, ref)
if not os.path.exists(fullpath):
problem_list.append(ref)
else:
@@ -68,11 +91,16 @@ def traverse(target):
raise Exception('found link error')


def traverse(target):
def traverse(args):
target = args.target
if os.path.isfile(target):
analyze_doc(os.path.dirname(target), target)
return
for home, dirs, files in os.walk(target):
target_files = list(os.walk(target))
target_files.sort()
for home, dirs, files in tqdm(target_files):
if home in args.ignore:
continue
for filename in files:
if filename.endswith('.md'):
path = os.path.join(home, filename)
@@ -82,4 +110,4 @@ def traverse(target):

if __name__ == '__main__':
args = make_parser().parse_args()
traverse(args.target)
traverse(args)
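The path-resolution rules this diff adds to `analyze_doc` (strip a `#section` anchor, resolve a leading-`/` link against the repo root, and everything else against the current file's directory) can be sketched standalone. The `repo_root` parameter here is a hypothetical stand-in for the `os.path.join(os.path.dirname(__file__), '../')` computation in the script:

```python
import os.path as osp

def resolve_ref(home, repo_root, ref):
    """Resolve a markdown link target to a filesystem path, mirroring the
    checker's rules: drop any '#anchor' part, treat a leading '/' as
    repo-root-relative, and everything else as relative to `home`."""
    if '#' in ref:                # e.g. 'guide.md#install' -> 'guide.md'
        ref = ref[:ref.find('#')]
    if ref.startswith('/'):       # repo-absolute link
        return osp.normpath(osp.join(repo_root, ref[1:]))
    return osp.normpath(osp.join(home, ref))

print(resolve_ref('docs/en', '.', 'guide.md#install'))
```

The resolved path is then tested with `os.path.exists`; anything missing lands in `problem_list`.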
8 changes: 1 addition & 7 deletions .dev_scripts/download_models.py
@@ -74,9 +74,7 @@ def download(args):
model_index.build_models_with_collections()
models = OrderedDict({model.name: model for model in model_index.models})

http_prefix_long = 'https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmediting/' # noqa
http_prefix_short = 'https://download.openmmlab.com/mmediting/'
http_prefix_gen = 'https://download.openmmlab.com/mmgen/'

# load model list
if args.model_list:
@@ -109,12 +107,8 @@

model_weight_url = model_info.weights

if model_weight_url.startswith(http_prefix_long):
model_name = model_weight_url[len(http_prefix_long):]
elif model_weight_url.startswith(http_prefix_short):
if model_weight_url.startswith(http_prefix_short):
model_name = model_weight_url[len(http_prefix_short):]
elif model_weight_url.startswith(http_prefix_gen):
model_name = model_weight_url[len(http_prefix_gen):]
elif model_weight_url == '':
print(f'{model_info.Name} weight is missing')
return None
8 changes: 1 addition & 7 deletions .dev_scripts/test_benchmark.py
@@ -99,16 +99,10 @@ def create_test_job_batch(commands, model_info, args, port, script_name):
assert config.exists(), f'{fname}: {config} not found.'

http_prefix_short = 'https://download.openmmlab.com/mmediting/'
http_prefix_long = 'https://openmmlab-share.oss-cn-hangzhou.aliyuncs.com/mmediting/' # noqa
http_prefix_gen = 'https://download.openmmlab.com/mmgen/'
model_weight_url = model_info.weights

if model_weight_url.startswith(http_prefix_long):
model_name = model_weight_url[len(http_prefix_long):]
elif model_weight_url.startswith(http_prefix_short):
if model_weight_url.startswith(http_prefix_short):
model_name = model_weight_url[len(http_prefix_short):]
elif model_weight_url.startswith(http_prefix_gen):
model_name = model_weight_url[len(http_prefix_gen):]
elif model_weight_url == '':
print(f'{fname} weight is missing')
return None
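After this simplification, deriving a model name reduces to stripping the one remaining known prefix from the weight URL. A minimal sketch of that branch (the prefix is the real `download.openmmlab.com/mmediting/` constant from the diff; the example weight path is illustrative):

```python
HTTP_PREFIX_SHORT = 'https://download.openmmlab.com/mmediting/'

def model_name_from_url(model_weight_url):
    """Strip the known download prefix to get the relative model path,
    mirroring the simplified logic in test_benchmark.py."""
    if model_weight_url.startswith(HTTP_PREFIX_SHORT):
        return model_weight_url[len(HTTP_PREFIX_SHORT):]
    if model_weight_url == '':
        return None  # weight is missing; the script prints a warning and skips
    raise ValueError(f'unknown weight url: {model_weight_url}')

print(model_name_from_url(HTTP_PREFIX_SHORT + 'pix2pix/pix2pix_facades.pth'))
# prints: pix2pix/pix2pix_facades.pth
```

Dropping the `openmmlab-share` and `mmgen` prefixes is what removes the two `elif` branches in both `download_models.py` and `test_benchmark.py`.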
23 changes: 18 additions & 5 deletions .dev_scripts/update_model_index.py
@@ -41,7 +41,7 @@ def dump_yaml_and_check_difference(obj, file):

if osp.isfile(file):
file_exists = True
print(f' exist {file}')
# print(f' exist {file}')
with open(file, 'r', encoding='utf-8') as f:
str_orig = f.read()
else:
@@ -144,20 +144,26 @@ def parse_md(md_file):
Name=collection_name,
Metadata={'Architecture': []},
README=readme,
Paper=[])
Paper=[],
Task=[],
Year=0,
)
models = []
# force utf-8 instead of system defined
with open(md_file, 'r', encoding='utf-8') as md:
lines = md.readlines()
i = 0
name = lines[0][2:]
year = re.sub('[^0-9]', '', name.split('(', 1)[-1])
name = name.split('(', 1)[0].strip()
collection['Metadata']['Architecture'].append(name)
collection['Name'] = name
collection_name = name
is_liif = collection_name.upper() == 'LIIF'
task_line = lines[4]
task = task_line.strip().split(':')[-1].strip()
collection['Task'] = task.lower().split(', ')
collection['Year'] = int(year)
while i < len(lines):
# parse reference
if lines[i].startswith('> ['):
@@ -177,16 +183,19 @@ def parse_md(md_file):
# import ipdb
# ipdb.set_trace()
if 'Config' not in cols and 'Download' not in cols:
warnings.warn(f"Lack 'Config' or 'Download' in line {i+1}")
warnings.warn("Lack 'Config' or 'Download' in"
f'line {i+1} in {md_file}')
i += 1
continue

if 'Method' in cols:
config_idx = cols.index('Method')
elif 'Config' in cols:
config_idx = cols.index('Config')
else:
print(cols)
raise ValueError('Cannot find config Table.')

checkpoint_idx = cols.index('Download')
try:
flops_idx = cols.index('FLOPs')
@@ -210,6 +219,8 @@ def parse_md(md_file):
left = line[config_idx].index('](') + 2
right = line[config_idx].index(')', left)
config = line[config_idx][left:right].strip('./')
config = osp.join(
osp.dirname(md_file), osp.basename(config))
elif line[config_idx].find('△') == -1:
j += 1
continue
@@ -315,7 +326,7 @@ def parse_md(md_file):
i += 1

if len(models) == 0:
warnings.warn('no model is found in this md file')
warnings.warn(f'no model is found in {md_file}')

result = {'Collections': [collection], 'Models': models}
yml_file = md_file.replace('README.md', 'metafile.yml')
@@ -363,9 +374,11 @@ def update_model_index():
sys.exit(0)

file_modified = False
# pbar = tqdm.tqdm(range(len(file_list)), initial=0, dynamic_ncols=True)
for fn in file_list:
print(f'process {fn}')
file_modified |= parse_md(fn)
# pbar.update(1)
# pbar.set_description(f'processing {fn}')

file_modified |= update_model_index()

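The name/year extraction this diff adds to `parse_md` operates on README title lines like `# Pix2Pix (CVPR'2017)` (the example title here is illustrative). A standalone sketch of that step:

```python
import re

def parse_title(title_line):
    """Split a README title like "# Pix2Pix (CVPR'2017)" into the
    architecture name and the publication year, as update_model_index.py does."""
    name = title_line[2:]                                # drop the leading '# '
    year = re.sub('[^0-9]', '', name.split('(', 1)[-1])  # digits after the '('
    name = name.split('(', 1)[0].strip()                 # text before the '('
    return name, int(year)

print(parse_title("# Pix2Pix (CVPR'2017)"))
# prints: ('Pix2Pix', 2017)
```

The name feeds `Metadata.Architecture` and `Name`, while the year fills the new `Year` field of the collection.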
