Merge branch 'master' of https://github.com/ultralytics/yolov5 into feature/DDP_fixed

yizhi.chen committed Jul 14, 2020
2 parents c9558a9 + a1c8406 commit 5bf8beb
Showing 21 changed files with 551 additions and 511 deletions.
13 changes: 13 additions & 0 deletions .github/ISSUE_TEMPLATE/-question.md
@@ -0,0 +1,13 @@
---
name: "❓Question"
about: Ask a general question
title: ''
labels: question
assignees: ''

---

## ❔Question


## Additional context
4 changes: 2 additions & 2 deletions Dockerfile
@@ -25,10 +25,10 @@ COPY . /usr/src/app
# t=ultralytics/yolov5:latest && sudo docker build -t $t . && sudo docker push $t

# Pull and Run
# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host $t bash
# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host $t

# Pull and Run with local directory access
# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/coco:/usr/src/coco $t bash
# t=ultralytics/yolov5:latest && sudo docker pull $t && sudo docker run -it --ipc=host --gpus all -v "$(pwd)"/coco:/usr/src/coco $t

# Kill all
# sudo docker kill "$(sudo docker ps -q)"
16 changes: 9 additions & 7 deletions README.md
@@ -21,7 +21,7 @@ This repository represents Ultralytics open-source research into future object detection methods
| [YOLOv5m](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 43.4 | 43.4 | 62.4 | 3.0ms | 333 || 21.8M | 39.4B
| [YOLOv5l](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 46.6 | 46.7 | 65.4 | 3.9ms | 256 || 47.8M | 88.1B
| [YOLOv5x](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | **48.4** | **48.4** | **66.9** | 6.1ms | 164 || 89.0M | 166.4B
| [YOLOv3-SPP](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) | 45.6 | 45.5 | 65.2 | 4.5ms | 222 || 63.0M | 118.0B


** AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results, all other AP results in the table denote val2017 accuracy.
@@ -54,10 +54,11 @@ $ pip install -U -r requirements.txt

Inference can be run on most common media formats. Model [checkpoints](https://drive.google.com/open?id=1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J) are downloaded automatically if available. Results are saved to `./inference/output`.
```bash
$ python detect.py --source file.jpg # image
$ python detect.py --source 0 # webcam
file.jpg # image
file.mp4 # video
./dir # directory
0 # webcam
path/ # directory
path/*.jpg # glob
rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa # rtsp stream
http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8 # http stream
```
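For programmatic use, the same loop can be driven from Python. Below is a minimal, hedged sketch built on `attempt_load` (added in this commit, see models/experimental.py) and `non_max_suppression` from `utils.utils`; the naive resize stands in for detect.py's letterboxing, and the thresholds are illustrative.

```python
# Hedged inference sketch mirroring detect.py; preprocessing is simplified.
import cv2
import torch
from models.experimental import attempt_load   # added in this commit
from utils.utils import non_max_suppression

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = attempt_load('yolov5s.pt', map_location=device)  # FP32 model

img = cv2.imread('file.jpg')[:, :, ::-1].copy()          # BGR -> RGB
img = cv2.resize(img, (640, 640))                        # detect.py letterboxes instead
x = torch.from_numpy(img).permute(2, 0, 1).float().div(255.0).unsqueeze(0).to(device)

with torch.no_grad():
    pred = model(x)[0]                                   # raw predictions
pred = non_max_suppression(pred, conf_thres=0.4, iou_thres=0.5)
print(pred)  # per-image tensors of [x1, y1, x2, y2, conf, cls]
```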
@@ -93,10 +94,11 @@ $ python train.py --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size

## Reproduce Our Environment

To access an up-to-date working environment (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled), consider a:
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

- **Google Cloud** Deep Learning VM with $300 free credit offer: See our [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Google Colab Notebook** with 12 hours of free GPU time. <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
- **Google Colab Notebook** with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>
- **Kaggle Notebook** with free GPU: [https://www.kaggle.com/ultralytics/yolov5](https://www.kaggle.com/ultralytics/yolov5)
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Docker Image** https://hub.docker.com/r/ultralytics/yolov5. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) ![Docker Pulls](https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker)


6 changes: 3 additions & 3 deletions data/coco.yaml
@@ -1,13 +1,13 @@
# COCO 2017 dataset http://cocodataset.org
# Download command: bash yolov5/data/get_coco2017.sh
# Train command: python train.py --data ./data/coco.yaml
# Dataset should be placed next to yolov5 folder:
# Train command: python train.py --data coco.yaml
# Default dataset location is next to /yolov5:
# /parent_folder
# /coco
# /yolov5


# train and val datasets (image directory or *.txt file with image paths)
# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ../coco/train2017.txt # 118k images
val: ../coco/val2017.txt # 5k images
test: ../coco/test-dev2017.txt # 20k images for submission to https://competitions.codalab.org/competitions/20794
12 changes: 6 additions & 6 deletions data/coco128.yaml
@@ -1,15 +1,15 @@
# COCO 2017 dataset http://cocodataset.org - first 128 training images
# Download command: python -c "from yolov5.utils.google_utils import gdrive_download; gdrive_download('1n_oKgR81BJtqk75b00eAjdv03qVCQn2f','coco128.zip')"
# Train command: python train.py --data ./data/coco128.yaml
# Dataset should be placed next to yolov5 folder:
# Download command: python -c "from yolov5.utils.google_utils import *; gdrive_download('1n_oKgR81BJtqk75b00eAjdv03qVCQn2f', 'coco128.zip')"
# Train command: python train.py --data coco128.yaml
# Default dataset location is next to /yolov5:
# /parent_folder
# /coco128
# /yolov5


# train and val datasets (image directory or *.txt file with image paths)
train: ../coco128/images/train2017/
val: ../coco128/images/train2017/
# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ../coco128/images/train2017/ # 128 images
val: ../coco128/images/train2017/ # 128 images

# number of classes
nc: 80
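The download command in the yaml header is itself runnable Python. A sketch, assuming it is executed from the parent folder so that `coco128/` unzips next to `yolov5/` as the relative paths above expect:

```python
# Download and unzip coco128 next to the yolov5/ folder (run from /parent_folder).
from yolov5.utils.google_utils import gdrive_download

gdrive_download('1n_oKgR81BJtqk75b00eAjdv03qVCQn2f', 'coco128.zip')
```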
5 changes: 3 additions & 2 deletions data/get_coco2017.sh
@@ -1,12 +1,13 @@
#!/bin/bash
# COCO 2017 dataset http://cocodataset.org
# Download command: bash yolov5/data/get_coco2017.sh
# Train command: python train.py --data ./data/coco.yaml
# Dataset should be placed next to yolov5 folder:
# Train command: python train.py --data coco.yaml
# Default dataset location is next to /yolov5:
# /parent_folder
# /coco
# /yolov5


# Download labels from Google Drive, accepting presented query
filename="coco2017labels.zip"
fileid="1cXZR_ckHki6nddOmcysCuuJFM--T-Q6L"
3 changes: 2 additions & 1 deletion data/get_voc.sh
@@ -1,11 +1,12 @@
# PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC/
# Download command: bash ./data/get_voc.sh
# Train command: python train.py --data voc.yaml
# Dataset should be placed next to yolov5 folder:
# Default dataset location is next to /yolov5:
# /parent_folder
# /VOC
# /yolov5


start=`date +%s`

# handle optional download dir
9 changes: 5 additions & 4 deletions data/voc.yaml
@@ -1,14 +1,15 @@
# PASCAL VOC dataset http://host.robots.ox.ac.uk/pascal/VOC/
# Download command: bash ./data/get_voc.sh
# Train command: python train.py --data voc.yaml
# Dataset should be placed next to yolov5 folder:
# Default dataset location is next to /yolov5:
# /parent_folder
# /VOC
# /yolov5

# train and val datasets (image directory or *.txt file with image paths)
train: ../VOC/images/train/
val: ../VOC/images/val/

# train and val data as 1) directory: path/images/, 2) file: path/images.txt, or 3) list: [path1/images/, path2/images/]
train: ../VOC/images/train/ # 16551 images
val: ../VOC/images/val/ # 4952 images

# number of classes
nc: 20
11 changes: 5 additions & 6 deletions detect.py
@@ -2,7 +2,7 @@

import torch.backends.cudnn as cudnn

from utils import google_utils
from models.experimental import *
from utils.datasets import *
from utils.utils import *

@@ -20,8 +20,7 @@ def detect(save_img=False):
half = device.type != 'cpu' # half precision only supported on CUDA

# Load model
google_utils.attempt_download(weights)
model = torch.load(weights, map_location=device)['model'].float().eval() # load FP32 model
model = attempt_load(weights, map_location=device) # load FP32 model
imgsz = check_img_size(imgsz, s=model.stride.max()) # check img_size
if half:
model.half() # to FP16
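A condensed, hedged sketch of this load-and-halve path; note that input tensors must be cast to FP16 to match the model:

```python
# Hedged sketch: FP16 inference is only used on CUDA devices.
import torch
from models.experimental import attempt_load

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
half = device.type != 'cpu'                   # half precision only on CUDA
model = attempt_load('yolov5s.pt', map_location=device)  # FP32 model
if half:
    model.half()                              # weights to FP16
x = torch.zeros(1, 3, 640, 640, device=device)
x = x.half() if half else x.float()           # inputs must match model dtype
```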
@@ -129,15 +128,15 @@ def detect(save_img=False):

if save_txt or save_img:
print('Results saved to %s' % os.getcwd() + os.sep + out)
if platform == 'darwin': # MacOS
if platform == 'darwin' and not opt.update: # MacOS
os.system('open ' + save_path)

print('Done. (%.3fs)' % (time.time() - t0))


if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--weights', type=str, default='weights/yolov5s.pt', help='model.pt path')
parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt', help='model.pt path(s)')
parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam
parser.add_argument('--output', type=str, default='inference/output', help='output folder') # output folder
parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
@@ -146,7 +145,7 @@ def detect(save_img=False):
parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
parser.add_argument('--view-img', action='store_true', help='display results')
parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
parser.add_argument('--classes', nargs='+', type=int, help='filter by class')
parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3')
parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
parser.add_argument('--augment', action='store_true', help='augmented inference')
parser.add_argument('--update', action='store_true', help='update all models')
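Changing `--weights` to `nargs='+'` is what exposes the new ensembling path: several checkpoints can be passed in one invocation and are folded into an `Ensemble` by `attempt_load` (models/experimental.py below). A self-contained illustration of the argparse behavior, with hypothetical checkpoint names:

```python
# Illustration only; the checkpoint names are hypothetical.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--weights', nargs='+', type=str, default='yolov5s.pt')
opt = parser.parse_args(['--weights', 'yolov5s.pt', 'yolov5x.pt'])
print(opt.weights)  # ['yolov5s.pt', 'yolov5x.pt'] -> loaded as an ensemble
```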
20 changes: 12 additions & 8 deletions hubconf.py
@@ -28,14 +28,18 @@ def create(name, pretrained, channels, classes):
pytorch model
"""
config = os.path.join(os.path.dirname(__file__), 'models', '%s.yaml' % name) # model.yaml path
model = Model(config, channels, classes)
if pretrained:
ckpt = '%s.pt' % name # checkpoint filename
google_utils.attempt_download(ckpt) # download if not found locally
state_dict = torch.load(ckpt, map_location=torch.device('cpu'))['model'].float().state_dict() # to FP32
state_dict = {k: v for k, v in state_dict.items() if model.state_dict()[k].shape == v.shape} # filter
model.load_state_dict(state_dict, strict=False) # load
return model
try:
model = Model(config, channels, classes)
if pretrained:
ckpt = '%s.pt' % name # checkpoint filename
google_utils.attempt_download(ckpt) # download if not found locally
state_dict = torch.load(ckpt, map_location=torch.device('cpu'))['model'].float().state_dict() # to FP32
state_dict = {k: v for k, v in state_dict.items() if model.state_dict()[k].shape == v.shape} # filter
model.load_state_dict(state_dict, strict=False) # load
return model
except Exception as e:
help_url = 'https://github.com/ultralytics/yolov5/issues/36'
print('%s\nCache may be out of date. Delete cache and retry. See %s for help.' % (e, help_url))


def yolov5s(pretrained=False, channels=3, classes=80):
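With this entrypoint in place, the model is loadable through torch.hub. A hedged usage sketch (repo string and entrypoint name as defined in hubconf.py above):

```python
# Hedged torch.hub usage; 'yolov5s' is the entrypoint defined above.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True, channels=3, classes=80)
model = model.eval()
```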
22 changes: 21 additions & 1 deletion models/experimental.py
@@ -1,6 +1,7 @@
# This file contains experimental modules

from models.common import *
from utils import google_utils


class CrossConv(nn.Module):
@@ -118,4 +119,23 @@ def forward(self, x, augment=False):
y = []
for module in self:
y.append(module(x, augment)[0])
return torch.cat(y, 1), None # ensembled inference output, train output
# y = torch.stack(y).max(0)[0] # max ensemble
# y = torch.cat(y, 1) # nms ensemble
y = torch.stack(y).mean(0) # mean ensemble
return y, None # inference, train output
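The three strategies differ in shape semantics: max and mean ensembling keep the box count fixed, while the commented nms ensemble concatenates every model's boxes and defers merging to NMS. A quick tensor-level illustration with dummy predictions (shapes assumed):

```python
# Dummy per-model outputs shaped (batch, boxes, 5 + classes), as in YOLOv5.
import torch

y = [torch.randn(1, 100, 85) for _ in range(3)]
mean_ensemble = torch.stack(y).mean(0)   # (1, 100, 85), the strategy used above
max_ensemble = torch.stack(y).max(0)[0]  # (1, 100, 85)
nms_ensemble = torch.cat(y, 1)           # (1, 300, 85), merged later by NMS
```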


def attempt_load(weights, map_location=None):
# Loads an ensemble of models weights=[a,b,c] or a single model weights=[a] or weights=a
model = Ensemble()
for w in weights if isinstance(weights, list) else [weights]:
google_utils.attempt_download(w)
model.append(torch.load(w, map_location=map_location)['model'].float().fuse().eval()) # load FP32 model

if len(model) == 1:
return model[-1] # return model
else:
print('Ensemble created with %s\n' % weights)
for k in ['names', 'stride']:
setattr(model, k, getattr(model[-1], k))
return model # return ensemble
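Usage follows directly from the signature: a single path returns the bare model, a list returns a mean-ensembling `Ensemble`. A hedged sketch (checkpoints are fetched by `attempt_download` if missing locally):

```python
from models.experimental import attempt_load

model = attempt_load('yolov5s.pt', map_location='cpu')                     # single model
ensemble = attempt_load(['yolov5s.pt', 'yolov5x.pt'], map_location='cpu')  # Ensemble
print(ensemble.stride)  # 'names' and 'stride' are copied from the last model
```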
5 changes: 3 additions & 2 deletions models/export.py
@@ -31,7 +31,7 @@
# TorchScript export
try:
print('\nStarting TorchScript export with torch %s...' % torch.__version__)
f = opt.weights.replace('.pt', '.torchscript') # filename
f = opt.weights.replace('.pt', '.torchscript.pt') # filename
ts = torch.jit.trace(model, img)
ts.save(f)
print('TorchScript export success, saved as %s' % f)
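Keeping the `.pt` suffix on the TorchScript file lets downstream tooling treat it as a PyTorch artifact; it reloads without the `models/` source. A hedged sketch (export shape assumed to be the 640 default):

```python
# Reload the TorchScript export; no model source code is required.
import torch

ts = torch.jit.load('yolov5s.torchscript.pt')
img = torch.zeros(1, 3, 640, 640)  # assumed export shape
pred = ts(img)
```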
@@ -61,7 +61,8 @@
import coremltools as ct

print('\nStarting CoreML export with coremltools %s...' % ct.__version__)
model = ct.convert(ts, inputs=[ct.ImageType(name='images', shape=img.shape)]) # convert
# convert model from torchscript and apply pixel scaling as per detect.py
model = ct.convert(ts, inputs=[ct.ImageType(name='images', shape=img.shape, scale=1 / 255.0, bias=[0, 0, 0])])
f = opt.weights.replace('.pt', '.mlmodel') # filename
model.save(f)
print('CoreML export success, saved as %s' % f)
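Baking `scale` and `bias` into the `ImageType` means the `.mlmodel` accepts raw 0-255 pixels and applies the x/255 normalization internally, matching detect.py. A hedged reload sketch using the coremltools API:

```python
# Hedged sketch: reload the saved CoreML model and inspect its input spec.
import coremltools as ct

mlmodel = ct.models.MLModel('yolov5s.mlmodel')
print(mlmodel)  # the 'images' input now expects raw pixels; scaling is built in
```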
35 changes: 20 additions & 15 deletions models/yolo.py
@@ -1,4 +1,5 @@
import argparse
from copy import deepcopy

from models.experimental import *

@@ -43,20 +44,21 @@ def _make_grid(nx=20, ny=20):


class Model(nn.Module):
def __init__(self, model_cfg='yolov5s.yaml', ch=3, nc=None): # model, input channels, number of classes
def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None): # model, input channels, number of classes
super(Model, self).__init__()
if type(model_cfg) is dict:
self.md = model_cfg # model dict
if isinstance(cfg, dict):
self.yaml = cfg # model dict
else: # is *.yaml
import yaml # for torch hub
with open(model_cfg) as f:
self.md = yaml.load(f, Loader=yaml.FullLoader) # model dict
self.yaml_file = Path(cfg).name
with open(cfg) as f:
self.yaml = yaml.load(f, Loader=yaml.FullLoader) # model dict

# Define model
if nc and nc != self.md['nc']:
print('Overriding %s nc=%g with nc=%g' % (model_cfg, self.md['nc'], nc))
self.md['nc'] = nc # override yaml value
self.model, self.save = parse_model(self.md, ch=[ch]) # model, savelist, ch_out
if nc and nc != self.yaml['nc']:
print('Overriding %s nc=%g with nc=%g' % (cfg, self.yaml['nc'], nc))
self.yaml['nc'] = nc # override yaml value
self.model, self.save = parse_model(deepcopy(self.yaml), ch=[ch]) # model, savelist, ch_out
# print([x.shape for x in self.forward(torch.zeros(1, ch, 64, 64))])

# Build strides, anchors
@@ -72,8 +74,7 @@ def __init__(self, model_cfg='yolov5s.yaml', ch=3, nc=None): # model, input channels, number of classes

# Init weights, biases
torch_utils.initialize_weights(self)
self._initialize_biases() # only run once
torch_utils.model_info(self)
self.info()
print('')

def forward(self, x, augment=False, profile=False):
@@ -148,17 +149,21 @@ def fuse(self): # fuse model Conv2d() + BatchNorm2d() layers
m.conv = torch_utils.fuse_conv_and_bn(m.conv, m.bn) # update conv
m.bn = None # remove batchnorm
m.forward = m.fuseforward # update forward
torch_utils.model_info(self)
self.info()
return self

def parse_model(md, ch): # model_dict, input_channels(3)
def info(self): # print model information
torch_utils.model_info(self)


def parse_model(d, ch): # model_dict, input_channels(3)
print('\n%3s%18s%3s%10s %-40s%-30s' % ('', 'from', 'n', 'params', 'module', 'arguments'))
anchors, nc, gd, gw = md['anchors'], md['nc'], md['depth_multiple'], md['width_multiple']
anchors, nc, gd, gw = d['anchors'], d['nc'], d['depth_multiple'], d['width_multiple']
na = (len(anchors[0]) // 2) # number of anchors
no = na * (nc + 5) # number of outputs = anchors * (classes + 5)

layers, save, c2 = [], [], ch[-1] # layers, savelist, ch out
for i, (f, n, m, args) in enumerate(md['backbone'] + md['head']): # from, number, module, args
for i, (f, n, m, args) in enumerate(d['backbone'] + d['head']): # from, number, module, args
m = eval(m) if isinstance(m, str) else m # eval strings
for j, a in enumerate(args):
try:
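The rename from `model_cfg`/`self.md` to `cfg`/`self.yaml`, plus the `deepcopy`, lets `parse_model` consume its dict argument without mutating the copy stored on the module. Construction works from either a yaml path or a parsed dict; a hedged sketch:

```python
# Hedged construction sketch; nc overrides the yaml's class count.
import yaml
from models.yolo import Model

model = Model('models/yolov5s.yaml', ch=3, nc=80)  # from yaml path
with open('models/yolov5s.yaml') as f:
    cfg = yaml.load(f, Loader=yaml.FullLoader)     # dict, as in the torch hub path
model20 = Model(cfg, ch=3, nc=20)                  # prints the nc override note
model20.info()                                     # new info() helper
```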
2 changes: 1 addition & 1 deletion requirements.txt
@@ -2,7 +2,7 @@
Cython
numpy==1.17
opencv-python
torch>=1.4
torch>=1.5.1
matplotlib
pillow
tensorboard