
Questions About the feature_visualization Function #3914

Closed
Zengyf-CVer opened this issue Jul 7, 2021 · 2 comments · Fixed by #3920
Labels
question Further information is requested

Comments

Zengyf-CVer (Contributor) commented Jul 7, 2021

@glenn-jocher

This is my first question:

I tried the feature_visualization function and found some problems.
First, when I run the following detection command, two directories are generated in runs/features: exp and exp2, as shown in the screenshot:

python detect.py --weights ./weight/yolov5s.pt --source ./train2017/000000000081.jpg

[screenshot: runs/features containing exp and exp2]
Shell output:
[screenshot: shell output from detect.py]

The screenshot above corresponds to the first set of saved files, in exp.
The shell output also shows a second pass that processes the input image, as shown below:
[screenshot: shell output for the input image]

So I checked the specific feature maps that were generated:
In exp: stage_8_SPP_features
In exp2: stage_8_SPP_features
What I want to ask is: why are the two directories exp and exp2 generated at the same time, and what does each of them represent?

This is my second question:

I checked the feature_visualization function in plots.py; two lines in particular stand out:

blocks = torch.chunk(x, channels, dim=1)  # block by channel dimension
feature = transforms.ToPILImage()(blocks[i].squeeze())
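For context, a minimal sketch of how these two lines could combine into a per-channel feature dump (the function signature, save path, file naming, and normalization step are assumptions for illustration, not the actual plots.py code):

```python
from pathlib import Path

import torch
from torchvision import transforms

def feature_visualization_sketch(x, module_type, stage, save_dir='runs/features/exp', max_n=32):
    # x: feature tensor of shape (batch, channels, height, width); assumes batch size 1
    Path(save_dir).mkdir(parents=True, exist_ok=True)
    channels = x.shape[1]
    blocks = torch.chunk(x, channels, dim=1)  # one (batch, 1, h, w) block per channel
    for i in range(min(channels, max_n)):     # save at most max_n channels
        b = blocks[i].squeeze()                          # -> (h, w)
        b = (b - b.min()) / (b.max() - b.min() + 1e-6)   # scale to [0, 1] so ToPILImage yields a valid 8-bit image
        transforms.ToPILImage()(b).save(Path(save_dir) / f'stage_{stage}_{module_type.split(".")[-1]}_ch{i}.png')
```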

I am not sure whether the two generated directories exp and exp2 are related to the module type (m.type) checked here:

yolov5/models/yolo.py

Lines 158 to 159 in 33202b7

if feature_vis and m.type == 'models.common.SPP':
    feature_visualization(x, m.type, m.i)  # save feature maps for this layer

What other module types can be used here besides models.common.SPP?
I am not familiar with the specific architecture of YOLOv5. Is there an official architecture diagram, something like the one below?

[image: unofficial YOLOv5 architecture diagram]
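For reference, one quick way to list every value m.type can take for a loaded model (a sketch; it assumes the model loads via torch.hub and that each parsed layer carries the i and type attributes stamped in models/yolo.py):

```python
import torch

# load pretrained YOLOv5s from the Ultralytics hub (downloads weights on first run)
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# parse_model() stamps each top-level layer with its index (i) and module path (type);
# printing them lists every value the `m.type == 'models.common.SPP'` check could match
for m in model.modules():
    if hasattr(m, 'i') and isinstance(m.type, str):  # only layers stamped by parse_model
        print(m.i, m.type)  # e.g. 8 models.common.SPP
```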

Looking forward to your reply, thank you very much.

Zengyf-CVer added the question label Jul 7, 2021
github-actions bot commented Jul 7, 2021

👋 Hello @Zengyf-CVer, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

[badge: CI CPU testing]

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher (Member) commented Jul 7, 2021

@Zengyf-CVer the first set of features is created when the model is run once to initialize the GPU, so that speeds are consistent on later images:

yolov5/detect.py

Lines 90 to 92 in 33202b7

# Run inference
if device.type != 'cpu':
    model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters())))  # run once
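So the warm-up pass on the dummy zero tensor produces the first feature dump (exp), and the real image produces the second (exp2), because each forward pass saves into a freshly incremented directory. A minimal sketch of that incrementing idea (the helper name and base path are illustrative, not the exact utils implementation):

```python
from pathlib import Path

def next_feature_dir(base='runs/features/exp'):
    """Return 'exp' if unused, otherwise 'exp2', 'exp3', ... (illustrative only)."""
    if not Path(base).exists():
        return Path(base)
    n = 2
    while Path(f'{base}{n}').exists():
        n += 1
    return Path(f'{base}{n}')

# warm-up pass -> runs/features/exp, real image -> runs/features/exp2
```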

We've made visualizing YOLOv5 🚀 architectures super easy. There are two main ways:

model.yaml

Each model has a corresponding yaml file that displays the model architecture. Here is YOLOv5s, defined by yolov5s.yaml:

# YOLOv5 backbone
backbone:
# [from, number, module, args]
[[-1, 1, Focus, [64, 3]], # 0-P1/2
[-1, 1, Conv, [128, 3, 2]], # 1-P2/4
[-1, 3, C3, [128]],
[-1, 1, Conv, [256, 3, 2]], # 3-P3/8
[-1, 9, C3, [256]],
[-1, 1, Conv, [512, 3, 2]], # 5-P4/16
[-1, 9, C3, [512]],
[-1, 1, Conv, [1024, 3, 2]], # 7-P5/32
[-1, 1, SPP, [1024, [5, 9, 13]]],
[-1, 3, C3, [1024, False]], # 9
]
# YOLOv5 head
head:
[[-1, 1, Conv, [512, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 6], 1, Concat, [1]], # cat backbone P4
[-1, 3, C3, [512, False]], # 13
[-1, 1, Conv, [256, 1, 1]],
[-1, 1, nn.Upsample, [None, 2, 'nearest']],
[[-1, 4], 1, Concat, [1]], # cat backbone P3
[-1, 3, C3, [256, False]], # 17 (P3/8-small)
[-1, 1, Conv, [256, 3, 2]],
[[-1, 14], 1, Concat, [1]], # cat head P4
[-1, 3, C3, [512, False]], # 20 (P4/16-medium)
[-1, 1, Conv, [512, 3, 2]],
[[-1, 10], 1, Concat, [1]], # cat head P5
[-1, 3, C3, [1024, False]], # 23 (P5/32-large)
[[17, 20, 23], 1, Detect, [nc, anchors]], # Detect(P3, P4, P5)
]
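For a programmatic view of the same architecture, the yaml can be parsed directly. A short sketch, assuming it is run from inside a clone of the repo (argument names per models/yolo.py):

```python
# run from inside a clone of the yolov5 repo
from models.yolo import Model

model = Model('models/yolov5s.yaml', ch=3, nc=80)  # parse_model() logs a layer-by-layer summary table
print(model)                                       # standard PyTorch printout of the parsed network
```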

TensorBoard Graph

Simply start training a model, and then view the TensorBoard Graph for an interactive view of the model architecture. This example shows YOLOv5s viewed in our notebook (Open In Colab / Open In Kaggle):

# Tensorboard
%load_ext tensorboard
%tensorboard --logdir runs/train

# Train YOLOv5s on COCO128 for 3 epochs
!python train.py --weights yolov5s.pt --epochs 3

[screenshot: TensorBoard graph of YOLOv5s]

glenn-jocher linked a pull request Jul 7, 2021 that will close this issue