This repository has been archived by the owner on Dec 15, 2021. It is now read-only.

Commit

enable inference on cpu (facebookresearch#821)
Inference can now run on the CPU by setting MODEL.DEVICE to cpu.
techkang authored and fmassa committed May 25, 2019
1 parent 0b2a075 commit a5f77c0
Showing 1 changed file with 2 additions and 1 deletion.
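
The MODEL.DEVICE key named in the commit message belongs to the project's yacs-based configuration. A minimal usage sketch, assuming the standard maskrcnn-benchmark config workflow (the YAML path below is only illustrative, not taken from this commit):

    # Sketch: request CPU inference via the yacs config object that
    # maskrcnn-benchmark exposes. The config file path is an example.
    from maskrcnn_benchmark.config import cfg

    cfg.merge_from_file("configs/e2e_mask_rcnn_R_50_FPN_1x.yaml")
    cfg.merge_from_list(["MODEL.DEVICE", "cpu"])  # the override this commit makes usable for inference
    cfg.freeze()

The same override can also be appended as trailing KEY VALUE arguments to the repository's test script, which forwards them to cfg.merge_from_list.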
3 changes: 2 additions & 1 deletion maskrcnn_benchmark/engine/inference.py
@@ -29,7 +29,8 @@ def compute_on_dataset(model, data_loader, device, timer=None):
             else:
                 output = model(images.to(device))
             if timer:
-                torch.cuda.synchronize()
+                if not cfg.MODEL.DEVICE == 'cpu':
+                    torch.cuda.synchronize()
                 timer.toc()
             output = [o.to(cpu_device) for o in output]
         results_dict.update(
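
For context on the guard above: torch.cuda.synchronize() fails on CPU-only builds of PyTorch and is unnecessary when the model runs on the CPU, so it has to be skipped before stopping the timer. A self-contained sketch of the resulting pattern, with names chosen for illustration rather than copied from compute_on_dataset:

    import torch

    def timed_forward(model, images, device, timer=None):
        # Run one forward pass and time it. CUDA kernels launch
        # asynchronously, so synchronize before reading the timer,
        # but only when a GPU device is actually in use.
        cpu_device = torch.device("cpu")
        with torch.no_grad():
            if timer:
                timer.tic()
            output = model(images.to(device))
            if timer:
                if str(device) != "cpu":
                    torch.cuda.synchronize()
                timer.toc()
        return [o.to(cpu_device) for o in output]

Here timer is assumed to expose tic()/toc() methods, matching the calls visible in the diff.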

