This repository has been archived by the owner on Dec 15, 2021. It is now read-only.

Fix typo for shapes_dataset_demo.ipynb (facebookresearch#885)
Fix typo, highlight words in python language, add hyperlink
keineahnung2345 authored and fmassa committed Jun 12, 2019
1 parent 657f280 commit 04b5e8c
Showing 1 changed file with 26 additions and 26 deletions.
52 changes: 26 additions & 26 deletions demo/shapes_dataset_demo.ipynb
@@ -928,7 +928,7 @@
"source": [
"### Checking our Installation\n",
"\n",
"If a module not found error appears, restart the runtime. The libraries should be loaded after restarting"
"If a `Module not found` error appears, restart the runtime. The libraries should be loaded after restarting"
]
},
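In case the check itself is not visible in this diff, a minimal installation check might look like the sketch below. This is a hedged stand-in for illustration, not the notebook's actual cell:

```python
# Minimal sanity check that the libraries built above are importable.
import torch
import torchvision
import maskrcnn_benchmark

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```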
{
@@ -1035,21 +1035,21 @@
"source": [
"# Loading Our Dataset\n",
"\n",
"To train a network using the MaskRCNN repo, we first need to define our dataset. The dataset needs to a class of type object and should extend 3 things. \n",
"To train a network using the `facebookresearch/maskrcnn-benchmark` repo, we first need to define our dataset. The dataset needs to be a subclass of `object` and should implement 6 things. \n",
"\n",
"1. **__getitem__(self, idx)**: This function should return a PIL Image, a BoxList and the idx. The Boxlist is an abstraction for our bounding boxes, segmentation masks, class lables and also people keypoints. Please check ABSTRACTIONS.ms for more details on this. \n",
"1. `__getitem__(self, idx)`: This function should return a PIL Image, a BoxList and the idx. The Boxlist is an abstraction for our bounding boxes, segmentation masks, class labels and also people keypoints. Please check [ABSTRACTIONS.md](https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/ABSTRACTIONS.md) for more details on this. \n",
"\n",
"2. **__len__()**: returns the length of the dataset. \n",
"2. `__len__()`: returns the length of the dataset. \n",
"\n",
"3. **get_img_info(self, idx)**: Return a dict of img info with the fields \"height\" and \"width\" filled in with the idx's image's height and width.\n",
"3. `get_img_info(self, idx)`: Return a dict of img info with the fields \"height\" and \"width\" filled in with the idx's image's height and width.\n",
"\n",
"4. **self.coco**: Should be a variable that holds the COCO object for your annotations so that you can perform evaluations of your dataset. \n",
"4. `self.coco`: Should be a variable that holds the COCO object for your annotations so that you can perform evaluations of your dataset. \n",
"\n",
"5. **self.id_to_img_map**: Is a dictionary that maps the ids to coco image ids. Almost in all cases just map the idxs to idxs. This is simply a requirement for the coco evaluation. \n",
"5. `self.id_to_img_map`: Is a dictionary that maps the ids to coco image ids. Almost in all cases just map the idxs to idxs. This is simply a requirement for the coco evaluation. \n",
"\n",
"6. **self.contiguous_category_id_to_json_id**: Another requirement for coco evaluation. It maps the categpry to json category id. Again, for almost all purposes category id and json id should be same. \n",
"6. `self.contiguous_category_id_to_json_id`: Another requirement for coco evaluation. It maps the categpry to json category id. Again, for almost all purposes category id and json id should be same. \n",
"\n",
"Given below is a sample fo a dataset. It is the Shape Dataset taken from the Matterport Mask RCNN Repo. One important detail is that the constructor if the dataset should have the variable transforms that is set inside the constructor. It should thgen be used inside **__get__item(idx)** as shown below."
"Given below is a sample fo a dataset. It is the Shape Dataset taken from the [Matterport/Mask_RCNN](https://github.com/matterport/Mask_RCNN) Repo. One important detail is that the constructor of the dataset should have the variable `transforms`, which is set inside the constructor. It should then be used in `__getitem__(self, idx)` as shown below."
]
},
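As a quick orientation before the full Shape Dataset, here is a minimal sketch of the interface described above. The class name, dummy image and box values are placeholders for illustration; only `BoxList` (from `maskrcnn_benchmark.structures.bounding_box`) is the repo's actual abstraction.

```python
import torch
from PIL import Image
from maskrcnn_benchmark.structures.bounding_box import BoxList

class MinimalDataset(object):  # hypothetical name; any object subclass works
    """Bare-bones illustration of the six members listed above."""

    def __init__(self, transforms=None):
        self.transforms = transforms
        # Bookkeeping used only by the COCO-style evaluation:
        self.coco = None  # would hold a pycocotools COCO object for your annotations
        self.id_to_img_map = {0: 0}                      # usually just idx -> idx
        self.contiguous_category_id_to_json_id = {1: 1}  # usually just category id -> category id

    def __getitem__(self, idx):
        # One dummy 128x128 image with a single box of class 1
        image = Image.new("RGB", (128, 128))
        boxlist = BoxList(torch.tensor([[10.0, 10.0, 60.0, 60.0]]), image.size, mode="xyxy")
        boxlist.add_field("labels", torch.tensor([1]))

        if self.transforms:
            image, boxlist = self.transforms(image, boxlist)
        return image, boxlist, idx

    def __len__(self):
        return 1

    def get_img_info(self, idx):
        return {"height": 128, "width": 128}
```

The Shape Dataset below fills in the same members with generated shapes, masks and COCO-style annotations.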
{
@@ -1160,7 +1160,7 @@
" self.image_info = []\n",
" self.logger = logging.getLogger(__name__)\n",
" \n",
" # Class Names: Note that the ids start fromm 1 not 0. This repo uses the 0 index for background\n",
" # Class Names: Note that the ids start from 1, not 0. This repo uses the index 0 for background\n",
" self.class_names = {\"square\": 1, \"circle\": 2, \"triangle\": 3}\n",
" \n",
" # Add images\n",
@@ -1188,7 +1188,7 @@
" def random_shape(self, height, width):\n",
" \"\"\"Generates specifications of a random shape that lies within\n",
" the given height and width boundaries.\n",
" Returns a tuple of three valus:\n",
" Returns a tuple of three values:\n",
" * The shape name (square, circle, ...)\n",
" * Shape color: a tuple of 3 values, RGB.\n",
" * Shape dimensions: A tuple of values that define the shape size\n",
@@ -1225,7 +1225,7 @@
" x, y, s = dims\n",
" boxes.append([y-s, x-s, y+s, x+s])\n",
"\n",
" # Apply non-max suppression wit 0.3 threshold to avoid\n",
" # Apply non-max suppression with 0.3 threshold to avoid\n",
" # shapes covering each other\n",
" keep_ixs = non_max_suppression(np.array(boxes), np.arange(N), 0.3)\n",
" shapes = [s for i, s in enumerate(shapes) if i in keep_ixs]\n",
@@ -1310,7 +1310,7 @@
" masks = SegmentationMask(torch.tensor(masks), image.size, \"mask\")\n",
" boxlist.add_field(\"masks\", masks)\n",
" \n",
" # Important line! dont forget to add this\n",
" # Important line! don't forget to add this\n",
" if self.transforms:\n",
" image, boxlist = self.transforms(image, boxlist)\n",
"\n",
@@ -1362,7 +1362,7 @@
" bbox = [ x - s, y - s, x+s, y +s ] \n",
" area = (bbox[0] - bbox[2]) * (bbox[1] - bbox[3])\n",
" \n",
" # Format for COCOC\n",
" # Format for COCO\n",
" annotations.append( {\n",
" \"id\": int(ann_id),\n",
" \"category_id\": category_id,\n",
@@ -1492,7 +1492,7 @@
"source": [
"# Training a Model\n",
"\n",
"Now we move on to training our very own model. Here we will be finetuning a Mask RCNN on this dataset. To do this we need\n",
"Now we move on to training our very own model. Here we will be fine-tuning a Mask RCNN on this dataset. To do this we need\n",
"\n",
"1. A base model that has the same amount of output classes as our dataset. In this case, we have need for only 3 classes instead of COCO's 80. Hence , we first need to do some model trimming. "
]
@@ -1902,7 +1902,7 @@
"source": [
"### Base Model Config\n",
"\n",
"This is the base model that we will finetune from. First we need to replace the bounding box heads and mask heads to make it compatible with our Shapes Dataset."
"This is the base model that we will fine-tune from. First we need to replace the bounding box heads and mask heads to make it compatible with our Shapes Dataset."
]
},
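A hedged sketch of what pointing at such a base config might look like; the yaml path is an assumption, and the notebook defines its own `config_file` further down:

```python
from maskrcnn_benchmark.config import cfg

# Assumed path to one of the Mask R-CNN configs shipped with the repo
config_file = "maskrcnn-benchmark/configs/e2e_mask_rcnn_R_50_FPN_1x.yaml"
cfg.merge_from_file(config_file)

# The COCO-trained base model predicts 81 classes (80 + background),
# which is what we trim down to 4 for the Shapes dataset.
print(cfg.MODEL.ROI_BOX_HEAD.NUM_CLASSES)
```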
{
@@ -1970,9 +1970,9 @@
"colab_type": "text"
},
"source": [
"### Pretrained weight removal\n",
"### Pre-trained weight removal\n",
"\n",
"Here, the pretrained weights of bbox, mask and class predictions are removed. This is done so that we can make the model shapes dataset compatible i.e predict 3 classes instead of Coco's 81 classes."
"Here, the pre-trained weights of bbox, mask and class predictions are removed. This is done so that we can make the model shapes dataset compatible i.e predict 3 classes instead of Coco's 81 classes."
]
},
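In essence the trimming amounts to loading the checkpoint, dropping the class-specific predictor weights, and saving what is left. The sketch below is hedged: the checkpoint file name and the exact key list are assumptions for illustration, while the notebook's own cell does the equivalent on its downloaded Detectron weights.

```python
import torch

# Load the downloaded base checkpoint (file name assumed for illustration)
checkpoint = torch.load("e2e_mask_rcnn_R_50_FPN_1x.pth", map_location="cpu")
state_dict = checkpoint["model"] if "model" in checkpoint else checkpoint

# Class-dependent heads: box classifier, box regressor and mask logits
keys_to_drop = [
    "roi_heads.box.predictor.cls_score.weight", "roi_heads.box.predictor.cls_score.bias",
    "roi_heads.box.predictor.bbox_pred.weight", "roi_heads.box.predictor.bbox_pred.bias",
    "roi_heads.mask.predictor.mask_fcn_logits.weight", "roi_heads.mask.predictor.mask_fcn_logits.bias",
]
new_state_dict = {k: v for k, v in state_dict.items() if k not in keys_to_drop}

# Save the trimmed weights; this is the base_model.pth used as the fine-tuning starting point
torch.save(new_state_dict, "base_model.pth")
```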
{
@@ -2027,7 +2027,7 @@
" \"roi_heads.mask.predictor.mask_fcn_logits.weight\", \"roi_heads.mask.predictor.mask_fcn_logits.bias\"\n",
" ])\n",
"\n",
"# Save new state dict, we will use this as our starting weights for our finwe tuned model\n",
"# Save new state dict, we will use this as our starting weights for our fine-tuned model\n",
"torch.save(new_state_dict, \"base_model.pth\")\n"
],
"execution_count": 0,
@@ -2044,8 +2044,8 @@
"\n",
"Here we define our shape Dataset config. The important fields are \n",
"\n",
"1. WEIGHT: which point to our base_model.pth saved in the previous step\n",
"2. NUM_CLASSES: Which define how many classes we will rpedict . note that the number includes the background, hence our shapes dataset has 4 classes. "
"1. WEIGHT: which point to our `base_model.pth` saved in the previous step\n",
"2. NUM_CLASSES: Which define how many classes we will predict . Note that the number includes the background, hence our shapes dataset has 4 classes. "
]
},
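Assuming the standard maskrcnn-benchmark config keys `MODEL.WEIGHT` and `MODEL.ROI_BOX_HEAD.NUM_CLASSES`, that sketch looks like this (the notebook's full config cell follows further below):

```python
from maskrcnn_benchmark.config import cfg

# Hedged sketch: point the model at the trimmed weights and predict 3 shapes + background
cfg.merge_from_list(['MODEL.WEIGHT', 'base_model.pth'])
cfg.merge_from_list(['MODEL.ROI_BOX_HEAD.NUM_CLASSES', 4])
```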
{
@@ -2313,10 +2313,10 @@
"cfg.merge_from_file(config_file)\n",
"\n",
"cfg.merge_from_list(['OUTPUT_DIR', 'shapeDir']) # The output folder where all our model checkpoints will be saved during training.\n",
"cfg.merge_from_list(['SOLVER.IMS_PER_BATCH', 4]) # Number of images to take insiade a single batch. This number depends on the size of your GPU\n",
"cfg.merge_from_list(['SOLVER.IMS_PER_BATCH', 4]) # Number of images to take inside a single batch. This number depends on the size of your GPU\n",
"cfg.merge_from_list(['SOLVER.BASE_LR', 0.0050]) # The Learning Rate when training starts. Please check Detectron scaling rules to determine your learning for your GPU setup. \n",
"cfg.merge_from_list(['SOLVER.MAX_ITER', 1000]) # The number of training iterations that will be executed during training. One iteration is given as one forward and backward pass of a mini batch of the network\n",
"cfg.merge_from_list(['SOLVER.STEPS', \"(700, 800)\"]) # These two numberes represent after how many iterations is the learning rate divided by 10. \n",
"cfg.merge_from_list(['SOLVER.STEPS', \"(700, 800)\"]) # These two numbers represent after how many iterations is the learning rate divided by 10. \n",
"cfg.merge_from_list(['TEST.IMS_PER_BATCH', 1]) # Batch size during testing/evaluation\n",
"cfg.merge_from_list(['MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN', 4000]) # This determines how many region proposals to take in for processing into the stage after the RPN. The rule is 1000*batch_size = 4*1000 \n",
"cfg.merge_from_list(['SOLVER.CHECKPOINT_PERIOD', 100]) # After how many iterations does one want to save the model.\n",
@@ -2754,7 +2754,7 @@
"source": [
"# Evaluation\n",
"\n",
"Now after our model is trainined, we would like to see how well it predicts objects in our sample images. One way to validate your model is through a standard metric called COCO mAP. This metric is used widely. Hence, we shall do this now. "
"Now after our model is trained, we would like to see how well it predicts objects in our sample images. One way to validate your model is through a standard metric called COCO mAP. This metric is used widely. Hence, we shall do this now. "
]
},
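For orientation, the sketch below shows roughly what the COCO mAP computation boils down to when done with `pycocotools` directly. The names `dataset.coco` and `predictions.json` are placeholders, not outputs of the cells above; the repo's own evaluation utilities wrap the same calls.

```python
from pycocotools.cocoeval import COCOeval

# Ground truth from the dataset's COCO object, detections in COCO result format
coco_gt = dataset.coco                         # placeholder: your dataset instance
coco_dt = coco_gt.loadRes("predictions.json")  # placeholder: exported detections

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")  # use "segm" for mask mAP
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP, AP50, AP75 and the other standard COCO metrics
```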
{
@@ -3073,7 +3073,7 @@
"source": [
"# Conclusion\n",
"\n",
"Looks good! Have fun training the models on your own dataset ! There are a lot of features that this demo doesn't use like:\n",
"Looks good! Have fun training the models on your own dataset ! There are a lot of features that this demo doesn't use, like:\n",
"\n",
"1. Mixed precision training\n",
"2. Lighter object detection architectures like RetinaNet\n",
@@ -3082,4 +3082,4 @@
]
}
]
}
}
