datasets/obb/ #8462
Replies: 24 comments 83 replies
-
The "YOLO OBB Format" subsection defines the format as: But later in the example the format is: Can someone please confirm if both CSV and space separated formats are supported? |
-
Do the OBBs need to be true rectangles, i.e. with 90-degree angles between adjacent edges? I'm assuming so, but I can't find this mentioned anywhere.
-
Are there any OBB datasets for indoor furniture, covering object classes such as chair, bed, table, and TV?
-
Hi, I can't wrap my head around the conditions "l1 < l2" and "l1 > l2". I have annotated cheque-shaped documents such that the top-left corner of the document is always (x1, y1), the top-right (x2, y2), the bottom-right (x3, y3), and the bottom-left (x4, y4), regardless of orientation. Does the height/width convention of the point-based OBB change with orientation? Do we need to swap height and width in the training data at certain orientations? I believe this is crucial information, yet there isn't a single line of explanation beyond the image.
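For what it's worth, Ultralytics converts incoming corner labels to the canonical (cx, cy, w, h, r) form with cv2.minAreaRect, so the starting corner and winding order should not matter. A minimal sketch, assuming a recent ultralytics version:
import torch
from ultralytics.utils import ops

# The same rectangle given with two different starting corners and orders.
a = torch.tensor([[[0.0, 0.0], [4.0, 0.0], [4.0, 2.0], [0.0, 2.0]]])
b = torch.tensor([[[4.0, 2.0], [4.0, 0.0], [0.0, 0.0], [0.0, 2.0]]])
print(ops.xyxyxyxy2xywhr(a))  # both calls print the same (cx, cy, w, h, r):
print(ops.xyxyxyxy2xywhr(b))  # the angle is canonicalized, swapping w/h as needed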
-
Hi. First of all, great job documenting and training models! I had a great introduction to the YOLO models through your website.
# Classes for DOTA 1.0
names:
  0: plane
  1: ship
  2: storage tank
  3: baseball diamond
  4: tennis court
  5: basketball court
  6: ground track field
  7: harbor
  8: bridge
  9: large vehicle
  10: small vehicle
  11: helicopter
  12: roundabout
  13: soccer ball field
  14: swimming pool
How can I add a new label 'book' to the class IDs and get a successful training? My dataset YAML is:
path: datasets/books0 # dataset root dir
train: images/train # train images (relative to 'path')
val: images/val # val images (relative to 'path')
# Classes
names:
  0: plane
  1: ship
  2: storage tank
  3: baseball diamond
  4: tennis court
  5: basketball court
  6: ground track field
  7: harbor
  8: bridge
  9: large vehicle
  10: small vehicle
  11: helicopter
  12: roundabout
  13: soccer ball field
  14: swimming pool
  15: book
I defined the oriented boxes using the specified format:
And executed the following Python code:
But I got the following messages during the training, which make me think I'm doing something wrong.
What am I missing?
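For context, a minimal training call (file names here are assumptions) would look like:
from ultralytics import YOLO

# Start from an OBB-pretrained checkpoint; the 16 class names (including
# 'book') are read from the data YAML, so the head is adapted automatically.
model = YOLO("yolov8n-obb.pt")
model.train(data="books0.yaml", epochs=100, imgsz=640)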
-
Does yolov9c or yolov9e support OBB?
-
What is the representation of the rectangle box: long-side notation or OpenCV notation?
-
Hi there. I am using the Ultralytics function xywhr2xyxyxyxy(rboxes) to convert rboxes from (cx, cy, w, h, r). When testing the same bbox globally oriented at [0, 45, 90, 135, 180, 225, 315] degrees, only the 0 and 180 orientations are correctly annotated onto the resulting image. So how does this function correctly orient bboxes at, say, 45 or 315 degrees, when the input rotation is clamped to 0-90 degrees?
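For reference, a small round-trip sketch (recent ultralytics assumed). Note that r is expected in radians here: xywhr2xyxyxyxy applies a plain rotation matrix, so any angle works on the way out, while the reverse conversion canonicalizes the result:
import torch
from ultralytics.utils import ops

box = torch.tensor([[200.0, 200.0, 170.0, 100.0, 1.57]])  # cx, cy, w, h, r in radians
corners = ops.xywhr2xyxyxyxy(box)        # shape (1, 4, 2): the box rotated ~90 degrees
print(corners)
print(ops.xyxyxyxy2xywhr(corners))       # r folds into [0, pi/2) and w/h may swap:
                                         # the same rectangle in canonical form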
-
Do the boxes I annotate have to be rectangles, or can they be any four points? Thank you.
-
I would like to run my model on the test split and submit the results to the DOTAv1 Task1 server for validation, but I have not been able to find a way to generate the submission format. Can you help me please? The required format is: imgname score x1 y1 x2 y2 x3 y3 x4 y4
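A rough sketch of a writer for that format (checkpoint path, source directory, and confidence threshold are assumptions; class names are hyphenated as the DOTA server expects):
from collections import defaultdict
from pathlib import Path
from ultralytics import YOLO

model = YOLO("best.pt")                      # your trained OBB checkpoint
lines = defaultdict(list)
for r in model.predict("datasets/DOTAv1/images/test", stream=True, conf=0.05):
    stem = Path(r.path).stem                 # image name without extension
    for cls, conf, pts in zip(r.obb.cls, r.obb.conf, r.obb.xyxyxyxy):
        name = model.names[int(cls)].replace(" ", "-")
        coords = " ".join(f"{v:.1f}" for v in pts.reshape(-1).tolist())
        lines[name].append(f"{stem} {conf:.4f} {coords}")
for name, rows in lines.items():             # one Task1_<classname>.txt per class
    Path(f"Task1_{name}.txt").write_text("\n".join(rows))
Keep in mind this only produces the file layout; if you trained on split patches, predictions still have to be merged back to original-image coordinates before submission.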
-
For boxes at the edge of the image, some corners may fall outside the image. How should this be handled? Also, is there any data annotation tool you would recommend? Thank you.
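One common approach (a sketch, not necessarily the official recommendation) is to clamp the normalized corners into [0, 1] before writing the label line; a rotated box can then be re-derived from the clipped quadrilateral via minAreaRect at load time:
import numpy as np

pts = np.array([[-0.02, 0.10], [0.30, 0.05], [0.35, 0.40], [0.01, 0.45]])
pts = pts.clip(0.0, 1.0)   # corners outside the image are clamped to its border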
-
My data annotations have only two situations: opening up and opening down. I trained with OBB data in the (id xyxyxyxy) format, and the predicted rotation is always 1.570796 (90 degrees), so I cannot recover the true rotation direction of the target. How can I solve this? I tried randomly rotating all images by different angles and re-annotating them, but the trained model still cannot correctly predict the rotation direction of vertically symmetric targets. I need to know the direction of the target's opening. Is YOLO OBB unable to solve this problem, or did I do something wrong? Can anyone help?
-
xyxyxyxy = ops.xywhr2xyxyxyxy(torch.tensor([200, 200, 170, 100, 1.57]))
Here the resulting r is not equal to 1.57... why?
-
Bro, have you ever submitted to the DOTA server? Was the score in the results email high?
On Jul 17, 2024, at 19:31, lsj13210 wrote:
xyxyxyxy = ops.xywhr2xyxyxyxy(torch.tensor([200, 200, 170, 100, 1.57]))
data_array = xyxyxyxy.to(torch.int32).cpu().numpy()
x1y1 = [data_array[0][0], data_array[0][1]]
x2y2 = [data_array[1][0], data_array[1][1]]
x3y3 = [data_array[2][0], data_array[2][1]]
x4y4 = [data_array[3][0], data_array[3][1]]
xywhr = ops.xyxyxyxy2xywhr(torch.tensor([[x1y1, x2y2, x3y3, x4y4]]))
Here the resulting r is not equal to 1.57... why?
-
If l1 == l2, what should I do?
-
First, we need to understand what OBB is. OBB only adds a rotation R on top of ordinary object detection, which means we cannot use OBB to recognize which way a target is facing. We can only tell whether a target is rotated, and the rotation range is only 0-90 degrees; the official source code explains it the same way. So all we need is the rectangle's center, its width and height, and the rotation angle, i.e. xywhr; then, by calling xywhr2xyxyxyxy from ultralytics.ops, we can compute the rectangle's four corner points. Normalize those four points (divide x by the image width and y by the image height), prepare the training data as the official docs say, one rectangle per line in the "id x1 y1 x2 y2 x3 y3 x4 y4" format, and start training. Once training finishes, call YOLO.predict; the result's obb contains xywhr, where r is the rotation. Also note that in YOLO clockwise is positive and counterclockwise is negative, while 0 is the default unrotated, axis-aligned xywh position. So don't worry about long side versus short side: when annotating you only need to provide the rectangle's four corner points, and it doesn't matter which corner you start from or whether you go clockwise or counterclockwise, because OBB cannot recognize the target's facing direction, only whether the rectangle is rotated!
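As a concrete illustration of that pipeline (a minimal sketch; the image size and box values below are made up):
import torch
from ultralytics.utils import ops

img_w, img_h = 1024, 768                                   # assumed image size
xywhr = torch.tensor([[400.0, 300.0, 200.0, 80.0, 0.6]])   # cx, cy, w, h, r (radians)
corners = ops.xywhr2xyxyxyxy(xywhr).reshape(-1, 2)         # four (x, y) corner points
corners[:, 0] /= img_w                                     # normalize x by image width
corners[:, 1] /= img_h                                     # normalize y by image height
class_id = 0
print(f"{class_id} " + " ".join(f"{v:.6f}" for v in corners.reshape(-1).tolist()))
# -> "0 x1 y1 x2 y2 x3 y3 x4 y4", one rectangle per label-file line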
-
Hey, my dataset's current format matches the theta-based OBB. I understand that only the point-based OBB is supported. Is there any reference for converting a theta-based OBB to a point-based one (i.e., cx, cy, h, w, theta to the four corners)? I would appreciate your help with this.
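If it helps, ultralytics.utils.ops.xywhr2xyxyxyxy does exactly this conversion; a standalone NumPy equivalent (a sketch, with theta in radians) is:
import numpy as np

def xywhr_to_corners(cx, cy, w, h, theta):
    # Rotate the half-width/half-height vectors by theta and offset from the center.
    c, s = np.cos(theta), np.sin(theta)
    vw = np.array([w / 2 * c, w / 2 * s])    # half-width vector along the rotated x-axis
    vh = np.array([-h / 2 * s, h / 2 * c])   # half-height vector along the rotated y-axis
    ctr = np.array([cx, cy])
    return np.stack([ctr + vw + vh, ctr + vw - vh, ctr - vw - vh, ctr - vw + vh])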
-
image 1/1 /content/datasets/Aerial-Solar-Panels-6/test/images/DJI_0753_MP4-28_jpg.rf.dfa764d53d312a5b3abf4262e53305ee.jpg: 640x640 12.5ms
boxes: None
orig_shape: (640, 640)
This is the result I got. Why do the boxes come out as None? I need to get the boxes directly from the prediction result, not from the saved output file.
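For OBB models the rotated boxes live on the .obb attribute of each result, not .boxes (which is why boxes prints as None). A minimal sketch, with the checkpoint and image path assumed:
from ultralytics import YOLO

model = YOLO("best.pt")                  # your trained OBB checkpoint
results = model.predict("DJI_0753.jpg")  # assumed image path
obb = results[0].obb
if obb is not None:
    print(obb.xyxyxyxy)                  # corner points, shape (N, 4, 2)
    print(obb.xywhr)                     # center-width-height-rotation form
    print(obb.cls, obb.conf)             # class indices and confidences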
-
I'm confused about the angle conversion during the training process. The function that converts labels from xyxyxyxy to xywhr:
The forward function of the OBB head:
Thanks a lot!
-
Hi, sirs. I trained on my custom dataset (80 pictures), but the bounding boxes fit inaccurately for objects at an angle, while vertical/horizontal objects are fitted very well. I checked my labels (converted from xywhr to xyxyxyxy) and the corner locations are all correct. Has anyone encountered this problem? I need help and suggestions to solve it. Thanks a lot.
-
Hi everyone! I have a question: if I use this format for training, it gives an error and cannot train. I think I need to convert this xywhr format to the xyxyxyxy format.
-
According to the
-
How do we easily create Mask OBB annotations? Also, can a bearing/direction for the object's front be baked into the dataset too? If this doesn't exist yet, can you make it reliable in a new update?
-
Issue with incorrect predictions from a quantized YOLOv8m-obb model (TFLite) in the TensorFlow framework. I exported my YOLOv8m-obb model to TFLite format with INT8 quantization enabled, using an image size of 640x640 and my dataset's data.yaml. When I use the quantized model for inference with the Ultralytics framework (oriented bounding boxes), the predictions are correct. However, when I use the same model in the TensorFlow framework, I encounter several issues with the output:
I suspect there might be an issue with how the quantization parameters (scale and zero-point) are applied in TensorFlow, or possibly with the way I'm handling the model's output or the way I exported the model. I would appreciate guidance on how to correctly handle the quantized model in TensorFlow and resolve the incorrect predictions.
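A hedged sketch of handling the INT8 model directly in TensorFlow (the file name is an assumption; the scale/zero-point come from the tensor details, and a zero scale means the tensor is not quantized):
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolov8m-obb_int8.tflite")  # assumed name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

img = np.zeros((1, 640, 640, 3), dtype=np.float32)   # your preprocessed image in [0, 1]
scale, zero_point = inp["quantization"]
if scale:                                            # quantize the input if required
    img = (img / scale + zero_point).astype(inp["dtype"])
interpreter.set_tensor(inp["index"], img)
interpreter.invoke()

raw = interpreter.get_tensor(out["index"]).astype(np.float32)
scale, zero_point = out["quantization"]
if scale:                                            # de-quantize the raw output
    raw = (raw - zero_point) * scale
# 'raw' now holds float predictions; decode boxes/angles and run NMS as usual.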
-
datasets/obb/
Dive deep into various oriented bounding box (OBB) dataset formats compatible with Ultralytics YOLO models. Grasp the nuances of using and converting datasets to this format.
https://docs.ultralytics.com/datasets/obb/