Oriented RepPoints DOTA-v2.0 CUDA illegal memory access #412

Closed
austinmw opened this issue Jul 21, 2022 · 15 comments

austinmw commented Jul 21, 2022

Describe the bug

After training runs successfully for a few thousand iterations, I get a CUDA illegal memory access error.

Reproduction

  1. What command or script did you run?

python tools/train.py

  2. Did you make any modifications on the code or config? Did you understand what you have modified?

Just modified the config to use DOTA v2.0 instead of v1.0.

  3. What dataset did you use?

DOTA v2.0

Environment

sys.platform: linux
Python: 3.8.10 | packaged by conda-forge | (default, Sep 13 2021, 21:46:58) [GCC 9.4.0]
CUDA available: True
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.109
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 1.11.0+cu113
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

TorchVision: 0.12.0+cu113
OpenCV: 4.6.0
MMCV: 1.5.3
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.3
MMRotate: 0.3.2+c62f148

Error traceback

q8rucv6e9h-algo-1-kmg1k | 2022-07-21 17:10:28,416 - mmrotate - INFO - Epoch [1][3000/18854]	lr: 8.332e-04, eta: 1 day, 0:28:33, time: 2.607, data_time: 0.011, memory: 8399, loss_cls: 0.2142, loss_pts_init: 0.3390, loss_pts_refine: 0.3344, loss_spatial_init: 0.0166, loss_spatial_refine: 0.0001, loss: 0.9044, grad_norm: 4.4368
q8rucv6e9h-algo-1-kmg1k | 2022-07-21 17:12:47,384 - mmrotate - INFO - Epoch [1][3050/18854]	lr: 8.415e-04, eta: 1 day, 0:28:43, time: 2.779, data_time: 0.011, memory: 8399, loss_cls: 0.1771, loss_pts_init: 0.3319, loss_pts_refine: 0.3135, loss_spatial_init: 0.0186, loss_spatial_refine: 0.0002, loss: 0.8413, grad_norm: 4.3670
q8rucv6e9h-algo-1-kmg1k | 2022-07-21 17:14:46,473 - mmrotate - INFO - Epoch [1][3100/18854]	lr: 8.498e-04, eta: 1 day, 0:25:06, time: 2.382, data_time: 0.011, memory: 8399, loss_cls: 0.1685, loss_pts_init: 0.2895, loss_pts_refine: 0.2816, loss_spatial_init: 0.0140, loss_spatial_refine: 0.0001, loss: 0.7538, grad_norm: 4.1430
q8rucv6e9h-algo-1-kmg1k | 2022-07-21 17:16:50,421 - mmrotate - INFO - Epoch [1][3150/18854]	lr: 8.582e-04, eta: 1 day, 0:22:25, time: 2.479, data_time: 0.011, memory: 8399, loss_cls: 0.1665, loss_pts_init: 0.3170, loss_pts_refine: 0.2777, loss_spatial_init: 0.0175, loss_spatial_refine: 0.0002, loss: 0.7788, grad_norm: 3.6923
q8rucv6e9h-algo-1-kmg1k | 2022-07-21 17:18:49,040 - mmrotate - INFO - Epoch [1][3200/18854]	lr: 8.665e-04, eta: 1 day, 0:18:48, time: 2.372, data_time: 0.011, memory: 8399, loss_cls: 0.1815, loss_pts_init: 0.3159, loss_pts_refine: 0.3027, loss_spatial_init: 0.0156, loss_spatial_refine: 0.0002, loss: 0.8158, grad_norm: 3.9381
q8rucv6e9h-algo-1-kmg1k | 2022-07-21 17:20:44,485 - mmrotate - INFO - Epoch [1][3250/18854]	lr: 8.748e-04, eta: 1 day, 0:14:41, time: 2.309, data_time: 0.011, memory: 8399, loss_cls: 0.1662, loss_pts_init: 0.3344, loss_pts_refine: 0.2709, loss_spatial_init: 0.0223, loss_spatial_refine: 0.0002, loss: 0.7940, grad_norm: 4.3331
q8rucv6e9h-algo-1-kmg1k | 2022-07-21 17:22:45,505 - mmrotate - INFO - Epoch [1][3300/18854]	lr: 8.832e-04, eta: 1 day, 0:11:36, time: 2.420, data_time: 0.011, memory: 8399, loss_cls: 0.2351, loss_pts_init: 0.3008, loss_pts_refine: 0.3008, loss_spatial_init: 0.0157, loss_spatial_refine: 0.0002, loss: 0.8526, grad_norm: 4.6007
q8rucv6e9h-algo-1-kmg1k | Traceback (most recent call last):
q8rucv6e9h-algo-1-kmg1k | File "/opt/ml/code/mmrotate/tools/train.py", line 192, in <module>
q8rucv6e9h-algo-1-kmg1k | main()
q8rucv6e9h-algo-1-kmg1k | File "/opt/ml/code/mmrotate/tools/train.py", line 181, in main
q8rucv6e9h-algo-1-kmg1k | train_detector(
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmrotate/apis/train.py", line 141, in train_detector
q8rucv6e9h-algo-1-kmg1k | runner.run(data_loaders, cfg.workflow)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 130, in run
q8rucv6e9h-algo-1-kmg1k | epoch_runner(data_loaders[i], **kwargs)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 51, in train
q8rucv6e9h-algo-1-kmg1k | self.run_iter(data_batch, train_mode=True, **kwargs)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in run_iter
q8rucv6e9h-algo-1-kmg1k | outputs = self.model.train_step(data_batch, self.optimizer,
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmcv/parallel/distributed.py", line 59, in train_step
q8rucv6e9h-algo-1-kmg1k | output = self.module.train_step(*inputs[0], **kwargs[0])
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 248, in train_step
q8rucv6e9h-algo-1-kmg1k | losses = self(**data)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
q8rucv6e9h-algo-1-kmg1k | return forward_call(*input, **kwargs)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 116, in new_func
q8rucv6e9h-algo-1-kmg1k | return old_func(*args, **kwargs)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 172, in forward
q8rucv6e9h-algo-1-kmg1k | return self.forward_train(img, img_metas, **kwargs)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmrotate/models/detectors/single_stage.py", line 81, in forward_train
q8rucv6e9h-algo-1-kmg1k | losses = self.bbox_head.forward_train(x, img_metas, gt_bboxes,
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmdet/models/dense_heads/base_dense_head.py", line 335, in forward_train
q8rucv6e9h-algo-1-kmg1k | losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmrotate/models/dense_heads/oriented_reppoints_head.py", line 952, in loss
q8rucv6e9h-algo-1-kmg1k | quality_assess_list, = multi_apply(
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmdet/core/utils/misc.py", line 30, in multi_apply
q8rucv6e9h-algo-1-kmg1k | return tuple(map(list, zip(*map_results)))
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmrotate/models/dense_heads/oriented_reppoints_head.py", line 480, in pointsets_quality_assessment
q8rucv6e9h-algo-1-kmg1k | sampling_pts_pred_init = self.sampling_points(
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/mmrotate/models/dense_heads/oriented_reppoints_head.py", line 342, in sampling_points
q8rucv6e9h-algo-1-kmg1k | ratio = torch.linspace(0, 1, points_num).to(device).repeat(
q8rucv6e9h-algo-1-kmg1k | RuntimeError: CUDA error: an illegal memory access was encountered
q8rucv6e9h-algo-1-kmg1k | CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
q8rucv6e9h-algo-1-kmg1k | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
q8rucv6e9h-algo-1-kmg1k | terminate called after throwing an instance of 'c10::CUDAError'
q8rucv6e9h-algo-1-kmg1k | what():  CUDA error: an illegal memory access was encountered
q8rucv6e9h-algo-1-kmg1k | CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
q8rucv6e9h-algo-1-kmg1k | For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
q8rucv6e9h-algo-1-kmg1k | Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:1230 (most recent call first):
q8rucv6e9h-algo-1-kmg1k | frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7fb2069107d2 in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
q8rucv6e9h-algo-1-kmg1k | frame #1: <unknown function> + 0x239de (0x7fb23f4f59de in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
q8rucv6e9h-algo-1-kmg1k | frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x22d (0x7fb23f4f757d in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
q8rucv6e9h-algo-1-kmg1k | frame #3: <unknown function> + 0x300568 (0x7fb2bbd28568 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
q8rucv6e9h-algo-1-kmg1k | frame #4: c10::TensorImpl::release_resources() + 0x175 (0x7fb2068f9005 in /opt/conda/lib/python3.8/site-packages/torch/lib/libc10.so)
q8rucv6e9h-algo-1-kmg1k | frame #5: <unknown function> + 0x1ee569 (0x7fb2bbc16569 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
q8rucv6e9h-algo-1-kmg1k | frame #6: <unknown function> + 0x4d9c78 (0x7fb2bbf01c78 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
q8rucv6e9h-algo-1-kmg1k | frame #7: THPVariable_subclass_dealloc(_object*) + 0x292 (0x7fb2bbf01f72 in /opt/conda/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
q8rucv6e9h-algo-1-kmg1k | frame #8: <unknown function> + 0xed1ff (0x56288e9cd1ff in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #9: <unknown function> + 0xeee2b (0x56288e9cee2b in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #10: <unknown function> + 0xef202 (0x56288e9cf202 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #11: <unknown function> + 0xef202 (0x56288e9cf202 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #12: <unknown function> + 0xef598 (0x56288e9cf598 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #13: <unknown function> + 0xf1c38 (0x56288e9d1c38 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #14: <unknown function> + 0xf1cd9 (0x56288e9d1cd9 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #15: <unknown function> + 0xf1cd9 (0x56288e9d1cd9 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #16: <unknown function> + 0xf1cd9 (0x56288e9d1cd9 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #17: <unknown function> + 0xf1cd9 (0x56288e9d1cd9 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #18: <unknown function> + 0xf1cd9 (0x56288e9d1cd9 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #19: <unknown function> + 0xf1cd9 (0x56288e9d1cd9 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #20: PyDict_SetItemString + 0x401 (0x56288ea77a41 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #21: PyImport_Cleanup + 0xa4 (0x56288eb4d0b4 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #22: Py_FinalizeEx + 0x7a (0x56288eb4d68a in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #23: Py_RunMain + 0x1b8 (0x56288eb52808 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #24: Py_BytesMain + 0x39 (0x56288eb52b79 in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | frame #25: __libc_start_main + 0xf3 (0x7fb2c9475083 in /usr/lib/x86_64-linux-gnu/libc.so.6)
q8rucv6e9h-algo-1-kmg1k | frame #26: <unknown function> + 0x1de1cd (0x56288eabe1cd in /opt/conda/bin/python3.8)
q8rucv6e9h-algo-1-kmg1k | ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 0 (pid: 95) of binary: /opt/conda/bin/python3.8
q8rucv6e9h-algo-1-kmg1k | Traceback (most recent call last):
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/bin/torchrun", line 8, in <module>
q8rucv6e9h-algo-1-kmg1k | sys.exit(main())
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
q8rucv6e9h-algo-1-kmg1k | return f(*args, **kwargs)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 724, in main
q8rucv6e9h-algo-1-kmg1k | run(args)
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
q8rucv6e9h-algo-1-kmg1k | elastic_launch(
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
q8rucv6e9h-algo-1-kmg1k | return launch_agent(self._config, self._entrypoint, list(args))
q8rucv6e9h-algo-1-kmg1k | File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
q8rucv6e9h-algo-1-kmg1k | raise ChildFailedError(
q8rucv6e9h-algo-1-kmg1k | torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
q8rucv6e9h-algo-1-kmg1k | ===================================================
q8rucv6e9h-algo-1-kmg1k | /opt/ml/code/mmrotate/tools/train.py FAILED
@yangxue0827
Collaborator

This is the same issue as #405.

@yangxue0827
Collaborator

A successful solution: set a smaller nms_pre, e.g.

test_cfg=dict(
    nms_pre=1000,
    min_bbox_size=0,
    score_thr=0.05,
    nms=dict(iou_thr=0.4),
    max_per_img=2000))
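
For reference, here is a minimal sketch of how such an override might be applied in a user config that inherits from the Oriented RepPoints base config. The base file path below is an assumption, not something stated in this thread; adjust it to your local config layout.

# Hypothetical user config (the _base_ path is an assumption, not from this thread).
_base_ = ['../oriented_reppoints/oriented_reppoints_r50_fpn_1x_dota_le135.py']

# Lower nms_pre to cap how many top-scoring candidates per level are kept before rotated NMS.
model = dict(
    test_cfg=dict(
        nms_pre=1000,
        min_bbox_size=0,
        score_thr=0.05,
        nms=dict(iou_thr=0.4),
        max_per_img=2000))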


austinmw commented Sep 26, 2022

@yangxue0827 That change did not solve the issue for me, as mentioned in #405 (comment).


LLsmile commented Jan 13, 2023

This is not the same as https://github.com/open-mmlab/mmrotate/issues/405#issuecomment-1215944571. RepPoints may crash both in sampling and in calculating min_area_polygons. Neither has been fixed for me by any of the CUDA-related changes in this repo so far.


crisz94 commented Feb 14, 2023

I've met the same problem... Any progress now?


pphgood commented Mar 21, 2023

> I've met the same problem... Any progress now?

I've met the same problem... Have you solved the problem now?


pphgood commented Mar 28, 2023

> This is not the same as https://github.com/open-mmlab/mmrotate/issues/405#issuecomment-1215944571. RepPoints may crash both in sampling and in calculating min_area_polygons. Neither has been fixed for me by any of the CUDA-related changes in this repo so far.

I've met the same problem. It sometimes occurs during the training and testing stages, but sometimes training completes successfully. Have you solved the problem now?

@yangxue0827
Collaborator

  1. For reppoints-based detectors, disable validation during training (see the example below).
  2. If the same error occurs during testing, try some potentially useful solutions mentioned in CUDA error: an illegal memory access was encountered #405.

@pphgood
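
If your copy of tools/train.py exposes the standard mmdetection-style --no-validate flag (an assumption here; check python tools/train.py --help), validation can be skipped with something like the command below. The config path is likewise an assumption.

python tools/train.py configs/oriented_reppoints/oriented_reppoints_r50_fpn_1x_dota_le135.py --no-validate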


crisz94 commented Mar 28, 2023

> I've met the same problem... Any progress now?
>
> I've met the same problem... Have you solved the problem now?

@pphgood #614 (comment)


pphgood commented Mar 28, 2023

> 1. For reppoints-based detectors, disable validation during training.
> 2. If the same error occurs during testing, try some potentially useful solutions mentioned in CUDA error: an illegal memory access was encountered #405.
>
> @pphgood

I have read the relevant issue and disabled validation during training, but this bug is different from issue #405: it appears during my training phase.

[screenshot attached]

This question is the same as issue #405. Another point is that this problem only sometimes occurs; at other times it trains successfully for several epochs.


pphgood commented Mar 28, 2023

> I've met the same problem... Any progress now?
>
> I've met the same problem... Have you solved the problem now?
>
> @pphgood #614 (comment)

I have used cv2.minAreaRect, but the speed of training seems to have decreased a lot.


crisz94 commented Mar 28, 2023

> I've met the same problem... Any progress now?
>
> I've met the same problem... Have you solved the problem now?
>
> @pphgood #614 (comment)
> I have used cv2.minAreaRect, but the speed of training seems to have decreased a lot.

@pphgood At least it didn't crash... If you want to speed it up, try to fix the CUDA op min_area_rect. I believe the bug lies in this op.


pphgood commented Mar 28, 2023

> I've met the same problem... Any progress now?
>
> I've met the same problem... Have you solved the problem now?
>
> @pphgood #614 (comment)
> I have used cv2.minAreaRect, but the speed of training seems to have decreased a lot.
>
> @pphgood At least it didn't crash...

You are right. Let me try this method.


crisz94 commented Mar 28, 2023

> 1. For reppoints-based detectors, disable validation during training.
> 2. If the same error occurs during testing, try some potentially useful solutions mentioned in CUDA error: an illegal memory access was encountered #405.
>
> @pphgood

@yangxue0827 I believe the bug lies in the CUDA op min_area_rect, FYI.


pphgood commented Mar 29, 2023

> 1. For reppoints-based detectors, disable validation during training.
> 2. If the same error occurs during testing, try some potentially useful solutions mentioned in CUDA error: an illegal memory access was encountered #405.
>
> @pphgood
>
> @yangxue0827 I believe the bug lies in the CUDA op min_area_rect, FYI.

When I used cv2.minAreaRect, this bug did not occur again. I think this bug is indeed caused by the CUDA op min_area_rect.
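
For readers who want to try this workaround, below is a minimal sketch of a cv2.minAreaRect-based CPU fallback. It assumes the goal is to mirror the (N, 2*num_points) -> (N, 8) interface of mmcv's min_area_polygons op as called in oriented_reppoints_head.py; the function name is hypothetical, the corner ordering is not guaranteed to match the CUDA op, and it detaches tensors and round-trips through the CPU, which is consistent with the slowdown reported above.

import cv2
import numpy as np
import torch


def min_area_polygons_cv2(pointsets: torch.Tensor) -> torch.Tensor:
    """Hypothetical CPU fallback for mmcv.ops.min_area_polygons.

    Args:
        pointsets: (N, 2 * num_points) tensor of flattened (x, y) coordinates.

    Returns:
        (N, 8) tensor holding the 4 corners of each minimum-area rectangle.
        Note: runs on the CPU and detaches from the autograd graph, so it only
        applies where gradients through the op are not required.
    """
    if pointsets.numel() == 0:
        return pointsets.new_zeros((0, 8))
    pts = pointsets.detach().cpu().numpy().reshape(pointsets.size(0), -1, 2)
    polys = []
    for p in pts:
        rect = cv2.minAreaRect(p.astype(np.float32))   # ((cx, cy), (w, h), angle in degrees)
        polys.append(cv2.boxPoints(rect).reshape(-1))  # 4 corners flattened to 8 values
    return pointsets.new_tensor(np.stack(polys))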
