Replies: 1 comment
-
@toschi23 hello! It's great to see you're integrating YOLOv8 with your custom autoencoder. Your current workaround converts the tensor to a list of NumPy arrays, which is a valid approach. However, to keep the input consistent and avoid unnecessary conversions, you might consider adjusting your preprocessing pipeline to output the data in the format expected by YOLOv8's predict function. Remember to ensure that the data is in the correct shape and value range; the predict function is optimized for its expected input types, so aligning your data with that requirement should give you a smooth inference process. If you have further questions or need more assistance, feel free to check out our documentation or ask here. Happy coding! 😊
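For illustration, a minimal sketch of the conversion described above could look like the following. It assumes the batch is a CPU-reachable float tensor in [0, 1] with shape [batch_size, channels, height, width]; `to_yolo_input` and the `yolov8n.pt` weights are placeholders, and channel-order handling (RGB vs. BGR) is left aside here, as in the original workaround.

```python
import numpy as np
import torch
from ultralytics import YOLO

def to_yolo_input(batch: torch.Tensor) -> list:
    """Convert an NCHW float tensor in [0, 1] to a list of HWC uint8 arrays."""
    # NCHW -> NHWC, detach from the graph, move to CPU, scale to 0-255
    arr = (batch.detach().cpu().permute(0, 2, 3, 1).numpy() * 255).astype(np.uint8)
    # predict accepts a list of per-image NumPy arrays
    return [np.ascontiguousarray(img) for img in arr]

model = YOLO("yolov8n.pt")           # placeholder weights
batch = torch.rand(4, 3, 640, 640)   # stand-in for the shared preprocessed tensor
results = model.predict(to_yolo_input(batch))
```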
-
This might be a very basic question.
How do I use a torch.Tensor of shape [batch_size, channels, height, width] as input for the predict function? When I just try to pass a tensor, I receive the following error message:
AssertionError: Expected PIL/np.ndarray image type, but got <class 'torch.Tensor'>
Background:
I want to use YOLOv8 in parallel with a custom autoencoder.
For further fine-tuning I wish to keep the input identical for my custom model and for YOLO. To achieve this, I would like to feed both models the same torch tensor after the same preprocessing steps.
The workaround I have come up with is the one-liner
model_input = list(255 * np.transpose(input.numpy(), (0, 2, 3, 1)))
, which simply cannot be the intended use.
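To make the shared-pipeline intent concrete, here is a hedged sketch of how that workaround sits next to the custom model. The autoencoder stand-in, the weights file, and the 640x640 input size are placeholders, and the tensor is assumed to hold float values in [0, 1].

```python
import numpy as np
import torch
from ultralytics import YOLO

yolo = YOLO("yolov8n.pt")           # placeholder weights
autoencoder = torch.nn.Identity()   # stand-in for the custom autoencoder

# One shared preprocessing result for both models: NCHW float tensor in [0, 1]
batch = torch.rand(2, 3, 640, 640)

# The custom model consumes the tensor directly
reconstruction = autoencoder(batch)

# YOLOv8's predict expects PIL images or NumPy arrays, hence the conversion:
# NCHW -> NHWC, rescale to 0-255, split the batch into a list of images
model_input = list(255 * np.transpose(batch.numpy(), (0, 2, 3, 1)))
results = yolo.predict([img.astype(np.uint8) for img in model_input])
```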