YOLOView is an application for extracting, segmenting, and reviewing video frames, as well as assembling datasets and training a YOLO segmentation model. During training, it displays metrics from the model in real time.
Features
- Efficient YOLO Visualizations: Accurate, real-time object detection and visualization powered by YOLO.
- Tkinter Integration: A user-friendly graphical user interface for efficient and intuitive operation.
- Segmentation Capabilities: Precise and effective segmentation tools for enhanced visualization.
Installation and Setup
- Clone this repository: git clone https://github.com/Pecako2001/YOLOview.git
- Navigate to the project directory: cd YOLOview
- Install the required dependencies: pip install -r requirements.txt
- Run the application: python main.py
My advice is to create a virtual environment (for example with venv or Conda) and install the dependencies inside it.
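As a sketch, setting up an isolated environment with Python's built-in venv might look like this (the environment name .venv is just a convention):

```shell
# Create and activate an isolated environment, then install the dependencies.
python -m venv .venv
source .venv/bin/activate        # on Windows: .venv\Scripts\activate
# Install the project requirements if the file is present.
[ -f requirements.txt ] && pip install -r requirements.txt
echo "environment ready"
```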
Contributing
We value contributions from the community and encourage developers to improve and expand YOLOview's capabilities:
- Fork the repository.
- Create a new branch for your features or fixes: git checkout -b [branch-name]
- Commit your changes with a descriptive message.
- Push your branch to your fork.
- Create a Pull Request detailing the changes introduced.
All contributions, either in the form of feature requests or pull requests, are greatly appreciated.
Contact & Credits
Developed by Koen van Wijlick. For inquiries or feedback, please reach out to [email protected].
Ask for Video Path
Upon clicking this button, the user will be prompted to select or input a path to a video file. This video will then be the subject of subsequent operations.
Process Video
After specifying the video path, use this button to start processing the chosen video. The processing may include steps such as extracting frames or applying basic edits.
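The exact processing pipeline is not documented here; a common step when building a dataset from video is sampling every Nth frame. A minimal sketch of that sampling logic (the actual decoding would typically use something like OpenCV's VideoCapture, omitted here):

```python
# Sketch of frame-sampling logic: which frame indices to keep when
# extracting every `step`-th frame from a video with `total` frames.
def frames_to_extract(total: int, step: int) -> list[int]:
    if step < 1:
        raise ValueError("step must be >= 1")
    return list(range(0, total, step))
```

For example, `frames_to_extract(10, 3)` returns `[0, 3, 6, 9]`.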
Annotate Data
Clicking on this button allows users to annotate the video data. Annotations could include marking objects, specifying regions of interest, or tagging frames with specific labels.
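For reference, YOLO segmentation labels store one polygon per line as a class id followed by x/y pairs normalized to the image size. A hypothetical helper (the function name is mine, not part of YOLOView) that formats a pixel-coordinate polygon as such a line:

```python
def polygon_to_yolo_seg(class_id: int, points: list[tuple[float, float]],
                        img_w: int, img_h: int) -> str:
    """Format one polygon as a YOLO segmentation label line:
    '<class> x1 y1 x2 y2 ...' with coordinates normalized to [0, 1]."""
    coords = []
    for x, y in points:
        coords.append(f"{x / img_w:.6f}")
        coords.append(f"{y / img_h:.6f}")
    return " ".join([str(class_id)] + coords)
```

For example, `polygon_to_yolo_seg(0, [(10, 20), (30, 40)], 100, 100)` yields `"0 0.100000 0.200000 0.300000 0.400000"`.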
Annotation Viewer
Use this feature to view the annotations made on the data. It provides a visual representation of all the markings and tags added during the annotation process.
Generate Dataset
Once the video has been annotated, this button triggers the process to generate a structured dataset. This dataset can be used for various machine learning tasks, ensuring all annotations are appropriately integrated.
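Ultralytics-style YOLO training expects a data.yaml file describing the dataset (the "Current Issues" section below notes that YOLOView's generated structure is not yet correct). A minimal example of the expected shape, with placeholder paths and class names:

```yaml
# Minimal Ultralytics-style data.yaml (paths and class names are placeholders)
path: datasets/my_dataset   # dataset root directory
train: images/train         # training images, relative to path
val: images/val             # validation images, relative to path
names:
  0: person
  1: car
```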
Tab Model Parameters
Here, you can define various parameters for the YOLO model. All default values align with the standard recommendations provided in the official YOLO documentation.
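The parameter set exposed by the tab is not listed here; assuming it mirrors common Ultralytics training arguments, a few representative defaults would be:

```python
# Representative Ultralytics-style training defaults (illustrative only;
# the parameters actually exposed in the tab may differ).
MODEL_PARAMS = {
    "epochs": 100,   # number of training epochs
    "imgsz": 640,    # input image size in pixels
    "batch": 16,     # batch size
    "lr0": 0.01,     # initial learning rate
}
```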
Tab Training Data
Here, you can start the model's training process and monitor the metrics the model reports in real time.
Current Issues:
- Training Model Problem: The "Train Model" function is currently non-operational due to an incorrect file structure within data.yaml.
Feature Requests:
- Enhanced File Selection: Allow the selection of models and files without relying on file extensions, providing greater flexibility and a more intuitive user experience.