E2E-CVTON is an end-to-end system that generates a clothing try-on image from an image of a person and an image of a garment. It is implemented as a FastAPI server that exposes the AI model over HTTP.
This project uses C-VTON as the baseline model to generate try-on images.
| Person | Cloth | Try-on image |
| --- | --- | --- |
This zip file contains both the input data and the outputs generated by the model during our test run.
Input data is in the `viton/data` folder.
Generated outputs are in the `viton/results` folder.
Trained parameters for `masking_model.Masker`.
This is the environment we used for development:
- Linux-x64
- Python v3.12.2
- CUDA 12.4
- Torch v2.2.1
- Torchvision v0.17.1
- Clone the repository and move into the cloned directory:

  ```shell
  git clone https://github.com/VTON-Project/E2E-CVTON.git
  cd E2E-CVTON
  ```
- Go to the original repository and download the BPGM and C-VTON pretrained models for VITON-HD, as instructed in its Testing section. Put the models in their respective folders.
- Download the `masking_model.zip` file and extract its contents into the `masking_model` directory.
- Install the required packages in your environment using the `requirements.txt` file:

  ```shell
  pip install -r requirements.txt
  ```
- Run the following command to start a Gunicorn server:

  ```shell
  gunicorn api:app --workers 4 --worker-class uvicorn.workers.UvicornWorker --bind 127.0.0.1:5000
  ```
- The API is now available at http://localhost:5000. To generate a try-on image, send a POST request containing two images under the keys 'person' and 'cloth'; the server responds with the generated try-on image. A tool like Postman works well for this.
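The POST request can also be made from a short Python script. A minimal client sketch, assuming the server accepts multipart form-data at the root path and the field names 'person' and 'cloth' as described above (file names and content types below are illustrative):

```python
# Minimal client sketch for the try-on API. Assumptions: the endpoint is the
# server root, and it expects multipart fields named 'person' and 'cloth'.
import requests

API_URL = "http://localhost:5000"  # adjust if the server runs elsewhere

def build_files(person_bytes: bytes, cloth_bytes: bytes) -> dict:
    """Package the two images under the multipart keys the API expects."""
    return {
        "person": ("person.jpg", person_bytes, "image/jpeg"),
        "cloth": ("cloth.jpg", cloth_bytes, "image/jpeg"),
    }

if __name__ == "__main__":
    # Hypothetical input file names; replace with your own images.
    with open("person.jpg", "rb") as p, open("cloth.jpg", "rb") as c:
        files = build_files(p.read(), c.read())
    resp = requests.post(API_URL, files=files)
    resp.raise_for_status()
    with open("tryon.jpg", "wb") as out:
        out.write(resp.content)  # save the generated try-on image
```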