CLIP_tensorrt


How to use

Step 1 Convert model

  1. Obtain the CLIP model

  2. Use clip2onnx to convert it from .pt to .onnx

  3. (Optional) Apply PTQ quantization to the model

Step 2 Preprocess

There are two ways to implement the image preprocessing:

  1. Using CUDA preprocessing

  2. Using OpenCV preprocessing (a minimal sketch follows after this list)

    Note: for multi-batch preprocessing, refer to this
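
A minimal sketch of the OpenCV preprocessing path is shown below. It assumes a 224x224 input resolution and the standard CLIP normalization constants; adjust both to match the exported model.

    // Sketch of CLIP-style image preprocessing with OpenCV:
    // resize -> center crop -> BGR->RGB -> normalize -> HWC->CHW
    #include <opencv2/opencv.hpp>
    #include <algorithm>
    #include <vector>

    std::vector<float> preprocess(const cv::Mat& bgr, int size = 224) {
        // Resize so the shorter side equals `size`, then center-crop to size x size
        float scale = static_cast<float>(size) / std::min(bgr.cols, bgr.rows);
        cv::Mat resized;
        cv::resize(bgr, resized, cv::Size(), scale, scale, cv::INTER_CUBIC);
        int x = (resized.cols - size) / 2;
        int y = (resized.rows - size) / 2;
        cv::Mat crop = resized(cv::Rect(x, y, size, size)).clone();

        // BGR -> RGB, scale pixel values to [0, 1]
        cv::cvtColor(crop, crop, cv::COLOR_BGR2RGB);
        crop.convertTo(crop, CV_32FC3, 1.0 / 255.0);

        // CLIP mean/std (assumed; check the original model's preprocessing)
        const float mean[3] = {0.48145466f, 0.4578275f, 0.40821073f};
        const float stdv[3] = {0.26862954f, 0.26130258f, 0.27577711f};

        // Normalize and repack HWC -> CHW, as the engine input expects
        std::vector<float> chw(3 * size * size);
        for (int c = 0; c < 3; ++c)
            for (int h = 0; h < size; ++h)
                for (int w = 0; w < size; ++w)
                    chw[c * size * size + h * size + w] =
                        (crop.at<cv::Vec3f>(h, w)[c] - mean[c]) / stdv[c];
        return chw;
    }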

How to tokenize the text:

  1. You can refer to clip_tokenizer

Step 3 Build

  1. For dynamic batch size, the code is similar to reference_1 (see the build sketch after this note)

    Note: Please refer to reference_2 for a solution to the issue that the BufferManager in the TensorRT samples does not support engines with dynamic dimensions
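
Dynamic batch size is enabled through a TensorRT optimization profile at build time. The sketch below assumes the TensorRT 8.x API, an input tensor named "images", a 224x224 resolution, and a 1–8 batch range; all of these are assumptions and must be matched to the actual ONNX export.

    // Sketch of building an engine with a dynamic batch dimension (TensorRT 8.x API)
    #include <NvInfer.h>
    #include <NvOnnxParser.h>
    #include <fstream>
    #include <iostream>
    #include <memory>

    class Logger : public nvinfer1::ILogger {
        void log(Severity severity, const char* msg) noexcept override {
            if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
        }
    } gLogger;

    bool buildEngine(const char* onnxPath, const char* enginePath) {
        auto builder = std::unique_ptr<nvinfer1::IBuilder>(
            nvinfer1::createInferBuilder(gLogger));
        auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(
            builder->createNetworkV2(1U << static_cast<uint32_t>(
                nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
        auto parser = std::unique_ptr<nvonnxparser::IParser>(
            nvonnxparser::createParser(*network, gLogger));
        if (!parser->parseFromFile(onnxPath,
                static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)))
            return false;

        auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(
            builder->createBuilderConfig());

        // Optimization profile: the batch dimension may vary between 1 and 8 at runtime
        auto profile = builder->createOptimizationProfile();
        profile->setDimensions("images", nvinfer1::OptProfileSelector::kMIN,
                               nvinfer1::Dims4{1, 3, 224, 224});
        profile->setDimensions("images", nvinfer1::OptProfileSelector::kOPT,
                               nvinfer1::Dims4{4, 3, 224, 224});
        profile->setDimensions("images", nvinfer1::OptProfileSelector::kMAX,
                               nvinfer1::Dims4{8, 3, 224, 224});
        config->addOptimizationProfile(profile);

        // Serialize the engine and write it to disk
        auto serialized = std::unique_ptr<nvinfer1::IHostMemory>(
            builder->buildSerializedNetwork(*network, *config));
        if (!serialized) return false;
        std::ofstream out(enginePath, std::ios::binary);
        out.write(static_cast<const char*>(serialized->data()), serialized->size());
        return true;
    }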

Step 4 Infer
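
A minimal sketch of running the image encoder with a runtime batch size is shown below (TensorRT 8.x API; binding 0 is assumed to be the image input and binding 1 the output embedding, which is an assumption; see main.cpp and argsParser.h for the actual bindings and parameters).

    // Sketch of inference on a dynamic-batch engine (TensorRT 8.x API)
    #include <NvInfer.h>
    #include <cuda_runtime_api.h>
    #include <memory>
    #include <vector>

    std::vector<float> runImageEncoder(nvinfer1::ICudaEngine& engine,
                                       const std::vector<float>& input,  // batch * 3 * 224 * 224
                                       int batch, int embedDim) {
        auto context = std::unique_ptr<nvinfer1::IExecutionContext>(
            engine.createExecutionContext());
        // For engines with dynamic shapes, the input dimensions must be set before enqueueing
        context->setBindingDimensions(0, nvinfer1::Dims4{batch, 3, 224, 224});

        void* buffers[2];
        size_t inBytes = input.size() * sizeof(float);
        size_t outBytes = static_cast<size_t>(batch) * embedDim * sizeof(float);
        cudaMalloc(&buffers[0], inBytes);
        cudaMalloc(&buffers[1], outBytes);

        cudaStream_t stream;
        cudaStreamCreate(&stream);
        cudaMemcpyAsync(buffers[0], input.data(), inBytes, cudaMemcpyHostToDevice, stream);
        context->enqueueV2(buffers, stream, nullptr);

        std::vector<float> output(static_cast<size_t>(batch) * embedDim);
        cudaMemcpyAsync(output.data(), buffers[1], outBytes, cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);

        cudaFree(buffers[0]);
        cudaFree(buffers[1]);
        cudaStreamDestroy(stream);
        return output;
    }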

Step 5 Compile

  1. Modify the CMakeLists.txt files at every directory level and configure the dependency paths correctly

    Note: The code uses the C++17 filesystem library. If C++17 is not supported, the corresponding functions and modules need to be replaced with pre-C++17 alternatives

  2. Perform the following actions in the root directory of the project

        mkdir build
        cd build
        cmake ..
        make
    
  3. The executable will be generated in the build/bin directory; for the specific runtime parameters, refer to main.cpp and argsParser.h
