I am Xuchen Li (李旭宸), a first-year Ph.D. student at the Institute of Automation, Chinese Academy of Sciences (CASIA), supervised by Prof. Kaiqi Huang and co-supervised by Dr. Shiyu Hu. I am also a member of the Visual Intelligence Interest Group (VIIG). Before that, I received my B.E. degree in Computer Science and Technology from the School of Computer Science (SCS) at Beijing University of Posts and Telecommunications (BUPT) in Jun. 2024, with an overall ranking of 1/449 (top 0.22%). I am very grateful to work with Dr. Shiyu Hu, who has had a significant impact on me. I am also grateful to have grown up and studied alongside my twin brother Xuzhao Li, which is a truly unique and special experience for me. My research focuses on Visual Language Tracking, Multi-modal Learning, Data-centric AI, and Large Language Models. If you are interested in my work or would like to collaborate, please feel free to contact me.
- CASIA & BUPT
- Beijing, China
- https://xuchen-li.github.io
- https://viig.aitestunion.com
Pinned Repositories
- MemVLT (forked from XiaokunFeng/MemVLT): [NeurIPS'24] MemVLT: Vision-Language Tracking with Adaptive Memory-based Prompts
- CPDTrack (forked from ZhangDailing8/CPDTrack): [NeurIPS'24] Beyond Accuracy: Tracking More Like Human via Visual Search
- MGIT (forked from huuuuusy/videocube-toolkit; Python): [NeurIPS'23] A Multi-modal Global Instance Tracking Benchmark (MGIT): Better Locating Target in Complex Spatio-temporal and Causal Relationship
- Awesome-Multimodal-Object-Tracking (forked from 983632847/Awesome-Multimodal-Object-Tracking): A personal investigative project tracking the latest progress in multi-modal object tracking.
- Awesome-Visual-Language-Tracking: A curated paper list for visual language tracking, documenting articles related to the field.