Instagram Computer Vision is an innovative app that integrates GPT-4 Vision to analyze and narrate Instagram videos. This project was submitted for the Multimodal Hackathon at LabLab.ai under the project title "Let Them Live".
To install and run Instagram Computer Vision, follow these steps:
- Clone the repository:

  ```
  git clone https://github.com/Louvivien/InstagramComputerVisionAI.git
  cd InstagramComputerVisionAI
  ```
- Install the packages:

  ```
  pip install -r requirements.txt
  ```
- Run the application:

  ```
  streamlit run app.py
  ```
- AI-Driven Video Analysis: Utilizes GPT-4 Vision to interpret and understand Instagram videos.
- Narration Generation: Converts the AI analysis into engaging, descriptive audio narrations.
- User-Friendly Interface: An easy-to-navigate app interface for a seamless user experience.
- Accessibility Support: Enhances the accessibility of Instagram content for visually impaired users.
After installation, navigate to the app interface. Here's how to use it:
- Enter the username of an Instagram account.
- The app will automatically download and analyze the latest video from that account.
- Listen to or read the generated narration to understand the video's content.
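Under the hood, the analysis step comes down to sending a handful of sampled video frames, alongside a narration prompt, to GPT-4 Vision in a single chat-completion request. Below is a minimal sketch of how such a request payload can be assembled; the model name, prompt text, and dummy frame bytes are illustrative assumptions, not taken from this app's source.

```python
import base64


def build_vision_payload(frames: list[bytes], prompt: str,
                         model: str = "gpt-4-vision-preview") -> dict:
    """Assemble a chat-completion payload that interleaves one text prompt
    with base64-encoded JPEG frames, in the shape the OpenAI vision API expects."""
    content = [{"type": "text", "text": prompt}]
    for frame in frames:
        # Frames are sent inline as base64 data URLs rather than hosted links.
        encoded = base64.b64encode(frame).decode("ascii")
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{encoded}"},
        })
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
        "max_tokens": 300,
    }


# Example: two dummy byte strings stand in for JPEG frames sampled from the video.
payload = build_vision_payload(
    [b"frame-1", b"frame-2"],
    "Narrate this Instagram video for a visually impaired listener.",
)
```

The returned dictionary can then be posted to the chat-completions endpoint with any HTTP client; the model's text reply is what gets converted into the audio narration.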
Contributions are what make the open-source community an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Distributed under the MIT License. See `LICENSE` for more information.
- Thanks to LabLab.ai for hosting the Multimodal Hackathon.