# VHAKG tools

This repository provides a set of tools for searching and extracting videos from VHAKG, a multi-modal knowledge graph (MMKG) of multi-view videos of daily activities.

## Contents

- [How to use](#how-to-use)
- [How to develop](#how-to-develop)
- [Experiments](#experiments)

## How to use

### Prerequisites

- Local machine (RAM: 32 GB, HDD: 150 GB free space)
  - If Docker does not have enough memory, data loading will be skipped; increase Docker's memory allocation. We have allocated 16 GB of memory to Docker and confirmed that it works; it may work with slightly less.
- Install Docker
- Download VHAKG (available via its DOI link)

### GUI

- Run `mkdir RDF`.
- Place VHAKG's `.ttl` files in `RDF/` (first time only).
  - Important: do not place any files other than `.ttl` under `RDF/`; delete `.DS_Store` if it exists. (A small sanity-check script is sketched after this list.)
- Run `chmod +x entrypoint.sh` (first time only).
- Run `docker compose up --build -d`.
  - Important: if you are not using Apple Silicon, you must change the GraphDB image in `compose.yaml` from `ontotext/graphdb:10.4.4-arm64` to `ontotext/graphdb:10.4.4`.
- Wait until the GraphDB container logs `[main] INFO com.ontotext.graphdb.importrdf.Preload - Finished.`
- Open http://localhost:5050.
  - The first time you open the page, please wait a moment while the back end loads the activity data.
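As an optional sanity check before starting the containers, the following sketch (our own, not shipped with vhakg-tools) verifies that `RDF/` contains only `.ttl` files, matching the "Important" note above:

```python
from pathlib import Path

# Hypothetical helper (not part of the repository): verify that RDF/
# contains only .ttl files, since other files such as .DS_Store would
# interfere with GraphDB's preload step.
rdf_dir = Path("RDF")
offenders = [p for p in rdf_dir.iterdir() if p.suffix != ".ttl"]
if offenders:
    print("Remove these non-.ttl entries before running docker compose:")
    for p in offenders:
        print(f"  {p}")
else:
    print("RDF/ looks good: only .ttl files present.")
```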


### CLI

- Perform the same setup steps as in GUI.
- Run `cd cli`.
- Run `pip install -r requirements.txt` (first time only).
- Run `python mmkg-search.py -h` to see the command-line arguments.
- Run `python mmkg-search.py <args>`.

#### Example

Extract the video segment of the "grab" action from camera4's video of "clean_kitchentable1" in scene1:

```sh
python mmkg-search.py clean_kitchentable1 scene1 camera4 . -a grab
```
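If you want to drive the extractor from Python instead of the shell, a minimal sketch (our own, assuming only the CLI interface shown above) is:

```python
import subprocess

# Minimal sketch: invoke mmkg-search.py programmatically with the same
# arguments as the shell example above. Run from the cli/ directory.
result = subprocess.run(
    ["python", "mmkg-search.py",
     "clean_kitchentable1", "scene1", "camera4", ".",
     "-a", "grab"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```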

### SPARQL
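The loaded data can also be queried directly through GraphDB's SPARQL endpoint. Below is a minimal connectivity-test sketch (our own, not from the repository) using Python's `requests`; it assumes GraphDB's default port 7200 is exposed by the compose setup, and the repository name `vhakg` is a placeholder, so check the GraphDB Workbench for the actual name.

```python
import requests

# Assumptions: GraphDB listens on its default port 7200 and the
# repository is named "vhakg" (hypothetical; check the Workbench).
# This lists ten arbitrary triples as a connectivity test.
ENDPOINT = "http://localhost:7200/repositories/vhakg"
query = "SELECT * WHERE { ?s ?p ?o } LIMIT 10"

resp = requests.post(
    ENDPOINT,
    data={"query": query},
    headers={"Accept": "application/sparql-results+json"},
)
resp.raise_for_status()
for b in resp.json()["results"]["bindings"]:
    print(b["s"]["value"], b["p"]["value"], b["o"]["value"])
```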

## How to develop

### GUI

- Run `mkdir RDF` (first time only).
- Place the RDF data in `RDF/` (first time only).
- Run `chmod +x entrypoint.sh` (first time only).
- Run `COMPOSE_FILE=compose.yaml:development.yaml docker compose up`.
- Wait until the GraphDB container logs `[main] INFO com.ontotext.graphdb.importrdf.Preload - Finished.`
- Open http://localhost:5051.

#### Lint

- Run `docker compose exec app-dev sh -c "cd /app && yarn lint"`

#### Format

- Run `docker compose exec app-dev sh -c "cd /app && yarn format"`

### CLI

#### Environment

- Run `pyenv install miniforge3-4.14.0-2`
- Run `pyenv virtualenv miniforge3-4.14.0-2 vhakg-tools`

## Experiments

An experimental example of dataset creation and LVLM evaluation using VHAKG.

### Dataset creation

### Evaluation

#### GPT-4o and GPT-4V

- Run `pip install openai`
- Run `jupyter notebook`
- Open and run `evaluate_lvlm.ipynb` with your OpenAI API key
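For reference, the core of such an evaluation call with the `openai` Python package (v1+) looks roughly like the sketch below. This is our own illustration of the API shape, not the notebook's actual code; the prompt and the frame path are placeholders.

```python
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder: a frame extracted from a VHAKG video segment.
with open("frame.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # or a GPT-4V model such as "gpt-4-vision-preview"
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What activity is shown in this frame?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```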