
MMEA: Entity Alignment for Multi-Modal Knowledge Graphs

Model code and datasets for the paper "MMEA: Entity Alignment for Multi-Modal Knowledge Graphs", published in the Proceedings of the 13th International Conference on Knowledge Science, Engineering and Management (KSEM'2020).

Figure: the MMEA task.

Entity alignment plays an essential role in knowledge graph (KG) integration. Although considerable effort has been devoted to exploring associations between the relational embeddings of different knowledge graphs, existing methods may fail to effectively describe and integrate the multi-modal knowledge found in real application scenarios. To that end, we propose a novel solution called Multi-Modal Entity Alignment (MMEA) to address entity alignment from a multi-modal view. Specifically, we first design a novel multi-modal knowledge embedding method to generate entity representations of relational, visual, and numerical knowledge, respectively. These representations of different types of knowledge are then integrated via a multi-modal knowledge fusion module. Extensive experiments on two public datasets demonstrate the effectiveness of the MMEA model by a significant margin over state-of-the-art methods.
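
As a rough, non-authoritative sketch of the fusion idea only (the projection matrices, dimensions, and the simple project-and-sum scheme below are illustrative assumptions, not the fusion module described in the paper), combining per-modality entity embeddings into one representation can look like this:

# Illustrative sketch only: fuse per-modality entity embeddings into one vector.
# The matrices W_* and the project-and-sum scheme are assumptions for
# illustration; they are NOT the fusion module used in the MMEA paper.
import numpy as np

np.random.seed(0)

num_entities, d_rel, d_img, d_num, d_common = 100, 75, 128, 16, 75

# Toy per-modality embeddings (in practice these come from the relational,
# visual and numerical embedding components).
rel_emb = np.random.normal(size=(num_entities, d_rel))
img_emb = np.random.normal(size=(num_entities, d_img))
num_emb = np.random.normal(size=(num_entities, d_num))

# Projections into a shared space (random here, learned in practice).
W_rel = np.random.normal(size=(d_rel, d_common))
W_img = np.random.normal(size=(d_img, d_common))
W_num = np.random.normal(size=(d_num, d_common))

def fuse(rel, img, num):
    """Project each modality into the common space and sum the results."""
    fused = rel @ W_rel + img @ W_img + num @ W_num
    # L2-normalise so candidate alignments can be scored by cosine similarity.
    return fused / np.linalg.norm(fused, axis=1, keepdims=True)

fused_emb = fuse(rel_emb, img_emb, num_emb)
print(fused_emb.shape)  # (100, 75)

Such a fused vector can then be compared across the two KGs (e.g., by cosine similarity) to score candidate alignments.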

Dataset

We use three public multi-modal knowledge graphs with relational, numerical, and visual knowledge from the paper "MMKG: Multi-Modal Knowledge Graphs": FB15k, DB15k, and YAGO15k. There are sameAs links between FB15k and DB15k, as well as between FB15k and YAGO15k, which can be regarded as alignment relations. Please click here to download the datasets.
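
For a quick look at the alignment supervision, a minimal loader for the sameAs links might look like the sketch below; the file path and the assumption of one tab-separated entity pair per line are hypothetical and should be adjusted to the actual MMKG file layout:

# Minimal sketch for reading sameAs alignment pairs from the MMKG dumps.
# The file path and the "one tab-separated entity pair per line" format are
# assumptions for illustration; adjust them to the actual dataset files.
def load_same_as_links(path):
    """Return a list of (source_entity, target_entity) alignment pairs."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            parts = line.split("\t")
            if len(parts) >= 2:
                pairs.append((parts[0], parts[1]))
    return pairs

if __name__ == "__main__":
    links = load_same_as_links("FB15K_DB15K/sameAs_links.txt")  # hypothetical path
    print("loaded", len(links), "alignment pairs")
    print(links[:3])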

Code

Figure: the MMEA framework.

Our code was implemented by extending the public benchmark OpenEA, so we only release the model code to avoid duplication. We thank the authors of OpenEA for open-sourcing their framework.

Dependencies

  • Python 3.6
  • Tensorflow 1.10
  • Numpy 1.16

Citation

If you use this model or code, please kindly cite it as follows:

@inproceedings{chen2020mmea,
  title={MMEA: Entity Alignment for Multi-modal Knowledge Graph},
  author={Liyi Chen and Zhi Li and Yijun Wang and Tong Xu and Zhefeng Wang and Enhong Chen},
  booktitle={International Conference on Knowledge Science, Engineering and Management},
  pages={134--147},
  year={2020},
  organization={Springer}
}

Last but not least, if you have any difficulty or question about the implementation, please send an email to [email protected].
