JSON schema files defining the MEx metadata model.
The Metadata Exchange (MEx) project is committed to improving the retrieval of RKI research data and projects. How? By focusing on metadata: instead of providing the actual research data directly, the MEx metadata catalog captures descriptive information about research data and activities. On this basis, we want to make the data FAIR[^1] so that it can be shared with others.
Via MEx, metadata will be made findable, accessible and shareable, as well as available for further research. The goal is to get an overview of what research data is available, understand its context, and know what needs to be considered for subsequent use.
RKI cooperated with D4L data4life gGmbH for a pilot phase in which the vision of a FAIR metadata catalog was explored and concepts and prototypes were developed. The partnership ended with the successful conclusion of the pilot phase.
After an internal launch, the metadata will also be made publicly available, so that external researchers as well as the interested (professional) public can find research data from the RKI.
For further details, please consult our project page.
Contact
For more information, please feel free to email us at [email protected].
Robert Koch-Institut
Nordufer 20
13353 Berlin
Germany
Our metadata model is represented as JSON schema in `mex/model`. There, we defined:

1. `entities`, described by their properties,
2. `fields`, small objects that are used as `$ref` targets for certain properties,
3. an `extension`, which contains additional properties that are not in the scope of the JSON schema definition,
4. `i18n` files, which hold translations of the properties and are meant to be used in the context of user interfaces, and
5. `vocabularies`, which are used in the context of the entities.

A more detailed description of the model's context can be found in `/docs/index.rst`.
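As a rough illustration of how these pieces fit together, here is a minimal, invented sketch of an entity schema whose properties reuse a field definition and a vocabulary via `$ref`. All names and file paths in this sketch are hypothetical and do not correspond to the actual schemas in `mex/model`:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "ExampleResource",
  "description": "Hypothetical example entity, not part of the actual MEx model.",
  "type": "object",
  "properties": {
    "title": {
      "description": "Reuses a small 'field' object defined in its own schema file.",
      "$ref": "fields/text.json"
    },
    "language": {
      "description": "Constrained to values from a controlled vocabulary.",
      "$ref": "vocabularies/language.json"
    }
  },
  "required": ["title"]
}
```

The authoritative definitions of which fields, vocabularies and required properties each entity uses are the schema files in `mex/model` themselves.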
This package is licensed under the MIT license. All other software components of the MEx project are open-sourced under the same license as well.
- on unix, consider using pyenv https://github.com/pyenv/pyenv
  - get pyenv: `curl https://pyenv.run | bash`
  - install 3.11: `pyenv install 3.11`
  - switch version: `pyenv global 3.11`
  - run `make install`
- on windows, consider using pyenv-win https://pyenv-win.github.io/pyenv-win/
  - follow https://pyenv-win.github.io/pyenv-win/#quick-start
  - install 3.11: `pyenv install 3.11`
  - switch version: `pyenv global 3.11`
  - run `.\mex.bat install`
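On either platform, you can check that the version switch took effect by running `python --version` (it should report a 3.11 release) before the install step.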
- run all linters with `pdm lint`
- update boilerplate files with `cruft update`
- update global requirements in `requirements.txt` manually
- update git hooks with `pre-commit autoupdate`
- update package dependencies using `pdm update-all`
- update github actions in `.github/workflows/*.yml` manually
- run `pdm release RULE` to release a new version, where RULE determines which part of the version to update and is one of `major`, `minor`, `patch`
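For example, `pdm release minor` would bump the minor part of the current version number.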
[^1]: FAIR refers to the so-called FAIR data principles – guidelines to make data Findable, Accessible, Interoperable and Reusable.