From d483386699cf15d0c9f1339f50a3fbdf76ef1acc Mon Sep 17 00:00:00 2001 From: Manuel Holtgrewe Date: Thu, 13 Jun 2024 08:24:36 +0200 Subject: [PATCH] docs: moving out dev docs (#1698) (#1704) --- .readthedocs.yml | 6 +- docs/developer_checklists.rst | 64 ----- docs/developer_data_builds.rst | 187 ------------- docs/developer_database.rst | 122 --------- docs/developer_development.rst | 189 ------------- docs/developer_installation.rst | 257 ------------------ docs/developer_kiosk.rst | 44 --- docs/developer_templates.rst | 182 ------------- docs/developer_tf.rst | 20 -- docs/index.rst | 23 +- ....sh => install_backend_os_dependencies.sh} | 0 ...sh => install_frontend_os_dependencies.sh} | 0 12 files changed, 6 insertions(+), 1088 deletions(-) delete mode 100644 docs/developer_checklists.rst delete mode 100644 docs/developer_data_builds.rst delete mode 100644 docs/developer_database.rst delete mode 100644 docs/developer_development.rst delete mode 100644 docs/developer_installation.rst delete mode 100644 docs/developer_kiosk.rst delete mode 100644 docs/developer_templates.rst delete mode 100644 docs/developer_tf.rst rename utils/{install_os_dependencies.sh => install_backend_os_dependencies.sh} (100%) rename utils/{install_vue_dev.sh => install_frontend_os_dependencies.sh} (100%) diff --git a/.readthedocs.yml b/.readthedocs.yml index e8f85d43c..18fe54912 100644 --- a/.readthedocs.yml +++ b/.readthedocs.yml @@ -5,9 +5,9 @@ build: tools: python: "3.10" -# Build documentation in the "docs_manual/" directory with Sphinx +# Build documentation in the "docs/" directory with Sphinx sphinx: - configuration: docs_manual/conf.py + configuration: docs/conf.py formats: - pdf @@ -15,4 +15,4 @@ formats: python: install: - - requirements: docs_manual/requirements.txt + - requirements: docs/requirements.txt diff --git a/docs/developer_checklists.rst b/docs/developer_checklists.rst deleted file mode 100644 index ea5afbe50..000000000 --- a/docs/developer_checklists.rst +++ /dev/null 
@@ -1,64 +0,0 @@ -.. _developer_checklists: - -========== -Checklists -========== - --------- -Releases --------- - -Prerequisites: - -- Have all issues done for the next milestone. - -Tasks: - -1. Create ticket with the following template and assign it to the proper milestone. - - .. code-block:: markdown - - Release for version vVERSION - - - [ ] edit `HISTORY.rst` and ensure a proper section is added - - [ ] edit `admin_upgrade.rst` to reflect the upgrade instructions - - [ ] In varfish-org/varfish-docker-compose, bump the version in `docker-compose.yml` - - [ ] create a git tag `v.MAJOR.MINOR.PATCH` and `git push --tags` - - [ ] create a "Github release" based on the tag with the text - - ``` - All details can be found in the `HISTORY.rst` file. - ``` - -2. Follow through the items. - --------------------------- -Data & Software Validation --------------------------- - -Prerequisites: - -- Have all background data imported into dedicated instances for validation. - (Internally we use ``varfish-build-release-{37,38}.cubi.bihealth.org``). -- Create the ``varfish-site-data-X.tar.gz`` tarball with the database dump. -- Have a token ready for the root user. - -Tasks: - -1. Create a ticket with the following template. - - .. code-block:: markdown - - Validate data for: - - - **VarFish:** vMAJOR.MINOR.PATCH - - **Site Data:** vVERSION (`sha256:CHECKSUM`) - - **Genome Build:** GRCh37 or GRCh38 - - Result Reports: - - PASTE HERE - -2. Use the ``varfish-wf-validation`` Snakemake workflow for running the validation. - -3. Paste the result reports into the tickets. diff --git a/docs/developer_data_builds.rst b/docs/developer_data_builds.rst deleted file mode 100644 index 60e8ffbe7..000000000 --- a/docs/developer_data_builds.rst +++ /dev/null @@ -1,187 +0,0 @@ -.. _developer_data_builds: - -==================== -Docker & Data Builds -==================== - -This section describes how to build the Docker images and also the VarFish site data tarballs. 
-The intended audience is VarFish developers.
-
--------------------
-Build Docker Images
--------------------
-
-Building the image::
-
-    $ ./docker/build-docker.sh
-
-By default the latest tag is used.
-You can change this with::
-
-    $ GIT_TAG=v0.1.0 ./docker/build-docker.sh
-
-------------------------------
-Get ``varfish-docker-compose``
-------------------------------
-
-The database is built in ``varfish-docker-compose``.
-
-::
-
-    $ git clone git@github.com:varfish-org/varfish-docker-compose.git
-    $ cd varfish-docker-compose
-    $ ./init.sh
-
-----------------------------
-First-Time Container Startup
-----------------------------
-
-You have to start up the postgres container once to create the Postgres database.
-Once it has been initialized, shut it down with Ctrl-C.
-
-::
-
-    $ docker-compose up postgres
-
-
-Now copy over the ``postgresql.conf`` file that has been tuned for the VarFish use cases.
-
-::
-
-    $ cp config/postgres/postgresql.conf volumes/postgres/data/postgresql.conf
-
-Bring up the site again so we can build the database.
-
-::
-
-    $ docker-compose up
-
-Wait until ``varfish-web`` is up and running and all migrations have been applied; look for ``VARFISH MIGRATIONS END`` in the output of ``run-docker-compose-up.sh``.
-
---------------------------
-Pre-Build Postgres Database
---------------------------
-
-Download the static data.
-
-::
-
-    $ cd /plenty/space
-    $ wget https://file-public.bihealth.org/transient/varfish/anthenea/varfish-server-background-db-20201006.tar.gz{,.sha256}
-    $ sha256sum -c varfish-server-background-db-20201006.tar.gz.sha256
-    $ tar xzvf varfish-server-background-db-20201006.tar.gz
-
-Adjust the ``docker-compose.yml`` file such that ``/plenty/space`` is visible in the varfish-web container.
-
-::
-
-    volumes:
-      - "/plenty/space:/data"
-
-Get the name of the running varfish-web container.
- -:: - - $ docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 44be6ece102e minio/minio "/usr/bin/docker-ent…" 11 minutes ago Up About a minute 9000/tcp varfish-docker-compose_minio_1 - 3b23113e5aa1 quay.io/biocontainers/exomiser-rest-prioritiser:12.1.0--1 "exomiser-rest-prior…" 11 minutes ago Up About a minute varfish-docker-compose_exomiser-rest-prioritiser_1 - b8c49e8c24a6 quay.io/biocontainers/jannovar-cli:0.33--0 "jannovar -Xmx6G -Xm…" 11 minutes ago Up About a minute varfish-docker-compose_jannovar_1 - 409a535b9951 varfish-org/varfish-server:0.22.1-0 "docker-entrypoint.s…" 12 minutes ago Up About a minute 8080/tcp varfish-docker-compose_varfish-celerybeat_1 - 7eb7425c59e2 varfish-org/varfish-server:0.22.1-0 "docker-entrypoint.s…" 12 minutes ago Up About a minute 8080/tcp varfish-docker-compose_varfish-celeryd-import_1 - 020811fde306 varfish-org/varfish-server:0.22.1-0 "docker-entrypoint.s…" 12 minutes ago Up About a minute 8080/tcp varfish-docker-compose_varfish-celeryd-query_1 - 87b03ee0249b varfish-org/varfish-server:0.22.1-0 "docker-entrypoint.s…" 12 minutes ago Up About a minute 8080/tcp varfish-docker-compose_varfish-celeryd-default_1 - 7a3fdb337fae varfish-org/varfish-server:0.22.1-0 "docker-entrypoint.s…" 12 minutes ago Up About a minute 8080/tcp varfish-docker-compose_varfish-web_1 - 9295a101570f postgres:12 "docker-entrypoint.s…" 12 minutes ago Up About a minute 5432/tcp varfish-docker-compose_postgres_1 - 1c4d6e235074 traefik:v2.3.1 "/entrypoint.sh --pr…" 12 minutes ago Up About a minute 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp varfish-docker-compose_traefik_1 - 8d72fd096743 redis:6 "docker-entrypoint.s…" 12 minutes ago Up About a minute 6379/tcp varfish-docker-compose_redis_1 - -Initialize the tables (while at least ``docker-compose up varfish-web postgres redis`` is running). 
-
-::
-
-    $ docker exec -it -w /usr/src/app varfish-docker-compose_varfish-web_1 python manage.py import_tables --tables-path /data --threads 8
-
-Then, shut down the ``docker-compose up``, remove the ``volumes:`` entry for ``varfish-web``, and create a tarball of the postgres database to have a clean copy.
-
---------------
-Add Other Data
---------------
-
-Copy the other required data for ``jannovar`` and ``exomiser``.
-You can find the appropriate files to download on the Jannovar (via Zenodo) and Exomiser data download sites:
-
-- https://zenodo.org/record/5410367
-- https://data.monarchinitiative.org/exomiser/data/index.html
-
-You should use the hg19 data for Exomiser for any genome release as we will only use the gene-to-phenotype prioritization, which is independent of the genome release.
-
-The result should look similar to this:
-
-::
-
-    # tree volumes/jannovar volumes/exomiser
-    volumes/jannovar
-    ├── hg19_ensembl.ser
-    ├── hg19_refseq_curated.ser
-    └── hg19_refseq.ser
-    volumes/exomiser
-    ├── 1909_hg19
-    │   ├── 1909_hg19_clinvar_whitelist.tsv.gz
-    .   .   [..]
-    │   └── 1909_hg19_variants.mv.db
-    └── 1909_phenotype
-        ├── 1909_phenotype.h2.db
-        ├── phenix
-        │   ├── 10.out
-        .   .   [..]
-        │   ├── ALL_SOURCES_ALL_FREQUENCIES_genes_to_phenotype.txt
-        │   ├── hp.obo
-        │   └── phenotype_annotation.tab
-        └── rw_string_10.mv
-
-    3 directories, 55 files
-
-------------------
-Create a Superuser
-------------------
-
-While the ``docker-compose up`` is running:
-
-::
-
-    $ docker exec -it -w /usr/src/app varfish-docker-compose_varfish-web_1 python manage.py createsuperuser
-    Username: root
-    Email address:
-    Password:
-    Password (again):
-    Superuser created successfully.
-
-------------------
-Setup Initial Data
-------------------
-
-Create test category & project.
-
-Obtain API key and configure ``varfish-cli``.
-
-Import some test data through the API.
-
-::
-
-    $ varfish-cli --no-verify-ssl case create-import-info --resubmit \
-        92f5d735-0967-4db2-a801-50fe96359f51 \
-        $(find path/to/variant_export/work/*NA12878* -name '*.tsv.gz' -or -name '*.ped')
-
-
---------------------
-Create Data Tarballs
---------------------
-
-Now create the released data tarballs.
-
-::
-
-    tar -cf - volumes | pigz -c > varfish-site-data-v1-20210728-grch37.tar.gz && sha256sum varfish-site-data-v1-20210728-grch37.tar.gz >varfish-site-data-v1-20210728-grch37.tar.gz.sha256 &
-    tar -cf - volumes | pigz -c > varfish-site-data-v1-20210728-grch38.tar.gz && sha256sum varfish-site-data-v1-20210728-grch38.tar.gz >varfish-site-data-v1-20210728-grch38.tar.gz.sha256 &
-    tar -cf - test-data | pigz -c > varfish-test-data-v1-20211125.tar.gz && sha256sum varfish-test-data-v1-20211125.tar.gz >varfish-test-data-v1-20211125.tar.gz.sha256
diff --git a/docs/developer_database.rst b/docs/developer_database.rst
deleted file mode 100644
index aa49fbcb5..000000000
--- a/docs/developer_database.rst
+++ /dev/null
@@ -1,122 +0,0 @@
-.. _developer_database:
-
-===============
-Database Import
-===============
-
-First, download the pre-built database files that we provide and unpack them.
-Please make sure that you have enough space available. The packed file consumes
-31 Gb. When unpacked, it consumes an additional 188 Gb.
-
-.. code-block:: bash
-
-    $ cd /plenty/space
-    $ wget https://file-public.bihealth.org/transient/varfish/varfish-server-background-db-20201006.tar.gz{,.sha256}
-    $ sha256sum -c varfish-server-background-db-20201006.tar.gz.sha256
-    $ tar xzvf varfish-server-background-db-20201006.tar.gz
-
-We recommend excluding the large databases: frequency tables, extra
-annotations and dbSNP. Also, keep in mind that importing the whole database
-takes >24h, depending on the speed of your HDD.
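Given the figures above (31 Gb packed plus 188 Gb unpacked, roughly 220 Gb combined), it can save a failed import to check free space up front. A small sketch, not part of the original instructions; the ``check_space`` helper name and the use of the current directory are illustrative only:

```shell
# check_space REQUIRED_GB AVAIL_GB: report whether enough space is available.
check_space() {
    if [ "$2" -lt "$1" ]; then
        echo "only $2 Gb free, need ~$1 Gb"
        return 1
    fi
    echo "ok: $2 Gb free"
}

# Free space (in Gb) of the current directory; point df at your data directory.
avail_gb=$(df -Pk . | awk 'NR==2 { printf "%d", $4 / 1024 / 1024 }')
check_space 220 "$avail_gb" || echo "pick another location or exclude more tables"
```
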
- -This is a list of the possible imports, sorted by its size: - -=================== ==== ================== ============================= -Component Size Exclude Function -=================== ==== ================== ============================= -gnomAD_genomes 80G highly recommended frequency annotation -extra-annos 50G highly recommended diverse -dbSNP 32G highly recommended SNP annotation -thousand_genomes 6,5G highly recommended frequency annotation -gnomAD_exomes 6,0G highly recommended frequency annotation -knowngeneaa 4,5G highly recommended alignment annotation -clinvar 3,3G highly recommended pathogenicity classification -ExAC 1,9G highly recommended frequency annotation -dbVar 573M recommended SNP annotation -gnomAD_SV 250M recommended SV frequency annotation -ncbi_gene 151M gene annotation -ensembl_regulatory 77M frequency annotation -DGV 43M SV annotation -hpo 22M phenotype information -hgnc 15M gene annotation -gnomAD_constraints 13M frequency annotation -mgi 10M mouse gene annotation -ensembltorefseq 8,3M identifier mapping -hgmd_public 5,0M gene annotation -ExAC_constraints 4,6M frequency annotation -refseqtoensembl 2,0M identifier mapping -ensembltogenesymbol 1,6M identifier mapping -ensembl_genes 1,2M gene annotation -HelixMTdb 1,2M MT frequency annotation -refseqtogenesymbol 1,1M identifier mapping -refseq_genes 804K gene annotation -mim2gene 764K phenotype information -MITOMAP 660K MT frequency annotation -kegg 632K pathway annotation -mtDB 336K MT frequency annotation -tads_hesc 108K domain annotation -tads_imr90 108K domain annotation -vista 104K orthologous region annotation -acmg 16K disease gene annotation -=================== ==== ================== ============================= - -You can find the ``import_versions.tsv`` file in the root folder of the -package. This file determines which component (called ``table_group`` and -represented as folder in the package) gets imported when the import command is -issued. 
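Excluding table groups means editing this file, as described next; when many groups are to be skipped, the editing can be scripted. A sketch assuming GNU ``sed`` and whitespace-separated columns; the toy file here stands in for the real ``import_versions.tsv`` in the package root:

```shell
# Toy excerpt standing in for the real import_versions.tsv.
printf '%s\t%s\t%s\n' \
    build table_group version \
    GRCh37 dbSNP b151 \
    GRCh37 acmg v2.0 \
    GRCh38 dbSNP b151 > import_versions.tsv

# Comment out large GRCh37 table groups so the import command skips them.
for group in dbSNP gnomAD_genomes gnomAD_exomes thousand_genomes extra-annos; do
    sed -i -E "s/^GRCh37[[:space:]]+${group}[[:space:]]/#&/" import_versions.tsv
done

cat import_versions.tsv
```

In the toy file only the GRCh37 dbSNP line ends up commented out; the GRCh38 line and the small ``acmg`` group are left untouched.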
To exclude a table, simply comment out (``#``) or delete the line. -Excluding tables that are not required for development can reduce time and -space consumption. Also, the GRCh38 tables can be excluded. - -A space-consumption-friendly version of the file would look like this:: - - build table_group version - GRCh37 acmg v2.0 - #GRCh37 clinvar 20200929 - #GRCh37 dbSNP b151 - #GRCh37 dbVar latest - GRCh37 DGV 2016 - GRCh37 ensembl_genes r96 - GRCh37 ensembl_regulatory latest - GRCh37 ensembltogenesymbol latest - GRCh37 ensembltorefseq latest - GRCh37 ExAC_constraints r0.3.1 - #GRCh37 ExAC r1 - #GRCh37 extra-annos 20200704 - GRCh37 gnomAD_constraints v2.1.1 - #GRCh37 gnomAD_exomes r2.1 - #GRCh37 gnomAD_genomes r2.1 - #GRCh37 gnomAD_SV v2 - GRCh37 HelixMTdb 20190926 - GRCh37 hgmd_public ensembl_r75 - GRCh37 hgnc latest - GRCh37 hpo latest - GRCh37 kegg april2011 - #GRCh37 knowngeneaa latest - GRCh37 mgi latest - GRCh37 mim2gene latest - GRCh37 MITOMAP 20200116 - GRCh37 mtDB latest - GRCh37 ncbi_gene latest - GRCh37 refseq_genes r105 - GRCh37 refseqtoensembl latest - GRCh37 refseqtogenesymbol latest - GRCh37 tads_hesc dixon2012 - GRCh37 tads_imr90 dixon2012 - #GRCh37 thousand_genomes phase3 - GRCh37 vista latest - #GRCh38 clinvar 20200929 - #GRCh38 dbVar latest - #GRCh38 DGV 2016 - -To perform the import, issue: - -.. code-block:: bash - - $ python manage.py import_tables --tables-path /plenty/space/varfish-server-background-db-20201006 - -Performing the import twice will automatically skip tables that are already -imported. To re-import tables, add the ``--force`` parameter to the command: - -.. code-block:: bash - - $ python manage.py import_tables --tables-path varfish-db-downloader --force diff --git a/docs/developer_development.rst b/docs/developer_development.rst deleted file mode 100644 index 1a334c7cf..000000000 --- a/docs/developer_development.rst +++ /dev/null @@ -1,189 +0,0 @@ -.. 
_developer_development:
-
-===========
-Development
-===========
-
------------------------
-Working With Sodar Core
-----------------------
-
-VarFish is based on the Sodar Core framework, which has a `developer manual `_ itself.
-It is worth reading its development instructions.
-The following lists the most important topics:
-
-- `Models `_
-- `Rules `_
-- `Views `_
-- `Templates `_
-  - `Icons `_
-- `Forms `_
-
-
--------------
-Running Tests
--------------
-
-Running the VarFish test suite is easy, but can take a long time to finish (>10 minutes).
-
-.. code-block:: bash
-
-    $ make test
-
-You can exclude time-consuming UI tests:
-
-.. code-block:: bash
-
-    $ make test-noselenium
-
-If you are working on only a few tests, it is better to run them directly.
-To specify them, follow the path to the test file, add the class name and the test function, all separated by a dot:
-
-.. code-block:: bash
-
-    $ python manage.py test -v2 --settings=config.settings.test variants.tests.test_ui.TestVariantsCaseFilterView.test_variant_filter_case_multi_bookmark_one_variant
-
-This would run the UI tests in the variants app for the case filter view.
-
-To speed up your tests, you can use the ``--keepdb`` parameter.
-This will only run the migrations on the first test run.
-
----------------
-Style & Linting
----------------
-
-We use `black `__ for formatting Python code, `flake8 `__ for linting, and `isort `__ for sorting imports.
-To ensure that your Python code follows all restrictions and passes CI, use:
-
-.. code-block:: bash
-
-    $ make black isort flake8
-
-We use `prettier `__ for Javascript formatting and `eslint `__ for linting the code.
-Similarly, you can use the following for the Javascript/Vue code:
-
-.. code-block:: bash
-
-    $ make vue_lint prettier
-
-Or, all together:
-
-.. code-block:: bash
-
-    $ make black isort flake8 vue_lint prettier
-
----------
-Storybook
----------
-
-We use `Storybook.js `__ to develop Vue components in isolation.
-After running ``npm install`` in ``varfish/vueapp``, you can launch the Storybook browser by calling:
-
-.. code-block:: bash
-
-    $ make storybook
-
-----------------
-Working With Git
-----------------
-
-In this section, we briefly describe the workflow for contributing to VarFish.
-This is not a git tutorial and we expect basic knowledge.
-We recommend `gitready `_ for any questions regarding git.
-We do use `git rebase `_ a lot.
-
-In general, we recommend working with ``git gui`` and ``gitk``.
-
-The first thing for you to do is to create a fork of our GitHub repository in your GitHub space.
-To do so, go to the `VarFish repository `_ and click on the ``Fork`` button in the top right.
-
-Update Main
-===========
-
-`Pull with rebase on gitready `__
-
-.. code-block:: bash
-
-    $ git pull --rebase
-
-
-Create Working Branch
-=====================
-
-Always create your working branch from the latest main branch.
-Use the ticket number and description as name, following the format ``<ticket number>-<description>``, e.g.
-
-.. code-block:: bash
-
-    $ git checkout -b 123-adding-useful-feature
-
-Write A Sensible Commit Message
-===============================
-
-A commit message should only have 72 characters per line.
-As the first line is the representative one, it should sum up everything the commit does.
-Leave a blank line and add three lines of GitHub directives to reference the issue.
-
-.. code-block::
-
-    Fixed serious bug that prevented user from doing x.
-
-    Closes: #123
-    Related-Issue: #123
-    Projected-Results-Impact: none
-
-Cleanup Before Pull Request
-===========================
-
-We suggest first squashing your commits and then rebasing onto the main branch.
-
-Squash Multiple Commits (Or Use Amend)
---------------------------------------
-
-`Pull with rebase on gitready `__
-
-We prefer to have only one commit per feature (most of the time there is only one feature per branch).
-When your branch is rebased on the main branch, do:
-
-.. code-block:: bash
-
-    $ git rebase -i main
-
-Alternatively, you can always use ``git commit --amend`` to modify your last commit.
-This also allows you to change your latest commit message.
-
-Rebase To Main
---------------
-
-Make sure your main is up-to-date. In your branch, do:
-
-.. code-block:: bash
-
-    $ git checkout 123-adding-useful-feature
-    $ git rebase main
-
-In case of conflicts, resolve them (find ``<<<<`` in conflicting files) and do:
-
-.. code-block:: bash
-
-    $ git add conflicting.file
-    $ git rebase --continue
-
-If unsure, abort the rebase:
-
-.. code-block:: bash
-
-    $ git rebase --abort
-
-Push To Origin
---------------
-
-.. code-block:: bash
-
-    $ git push origin 123-adding-useful-feature
-
-In case you squashed and/or rebased and already pushed the branch, you need to force the push:
-
-.. code-block:: bash
-
-    $ git push -f origin 123-adding-useful-feature
diff --git a/docs/developer_installation.rst b/docs/developer_installation.rst
deleted file mode 100644
index 1a4915c8f..000000000
--- a/docs/developer_installation.rst
+++ /dev/null
@@ -1,257 +0,0 @@
-.. _developer_installation:
-
-============
-Installation
-============
-
-The VarFish installation for developers should be set up differently from the
-installation for production use.
-
-The reason is that the installation for production use runs completely in
-a Docker environment. All containers are assigned to a Docker network that the
-host by default has no access to, except for the reverse proxy that gives
-access to the VarFish web interface.
-
-The developer installation is intended not to carry the full VarFish database
-so that it is light-weight and fits on a laptop. We advise installing the
-services outside of Docker containers.
-
-Please find the instructions for the Windows installation at the end of the page.
-
-----------------
-Install Postgres
-----------------
-
-Follow the instructions for your operating system to install `Postgres `_.
-Make sure that the version is 12 (11, 13 and 14 would also work).
-Ubuntu 20 already includes postgresql 12. In case of older Ubuntu versions, this would be::
-
-    $ sudo apt install postgresql-12
-
-
-Adapt the postgres configuration file; for postgres 14 this would be::
-
-    sudo sed -i -e 's/.*max_locks_per_transaction.*/max_locks_per_transaction = 1024 # min 10/' /etc/postgresql/14/main/postgresql.conf
-
--------------
-Install Redis
--------------
-
-`Redis `_ is the broker that celery uses to manage the queues.
-Follow the instructions for your operating system to install Redis.
-For Ubuntu, this would be::
-
-    $ sudo apt install redis-server
-
-.. _dev_install_python_pipenv:
-
----------------------
-Install Python Pipenv
----------------------
-
-We use `pipenv `__ for managing dependencies.
-The advantage over ``pip`` is that the versions of "dependencies of dependencies" are also tracked in a ``Pipfile.lock`` file.
-This allows for better reproducibility.
-Earlier versions of these instructions used ``conda``, but they have since been updated.
-
-Also, note that VarFish is developed using Python 3.10 only.
-To install Python 3.10, you can use `pyenv `__.
-If you already have Python 3.10 (check with ``python --version``), you can skip this step.
-
-.. code-block:: bash
-
-    $ git clone https://github.com/pyenv/pyenv.git ~/.pyenv
-    $ echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
-    $ echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
-    $ echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n  eval "$(pyenv init -)"\nfi' >> ~/.bashrc
-    $ exec $SHELL
-    $ pyenv install 3.10
-    $ pyenv global 3.10
-
-Now, install the latest version of pip and pipenv:
-
-.. code-block:: bash
-
-    $ pip install --upgrade pip pipenv
-
-
---------------------
-Clone git repository
---------------------
-
-Clone the VarFish Server repository and switch into the checkout.
-
-.. code-block:: bash
-
-    $ git clone https://github.com/varfish-org/varfish-server
-    $ cd varfish-server
-
-
----------------------------
-Install Python Requirements
----------------------------
-
-Some required packages have dependencies that are usually not preinstalled.
-Therefore, run:
-
-.. code-block:: bash
-
-    $ sudo bash utils/install_os_dependencies.sh
-
-Now, you can install the Python dependencies as follows:
-
-.. code-block:: bash
-
-    $ pipenv install --dev
-
-Afterwards, you can activate the virtual environment:
-
-.. code-block:: bash
-
-    $ pipenv shell
-    # e.g.,
-    $ make black
-
-Alternatively, you can also run commands directly in the virtual environment:
-
-.. code-block:: bash
-
-    $ pipenv run make black
-
-For greater verbosity, we will use ``pipenv run COMMAND`` below, but you can skip the ``pipenv run`` prefix if you are in the ``pipenv shell``.
-
---------------
-Setup Database
---------------
-
-Use the tool provided in ``utils/`` to set up the database. The name for the
-database should be ``varfish`` (create new user: yes, name: varfish, password: varfish).
-
-.. code-block:: bash
-
-    $ bash utils/setup_database.sh
-
-------------
-Setup vue.js
-------------
-
-Use the tool provided in ``utils/`` to set up vue.js.
-
-.. code-block:: bash
-
-    $ sudo bash utils/install_vue_dev.sh
-
-Open an additional terminal and switch into the Vue directory.
-Then install the VarFish Vue app.
-
-.. code-block:: bash
-
-    $ cd varfish/vueapp/
-    $ npm install
-
-When finished, keep this terminal open to run the Vue app.
-
-.. code-block:: bash
-
-    $ npm run serve
-
--------------
-Setup VarFish
--------------
-
-First, create a ``.env`` file with the following content.
-
-.. code-block:: bash
-
-    export DATABASE_URL="postgres://varfish:varfish@127.0.0.1/varfish"
-    export CELERY_BROKER_URL=redis://localhost:6379/0
-    export PROJECTROLES_ADMIN_OWNER=root
-    export DJANGO_SETTINGS_MODULE=config.settings.local
-
-If you wish to enable structural variants, add the following line.
-
-.. code-block:: bash
-
-    export VARFISH_ENABLE_SVS=1
-
-To create the tables in the VarFish database, run the ``migrate`` command.
-This step can take a few minutes.
-
-.. code-block:: bash
-
-    $ pipenv run python manage.py migrate
-
-Once done, create a superuser for your VarFish instance. By default, the VarFish root user is named ``root`` (the
-setting can be changed in the ``.env`` file with the ``PROJECTROLES_ADMIN_OWNER`` variable).
-
-.. code-block:: bash
-
-    $ pipenv run python manage.py createsuperuser
-
-Last, download the icon sets for VarFish and make scripts, stylesheets and icons available.
-
-.. code-block:: bash
-
-    $ pipenv run python manage.py geticons -c bi cil fa-regular fa-solid gridicons octicon
-    $ pipenv run python manage.py collectstatic
-
-When done, open two terminals and start the VarFish server and the celery server.
-
-.. code-block:: bash
-
-    terminal1$ pipenv run make serve
-    terminal2$ pipenv run make celery
-
-
-======================
-Installation (Windows)
-======================
-
-The setup was done on a recent version of Windows 10 with Windows Subsystem for Linux Version 2 (WSL2).
-
------------------
-Installation WSL2
------------------
-
-Follow `this tutorial <https://www.omgubuntu.co.uk/how-to-install-wsl2-on-windows-10>`__ to install WSL2.
-
-- Note that the whole thing appears to be a bit convoluted; you start out with ``wsl.exe --install``.
-- Then you can install the latest Ubuntu LTS (22.04) with the Microsoft Store.
-- Once complete, you probably end up with a WSL 1 (one!) that you can convert to version 2 (two!) with ``wsl --set-version Ubuntu-22.04 2`` or similar.
-- WSL2 has some advantages, including running a full Linux kernel, but is even slower in I/O to the NTFS Windows mount.
-- Everything that you do will be inside the WSL image.
-
---------------------
-Install Dependencies
---------------------
-
-.. code-block::
-
-    $ sudo apt install libsasl2-dev python3-dev libldap2-dev libssl-dev gcc make rsync
-    $ sudo apt install postgresql postgresql-server-dev-14 postgresql-client redis
-    $ sudo service postgresql start
-    $ sudo service postgresql status
-    $ sudo service redis-server start
-    $ sudo service redis-server status
-    $ sudo sed -i -e 's/.*max_locks_per_transaction.*/max_locks_per_transaction = 1024 # min 10/' /etc/postgresql/14/main/postgresql.conf
-    $ sudo service postgresql restart
-
-Create a postgres user ``varfish`` with password ``varfish`` and a database.
-
-.. code-block::
-
-    $ sudo -u postgres createuser -s -r -d varfish -P
-    $ [enter varfish as password]
-    $ sudo -u postgres createdb --owner=varfish varfish
-
-From here on, you can follow the instructions for the Linux installation, starting at :ref:`dev_install_python_pipenv`.
-
-
--------------------------
-Open WSL image in PyCharm
--------------------------
-
-This has been tested with PyCharm Professional only.
-
-- You can simply open projects in the WSL, e.g., ``\\wsl$Ubuntu-22.04\home...``.
-- You can add the interpreter in the ``varfish-server`` miniconda3 environment to PyCharm, which gives you access to it.
diff --git a/docs/developer_kiosk.rst b/docs/developer_kiosk.rst
deleted file mode 100644
index 74c5ad746..000000000
--- a/docs/developer_kiosk.rst
+++ /dev/null
@@ -1,44 +0,0 @@
-.. _developer_kiosk:
-
-=====
-Kiosk
-=====
-
-The Kiosk mode in VarFish enables users to upload VCF files.
-This is not intended for production use as every upload will create its own project, so there is no way of
-organizing your cases properly. The mode serves only as a way to try out VarFish for external users.
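The configuration below downloads large tarballs together with their ``.sha256`` files; it is worth actually verifying them before unpacking, as done elsewhere in these docs with ``sha256sum -c``. A self-contained sketch with a stand-in file (with the real data you would run the check on the downloaded ``*.tar.gz.sha256`` files instead):

```shell
# Stand-in for a downloaded tarball plus its checksum file.
echo "example tarball content" > varfish-annotator-demo.tar.gz
sha256sum varfish-annotator-demo.tar.gz > varfish-annotator-demo.tar.gz.sha256

# Verify before unpacking; a corrupted or truncated download reports FAILED here.
sha256sum -c varfish-annotator-demo.tar.gz.sha256
```
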
-
--------------
-Configuration
--------------
-
-First, you need to download the VarFish annotator data (11 Gb) and unpack it.
-
-.. code-block:: bash
-
-    $ wget https://file-public.bihealth.org/transient/varfish/varfish-annotator-{,transcripts-}20191129.tar.gz{,.sha256}
-    $ tar xzvf varfish-annotator-20191129.tar.gz
-    $ tar xzvf varfish-annotator-transcripts-20191129.tar.gz
-
-If you want to enable Kiosk mode, add the following lines to the ``.env`` file.
-
-.. code-block:: bash
-
-    export VARFISH_KIOSK_MODE=1
-    export VARFISH_KIOSK_VARFISH_ANNOTATOR_REFSEQ_SER_PATH=/path/to/varfish-annotator-transcripts-20191129/hg19_refseq_curated.ser
-    export VARFISH_KIOSK_VARFISH_ANNOTATOR_ENSEMBL_SER_PATH=/path/to/varfish-annotator-transcripts-20191129/hg19_ensembl.ser
-    export VARFISH_KIOSK_VARFISH_ANNOTATOR_REFERENCE_PATH=/path/to/unpacked/varfish-annotator-20191129/hs37d5.fa
-    export VARFISH_KIOSK_VARFISH_ANNOTATOR_DB_PATH=/path/to/unpacked/varfish-annotator-20191129/varfish-annotator-db-20191129.h2.db
-    export VARFISH_KIOSK_CONDA_PATH=/path/to/miniconda/bin/activate
-
----
-Run
----
-
-To run the Kiosk mode, simply (re)start the web server and the celery server.
-
-.. code-block:: bash
-
-    terminal1$ make serve
-    terminal2$ make celery
-
diff --git a/docs/developer_templates.rst b/docs/developer_templates.rst
deleted file mode 100644
index 31f896742..000000000
--- a/docs/developer_templates.rst
+++ /dev/null
@@ -1,182 +0,0 @@
-.. _developer_templates:
-
-===========================
-Templates (for Issues etc.)
-===========================
-
-We organize bug reports and feature requests in the
-`Github issue tracker `_.
-Please choose the template that best fits what you want to report and fill out
-the questions to help us decide on how to approach the task.
-
------------
-Bug Reports
------------
-
-The template for bug reports has the following form (an up-to-date form is located in the Github issue tracker):
-
-.. 
code-block:: markdown - - **Describe the bug** - A clear and concise description of what the bug is. - - **To Reproduce** - Steps to reproduce the behavior: - 1. Go to '...' - 2. Click on '....' - 3. Scroll down to '....' - 4. See error - - **Expected behavior** - A clear and concise description of what you expected to happen. - - **Screenshots** - If applicable, add screenshots to help explain your problem. - - **Desktop (please complete the following information):** - - OS: [e.g. iOS] - - Browser [e.g. chrome, safari] - - Version [e.g. 22] - - **Smartphone (please complete the following information):** - - Device: [e.g. iPhone6] - - OS: [e.g. iOS8.1] - - Browser [e.g. stock browser, safari] - - Version [e.g. 22] - - **Additional context** - Add any other context about the problem here. - -Root Cause Analysis -=================== - -In the following, a root cause analysis (RCA) needs to be done. The ticket will get an answer with the title -**Root Cause Analysis** and a thorough description of what might cause the bug. - -Resolution Proposal -=================== - -When the root cause is determined, a solution needs to be proposed, following this form: - -.. code-block:: markdown - - **Resolution Proposal** - e.g. The component X needs to be changed to Y so Z is not executed when M occurs. - - **Affected Components** - e.g. VarFish server - - **Affected Modules/Files** - e.g. variants module or queries.py - - **Required Architectural Changes** - e.g. Function F needs to be moved to X. - - **Required Database Changes** - i.e. name any model that needs changing, to be added and will lead to a migration - - **Backport Possible?** - e.g., "Yes" if this is a bug fix or small change and should be backported to the current stable version - - **Resolution Sketch** - e.g. Change X in F. Then do Y. - - -Commits -======= - -All commits should adhere to `semantic pull requests `__. 
-That is, the commit messages look like this:
-
-::
-
-    prefix: message here
-
-Valid prefixes types are defined in `here `__.
-
-Common examples are:
-
-::
-
-    fix: fixing bug (#number)
-    feat: implementing feature (#number)
-    chore: will not go to changelog (#number)
-
-Almost all commits should refer to a ticket in trailing parenthesis, e.g.
-
-::
-
-    fix: resolve some issue (#NUMBER)
-
-Note that we enforce squash commits for pull requests.
-All of your commits will be squashed when merged.
-The pull request should be broken into semantic parts and checking this is part of the code review process.
-
-Fix & Pull Request
-==================
-
-1. Create new branch (name starts with issue number), e.g. ``123-fix-for-issue``
-2. Create pull request in "Draft" state
-3. Fix problem, ideally in a test-driven way, remove "Draft" state
-
-Review & Merge
-==============
-
-1. Perform code review
-2. Ensure fix is documented in changelog (link to bug and PR #ids)
-
-----------------
-Feature Requests
-----------------
-
-A feature request follows the same workflow as a bug request (an up-to-date form is located in the Github issue tracker):
-
-.. code-block:: markdown
-
-    **Is your feature request related to a problem? Please describe.**
-    A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-    **Describe the solution you'd like**
-    A clear and concise description of what you want to happen.
-
-    **Describe alternatives you've considered**
-    A clear and concise description of any alternative solutions or features you've considered.
-
-    **Additional context**
-    Add any other context or screenshots about the feature request here.
-
-
-Design
-======
-
-In the following, the design of the feature needs to be specified:
-
-.. code-block:: markdown
-
-    **Implementation Proposal**
-    e.g. The component X needs to be changed to Y so Z is not executed when M occurs.
-
-    **Affected Components**
-    e.g. VarFish server
-
-    **Affected Modules/Files**
-    e.g. variants module or queries.py
-
-    **Required Architectural Changes**
-    e.g. Function F needs to be moved to X.
-
-    **Implementation Sketch**
-    e.g. Change X in F. Then do Y.
-
-Implement & Test
-================
-
-1. Create feature branch, named starting with issue ID
-2. Perform implementation, ideally in a test-driven way
-3. Tests and documentation must be augmented/updated as well
-
-Review & Merge
-==============
-
-1. Perform code review
-2. Ensure change is documented in changelog (link to feature issue and PR #ids)
diff --git a/docs/developer_tf.rst b/docs/developer_tf.rst
deleted file mode 100644
index a31fbe04c..000000000
--- a/docs/developer_tf.rst
+++ /dev/null
@@ -1,20 +0,0 @@
-.. _dev_tf:
-
-=========================
-GitHub Project Management
-=========================
-
-We use Terraform for managing the GitHub project settings (as applicable):
-
-.. code-block:: bash
-
-    $ export GITHUB_OWNER=bihealth
-    $ export GITHUB_TOKEN=ghp_
-
-    $ cd utils/terraform
-    $ terraform init
-    $ terraform import github_repository.varfish-server varfish-server
-    $ terraform validate
-    $ terraform fmt
-    $ terraform plan
-    $ terraform apply
diff --git a/docs/index.rst b/docs/index.rst
index 93bbb1b24..73822436e 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -7,6 +7,9 @@ VarFish User Manual
 VarFish is a system for the filtration of variants.
 Currently, the main focus is on small/sequence variants called from high-througput sequencing data (in contrast to structural variants).
 
+This is the documentation aimed at VarFish end users and operators (administrators).
+The development documentation can be found at `varfish-dev-docs.readthedocs.io `__.
+
 .. figure:: figures/varfish_home.png
     :alt: The VarFish home screen.
     :width: 80%
@@ -138,26 +141,6 @@ Currently, the main focus is on small/sequence variants called from high-througp
    api_json_schemas
    api_beacon
 
-.. raw:: latex
-
-   \part{Developer's Manual}
-
-.. toctree::
-   :maxdepth: 1
-   :caption: Developer's Manual
-   :name: developers_manual
-   :hidden:
-   :titlesonly:
-
-   developer_installation
-   developer_database
-   developer_development
-   developer_kiosk
-   developer_templates
-   developer_checklists
-   developer_data_builds
-   developer_tf
-
 .. raw:: latex
 
    \part{Notes}
 
diff --git a/utils/install_os_dependencies.sh b/utils/install_backend_os_dependencies.sh
similarity index 100%
rename from utils/install_os_dependencies.sh
rename to utils/install_backend_os_dependencies.sh
diff --git a/utils/install_vue_dev.sh b/utils/install_frontend_os_dependencies.sh
similarity index 100%
rename from utils/install_vue_dev.sh
rename to utils/install_frontend_os_dependencies.sh
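The download step in the removed kiosk instructions above fetches a `.sha256` companion file next to each tarball. As a hedged sketch of how such a download could be verified — the `verify_tarball` helper is illustrative only and not part of the VarFish repository:

```shell
#!/usr/bin/env bash
# Sketch: verify a downloaded tarball against its ".sha256" companion file,
# as produced by sha256sum. Helper name and file layout are assumptions made
# for this example.
set -euo pipefail

verify_tarball() {
    # Expects "<name>" and "<name>.sha256" in the current directory.
    sha256sum --check --status "${1}.sha256"
}
```

For instance, `verify_tarball varfish-annotator-20191129.tar.gz` right after the `wget` step; a non-zero exit status indicates a corrupted or tampered download.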