From 42ac45f183add18a278603eeb84bf85cc6c6e66e Mon Sep 17 00:00:00 2001
From: markuczy
Date: Fri, 20 Dec 2024 11:19:56 +0100
Subject: [PATCH] feat: docs adopted

---
 docs/modules/general/pages/ai_support.adoc | 6 ++++++
 docs/modules/general/pages/index.adoc      | 4 ++++
 2 files changed, 10 insertions(+)

diff --git a/docs/modules/general/pages/ai_support.adoc b/docs/modules/general/pages/ai_support.adoc
index 273101b..e058f52 100644
--- a/docs/modules/general/pages/ai_support.adoc
+++ b/docs/modules/general/pages/ai_support.adoc
@@ -1,11 +1,15 @@
 = Installation Instructions for Local Developer AI support
+:idprefix:
+:idseparator: -
 :description: A description for setting up a locally running AI that supports developers with coding.
 
+[#installation-ai-locally]
 == Installing AI locally (Ollama)
 
 You can set up Ollama either through a local installation or using a Docker image.
 
+[#install-on-localhost]
 === Install on local host
 
 This installation option has the advantage of out-of-the-box GPU support. Currently, Windows has only a preview version available.
 
@@ -18,6 +22,7 @@ Configure your ollama to run the desired large language model
 ollama run mistral
 ----
 
+[#run-ollama-with-docker]
 === Run Ollama with Docker
 
 If Docker is already set up, this option is probably the fastest, but it might require some additional steps for GPU support.
@@ -34,6 +39,7 @@ docker exec -it ollama ollama run mistral
 ----
 
 
+[#install-continue-dev]
 == Install continue.dev
 
 Use https://continue.dev/docs/quickstart to install it in your IDE.
diff --git a/docs/modules/general/pages/index.adoc b/docs/modules/general/pages/index.adoc
index f760475..1e05854 100644
--- a/docs/modules/general/pages/index.adoc
+++ b/docs/modules/general/pages/index.adoc
@@ -1,8 +1,12 @@
 = General
+:idprefix:
+:idseparator: -
+
 This guide brings together the collective knowledge of the Onecx community around best practices for creating Onecx software products.
 
 
+[#ai]
 == AI
 
 * xref:ai_support.adoc[Installation Instructions for Local Developer AI support]
 
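For reference: the Docker section's `docker exec -it ollama ollama run mistral` command assumes a container named `ollama` is already running; the command that starts it falls outside this diff's context lines. A minimal sketch, following the defaults documented for the official Ollama Docker image:

[source,bash]
----
# Start the Ollama server detached, with a named volume so pulled
# models survive container restarts (CPU-only variant).
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Then pull and run a model inside the container, as in the patch:
docker exec -it ollama ollama run mistral
----

GPU passthrough additionally requires the NVIDIA Container Toolkit on the host and a `--gpus=all` flag on the `docker run` line.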
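Either setup serves Ollama's HTTP API on port 11434, which is the same endpoint IDE integrations connect to. A quick smoke test, assuming the `mistral` model from the patch has already been pulled (the prompt text is just an example):

[source,bash]
----
# One-off completion against the local server; /api/generate is
# Ollama's standard completion endpoint.
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "Write a hello world program in Java.",
  "stream": false
}'
----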
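To make continue.dev use the local model, its configuration needs an Ollama provider entry. A minimal sketch, assuming the JSON `~/.continue/config.json` format (newer continue.dev releases may use a YAML config instead; the quickstart linked in the patch is authoritative):

[source,json]
----
{
  "models": [
    {
      "title": "Mistral (local)",
      "provider": "ollama",
      "model": "mistral"
    }
  ]
}
----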