SOLR-17492: Introduce recommendations of WAYS of running Solr from small to massive #2783

Open
wants to merge 7 commits into base: main
12 changes: 11 additions & 1 deletion solr/solr-ref-guide/build.gradle
@@ -63,6 +63,7 @@ ext {
antoraLunrExtensionVersion = "1.0.0-alpha.8"
asciidoctorMathjaxVersion = "0.0.9"
asciidoctorTabsVersion = "1.0.0-beta.6"
asciidoctorKrokiVersion = "0.18.1"
linkCheckerVersion = "1.4.2"
gulpCliVersion = "2.3.0"
// Most recent commit as of 2022-06-24, this repo does not have tags
@@ -243,10 +244,18 @@ task downloadAsciidoctorMathjaxExtension(type: NpmTask) {
group = 'Build Dependency Download'
args = ["install", "@djencks/asciidoctor-mathjax@${project.ext.asciidoctorMathjaxVersion}"]

inputs.property("asciidoctor-mathjax version", project.ext.asciidoctorMathjaxVersion)
inputs.property("Antora asciidoctor-mathjax version", project.ext.asciidoctorMathjaxVersion)
outputs.dir("${project.ext.nodeProjectDir}/node_modules/@djencks/asciidoctor-mathjax")
}

task downloadAsciiDoctorKrokiExtension(type: NpmTask) {
group = 'Build Dependency Download'
args = ["install", "asciidoctor-kroki"]

inputs.property("asciidoctor-kroki version", project.ext.asciidoctorKrokiVersion)
outputs.dir("${project.ext.nodeProjectDir}/node_modules/asciidoctor-kroki")
}

task downloadAsciidoctorTabsExtension(type: NpmTask) {
group = 'Build Dependency Download'
args = ["install", "-D", "@asciidoctor/tabs@${project.ext.asciidoctorTabsVersion}"]
@@ -262,6 +271,7 @@ task downloadAntora {
dependsOn tasks.downloadAntoraCli
dependsOn tasks.downloadAntoraSiteGenerator
dependsOn tasks.downloadAntoraLunrExtension
dependsOn tasks.downloadAsciiDoctorKrokiExtension
dependsOn tasks.downloadAsciidoctorMathjaxExtension
dependsOn tasks.downloadAsciidoctorTabsExtension
}
@@ -18,7 +18,7 @@
.Deployment Guide

* xref:solr-control-script-reference.adoc[]

* xref:thinking-about-deployment-strategy.adoc[]
* Installation & Deployment
** xref:system-requirements.adoc[]
** xref:installing-solr.adoc[]
@@ -59,7 +59,7 @@ A very good blog post that discusses the issues to consider is https://lucidwork

One thing to note when planning your installation is that a hard limit exists in Lucene for the number of documents in a single index: approximately 2.14 billion documents (2,147,483,647 to be exact).
In practice, it is highly unlikely that such a large number of documents would fit and perform well in a single index, and you will likely need to distribute your index across a cluster before you ever approach this number.
If you know you will exceed this number of documents in total before you've even started indexing, it's best to plan your installation with xref:cluster-types.adoc#solrcloud-mode[SolrCloud] as part of your design from the start.
Fortunately, Solr is configured by default to be deployed in xref:cluster-types.adoc#solrcloud-mode[SolrCloud] mode, which lets you scale out when you need to.
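
A minimal sketch of what that looks like in practice (assuming a stock binary distribution; the status output varies by Solr version):

[source,bash]
----
# Start a single node; with no ZK_HOST configured this runs in SolrCloud
# mode backed by an embedded ZooKeeper.
bin/solr start

# Confirm the node is up and responding.
bin/solr status
----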

== Package Installation

@@ -84,9 +84,7 @@ This directory includes several important scripts that will make using Solr easi

solr and solr.cmd::: This is xref:solr-control-script-reference.adoc[Solr's Control Script], also known as `bin/solr` (*nix) / `bin/solr.cmd` (Windows).
This script is the preferred tool to start and stop Solr.
You can also create collections or cores, configure authentication, and work with configuration files when running in SolrCloud mode.

post::: The xref:indexing-guide:post-tool.adoc[], which provides a simple command line interface for POSTing content to Solr.
You can also create collections or cores, configure authentication, work with configuration files, and even index documents into Solr (see the sketch below).

solr.in.sh and solr.in.cmd:::
These are property files for *nix and Windows systems, respectively.
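
As a hedged sketch of typical control-script usage (the collection name and sample file are only illustrative, and option spellings can differ slightly between Solr versions):

[source,bash]
----
# Create a collection named "films" (illustrative name).
bin/solr create -c films

# Index the bundled sample documents into it.
bin/solr post -c films example/films/films.json
----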
@@ -198,8 +196,7 @@ For instance, to launch the "techproducts" example, you would do:
bin/solr start --cloud -e techproducts
----

Currently, the available examples you can run are: techproducts, schemaless, and cloud.
See the section xref:solr-control-script-reference.adoc#running-with-example-configurations[Running with Example Configurations] for details on each example.
See the section xref:solr-control-script-reference.adoc#running-with-example-configurations[Running with Example Configurations] for details on all the examples available.

.Going deeper with SolrCloud
NOTE: Running the `cloud` example demonstrates running multiple nodes of Solr using xref:cluster-types.adoc#solrcloud-mode[SolrCloud] mode.
@@ -247,18 +247,25 @@ You can also refer to xref:jvm-settings.adoc[] for tuning your memory and garbag
The `bin/solr` script uses the `-XX:+CrashOnOutOfMemoryError` JVM option to crash Solr on `OutOfMemoryError` exceptions.
This behavior is recommended. In SolrCloud mode, ZooKeeper will be immediately notified that a node has experienced a non-recoverable error.
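
A quick, hedged way to confirm the flag is present on a running node (the grep pattern simply looks for the option on the Solr JVM's command line):

[source,bash]
----
# Look for the crash-on-OOM option among running Java processes.
ps -ef | grep -- '-XX:+CrashOnOutOfMemoryError' | grep -v grep
----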

=== Going to Production with SolrCloud

To run Solr in SolrCloud mode, you need to set the `ZK_HOST` variable in the include file to point to your ZooKeeper ensemble.
Running the embedded ZooKeeper is not supported in production environments.
=== Going to Production with SolrCloud and Embedded ZooKeeper

By default, Solr runs in SolrCloud mode with an embedded ZooKeeper; no additional configuration is required.
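
A hedged sketch of what this means for the include file (the variable names are the standard `solr.in.sh` ones; the values are placeholders, not recommendations):

[source,bash]
----
# solr.in.sh: leave ZK_HOST unset (or commented out) to keep using the
# embedded ZooKeeper that Solr starts for you.
#ZK_HOST=""

# Other production settings are still tuned as usual, e.g. the heap size.
SOLR_HEAP="2g"
----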

=== Going to Production with SolrCloud and an External ZooKeeper Ensemble

To run Solr in SolrCloud mode with an external ZooKeeper ensemble, set the `ZK_HOST` variable in the include file to point to that ensemble.

For instance, if you have a ZooKeeper ensemble hosted on three hosts (zk1, zk2, and zk3) using the default client port 2181, then you would set:

[source,bash]
----
ZK_HOST=zk1,zk2,zk3
----

When the `ZK_HOST` variable is set, Solr will launch in "cloud" mode.
When the `ZK_HOST` variable is set, Solr will launch and connect to the specified ZooKeeper ensemble instead of starting an embedded ZooKeeper.
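
As an alternative to the include file, the ensemble can also be passed on the command line; a hedged sketch (the option is commonly `-z`, though the exact spelling can vary between Solr versions):

[source,bash]
----
# Start this node against the external ensemble instead of the embedded ZooKeeper.
bin/solr start -z zk1:2181,zk2:2181,zk3:2181
----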

See xref:zookeeper-ensemble[ZooKeeper Ensemble Configuration] for more on setting up ZooKeeper.

==== ZooKeeper chroot
