
format&clean dist
msgui committed Feb 7, 2024
1 parent a614e7b commit 229f690
Showing 19 changed files with 192 additions and 169 deletions.
39 changes: 25 additions & 14 deletions hugegraph-server/hugegraph-dist/docker/README.md
@@ -1,22 +1,27 @@
# Deploy HugeGraph server with Docker

> Note:
>
> 1. The docker image of hugegraph is a convenience release, not official distribution artifacts from ASF. You can find more details from [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub).
>
> 2. It is recommended to use a `release tag` (like `1.2.0`) for the stable version. Use the `latest` tag to experience the newest features in development.
>
> 1. The docker image of hugegraph is a convenience release, not official distribution artifacts
from ASF. You can find more details
from [ASF Release Distribution Policy](https://infra.apache.org/release-distribution.html#dockerhub).
>
> 2. It is recommended to use a `release tag` (like `1.2.0`) for the stable version. Use the `latest` tag to
experience the newest features in development.

## 1. Deploy

We can use Docker to quickly start a standalone HugeGraph server with RocksDB running in the background.

1. Using docker run

Use `docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph` to start hugegraph server.
Use `docker run -itd --name=graph -p 8080:8080 hugegraph/hugegraph` to start hugegraph server.

2. Using docker compose

Of course, we can deploy only the server without any other instances. Additionally, if we want to manage other HugeGraph-related instances alongside `server` in a single file, we can deploy HugeGraph-related instances via `docker-compose up -d`. The `docker-compose.yaml` is as below:
Of course, we can deploy only the server without any other instances. Additionally, if we want to manage
other HugeGraph-related instances alongside `server` in a single file, we can deploy HugeGraph-related
instances via `docker-compose up -d`. The `docker-compose.yaml` is as below:

```yaml
version: '3'
```

@@ -29,18 +29,22 @@ We can use docker to quickly start an inner HugeGraph server with RocksDB in background
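The compose file above is truncated in this view. As a rough sketch only (the service name, container name, and everything beyond the image and port mapping are assumptions inferred from the `docker run` example, not the committed file), a server-only `docker-compose.yaml` could look like:

```yaml
# Sketch of a server-only compose file; image, container name and port
# mapping mirror the `docker run -itd --name=graph -p 8080:8080
# hugegraph/hugegraph` example above. Everything else is an assumption.
version: '3'
services:
  server:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - "8080:8080"
```

Running `docker-compose up -d` in the directory containing this file starts the service in the background.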
## 2. Create Sample Graph on Server Startup
If you want to **pre-load** some (test) data or graphs in the container (by default), you can set the env `PRELOAD=true`
If you want to **pre-load** some (test) data or graphs in the container (by default), you can set the
env `PRELOAD=true`

If you want to customize the pre-loaded data, please mount the groovy scripts (optional).

1. Using docker run

Use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true -v /path/to/yourScript:/hugegraph/scripts/example.groovy hugegraph/hugegraph`
to start hugegraph server.
Use `docker run -itd --name=graph -p 8080:8080 -e PRELOAD=true -v /path/to/yourScript:/hugegraph/scripts/example.groovy hugegraph/hugegraph`
to start hugegraph server.

2. Using docker compose
2. Using docker compose

We can also use `docker-compose up -d` to quickly start. The `docker-compose.yaml` is below. [example.groovy](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-dist/src/assembly/static/scripts/example.groovy) is a pre-defined script. If needed, we can mount a new `example.groovy` to preload different data:
We can also use `docker-compose up -d` to quickly start. The `docker-compose.yaml` is
below. [example.groovy](https://github.com/apache/incubator-hugegraph/blob/master/hugegraph-dist/src/assembly/static/scripts/example.groovy)
is a pre-defined script. If needed, we can mount a new `example.groovy` to preload different
data:

```yaml
version: '3'
```

@@ -57,17 +66,19 @@ If you want to customize the pre-loaded data, please mount the groovy scripts
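This compose file is likewise truncated. Under the same assumptions as above (service and container names are guesses, not the committed file), the preload variant implied by the `docker run` command might be sketched as:

```yaml
# Sketch only: the PRELOAD env var and the script mount path come from the
# `docker run` example above; the rest of the layout is assumed.
version: '3'
services:
  server:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - "8080:8080"
    environment:
      - PRELOAD=true
    volumes:
      # mount a custom example.groovy to preload different data
      - /path/to/yourScript:/hugegraph/scripts/example.groovy
```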

3. Using start-hugegraph.sh

If you deploy HugeGraph server without docker, you can also pass arguments using `-p`, like this: `bin/start-hugegraph.sh -p true`.
If you deploy HugeGraph server without docker, you can also pass arguments using `-p`, like
this: `bin/start-hugegraph.sh -p true`.

## 3. Enable Authentication

1. Using docker run

Use `docker run -itd --name=graph -p 8080:8080 -e AUTH=true -e PASSWORD=123456 hugegraph/hugegraph` to enable authentication and set the password with `-e AUTH=true -e PASSWORD=123456`.
Use `docker run -itd --name=graph -p 8080:8080 -e AUTH=true -e PASSWORD=123456 hugegraph/hugegraph`
to enable authentication and set the password with `-e AUTH=true -e PASSWORD=123456`.

2. Using docker compose

Similarly, we can set the environment variables in the docker-compose.yaml:
Similarly, we can set the environment variables in the docker-compose.yaml:

```yaml
version: '3'
```
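The truncated compose file presumably sets the same variables as the `docker run` command; a minimal sketch (service layout assumed, not the committed file) would be:

```yaml
# Sketch only: the AUTH/PASSWORD values come from the docker run example
# above; service/container names are assumptions. Use a stronger password
# in practice.
version: '3'
services:
  server:
    image: hugegraph/hugegraph
    container_name: graph
    ports:
      - "8080:8080"
    environment:
      - AUTH=true
      - PASSWORD=123456
```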
@@ -33,7 +33,7 @@ services:
depends_on:
- cassandra
healthcheck:
test: ["CMD", "bin/gremlin-console.sh", "--" ,"-e", "scripts/remote-connect.groovy"]
test: [ "CMD", "bin/gremlin-console.sh", "--" ,"-e", "scripts/remote-connect.groovy" ]
interval: 10s
timeout: 30s
retries: 3
@@ -49,7 +49,7 @@ services:
networks:
- ca-network
healthcheck:
test: ["CMD", "cqlsh", "--execute", "describe keyspaces;"]
test: [ "CMD", "cqlsh", "--execute", "describe keyspaces;" ]
interval: 10s
timeout: 30s
retries: 5
@@ -15,5 +15,5 @@
* under the License.
*/

:remote connect tinkerpop.server conf/remote.yaml
:> hugegraph
:remote connect tinkerpop.server conf/remote.yaml
:> hugegraph
@@ -14,12 +14,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
hosts: [localhost]
hosts: [ localhost ]
port: 8182
serializer: {
className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
config: {
serializeResultToString: false,
ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
}
}
@@ -28,12 +28,12 @@ graphs: {
scriptEngines: {
gremlin-groovy: {
staticImports: [
org.opencypher.gremlin.process.traversal.CustomPredicates.*,
org.opencypher.gremlin.traversal.CustomFunctions.*
org.opencypher.gremlin.process.traversal.CustomPredicates.*,
org.opencypher.gremlin.traversal.CustomFunctions.*
],
plugins: {
org.apache.hugegraph.plugin.HugeGraphGremlinPlugin: {},
org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
org.apache.hugegraph.plugin.HugeGraphGremlinPlugin: { },
org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: { },
org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {
classImports: [
java.lang.Math,
@@ -70,13 +70,13 @@ scriptEngines: {
org.opencypher.gremlin.traversal.CustomPredicate
],
methodImports: [
java.lang.Math#*,
org.opencypher.gremlin.traversal.CustomPredicate#*,
org.opencypher.gremlin.traversal.CustomFunctions#*
java.lang.Math#*,
org.opencypher.gremlin.traversal.CustomPredicate#*,
org.opencypher.gremlin.traversal.CustomFunctions#*
]
},
org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {
files: [scripts/empty-sample.groovy]
files: [ scripts/empty-sample.groovy ]
}
}
}
@@ -85,34 +85,34 @@ serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1,
config: {
serializeResultToString: false,
ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
}
}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
config: {
serializeResultToString: false,
ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
}
}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV2d0,
config: {
serializeResultToString: false,
ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
}
}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0,
config: {
serializeResultToString: false,
ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
}
}
metrics: {
consoleReporter: {enabled: false, interval: 180000},
csvReporter: {enabled: false, interval: 180000, fileName: ./metrics/gremlin-server-metrics.csv},
jmxReporter: {enabled: false},
slf4jReporter: {enabled: false, interval: 180000},
gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
graphiteReporter: {enabled: false, interval: 180000}
consoleReporter: { enabled: false, interval: 180000 },
csvReporter: { enabled: false, interval: 180000, fileName: ./metrics/gremlin-server-metrics.csv },
jmxReporter: { enabled: false },
slf4jReporter: { enabled: false, interval: 180000 },
gangliaReporter: { enabled: false, interval: 180000, addressingMode: MULTICAST },
graphiteReporter: { enabled: false, interval: 180000 }
}
maxInitialLineLength: 4096
maxHeaderSize: 8192
32 changes: 17 additions & 15 deletions hugegraph-server/hugegraph-dist/src/assembly/static/conf/log4j2.xml
@@ -30,48 +30,48 @@

<!-- Normal server log config -->
<RollingRandomAccessFile name="file" fileName="${LOG_PATH}/${FILE_NAME}.log"
filePattern="${LOG_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-%i.log"
immediateFlush="false">
filePattern="${LOG_PATH}/$${date:yyyy-MM}/${FILE_NAME}-%d{yyyy-MM-dd}-%i.log"
immediateFlush="false">
<ThresholdFilter level="TRACE" onMatch="ACCEPT" onMismatch="DENY"/>
<PatternLayout pattern="%-d{yyyy-MM-dd HH:mm:ss} [%t] [%p] %c{1.} - %m%n"/>
<!-- Trigger after exceeding 1day or 50MB -->
<Policies>
<SizeBasedTriggeringPolicy size="50MB"/>
<TimeBasedTriggeringPolicy interval="1" modulate="true" />
<TimeBasedTriggeringPolicy interval="1" modulate="true"/>
</Policies>
<!-- Keep 5 files per day & auto delete after over 2GB or 100 files -->
<DefaultRolloverStrategy max="5">
<Delete basePath="${LOG_PATH}" maxDepth="2">
<IfFileName glob="*/*.log"/>
<!-- Limit log amount & size -->
<IfAny>
<IfAccumulatedFileSize exceeds="2GB" />
<IfAccumulatedFileCount exceeds="100" />
<IfAccumulatedFileSize exceeds="2GB"/>
<IfAccumulatedFileCount exceeds="100"/>
</IfAny>
</Delete>
</DefaultRolloverStrategy>
</RollingRandomAccessFile>

<!-- Separate & compress audit log, buffer size is 512KB -->
<RollingRandomAccessFile name="audit" fileName="${LOG_PATH}/audit-${FILE_NAME}.log"
filePattern="${LOG_PATH}/$${date:yyyy-MM}/audit-${FILE_NAME}-%d{yyyy-MM-dd-HH}-%i.gz"
bufferSize="524288" immediateFlush="false">
filePattern="${LOG_PATH}/$${date:yyyy-MM}/audit-${FILE_NAME}-%d{yyyy-MM-dd-HH}-%i.gz"
bufferSize="524288" immediateFlush="false">
<ThresholdFilter level="TRACE" onMatch="ACCEPT" onMismatch="DENY"/>
<!-- Use simple format for audit log to speed up -->
<PatternLayout pattern="%-d{yyyy-MM-dd HH:mm:ss} - %m%n"/>
<!-- Trigger after exceeding 1hour or 500MB -->
<Policies>
<SizeBasedTriggeringPolicy size="500MB"/>
<TimeBasedTriggeringPolicy interval="1" modulate="true" />
<TimeBasedTriggeringPolicy interval="1" modulate="true"/>
</Policies>
<!-- Keep 2 files per hour & auto delete [after 60 days] or [over 5GB or 500 files] -->
<DefaultRolloverStrategy max="2">
<Delete basePath="${LOG_PATH}" maxDepth="2">
<IfFileName glob="*/*.gz"/>
<IfLastModified age="60d"/>
<IfAny>
<IfAccumulatedFileSize exceeds="5GB" />
<IfAccumulatedFileCount exceeds="500" />
<IfAccumulatedFileSize exceeds="5GB"/>
<IfAccumulatedFileCount exceeds="500"/>
</IfAny>
</Delete>
</DefaultRolloverStrategy>
@@ -86,16 +86,16 @@
<!-- Trigger after exceeding 1day or 50MB -->
<Policies>
<SizeBasedTriggeringPolicy size="50MB"/>
<TimeBasedTriggeringPolicy interval="1" modulate="true" />
<TimeBasedTriggeringPolicy interval="1" modulate="true"/>
</Policies>
<!-- Keep 5 files per day & auto delete after over 2GB or 100 files -->
<DefaultRolloverStrategy max="5">
<Delete basePath="${LOG_PATH}" maxDepth="2">
<IfFileName glob="*/*.log"/>
<!-- Limit log amount & size -->
<IfAny>
<IfAccumulatedFileSize exceeds="2GB" />
<IfAccumulatedFileCount exceeds="100" />
<IfAccumulatedFileSize exceeds="2GB"/>
<IfAccumulatedFileCount exceeds="100"/>
</IfAny>
</Delete>
</DefaultRolloverStrategy>
@@ -134,10 +134,12 @@
<AsyncLogger name="org.apache.hugegraph.auth" level="INFO" additivity="false">
<appender-ref ref="audit"/>
</AsyncLogger>
<AsyncLogger name="org.apache.hugegraph.api.filter.AuthenticationFilter" level="INFO" additivity="false">
<AsyncLogger name="org.apache.hugegraph.api.filter.AuthenticationFilter" level="INFO"
additivity="false">
<appender-ref ref="audit"/>
</AsyncLogger>
<AsyncLogger name="org.apache.hugegraph.api.filter.AccessLogFilter" level="INFO" additivity="false">
<AsyncLogger name="org.apache.hugegraph.api.filter.AccessLogFilter" level="INFO"
additivity="false">
<appender-ref ref="slowQueryLog"/>
</AsyncLogger>
</loggers>
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
hosts: [localhost]
hosts: [ localhost ]
port: 8182
serializer: {
className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
@@ -23,8 +23,8 @@ serializer: {
# The duplication of HugeGraphIoRegistry is meant to fix a bug in the
# 'org.apache.tinkerpop.gremlin.driver.Settings:from(Configuration)' method.
ioRegistries: [
org.apache.hugegraph.io.HugeGraphIoRegistry,
org.apache.hugegraph.io.HugeGraphIoRegistry
org.apache.hugegraph.io.HugeGraphIoRegistry,
org.apache.hugegraph.io.HugeGraphIoRegistry
]
}
}
@@ -14,12 +14,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
hosts: [localhost]
hosts: [ localhost ]
port: 8182
serializer: {
className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0,
config: {
serializeResultToString: false,
ioRegistries: [org.apache.hugegraph.io.HugeGraphIoRegistry]
ioRegistries: [ org.apache.hugegraph.io.HugeGraphIoRegistry ]
}
}