Merge branch 'opencurve:master' into master
ilixiaocui authored Jun 9, 2023
2 parents a205164 + 1279ee9 commit fa5adec
Showing 610 changed files with 29,907 additions and 11,199 deletions.
5 changes: 4 additions & 1 deletion .bazelrc
Original file line number Diff line number Diff line change
@@ -1,7 +1,10 @@
build --verbose_failures

build --define=with_glog=true --define=libunwind=true
build --copt -DHAVE_ZLIB=1 --copt -DGFLAGS_NS=google --copt -DUSE_BTHREAD_MUTEX
build --cxxopt -Wno-error=format-security
build:gcc7-later --cxxopt -faligned-new
build --incompatible_blacklisted_protos_requires_proto_info=false
build --copt=-fdiagnostics-color=always
run --copt=-fdiagnostics-color=always

run --copt=-fdiagnostics-color=always
19 changes: 19 additions & 0 deletions .gitignore
@@ -147,3 +147,22 @@ tools-v2/proto/curvefs/*
tools-v2/*/*.test
tools-v2/__debug_bin
tools-v2/vendor/

.test
.note
.playground
.dumpfile
metastore_test.dat
GPATH
GRTAGS
GTAGS
core.*

test/integration/*.conf
test/integration/client/config/client.conf*
test/integration/snapshotcloneserver/config/*.conf

.pre-commit-config.yaml

*.deb
*.whl
2 changes: 2 additions & 0 deletions .obm.cfg
@@ -0,0 +1,2 @@
container_name: curve-build-playground-master
container_image: opencurvedocker/curve-base:build-debian9
11 changes: 10 additions & 1 deletion Makefile
@@ -1,6 +1,6 @@
# Copyright (C) 2021 Jingli Chen (Wine93), NetEase Inc.

.PHONY: list build dep install image
.PHONY: list build dep install image playground check test

stor?=""
prefix?= "$(PWD)/projects"
@@ -70,3 +70,12 @@ install:

image:
@bash util/image.sh $(stor) $(tag) $(os)

playground:
@bash util/playground.sh

check:
@bash util/check.sh $(stor)

test:
@bash util/test.sh $(stor) $(only)
5 changes: 4 additions & 1 deletion README.md
@@ -6,6 +6,8 @@

**A cloud-native distributed storage system**

**A sandbox project hosted by the CNCF**

#### English | [简体中文](README_cn.md)
### 📄 [Documents](https://github.com/opencurve/curve/tree/master/docs) || 🌐 [Official Website](https://www.opencurve.io/Curve/HOME) || 🏠 [Forum](https://ask.opencurve.io/t/topic/7)
<div align=left>
@@ -156,7 +158,7 @@ Curve supports deployment in private and public cloud environments, and can also
<div align=center> <image src="docs/images/Curve-deploy-on-premises-idc.png" width=60%>
<div align=left>

One of them, CurveFS shared file storage system, can be elasticly scaled to public cloud storage, which can provide users with greater capacity elasticity, lower cost, and better performance experience.
One of them, CurveFS shared file storage system, can be elastically scaled to public cloud storage, which can provide users with greater capacity elasticity, lower cost, and better performance experience.

</details>

@@ -223,6 +225,7 @@ Please refer to the [Test environment configuration](docs/cn/测试环境配置

## Practical
- [CurveBS+NFS Build NFS Server](docs/practical/curvebs_nfs.md)
- [CurveFS+MinIO S3 Gateway](https://github.com/opencurve/curve-meetup-slides/blob/main/PrePaper/2023/%E6%94%AF%E6%8C%81POSIX%E5%92%8CS3%E7%BB%9F%E4%B8%80%E5%91%BD%E5%90%8D%E7%A9%BA%E9%97%B4%E2%80%94%E2%80%94Curve%E6%96%87%E4%BB%B6%E7%B3%BB%E7%BB%9FS3%E7%BD%91%E5%85%B3%E9%83%A8%E7%BD%B2%E5%AE%9E%E8%B7%B5.md)

## Governance
See [Governance](https://github.com/opencurve/community/blob/master/GOVERNANCE.md).
5 changes: 4 additions & 1 deletion README_cn.md
@@ -4,7 +4,9 @@

<div align=center> <image src="docs/images/cncf-icon-color.png" width = 8%>

**A cloud-native distributed storage system**
**云原生高性能分布式存储系统**

**CNCF基金会的沙箱托管项目**

#### [English](README.md) | 简体中文
### 📄 [文档](https://github.com/opencurve/curve/tree/master/docs) || 🌐 [官网](https://www.opencurve.io/Curve/HOME) || 🏠 [论坛](https://ask.opencurve.io/t/topic/7)
@@ -225,6 +227,7 @@ $ ./fio --thread --rw=randwrite --bs=4k --ioengine=nebd --nebd=cbd:pool//pfstest

## 最佳实践
- [CurveBS+NFS搭建NFS存储](docs/practical/curvebs_nfs.md)
- [CurveFS+S3网关部署实践](https://github.com/opencurve/curve-meetup-slides/blob/main/PrePaper/2023/%E6%94%AF%E6%8C%81POSIX%E5%92%8CS3%E7%BB%9F%E4%B8%80%E5%91%BD%E5%90%8D%E7%A9%BA%E9%97%B4%E2%80%94%E2%80%94Curve%E6%96%87%E4%BB%B6%E7%B3%BB%E7%BB%9FS3%E7%BD%91%E5%85%B3%E9%83%A8%E7%BD%B2%E5%AE%9E%E8%B7%B5.md)

## 行为守则
Curve 的行为守则遵循[CNCF Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md)
22 changes: 20 additions & 2 deletions WORKSPACE
@@ -228,6 +228,24 @@ http_archive(
sha256 = "59b862f50e710277f8ede96f083a5bb8d7c9595376146838b9580be90374ee1f",
)

# fmt
http_archive(
name = "fmt",
url = "https://github.com/fmtlib/fmt/archive/9.1.0.tar.gz",
sha256 = "5dea48d1fcddc3ec571ce2058e13910a0d4a6bab4cc09a809d8b1dd1c88ae6f2",
strip_prefix = "fmt-9.1.0",
build_file = "//:thirdparties/fmt.BUILD",
)

# spdlog
http_archive(
name = "spdlog",
urls = ["https://github.com/gabime/spdlog/archive/refs/tags/v1.11.0.tar.gz"],
strip_prefix = "spdlog-1.11.0",
sha256 = "ca5cae8d6cac15dae0ec63b21d6ad3530070650f68076f3a4a862ca293a858bb",
build_file = "//:thirdparties/spdlog.BUILD",
)

# Bazel platform rules.
http_archive(
name = "platforms",
@@ -248,14 +266,14 @@ new_local_repository(
http_archive(
name = "hedron_compile_commands",

# Replace the commit hash in both places (below) with the latest, rather than using the stale one here.
# Replace the commit hash in both places (below) with the latest, rather than using the stale one here.
# Even better, set up Renovate and let it do the work for you (see "Suggestion: Updates" in the README).
urls = [
"https://curve-build.nos-eastchina1.126.net/bazel-compile-commands-extractor-af9af15f7bc16fc3e407e2231abfcb62907d258f.tar.gz",
"https://github.com/hedronvision/bazel-compile-commands-extractor/archive/af9af15f7bc16fc3e407e2231abfcb62907d258f.tar.gz",
],
strip_prefix = "bazel-compile-commands-extractor-af9af15f7bc16fc3e407e2231abfcb62907d258f",
# When you first run this tool, it'll recommend a sha256 hash to put here with a message like: "DEBUG: Rule 'hedron_compile_commands' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = ..."
# When you first run this tool, it'll recommend a sha256 hash to put here with a message like: "DEBUG: Rule 'hedron_compile_commands' indicated that a canonical reproducible form can be obtained by modifying arguments sha256 = ..."
)
load("@hedron_compile_commands//:workspace_setup.bzl", "hedron_compile_commands_setup")
hedron_compile_commands_setup()
15 changes: 13 additions & 2 deletions buildfs.sh
@@ -7,12 +7,22 @@ then
exit
fi

if [ `gcc -dumpversion | awk -F'.' '{print $1}'` -le 6 ]
then
bazelflags=''
else
bazelflags='--copt -faligned-new'
fi

if [ "$1" = "debug" ]
then
DEBUG_FLAG="--compilation_mode=dbg"
fi

bazel build curvefs/... --copt -DHAVE_ZLIB=1 ${DEBUG_FLAG} -s --define=with_glog=true --define=libunwind=true --copt -DGFLAGS_NS=google --copt -Wno-error=format-security --copt -DUSE_BTHREAD_MUTEX --copt -DCURVEVERSION=${curve_version} --linkopt -L/usr/local/lib
bazel build curvefs/... --copt -DHAVE_ZLIB=1 ${DEBUG_FLAG} -s \
--define=with_glog=true --define=libunwind=true --copt -DGFLAGS_NS=google --copt -Wno-error=format-security --copt \
-DUSE_BTHREAD_MUTEX --copt -DCURVEVERSION=${curve_version} --linkopt -L/usr/local/lib ${bazelflags}

if [ $? -ne 0 ]
then
echo "build curvefs failed"
@@ -34,4 +44,5 @@ then
echo "mds_test failed"
exit
fi
fi
fi
echo "end compile"
15 changes: 14 additions & 1 deletion conf/mds.conf
@@ -156,7 +156,7 @@ mds.topology.CreateCopysetRpcRetryTimes=20
# 请求chunkserver上创建copyset重试间隔
mds.topology.CreateCopysetRpcRetrySleepTimeMs=1000
# Topology模块刷新metric时间间隔
mds.topology.UpdateMetricIntervalSec=60
mds.topology.UpdateMetricIntervalSec=10
#和mds.chunkserver.failure.tolerance设置有关,一个zone 标准配置20台节点,如果允许3台节点failover,
#那么剩余17台机器需要承载原先20台机器的空间,17/20=0.85,即使用量超过这个值即不再往这个池分配,
#具体分为来两种情况, 当不使用chunkfilepool,物理池限制使用百分比,当使用 chunkfilepool 进行chunkfilepool分配时需预留failover空间,
@@ -237,3 +237,16 @@ mds.throttle.iopsPerGB=30
mds.throttle.bpsMinInMB=120
mds.throttle.bpsMaxInMB=260
mds.throttle.bpsPerGBInMB=0.3

#
## poolset rules
#
# for backward compatibility, these rules are applied to select a poolset when creating a file
#
# for example
# mds.poolset.rules=/dir1/:poolset1;/dir2/:poolset2;/dir1/sub/:sub
#
# when a create-file request doesn't specify a poolset, the rules above are used to select one
# - if the filename is /dir1/file, then poolset1 is selected
# - if the filename is /dir1/sub/file, then sub is selected
mds.poolset.rules=
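The rule string is a semicolon-separated list of `directory-prefix:poolset` pairs, with the longest matching prefix winning (so `/dir1/sub/` beats `/dir1/`). A rough sketch of that selection logic, under the assumption of longest-prefix matching (illustrative Python; the MDS itself implements this in C++):

```python
def parse_poolset_rules(rules):
    """Parse '/dir1/:poolset1;/dir2/:poolset2' into {prefix: poolset}."""
    result = {}
    for pair in filter(None, rules.split(";")):
        prefix, _, poolset = pair.partition(":")
        result[prefix] = poolset
    return result

def select_poolset(filename, rules):
    """Return the poolset of the longest matching directory prefix, or None."""
    matches = [p for p in rules if filename.startswith(p)]
    if not matches:
        return None
    return rules[max(matches, key=len)]

rules = parse_poolset_rules("/dir1/:poolset1;/dir2/:poolset2;/dir1/sub/:sub")
assert select_poolset("/dir1/file", rules) == "poolset1"
assert select_poolset("/dir1/sub/file", rules) == "sub"
```

A file that matches no rule gets no poolset from the rules, which is when other fallback behavior (not shown here) applies.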
70 changes: 58 additions & 12 deletions curvefs/conf/client.conf
@@ -77,17 +77,7 @@ rpc.healthCheckIntervalSec=0

#### fuseClient
# TODO(xuchaojie): add unit
fuseClient.attrTimeOut=1.0
fuseClient.entryTimeOut=1.0
fuseClient.listDentryLimit=65536
fuseClient.flushPeriodSec=5
fuseClient.maxNameLength=255
fuseClient.iCacheLruSize=65536
fuseClient.dCacheLruSize=1000000
fuseClient.enableICacheMetrics=true
fuseClient.enableDCacheMetrics=true
fuseClient.lruTimeOutSec=60
fuseClient.cto=true
fuseClient.downloadMaxRetryTimes=3

### kvcache opt
@@ -114,6 +104,61 @@ fuseClient.maxDataSize=1024
fuseClient.refreshDataIntervalSec=30
fuseClient.warmupThreadsNum=10

# the write throttle bps of fuseClient, default no limit
fuseClient.throttle.avgWriteBytes=0
# the write burst bps of fuseClient, default no limit
fuseClient.throttle.burstWriteBytes=0
# how long the write burst bps can last (seconds), default 180s
fuseClient.throttle.burstWriteBytesSecs=180

# the write throttle iops of fuseClient, default no limit
fuseClient.throttle.avgWriteIops=0
# the write burst iops of fuseClient, default no limit
fuseClient.throttle.burstWriteIops=0
# how long the write burst iops can last (seconds), default 180s
fuseClient.throttle.burstWriteIopsSecs=180

# the read throttle bps of fuseClient, default no limit
fuseClient.throttle.avgReadBytes=0
# the read burst bps of fuseClient, default no limit
fuseClient.throttle.burstReadBytes=0
# how long the read burst bps can last (seconds), default 180s
fuseClient.throttle.burstReadBytesSecs=180

# the read throttle iops of fuseClient, default no limit
fuseClient.throttle.avgReadIops=0
# the read burst Iops of fuseClient, default no limit
fuseClient.throttle.burstReadIops=0
# how long the read burst iops can last (seconds), default 180s
fuseClient.throttle.burstReadIopsSecs=180
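Taken together, each avg/burst/burstSecs triple describes a token-bucket style limiter: I/O may exceed the average rate using a stored credit of roughly (burst - avg) * burstSecs, after which it falls back to the average. A simplified model of that behavior (illustrative Python; the client's real throttle is C++, and the class and method names here are invented):

```python
class BurstThrottle:
    """Toy token bucket: refills at the average rate; the stored credit
    of (burst - avg) * burst_secs is what allows temporary bursts."""

    def __init__(self, avg, burst, burst_secs):
        self.avg = avg
        self.capacity = (burst - avg) * burst_secs  # burst credit
        self.tokens = self.capacity                 # start with full credit

    def request(self, nbytes, elapsed_sec):
        """Ask to transfer nbytes after elapsed_sec; True if within limit."""
        self.tokens = min(self.tokens + self.avg * elapsed_sec, self.capacity)
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

t = BurstThrottle(avg=100, burst=200, burst_secs=2)
assert t.request(200, 1)        # burst above the average is allowed...
assert not t.request(200, 1)    # ...until the stored credit runs out
```

Setting avg to 0, as in the defaults above, disables the limit entirely in the real client; this sketch only models the non-zero case.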

#### filesystem metadata
# {
# fs.disableXattr:
# if you want to get curvefs-specific xattrs,
# you can mount another fs with |fs.disableXattr| set to true
#
# fs.lookupCache.negativeTimeoutSec:
# entries that are not found will be cached if |timeout| > 0
fs.cto=true
fs.maxNameLength=255
fs.disableXattr=true
fs.accessLogging=true
fs.kernelCache.attrTimeoutSec=3600
fs.kernelCache.dirAttrTimeoutSec=3600
fs.kernelCache.entryTimeoutSec=3600
fs.kernelCache.dirEntryTimeoutSec=3600
fs.lookupCache.negativeTimeoutSec=0
fs.lookupCache.minUses=1
fs.lookupCache.lruSize=100000
fs.dirCache.lruSize=5000000
fs.openFile.lruSize=65536
fs.attrWatcher.lruSize=5000000
fs.rpc.listDentryLimit=65536
fs.deferSync.delay=3
fs.deferSync.deferDirMtime=false
# }
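The negative-lookup settings above (negativeTimeoutSec, minUses, lruSize) combine into a small cache of "not found" answers, so repeated lookups of missing names can skip a metaserver RPC. A toy sketch of how such a cache could behave; all names here are invented for illustration, and the real logic lives in the client's C++ FUSE code:

```python
import time
from collections import OrderedDict

class NegativeLookupCache:
    """Toy cache of 'entry not found' results with expiry, a minimum
    miss count before caching, and LRU eviction."""

    def __init__(self, timeout_sec, min_uses=1, lru_size=100000):
        self.timeout = timeout_sec
        self.min_uses = min_uses
        self.lru_size = lru_size
        self.entries = OrderedDict()  # name -> (expire_at, miss_count)

    def record_miss(self, name):
        if self.timeout <= 0:
            return  # mirrors negativeTimeoutSec=0: negative caching disabled
        expire, misses = self.entries.get(name, (0.0, 0))
        misses += 1
        if misses >= self.min_uses:
            expire = time.monotonic() + self.timeout
        self.entries[name] = (expire, misses)
        self.entries.move_to_end(name)
        while len(self.entries) > self.lru_size:
            self.entries.popitem(last=False)  # drop least recently used

    def is_negative(self, name):
        """True if ENOENT can be answered from cache without a lookup RPC."""
        expire, misses = self.entries.get(name, (0.0, 0))
        return misses >= self.min_uses and time.monotonic() < expire

c = NegativeLookupCache(timeout_sec=5, min_uses=2)
c.record_miss("/mnt/curvefs/missing")
assert not c.is_negative("/mnt/curvefs/missing")  # below minUses
c.record_miss("/mnt/curvefs/missing")
assert c.is_negative("/mnt/curvefs/missing")      # now cached as negative
```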

#### volume
volume.bigFileSize=1048576
volume.volBlockSize=4096
@@ -135,8 +180,6 @@ volume.blockGroup.allocateOnce=4
#### s3
# this is for test. if s3.fakeS3=true, all data will be discarded
s3.fakeS3=false
# the max size that fuse send
s3.fuseMaxSize=131072
s3.pageSize=65536
# prefetch blocks that disk cache use
s3.prefetchBlocks=1
@@ -151,8 +194,11 @@ s3.baseSleepUs=500
s3.threadScheduleInterval=3
# data cache flush wait time
s3.cacheFlushIntervalSec=5
# a write cache smaller than 8,388,608 (8MB) is not allowed
s3.writeCacheMaxByte=838860800
s3.readCacheMaxByte=209715200
# file cache read thread num
s3.readCacheThreads=5
# http = 0, https = 1
s3.http_scheme=0
s3.verify_SSL=False
23 changes: 16 additions & 7 deletions curvefs/conf/metaserver.conf
@@ -128,16 +128,25 @@ copyset.trash.scan_periodsec=120
# this config item should be tuned according to cpu/memory/disk
service.max_inflight_request=5000

### apply queue options for each copyset
### apply queue is used to isolate raft threads, each worker has its own queue
### when a task can be applied, it is pushed into a corresponding worker queue by certain rules
# number of apply queue workers; each worker starts an independent thread
applyqueue.worker_count=3
#
# Concurrent apply queue
### concurrent apply queue options for each copyset
### concurrent apply queue is used to isolate raft threads, each worker has its own queue
### when a task can be applied, it is pushed into a corresponding read/write worker queue by certain rules

# apply queue depth for each copyset
# worker_count: number of apply queue workers; each worker starts an independent thread
# queue_depth: apply queue depth for each copyset
# all tasks in a queue must be done before a raft snapshot is taken, and raft apply and raft snapshot run in the same thread,
# so if the queue depth is too large, other tasks may wait too long for apply
applyqueue.queue_depth=1
# write apply queue workers count
applyqueue.write_worker_count=3
# write apply queue depth
applyqueue.write_queue_depth=1
# read apply queue workers count
applyqueue.read_worker_count=2
# read apply queue depth
applyqueue.read_queue_depth=1
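With separate read and write pools, a task is routed first by operation type and then hashed onto one fixed worker queue, which keeps operations on the same key ordered while spreading load across workers. A rough illustration of that routing rule (Python; the pool sizes mirror the defaults above, everything else is invented, and the metaserver itself is C++):

```python
from queue import Queue

class ConcurrentApplyQueue:
    """Toy router: writes and reads go to separate worker pools, and
    hashing the key pins all tasks for one key to a single worker."""

    def __init__(self, write_workers=3, read_workers=2, depth=1):
        self.write_queues = [Queue(maxsize=depth) for _ in range(write_workers)]
        self.read_queues = [Queue(maxsize=depth) for _ in range(read_workers)]

    def push(self, key, is_write, task):
        pool = self.write_queues if is_write else self.read_queues
        # blocks when the chosen queue is full (bounded by queue_depth)
        pool[hash(key) % len(pool)].put(task)

q = ConcurrentApplyQueue(depth=4)
q.push(key=42, is_write=True, task="apply-write-op")
q.push(key=42, is_write=True, task="another-op-same-key")
idx = hash(42) % len(q.write_queues)
assert q.write_queues[idx].qsize() == 2  # same key, same worker
```

The bounded queue depth is what creates the back-pressure the comments warn about: a full queue makes producers wait, which matters during raft snapshots.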


# number of worker threads that created by brpc::Server
# if set to |auto|, threads create by brpc::Server is equal to `getconf _NPROCESSORS_ONLN` + 1
3 changes: 3 additions & 0 deletions curvefs/conf/tools.conf
@@ -35,6 +35,9 @@ s3.bucket_name=bucket
s3.blocksize=4194304
s3.chunksize=67108864
s3.useVirtualAddressing=false
# s3 objectPrefix: 0 means no prefix, 1 means inode-based prefix,
# and 2 (or any other value) means hash-based prefix
s3.objectPrefix=0
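In other words, the setting controls how block objects are grouped under a key prefix in the bucket: flat (0), grouped by inode (1), or spread by a hash (2 or anything else). A hypothetical sketch of what such key shaping could look like; the base name and prefix formats here are invented, not curvefs's actual key layout:

```python
import hashlib

def object_key(inode_id, block_index, object_prefix=0):
    """Hypothetical S3 key shaping for the three objectPrefix modes."""
    name = f"{inode_id}_{block_index}"  # invented base name, for illustration
    if object_prefix == 0:
        return name                      # mode 0: flat namespace, no prefix
    if object_prefix == 1:
        return f"{inode_id}/{name}"      # mode 1: group objects by inode
    # mode 2 (and any other value): spread objects by a stable hash
    bucket = int(hashlib.md5(name.encode()).hexdigest(), 16) % 256
    return f"{bucket}/{name}"

assert object_key(7, 0, 0) == "7_0"
assert object_key(7, 0, 1) == "7/7_0"
```

Hash-style prefixes are a common way to spread object keys evenly across an object store's keyspace, at the cost of losing per-inode grouping.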
# statistics info in xattr; hardlinks are not supported when enabled
enableSumInDir=true

Expand Down
2 changes: 1 addition & 1 deletion curvefs/docker/debian9/Dockerfile
@@ -4,5 +4,5 @@ COPY curvefs /curvefs
COPY libmemcached.so libmemcached.so.11 libhashkit.so.2 /usr/lib/
RUN mkdir -p /etc/curvefs /core /etc/curve && chmod a+x /entrypoint.sh \
&& cp /curvefs/tools/sbin/curvefs_tool /usr/bin \
&& cp curvefs/tools-v2/sbin/curve /usr/bin/
&& cp /curvefs/tools-v2/sbin/curve /usr/bin/
ENTRYPOINT ["/entrypoint.sh"]
2 changes: 2 additions & 0 deletions curvefs/docker/debian9/entrypoint.sh
@@ -8,6 +8,7 @@ g_args=""
g_prefix=""
g_binary=""
g_start_args=""
g_preexec="/curvefs/tools-v2/sbin/daemon"

############################ BASIC FUNCTIONS
function msg() {
@@ -119,6 +120,7 @@ function main() {
prepare
create_directory
[[ $(command -v crontab) ]] && cron
[[ ! -z $g_preexec ]] && $g_preexec &
if [ $g_role == "etcd" ]; then
exec $g_binary $g_start_args >>$g_prefix/logs/etcd.log 2>&1
elif [ $g_role == "monitor" ]; then
