# NSFS on Kubernetes
NSFS (short for Namespace-Filesystem) is a capability to use a shared filesystem (mounted in the endpoints) for the storage of S3 buckets, while keeping a 1-1 mapping between Object and File.
This feature is currently under development; it is recommended to use the latest weekly master builds, starting from the build tagged with master-20210419.
Refer to this guide: https://github.com/noobaa/noobaa-core/wiki/Weekly-Master-Builds
For NSFS to work it requires a PVC for the filesystem, with a ReadWriteMany accessMode so that we can scale the endpoints to any node in the cluster and still be able to share the volume.
Ideally this PVC will be allocated by a provisioner, such as the rook-ceph.cephfs.csi.ceph.com CSI provisioner.
If you don't have a CSI provisioner you can just set up a local volume manually using this guide: https://github.com/noobaa/noobaa-core/wiki/NSFS-using-a-Local-PV-on-k8s
S3 access will be determined by the mapping of each S3 account to UID/GID (see Step 7 - Create Account) and the access of that UID/GID to the directories and files in the filesystem. The filesystem admin should set up the ACLs/unix-permissions for the mounted FS path to the needed UIDs/GIDs that would be used to access it.
For dev/test the simplest way to set this up is to give full access to all:

```shell
mkdir -p /nsfs/bucket-path
chmod -R 777 /nsfs/bucket-path
```

NOTE: on minikube, run `minikube ssh` and then run the above with sudo:

```shell
# on minikube
minikube ssh "sudo mkdir -p /nsfs/bucket-path"
minikube ssh "sudo chmod -R 777 /nsfs/bucket-path"
```
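For anything beyond dev/test, a narrower permission set is usually preferable: grant the bucket path to the specific UID/GID that your S3 accounts will map to, rather than 777. A minimal sketch (the `/tmp` path stands in for the real mount, and the UID/GID is illustrative):

```shell
# sketch: restrict the bucket path to a specific owner/group instead of 777.
# the path and IDs here are illustrative; adapt to your mounted filesystem.
NSFS_ROOT=/tmp/nsfs-demo              # stand-in for the real mount, e.g. /nsfs
mkdir -p "$NSFS_ROOT/bucket-path"
# on the real mount, chown to the account's UID/GID (requires root):
# chown -R 1001:0 "$NSFS_ROOT/bucket-path"
chmod -R 770 "$NSFS_ROOT/bucket-path"   # owner+group only, no world access
stat -c '%a' "$NSFS_ROOT/bucket-path"   # → 770
```

The UID/GID you grant here must match the `nsfs_account_config` of the accounts created later, otherwise S3 requests will fail with access denied at the filesystem level.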
A namespace resource is a configuration entity that represents the mounted filesystem in the noobaa system.
You need to provide it with some information:
- namespace-store-name - Choose how to name it, perhaps follow the same name as the PVC or the Filesystem. You will use this name later when creating buckets that use this filesystem.
- pvc-name - The name of the pvc in which the filesystem resides.
- fs-backend (optional) - When empty, a basic POSIX filesystem is assumed. Supported backend types: NFSv4, CEPH_FS, GPFS. Setting a more specific backend allows optimizations based on the capabilities of the underlying filesystem.
Here is an example of calling this API:

```shell
noobaa namespacestore create nsfs fs1 --pvc-name='nsfs-vol' --fs-backend='GPFS'
```

NOTE: on minikube, do not use the `fs-backend` flag; leave it empty:

```shell
# on minikube
noobaa namespacestore create nsfs fs1 --pvc-name='nsfs-vol'
```
It's possible to create NSFS buckets in two ways:
- Via the NooBaa API -
NSFS Buckets are like creating an "export" for a filesystem directory in the S3 service.
The following API call will create a bucket with the specified name, and redirect it to a specified path from the NSFS resource that was created in Step 4 - Create NSFS Resource.
```shell
noobaa api bucket_api create_bucket '{
  "name": "fs1-jenia-bucket",
  "namespace": {
    "write_resource": { "resource": "fs1", "path": "bucket-path/" },
    "read_resources": [ { "resource": "fs1", "path": "bucket-path/" } ]
  }
}'
```
- Via the NooBaa CLI / OBC YAML -
Make sure that `path` points to the path that was created, chowned, and chmoded as needed in step 4.
Note that it's possible to use either `distinguished_name` or `gid`+`uid` in both the CLI and the YAML.
If you use this method to create a bucket, skip step 7 - NooBaa will create an account for you. Step 9 will be irrelevant since accounts created via OBC have no permission to create subsequent buckets.
If you use the CLI, the account credentials will be printed once the OBC has been created.

```shell
noobaa obc create my-bucket-claim -n my-app --app-namespace my-app --distinguished-name "current_user" --path 'mybucketclaim'
```
If you apply the YAML, a secret with an identical name to your OBC will be created, which will contain the appropriate credentials, encoded in base64.
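The values in that secret are standard base64, so decoding them is mechanical. A minimal sketch in Python (the key names mirror what OBC secrets carry; the credential values here are made up):

```python
import base64

# made-up credentials, base64-encoded the way they would appear under the
# secret's "data" section (real values come from `kubectl get secret ... -o yaml`)
secret_data = {
    "AWS_ACCESS_KEY_ID": base64.b64encode(b"EXAMPLEACCESSKEY").decode(),
    "AWS_SECRET_ACCESS_KEY": base64.b64encode(b"exampleSecretKey123").decode(),
}

# decode every field back to plain text
decoded = {k: base64.b64decode(v).decode() for k, v in secret_data.items()}
print(decoded["AWS_ACCESS_KEY_ID"])   # → EXAMPLEACCESSKEY
```

On the command line the equivalent is `kubectl get secret my-bucket-claim -n my-app -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d`, assuming the OBC name from the YAML example.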
```yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: my-bucket-claim
  namespace: my-app
spec:
  generateBucketName: my-bucket
  storageClassName: noobaa.noobaa.io
  additionalConfig:
    nsfsAccountConfig: { "gid": 42, "uid": 505 }
    path: "mybucketclaim"
```
- Update the bucket policy using the admin account. Use the admin credentials from `noobaa status --show-secrets`.
- `endpoint-url` is the noobaa endpoint address. It can be taken from the NodePorts address in `noobaa status`, or it can be a localhost address after port-forwarding (for example, if you're using `kubectl port-forward -n default service/s3 12443:443`, it will be `--endpoint-url=https://localhost:12443`).

```shell
aws --endpoint-url=<insert_address> --no-verify-ssl s3api put-bucket-policy --bucket fs1-jenia-bucket --policy file://policy.json
```
policy.json is a JSON file in the current directory:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "id-1",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::*"]
    }
  ]
}
```
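That policy is wide open (any principal, any action, any resource), which is fine for dev/test. For anything tighter you would scope the principal, actions, and resources down; a sketch in Python that generates such a document (the principal email and bucket name are placeholders, not values from a real system):

```python
import json

def make_readonly_policy(principal: str, bucket: str) -> dict:
    """Build a read-only S3 bucket policy scoped to one principal and one bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "read-only-1",
                "Effect": "Allow",
                "Principal": {"AWS": [principal]},
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",      # bucket ARN, for ListBucket
                    f"arn:aws:s3:::{bucket}/*",    # object ARNs, for GetObject
                ],
            }
        ],
    }

# placeholder principal; write the result to policy.json for put-bucket-policy
policy = make_readonly_policy("user@example.com", "fs1-jenia-bucket")
print(json.dumps(policy, indent=2))
```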
Create accounts with NSFS configuration:
- Map the account to a UID/GID
- Set up the directory for new buckets created from S3 for this account (TBD)
- `default_resource` should be the same as the NSFS resource (see Step 4 - Create NSFS Resource).
- `new_buckets_path` is the path in which new buckets created via S3 will be placed.
- `nsfs_only` is a boolean field that defines the access permissions of an account to non-NSFS buckets.
Here is an example:
```shell
noobaa api account_api create_account '{
  "email": "[email protected]",
  "name": "jenia",
  "has_login": false,
  "s3_access": true,
  "default_resource": "fs1",
  "nsfs_account_config": {
    "uid": 1001,
    "gid": 0,
    "new_buckets_path": "/",
    "nsfs_only": false
  }
}'
```
Create account returns a response with S3 credentials:
```
INFO[0001] ✅ RPC: account.create_account() Response OK: took 205.7ms
access_keys:
- access_key: *NOOBAA_ACCOUNT_ACCESS_KEY*
  secret_key: *NOOBAA_ACCOUNT_SECRET_KEY*
```
You can also run a list accounts command in order to see the configured NSFS accounts (besides all other accounts of the system):

```shell
noobaa api account_api list_accounts {}
```

If you are interested in a particular account, you can read its information directly by email:

```shell
noobaa api account_api read_account '{"email":"[email protected]"}'
```
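If you capture the list accounts response as JSON, filtering for the NSFS-mapped accounts is straightforward. A sketch (the response shape below is a guess modeled on the create account fields above, not the exact API schema):

```python
import json

# hypothetical response, modeled on the create_account call above
raw = """
{
  "accounts": [
    {"name": "admin", "email": "admin@example.com"},
    {"name": "jenia", "email": "user@example.com",
     "nsfs_account_config": {"uid": 1001, "gid": 0,
                             "new_buckets_path": "/", "nsfs_only": false}}
  ]
}
"""
response = json.loads(raw)

# NSFS accounts are the ones carrying an nsfs_account_config mapping
nsfs_accounts = [a["name"] for a in response["accounts"] if "nsfs_account_config" in a]
print(nsfs_accounts)   # → ['jenia']
```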
Configure the S3 client application and access the FS via S3 from the endpoint. Use the S3 credentials (access_key and secret_key) returned in step 7 - Create Account(s).
Application S3 config:

```shell
AWS_ACCESS_KEY_ID=*NOOBAA_ACCOUNT_ACCESS_KEY*
AWS_SECRET_ACCESS_KEY=*NOOBAA_ACCOUNT_SECRET_KEY*
S3_ENDPOINT=s3.noobaa.svc   # or the NodePort address from noobaa status, or the address after kubectl port-forwarding (see comment above)
BUCKET_NAME=fs1-jenia-bucket
```

As we can create different accounts, it is helpful to configure these keys and endpoints as an alias which can be used in step 9. For example:

```shell
alias s3-user-1='AWS_ACCESS_KEY_ID=NsFsisNamEFlSytm AWS_SECRET_ACCESS_KEY=HiN00baa0nK8SDfsyV+VLoGK6ZMyCEDvklQCqW0 aws --endpoint "NodePort address" --no-verify-ssl s3'
```
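Aliases only expand in interactive shells; if you script these steps, a shell function behaves the same but works everywhere. A sketch (the environment variable names and default endpoint are placeholders for your real credentials):

```shell
# function equivalent of an s3 alias; credentials/endpoint are placeholders
s3_user_1() {
  AWS_ACCESS_KEY_ID="${S3_USER1_ACCESS_KEY:-placeholder}" \
  AWS_SECRET_ACCESS_KEY="${S3_USER1_SECRET_KEY:-placeholder}" \
  aws --endpoint "${S3_ENDPOINT:-https://localhost:12443}" --no-verify-ssl s3 "$@"
}
```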
Use the S3 client configured in step 8 to create new buckets under the new_buckets_path in the default_resource configured by the requesting account.
The S3 CLI tool is part of the alias created in step 8.

```shell
s3-user-1 mb s3://test-bucket
```

A new filesystem directory called "test-bucket" will be created by noobaa.
Based on the input we provided in this guide, the "test-bucket" directory can be seen in `/nsfs/fs1` of the "endpoint" pod.