NeoFS is a decentralized distributed object storage integrated with the NEO Blockchain.
NeoFS HTTP Gateway bridges NeoFS internal protocol and HTTP standard.
- you can download one file per request from the NeoFS Network
- you can upload one file per request into the NeoFS Network
go install github.com/nspcc-dev/neofs-http-gw
Or you can call `make` to build it from the cloned repository (the binary will
end up in `bin/neofs-http-gw`). To build the `neofs-http-gw` binary in a clean
Docker environment, call `make docker/bin/neofs-http-gw`.
Other notable make targets:

  dep          Check and ensure dependencies
  image        Build clean Docker image
  dirty-image  Build dirty Docker image with host-built binaries
  fmt          Format the code
  lint         Run linters
  version      Show current version
Or you can use a Docker image provided for the released (and occasionally
unreleased) versions of the gateway (`:latest` points to the latest stable release).
The HTTP gateway itself is not a NeoFS node, so to access NeoFS it uses a node's
gRPC interface and you need to provide some node for it to connect to. This can
be done either via the `-p` parameter or via the `HTTP_GW_PEERS_<N>_ADDRESS` and
`HTTP_GW_PEERS_<N>_WEIGHT` environment variables (the gateway supports multiple
NeoFS nodes with weighted load balancing).
If you launch the HTTP gateway bundled with neofs-dev-env, you can get the IP
address of the node from the output of the `make hosts` command (look for the
`s0*.neofs.devenv` name).
These two commands are functionally equivalent; they run the gateway with one backend node (and otherwise default settings):
$ neofs-http-gw -p 192.168.130.72:8080
$ HTTP_GW_PEERS_0_ADDRESS=192.168.130.72:8080 neofs-http-gw
It's also possible to specify the URI scheme (`grpc` or `grpcs`) when using `-p`:
$ neofs-http-gw -p grpc://192.168.130.72:8080
$ HTTP_GW_PEERS_0_ADDRESS=grpcs://192.168.130.72:8080 neofs-http-gw
In general, everything available as a CLI parameter can also be specified via
environment variables (see the example), so they're not specifically mentioned in
most cases (see `--help` as well). If you prefer a config file, you can use one in
YAML format.
You can specify multiple `-p` options to add more NeoFS nodes; this will make the
gateway spread requests equally among them (using weight 1 and priority 1 for every node):
$ neofs-http-gw -p 192.168.130.72:8080 -p 192.168.130.71:8080
If you want some specific load distribution proportions, use weights and priorities:
$ HTTP_GW_PEERS_0_ADDRESS=192.168.130.71:8080 HTTP_GW_PEERS_0_WEIGHT=1 HTTP_GW_PEERS_0_PRIORITY=1 \
HTTP_GW_PEERS_1_ADDRESS=192.168.130.72:8080 HTTP_GW_PEERS_1_WEIGHT=9 HTTP_GW_PEERS_1_PRIORITY=2 \
HTTP_GW_PEERS_2_ADDRESS=192.168.130.73:8080 HTTP_GW_PEERS_2_WEIGHT=1 HTTP_GW_PEERS_2_PRIORITY=2 \
neofs-http-gw
This command will make the gateway use 192.168.130.71 while it is healthy. Otherwise, it will use 192.168.130.72 for 90% of requests and 192.168.130.73 for the remaining 10%.
You can provide a wallet via the `--wallet` or `-w` flag. You can also specify the
account address using `--address` (if no address is provided, the default one will
be used). If a wallet is used, you need to set the `HTTP_GW_WALLET_PASSPHRASE`
variable to decrypt it.
If no wallet is provided, the gateway autogenerates a key pair to use for NeoFS requests.
$ neofs-http-gw -p $NEOFS_NODE -w $WALLET_PATH --address $ACCOUNT_ADDRESS
Example:
$ neofs-http-gw -p 192.168.130.72:8080 -w wallet.json --address NfgHwwTi3wHAS8aFAN243C5vGbkYDpqLHP
The gateway binds to `0.0.0.0:8082` by default, and you can change that with the
`--listen_address` option.
It can also provide a TLS interface for its users; just specify paths to the key
and certificate files via the `--tls_key` and `--tls_certificate` parameters. Note
that using these options makes the gateway TLS-only. If you need to serve both TLS
and plain-text HTTP, you either have to run two gateway instances or use some
external redirecting solution.
Example to bind to `192.168.130.130:443` and serve TLS there:
$ neofs-http-gw -p 192.168.130.72:8080 --listen_address 192.168.130.130:443 \
--tls_key=key.pem --tls_certificate=cert.pem
You can tune HTTP read and write buffer sizes as well as timeouts with the
`HTTP_GW_WEB_READ_BUFFER_SIZE`, `HTTP_GW_WEB_READ_TIMEOUT`,
`HTTP_GW_WEB_WRITE_BUFFER_SIZE` and `HTTP_GW_WEB_WRITE_TIMEOUT` environment
variables.
Note: to allow uploading and downloading of big data streams, disable the read and
write timeouts correspondingly. To do that, set `HTTP_GW_WEB_READ_TIMEOUT=0` and
`HTTP_GW_WEB_WRITE_TIMEOUT=0`. Otherwise, the HTTP gateway will terminate the
streaming request after the timeout expires.
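For example, a possible combination enlarging both buffers and disabling both timeouts for large transfers (the buffer values are illustrative, assumed to be in bytes):

$ HTTP_GW_WEB_READ_BUFFER_SIZE=65536 HTTP_GW_WEB_WRITE_BUFFER_SIZE=65536 \
  HTTP_GW_WEB_READ_TIMEOUT=0 HTTP_GW_WEB_WRITE_TIMEOUT=0 \
  HTTP_GW_PEERS_0_ADDRESS=192.168.130.72:8080 neofs-http-gw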
The `HTTP_GW_WEB_STREAM_REQUEST_BODY` environment variable can be used to disable
request body streaming (effectively, it makes the gateway accept the file completely
first and only then try sending it to NeoFS).
`HTTP_GW_WEB_MAX_REQUEST_BODY_SIZE` controls the maximum request body size,
limiting uploads to files slightly smaller than this limit.
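For example, to buffer whole request bodies and cap them at roughly 100 MiB (an illustrative value, assuming the size is given in bytes):

$ HTTP_GW_WEB_STREAM_REQUEST_BODY=false HTTP_GW_WEB_MAX_REQUEST_BODY_SIZE=104857600 \
  HTTP_GW_PEERS_0_ADDRESS=192.168.130.72:8080 neofs-http-gw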
The gateway can automatically set timestamps for uploaded files based on its local
time source; use the `HTTP_GW_UPLOAD_HEADER_USE_DEFAULT_TIMESTAMP` environment
variable to control this behavior.
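For example, to let the gateway stamp uploads with its local time (treating the variable as a boolean toggle):

$ HTTP_GW_UPLOAD_HEADER_USE_DEFAULT_TIMESTAMP=true \
  HTTP_GW_PEERS_0_ADDRESS=192.168.130.72:8080 neofs-http-gw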
Pprof and Prometheus are integrated into the gateway, but they are not enabled by
default. To enable them, use the `--pprof` and `--metrics` flags or the
`HTTP_GW_PPROF`/`HTTP_GW_METRICS` environment variables.
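For example:

$ neofs-http-gw -p 192.168.130.72:8080 --pprof --metrics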
You can tune gRPC interface parameters with the `--connect_timeout` (for connection
to a node) and `--request_timeout` (for request processing over an established
connection) options.
gRPC-level checks allow the gateway to detect dead peers, but it only declares them
unhealthy at the pool level once per `--rebalance_timer` interval, so adjust that
interval if needed.
All timing options accept values with suffixes, so "15s" is 15 seconds and "2m" is 2 minutes.
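For example, combining these timing options (the values are illustrative):

$ neofs-http-gw -p 192.168.130.72:8080 --connect_timeout 5s --request_timeout 30s --rebalance_timer 15s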
The gateway supports downloading files by a common prefix (like a directory) in ZIP
format. You can enable compression using the config or the `HTTP_GW_ZIP_COMPRESSION=true`
environment variable.
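For example:

$ HTTP_GW_ZIP_COMPRESSION=true HTTP_GW_PEERS_0_ADDRESS=192.168.130.72:8080 neofs-http-gw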
You can specify the logging level (default `info`) using an environment variable:
HTTP_GW_LOGGER_LEVEL=debug
The configuration file is optional and can be used instead of environment variables/other parameters.
It can be specified with the `--config` parameter:
$ neofs-http-gw --config your-config.yaml
See the config for an example.
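A minimal sketch of such a config might look like the following, assuming the YAML keys mirror the environment variables (`HTTP_GW_PEERS_*`, `HTTP_GW_LOGGER_LEVEL`); refer to the example config shipped with the gateway for the authoritative layout:

$ cat > your-config.yaml <<EOF
logger:
  level: debug
peers:
  0:
    address: grpc://192.168.130.72:8080
    priority: 1
    weight: 1
EOF
$ neofs-http-gw --config your-config.yaml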
This gateway intentionally provides a limited feature set and doesn't try to substitute (or completely wrap) the regular gRPC NeoFS interface. You can download and upload objects with it, but deleting, searching, managing ACLs, creating containers and other activities are not supported and not planned to be supported.
Note: in all download/upload routes, you can use the container name instead of its ID (`$CID`), but resolvers must be configured properly (see the configs for examples).
Before uploading or downloading a file, make sure you have a prepared container. You can create it with the instructions below.
Also, in case of downloading, you need to have a file inside the container.
You can create a container via neofs-cli:
$ neofs-cli -r $NEOFS_NODE -k $KEY container create --policy $POLICY --basic-acl $ACL
where:
- `$KEY` can be a path to a private key file (as raw bytes), a hex string or an (unencrypted) WIF string,
- `$ACL` is a hex-encoded basic ACL value or one of the keywords 'private', 'public-read', 'public-read-write', and
- `$POLICY` is a QL-encoded or JSON-encoded placement policy, or a path to a file with it.
For example:
$ neofs-cli -r 192.168.130.72:8080 -k 6PYLKJhiSub5imt6WCVy6Quxtd9xu176omev1vWYovzkAQCTSQabAAQXii container create --policy "REP 3" --basic-acl public --await
If you have launched nodes via neofs-dev-env, you can get the key value from
`wallets/wallet.json` or use the path to the `wallets/wallet.key` file.
To create a file via neofs-cli, run the command below:
$ neofs-cli -r $NEOFS_NODE -k $KEY object put --file $FILENAME --cid $CID
where:
- `$KEY` is the key (see the information above),
- `$CID` is the container ID.
For example:
$ neofs-cli -r 192.168.130.72:8080 -k 6PYLKJhiSub5imt6WCVy6Quxtd9xu176omev1vWYovzkAQCTSQabAAQXii object put --file cat.png --cid DPL2tpRiuDNmoTj5KZjD1nzDuCS8tVcxa7hsvSLDWpVM --attributes img_type=cat,my_attr=cute
The following requests support GET/HEAD methods.
Basic downloading involves a container ID and an object ID and is done via GET
requests to the `/get/$CID/$OID` path, where `$CID` is a container ID and `$OID`
is an object's (i.e. your file's) ID.
For example:
$ wget http://localhost:8082/get/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ/2m8PtaoricLouCn5zE8hAFr3gZEBDCZFe9BEgVJTSocY
There is also a more complex interface provided for attribute-based downloads; it's usually used to retrieve files by their names, but any other attribute can be used as well. The generic syntax for it looks like this:
/get_by_attribute/$CID/$ATTRIBUTE_NAME/$ATTRIBUTE_VALUE
where:
- `$CID` is a container ID,
- `$ATTRIBUTE_NAME` is the name of the attribute we want to use,
- `$ATTRIBUTE_VALUE` is the value of this attribute that the target object should have.
NB! The attribute key and value must be URL-encoded, i.e., if you want to download an object with the attribute value `a cat`, the value in the request must be `a+cat`. The same applies to the attribute key.
If multiple objects have the specified attribute with the specified value, then the first one found is returned (and you can't get the others via this interface).
Example for file name attribute:
$ wget http://localhost:8082/get_by_attribute/88GdaZFTcYJn1dqiSECss8kKPmmun6d6BfvC4zhwfLYM/FileName/cat.jpeg
Or when the filename includes special symbols:
$ wget http://localhost:8082/get_by_attribute/88GdaZFTcYJn1dqiSECss8kKPmmun6d6BfvC4zhwfLYM/FileName/cat+jpeg # means 'cat jpeg'
$ wget http://localhost:8082/get_by_attribute/88GdaZFTcYJn1dqiSECss8kKPmmun6d6BfvC4zhwfLYM/FileName/cat%25jpeg # means 'cat%jpeg'
Some other user-defined attributes:
$ wget http://localhost:8082/get_by_attribute/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ/Ololo/100500
Or when the attribute includes special symbols:
$ wget http://localhost:8082/get_by_attribute/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ/Olo%2Blo/100500 # means Olo+lo
An optional `download=true` argument for `Content-Disposition` management is also
supported (more on that below):
$ wget http://localhost:8082/get/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ/2m8PtaoricLouCn5zE8hAFr3gZEBDCZFe9BEgVJTSocY?download=true
You can download a directory (files with the same prefix) as a ZIP archive (it will be compressed if the config contains the appropriate parameter):
$ wget http://localhost:8082/zip/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ/common/prefix
Note: the objects must have a `FilePath` attribute, otherwise they will not be
included in the ZIP archive.
You can upload a file with this attribute using curl:
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' -H 'X-Attribute-FilePath: common/prefix/cat.jpeg' http://localhost:8082/upload/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ
You get object contents in the reply body (if the GET method was used), but at the same time you also get a set of reply headers generated using the following rules (an illustrative reply follows this list):
- `Content-Length` is set to the length of the object
- `Content-Type` is autodetected dynamically by the gateway
- `Content-Disposition` is `inline` for regular requests and `attachment` for requests with the `download=true` argument; `filename` is also added if there is a `FileName` attribute set for this object
- `Last-Modified` header is set to the `Timestamp` attribute value if it's present for the object
- `x-container-id` contains the container ID
- `x-object-id` contains the object ID
- `x-owner-id` contains the owner address
- all the other NeoFS attributes are converted to `X-Attribute-*` headers (but only if they can be safely represented in an HTTP header), for example the `FileName` attribute becomes the `X-Attribute-FileName` header
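For illustration, a HEAD request against the example object above might return headers along these lines (all values here are hypothetical):

$ curl -I http://localhost:8082/get/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ/2m8PtaoricLouCn5zE8hAFr3gZEBDCZFe9BEgVJTSocY
# HTTP/1.1 200 OK
# Content-Length: 32473
# Content-Type: image/jpeg
# Content-Disposition: inline; filename=cat.jpeg
# x-container-id: Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ
# x-object-id: 2m8PtaoricLouCn5zE8hAFr3gZEBDCZFe9BEgVJTSocY
# x-owner-id: NfgHwwTi3wHAS8aFAN243C5vGbkYDpqLHP
# X-Attribute-FileName: cat.jpeg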
You can POST files to the `/upload/$CID` path, where `$CID` is a container ID. The
request must contain a multipart form with a mandatory `filename` parameter. Only
one part of the multipart form will be processed, so to upload another file just
issue a new POST request.
Example request:
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' http://localhost:8082/upload/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ
Chunked encoding is supported by the server (but check for request read timeouts if you're planning some streaming). You can try streaming support with a large file piped through a named FIFO pipe:
$ mkfifo pipe
$ cat video.mp4 > pipe &
$ curl --no-buffer -F 'file=@pipe;filename=catvideo.mp4' http://localhost:8082/upload/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ
You can also add some attributes to your file using the following rules (an example request follows this list):
- all "X-Attribute-*" headers get converted to object attributes with the "X-Attribute-" prefix stripped, that is, if you add an "X-Attribute-Ololo: 100500" header to your request, the resulting object will get an "Ololo: 100500" attribute
- "X-Attribute-NEOFS-*" headers are special (the `-NEOFS-` part can also be `-neofs-` or `-Neofs-`); they're used to set internal NeoFS attributes starting with the `__NEOFS__` prefix. For these attributes, all dashes get converted to underscores and all letters are capitalized. For example, you can use the "X-Attribute-NEOFS-Expiration-Epoch" header to set the `__NEOFS__EXPIRATION_EPOCH` attribute
- the `FileName` attribute is set from multipart's `filename` if not set explicitly via the `X-Attribute-FileName` header
- the `Timestamp` attribute can be set using gateway local time if the `HTTP_GW_UPLOAD_HEADER_USE_DEFAULT_TIMESTAMP` option is used and the request doesn't provide an `X-Attribute-Timestamp` header of its own
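For example, uploading with a custom attribute and an explicit timestamp (the header values are illustrative):

$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' \
    -H 'X-Attribute-Ololo: 100500' \
    -H 'X-Attribute-Timestamp: 1637574797' \
    http://localhost:8082/upload/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ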
NOTE: there are some reserved headers of the `X-Attribute-NEOFS-*` type (listed here in descending order of priority):
X-Attribute-Neofs-Expiration-Epoch: 100
X-Attribute-Neofs-Expiration-Duration: 24h30m
X-Attribute-Neofs-Expiration-Timestamp: 1637574797
X-Attribute-Neofs-Expiration-RFC3339: 2021-11-22T09:55:49Z
which all transform into `X-Attribute-Neofs-Expiration-Epoch`, so you can provide the expiration in any convenient way.
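For example, setting an expiration by duration at upload time (the duration value is illustrative):

$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' \
    -H 'X-Attribute-Neofs-Expiration-Duration: 24h30m' \
    http://localhost:8082/upload/Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ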
For successful uploads, you get JSON data in the reply body with the container and object IDs, like this:
{
"object_id": "9ANhbry2ryjJY1NZbcjryJMRXG5uGNKd73kD3V1sVFsX",
"container_id": "Dxhf4PNprrJHWWTG5RGLdfLkJiSQ3AQqit1MSnEPRkDZ"
}
You can always upload files to public containers (open for anyone to put objects into), but for restricted containers you need to explicitly allow PUT operations for a request signed with your HTTP Gateway keys.
If you don't want to manage the gateway's secret keys and adjust eACL rules when the gateway configuration changes (new gateway, key rotation, etc.), or you plan to use public services, there is an option to let your application backend (or you) issue Bearer tokens and pass them from the client via the gateway down to the NeoFS level to grant access.
A NeoFS Bearer token is basically container owner-signed ACL data (refer to the NeoFS documentation for more details). There are two options to pass it to the gateway (a cookie example follows this list):
- "Authorization" header with "Bearer" type and base64-encoded token in the credentials field
- "Bearer" cookie with base64-encoded token contents
For example, you have a mobile application frontend with a backend part storing data in NeoFS. When a user authorizes in the mobile app, the backend issues a NeoFS Bearer token and provides it to the frontend. Then, the mobile app may generate some data and upload it via any available NeoFS HTTP Gateway by adding the corresponding header to the upload request. Accessing the ACL protected data works the same way.
In order to generate a bearer token, you need to know the container owner key and the address of the sender who will make the request to NeoFS (in our case, it's a gateway wallet address).
Suppose we have:
- KxDgvEKzgSBPPfuVfw67oPQBSjidEiqTHURKSDL1R7yGaGYAeYnr (container owner key)
- NhVtreTTCoqsMQV5Wp55fqnriiUCpEaKm3 (token owner address)
- BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K (container id)
First, we need to re-encode the container ID and the sender address from base58 to base64. For that, use the base58 and base64 utilities.
- Encoding container id:
$ echo 'BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K' | base58 --decode | base64
# output: mRnZWzewzxjzIPa7Fqlfqdl3TM1KpJ0YnsXsEhafJJg=
- Encoding token owner id:
$ echo 'NhVtreTTCoqsMQV5Wp55fqnriiUCpEaKm3' | base58 --decode | base64
# output: NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg==
Now we can form a Bearer token (10000 is the lifetime expiration epoch) and save it to bearer.json:
{
"body": {
"eaclTable": {
"version": {
"major": 0,
"minor": 0
},
"containerID": {
"value": "mRnZWzewzxjzIPa7Fqlfqdl3TM1KpJ0YnsXsEhafJJg="
},
"records": []
},
"ownerID": {
"value": "NezFK4ujidF+X7bB88uzREQzRQeAvdj3Gg=="
},
"lifetime": {
"exp": "10000",
"nbf": "0",
"iat": "0"
}
},
"signature": null
}
Next, sign it with the container owner key:
$ neofs-cli util sign bearer-token --from bearer.json --to signed.json -k KxDgvEKzgSBPPfuVfw67oPQBSjidEiqTHURKSDL1R7yGaGYAeYnr
Encode it to base64 for use in the header:
$ base64 -w 0 signed.json
# output: Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==
After that, the Bearer token can be used:
$ curl -F 'file=@cat.jpeg;filename=cat.jpeg' -H "Authorization: Bearer Ck4KKgoECAIQBhIiCiCZGdlbN7DPGPMg9rsWqV+p2XdMzUqknRiexewSFp8kmBIbChk17MUri6OJ0X5ftsHzy7NERDNFB4C92PcaGgMIkE4SZgohAxpsb7vfAso1F0X6hrm6WpRS14WsT3/Ct1SMoqRsT89KEkEEGxKi8GjKSf52YqhppgaOTQHbUsL3jn7SHLqS3ndAQ7NtAATnmRHleZw2V2xRRSRBQdjDC05KK83LhdSax72Fsw==" \
http://localhost:8082/upload/BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
# output:
# {
# "object_id": "DhfES9nVrFksxGDD2jQLunGADfrXExxNwqXbDafyBn9X",
# "container_id": "BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K"
# }
For the token to work correctly, you need to create a container with a basic ACL that:
- Allows the PUT operation for others
- Doesn't set the "final" bit
For example:
$ neofs-cli --key KxDgvEKzgSBPPfuVfw67oPQBSjidEiqTHURKSDL1R7yGaGYAeYnr --basic-acl 0x0FFFCFFF -r 192.168.130.72:8080 container create --policy "REP 3" --await
To deny access to a container without a token, set the eACL rules:
$ neofs-cli --key KxDgvEKzgSBPPfuVfw67oPQBSjidEiqTHURKSDL1R7yGaGYAeYnr -r 192.168.130.72:8080 container set-eacl --table eacl.json --await --cid BJeErH9MWmf52VsR1mLWKkgF3pRm3FkubYxM7TZkBP4K
File eacl.json:
{
"version": {
"major": 0,
"minor": 0
},
"containerID": {
"value": "mRnZWzewzxjzIPa7Fqlfqdl3TM1KpJ0YnsXsEhafJJg="
},
"records": [
{
"operation": "PUT",
"action": "DENY",
"filters": [],
"targets": [
{
"role": "OTHERS",
"keys": []
}
]
}
]
}
If enabled, Prometheus metrics are available at the `/metrics/` path and Pprof at
`/debug/pprof`.
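For example, assuming both features are enabled and the gateway listens on the default address:

$ curl http://localhost:8082/metrics/
$ curl http://localhost:8082/debug/pprof/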