
[#5188] feat(python-client): Support s3 fileset in python client #5209

Merged · 135 commits · Oct 24, 2024
Changes from 118 commits
Commits
135 commits
d2447a2
Add a framework to support multi-storage in a pluginized manner for …
yuqi1129 Sep 25, 2024
7e5a8b5
Fix compile distribution error.
yuqi1129 Sep 25, 2024
f53c5ef
fix
yuqi1129 Sep 25, 2024
36fedcd
fix
yuqi1129 Sep 25, 2024
e93fba5
fix
yuqi1129 Sep 25, 2024
b1e04b6
fix
yuqi1129 Sep 26, 2024
db00e65
Changed according to comments.
yuqi1129 Sep 29, 2024
c793582
fix
yuqi1129 Sep 29, 2024
013f5cb
fix
yuqi1129 Sep 30, 2024
dba5753
Merge branch 'main' of github.com:datastrato/graviton into issue_5019
yuqi1129 Oct 8, 2024
16dfc73
resolve comments.
yuqi1129 Oct 8, 2024
278fcd8
Polish code.
yuqi1129 Oct 9, 2024
3fb55ad
fix
yuqi1129 Oct 9, 2024
cd04666
fix
yuqi1129 Oct 9, 2024
d0bf13e
Support GCS fileset.
yuqi1129 Oct 9, 2024
ffaa064
Change gvfs accordingly.
yuqi1129 Oct 11, 2024
32d7f3d
Merge remote-tracking branch 'me/issue_5019' into issue_5019
yuqi1129 Oct 11, 2024
d82bf76
Update Java doc for FileSystemProvider
yuqi1129 Oct 11, 2024
dfdb772
Fix
yuqi1129 Oct 11, 2024
8708a8a
Fix
yuqi1129 Oct 11, 2024
ba9f8fa
Fix test error.
yuqi1129 Oct 11, 2024
dae99f7
Polish.
yuqi1129 Oct 11, 2024
e22053b
Polish
yuqi1129 Oct 12, 2024
4fb89e0
Polish
yuqi1129 Oct 12, 2024
e5746c0
Rename `AbstractIT` to `BaseIT`
yuqi1129 Oct 12, 2024
b2d7bed
Fix
yuqi1129 Oct 14, 2024
380717b
Merge branch 'apache:main' into issue_5019
yuqi1129 Oct 14, 2024
f4041ec
Fix python ut error again.
yuqi1129 Oct 14, 2024
66247ab
Merge branch 'issue_5019' of github.com:yuqi1129/gravitino into issue…
yuqi1129 Oct 14, 2024
3cfb94f
Fix test error again.
yuqi1129 Oct 14, 2024
7d1150f
Fix minor.
yuqi1129 Oct 14, 2024
608081b
fix
yuqi1129 Oct 14, 2024
9edfe82
Fix
yuqi1129 Oct 14, 2024
3079bf0
Fix
yuqi1129 Oct 14, 2024
da49e60
Fix
yuqi1129 Oct 14, 2024
b621d89
Fix
yuqi1129 Oct 14, 2024
05dd006
Merge branch 'issue_5019' into issue_5074
yuqi1129 Oct 15, 2024
9d5b8dc
rebase issue_5019
yuqi1129 Oct 15, 2024
e58f9a0
Fix
yuqi1129 Oct 15, 2024
c521daf
resolve comments
yuqi1129 Oct 15, 2024
46e996a
Resolve comments again.
yuqi1129 Oct 15, 2024
da0b7ca
Polish again.
yuqi1129 Oct 15, 2024
e9ccda4
Merge branch 'issue_5019' of github.com:yuqi1129/gravitino into issue…
yuqi1129 Oct 15, 2024
ba1fe5f
Rebase branch issue_5019
yuqi1129 Oct 15, 2024
4ffe389
Merge branch 'main' of github.com:datastrato/graviton into issue_5139
yuqi1129 Oct 15, 2024
7c44a57
Support python gvfs
yuqi1129 Oct 15, 2024
992ba0a
fix
yuqi1129 Oct 15, 2024
5dbca5f
fix
yuqi1129 Oct 15, 2024
f27520a
Update code.
yuqi1129 Oct 15, 2024
e29e47b
Merge branch 'issue_5019' into issue_5074
yuqi1129 Oct 15, 2024
bc1e76f
Rebase branch issue_5019
yuqi1129 Oct 15, 2024
8a9d3bf
Merge branch 'issue_5074' of github.com:yuqi1129/gravitino into issue…
yuqi1129 Oct 15, 2024
2115e31
fix
yuqi1129 Oct 15, 2024
c2e55d4
fix
yuqi1129 Oct 15, 2024
557aa02
fix
yuqi1129 Oct 15, 2024
5c3fa5c
fix
yuqi1129 Oct 15, 2024
408eca7
fix
yuqi1129 Oct 15, 2024
a02065d
fix
yuqi1129 Oct 15, 2024
8762bae
fix
yuqi1129 Oct 15, 2024
dc7a915
fix
yuqi1129 Oct 15, 2024
c230991
fix
yuqi1129 Oct 15, 2024
dc54880
skip some test.
yuqi1129 Oct 15, 2024
7ecc040
fix
yuqi1129 Oct 15, 2024
da46321
fix
yuqi1129 Oct 15, 2024
27bc2ab
Fix
yuqi1129 Oct 16, 2024
017c42e
Merge branch 'issue_5019' into issue_5074
yuqi1129 Oct 16, 2024
9dc0f5a
Fix
yuqi1129 Oct 16, 2024
41ff00d
Merge branch 'issue_5019' of github.com:yuqi1129/gravitino into issue…
yuqi1129 Oct 16, 2024
1fee1e4
Fix
yuqi1129 Oct 16, 2024
1789bd2
fix
yuqi1129 Oct 16, 2024
2ee1709
Fix
yuqi1129 Oct 16, 2024
05e5d20
Fix
yuqi1129 Oct 16, 2024
8f28211
Fix
yuqi1129 Oct 16, 2024
35cba1e
Fix
yuqi1129 Oct 16, 2024
bcf2f12
Fix
yuqi1129 Oct 16, 2024
f25a37d
Fix a problem
yuqi1129 Oct 16, 2024
a3da011
Merge branch 'issue_5019' into issue_5074
yuqi1129 Oct 16, 2024
3517996
Merge branch 'main' of github.com:apache/gravitino into issue_5074
yuqi1129 Oct 16, 2024
27a911a
fix
yuqi1129 Oct 16, 2024
e34dbea
Merge branch 'main' of github.com:apache/gravitino into issue_5074
yuqi1129 Oct 16, 2024
6bae7e5
Fix a problem
yuqi1129 Oct 16, 2024
fe13f5e
Fix a problem
yuqi1129 Oct 16, 2024
0181632
Fix a problem
yuqi1129 Oct 16, 2024
3ff9eef
Fix
yuqi1129 Oct 16, 2024
2ce660c
Fix
yuqi1129 Oct 16, 2024
f0fa87b
Fix
yuqi1129 Oct 16, 2024
d2921a8
Fix
yuqi1129 Oct 16, 2024
dc68dd1
Fix
yuqi1129 Oct 16, 2024
6431e2f
Fix
yuqi1129 Oct 16, 2024
242888f
Fix
yuqi1129 Oct 16, 2024
3ec2dcc
Merge branch 'issue_5074' of github.com:yuqi1129/gravitino into issue…
yuqi1129 Oct 16, 2024
f754997
Fix
yuqi1129 Oct 16, 2024
2cdfb35
Fix
yuqi1129 Oct 16, 2024
67dbc3a
Resolve comments.
yuqi1129 Oct 17, 2024
70a545e
Fix the java doc problem.
yuqi1129 Oct 17, 2024
4d54acf
Merge branch 'issue_5074' into issue_5139
yuqi1129 Oct 17, 2024
15bbf99
rebase issue_5074
yuqi1129 Oct 17, 2024
cfcc544
Optimize code.
yuqi1129 Oct 17, 2024
acf51e1
Merge branch 'issue_5074' of github.com:yuqi1129/gravitino into issue…
yuqi1129 Oct 17, 2024
11f9992
Merge branch 'main' of github.com:datastrato/graviton into issue_5139
yuqi1129 Oct 17, 2024
4f00a2f
Remove s3 related code.
yuqi1129 Oct 17, 2024
b9ef8f0
fix
yuqi1129 Oct 17, 2024
9f65fb5
try to import lazily.
yuqi1129 Oct 18, 2024
4defcc6
format code.
yuqi1129 Oct 18, 2024
3a907f4
fix
yuqi1129 Oct 18, 2024
76912b7
fix
yuqi1129 Oct 18, 2024
4478673
fix
yuqi1129 Oct 18, 2024
5a194df
Merge branch 'main' of github.com:datastrato/graviton into issue_5139
yuqi1129 Oct 18, 2024
c8b5c7c
Merge branch 'main' of github.com:datastrato/graviton into issue_5139
yuqi1129 Oct 21, 2024
6ff9353
Resolve comments
yuqi1129 Oct 21, 2024
1dac0f0
Merge branch 'main' of github.com:datastrato/graviton into issue_5188
yuqi1129 Oct 21, 2024
f592289
Support python client for S3.
yuqi1129 Oct 21, 2024
7069b6b
fix
yuqi1129 Oct 21, 2024
1281e72
fix
yuqi1129 Oct 21, 2024
f022d6e
fix
yuqi1129 Oct 21, 2024
0d2ccab
fix
yuqi1129 Oct 22, 2024
55633e8
Merge branch 'main' of github.com:datastrato/graviton into issue_5188
yuqi1129 Oct 22, 2024
217cc5f
fix
yuqi1129 Oct 22, 2024
6958aa8
fix
yuqi1129 Oct 22, 2024
8b9b8d7
Replace pyarrow gvfs with gcsfs.
yuqi1129 Oct 23, 2024
b4d5728
fix
yuqi1129 Oct 23, 2024
0306ac5
fix
yuqi1129 Oct 23, 2024
a35a40d
fix
yuqi1129 Oct 23, 2024
cfd8a89
fix
yuqi1129 Oct 23, 2024
a131564
fix
yuqi1129 Oct 23, 2024
34e3ff4
Replace ArrowFSWrapper with s3fs.S3FileSystem.
yuqi1129 Oct 23, 2024
2cc234a
fix
yuqi1129 Oct 23, 2024
ba0237e
fix
yuqi1129 Oct 23, 2024
4df5ea1
fix
yuqi1129 Oct 23, 2024
4e49e6e
fix
yuqi1129 Oct 23, 2024
015b788
fix comments.
yuqi1129 Oct 23, 2024
c414b4d
fix test error.
yuqi1129 Oct 23, 2024
798b4d1
fix
yuqi1129 Oct 23, 2024
aa96f63
fix
yuqi1129 Oct 23, 2024
804622c
fix
yuqi1129 Oct 23, 2024
66 changes: 61 additions & 5 deletions clients/client-python/gravitino/filesystem/gvfs.py
@@ -49,6 +49,8 @@ class StorageType(Enum):
HDFS = "hdfs"
LOCAL = "file"
GCS = "gs"
S3A = "s3a"
S3 = "s3"
Contributor:
Why do we add both, "s3a" and "s3"?

Contributor:
I have the same question. We only use the s3a scheme in the S3FileSystemProvider (https://github.com/apache/gravitino/blob/main/bundles/aws-bundle/src/main/java/org/apache/gravitino/s3/fs/S3FileSystemProvider.java#L44); is there any case that will use the s3 scheme?

Contributor Author:
I was concerned about instances where the location starts with s3 rather than s3a. As clarified by @xloya, there seems to be only one entry point, and Gravitino is the only way a fileset can be created, so we can safely remove s3 here.



class FilesetContextPair:
@@ -314,7 +316,12 @@ def mv(self, path1, path2, recursive=False, maxdepth=None, **kwargs):


if storage_type in [StorageType.HDFS, StorageType.GCS]:
if storage_type in [
StorageType.HDFS,
StorageType.GCS,
StorageType.S3,
StorageType.S3A,
]:
src_context_pair.filesystem().mv(
self._strip_storage_protocol(storage_type, src_actual_path),
self._strip_storage_protocol(storage_type, dst_actual_path),
@@ -547,9 +554,12 @@ def _convert_actual_path(
"""

# If the storage path starts with hdfs, gcs, we should use the path as the prefix.
if storage_location.startswith(
f"{StorageType.HDFS.value}://"
) or storage_location.startswith(f"{StorageType.GCS.value}://"):
if (
storage_location.startswith(f"{StorageType.HDFS.value}://")
or storage_location.startswith(f"{StorageType.GCS.value}://")
or storage_location.startswith(f"{StorageType.S3.value}://")
or storage_location.startswith(f"{StorageType.S3A.value}://")
):
actual_prefix = infer_storage_options(storage_location)["path"]
elif storage_location.startswith(f"{StorageType.LOCAL.value}:/"):
actual_prefix = storage_location[len(f"{StorageType.LOCAL.value}:") :]
Expand Down Expand Up @@ -692,6 +702,10 @@ def _recognize_storage_type(path: str):
return StorageType.LOCAL
if path.startswith(f"{StorageType.GCS.value}://"):
return StorageType.GCS
if path.startswith(f"{StorageType.S3A.value}://"):
return StorageType.S3A
if path.startswith(f"{StorageType.S3.value}://"):
return StorageType.S3
raise GravitinoRuntimeException(
f"Storage type doesn't support now. Path:{path}"
)
@@ -716,7 +730,12 @@ def _strip_storage_protocol(storage_type: StorageType, path: str):
:param path: The path
:return: The stripped path
"""
if storage_type in (StorageType.HDFS, StorageType.GCS):
if storage_type in (
StorageType.HDFS,
StorageType.GCS,
StorageType.S3A,
StorageType.S3,
):
return path
if storage_type == StorageType.LOCAL:
return path[len(f"{StorageType.LOCAL.value}:") :]
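The stripping rule above reduces to: remote schemes (HDFS, GCS, S3/S3A) keep the full URI, while local paths drop the `file:` prefix. A runnable sketch, using plain strings in place of the `StorageType` enum (a hypothetical simplification of `_strip_storage_protocol`):

```python
def strip_storage_protocol(storage_type: str, path: str) -> str:
    # Remote filesystems accept the full URI as-is; only local
    # paths need the "file:" prefix removed.
    if storage_type in ("hdfs", "gs", "s3a", "s3"):
        return path
    if storage_type == "file":
        return path[len("file:"):]
    raise ValueError(f"Storage type:`{storage_type}` doesn't support now.")
```

For example, `strip_storage_protocol("file", "file:/tmp/x")` returns `/tmp/x`, while an `s3a://` URI passes through unchanged.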
@@ -792,6 +811,8 @@ def _get_filesystem(self, actual_file_location: str):
fs = LocalFileSystem()
elif storage_type == StorageType.GCS:
fs = ArrowFSWrapper(self._get_gcs_filesystem())
elif storage_type in (StorageType.S3A, StorageType.S3):
fs = ArrowFSWrapper(self._get_s3_filesystem())
else:
raise GravitinoRuntimeException(
f"Storage type: `{storage_type}` doesn't support now."
@@ -819,5 +840,40 @@ def _get_gcs_filesystem(self):

return importlib.import_module("pyarrow.fs").GcsFileSystem()

def _get_s3_filesystem(self):
# get All keys from the options that start with 'gravitino.bypass.s3.' and remove the prefix
s3_options = {
key[len(GVFSConfig.GVFS_FILESYSTEM_BY_PASS_S3) :]: value
for key, value in self._options.items()
if key.startswith(GVFSConfig.GVFS_FILESYSTEM_BY_PASS_S3)
}

# get 'aws_access_key_id' from s3_options, if the key is not found, throw an exception
aws_access_key_id = s3_options.get(GVFSConfig.GVFS_FILESYSTEM_S3_ACCESS_KEY)
if aws_access_key_id is None:
raise GravitinoRuntimeException(
"AWS access key id is not found in the options."
)

# get 'aws_secret_access_key' from s3_options, if the key is not found, throw an exception
aws_secret_access_key = s3_options.get(GVFSConfig.GVFS_FILESYSTEM_S3_SECRET_KEY)
if aws_secret_access_key is None:
raise GravitinoRuntimeException(
"AWS secret access key is not found in the options."
)

# get 'aws_endpoint_url' from s3_options, if the key is not found, throw an exception
aws_endpoint_url = s3_options.get(GVFSConfig.GVFS_FILESYSTEM_S3_ENDPOINT)
if aws_endpoint_url is None:
raise GravitinoRuntimeException(
"AWS endpoint url is not found in the options."
)

return importlib.import_module("pyarrow.fs").S3FileSystem(
Contributor:

Sorry, I didn't notice this before. GCS and S3 also have fsspec implementations (https://github.com/fsspec/gcsfs, https://github.com/fsspec/s3fs); why did you select PyArrow's implementation here?

Contributor Author:
PyArrow's implementation provides a uniform API to users; for example, combined with ArrowFSWrapper, we can support all kinds of storage through the API exposed by ArrowFSWrapper.

I have reviewed the fsspec implementation, and there seems to be no big difference compared to the one provided by PyArrow.

Considering the efficiency brought by Arrow, and that Arrow is already used for HDFS, I continued to use PyArrow.

Contributor:
In fact, PyArrow officially supports a limited number of storage systems; if you need to add a storage system, you need to modify the Arrow source code. HDFS uses PyArrow because fsspec itself also calls PyArrow there, so it is almost the only choice. For other storage systems, PyArrow may not be the only choice. My advice is not to be restricted by the current selection; we should make the best choice in terms of performance and interface adaptability.

Contributor Author:
> My advice is not to be restricted by the current selection. We should make the best choice in terms of performance and interface adaptability.

I agree with this point, and I also noticed that the set of filesystems PyArrow supports is very limited. Due to time limitations, I have not completed a comprehensive survey of it. Thanks for your suggestion; I will modify the code accordingly.

Contributor Author:
@xloya
I have replaced arrowfs with s3fs and gcsfs; please help take a look again.

access_key=aws_access_key_id,
secret_key=aws_secret_access_key,
endpoint_override=aws_endpoint_url,
)
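The option handling in `_get_s3_filesystem` above — strip the `gravitino.bypass.s3.` prefix, then require the three keys the S3 filesystem needs — can be exercised in isolation. A sketch with hypothetical option values (only the prefix string and key names come from this PR; the function name and values are illustrative):

```python
# Prefix mirrors GVFSConfig.GVFS_FILESYSTEM_BY_PASS_S3 from this diff.
BYPASS_S3_PREFIX = "gravitino.bypass.s3."
REQUIRED_KEYS = ("access-key", "secret-key", "endpoint")


def extract_s3_options(options):
    # Keep only the keys carrying the bypass prefix, with the prefix
    # stripped, then require the three keys S3FileSystem needs.
    s3_options = {
        key[len(BYPASS_S3_PREFIX):]: value
        for key, value in options.items()
        if key.startswith(BYPASS_S3_PREFIX)
    }
    for required in REQUIRED_KEYS:
        if s3_options.get(required) is None:
            raise RuntimeError(f"{required} is not found in the options.")
    return s3_options


# Hypothetical client options; unrelated keys are ignored by the extraction.
opts = {
    "gravitino.bypass.s3.access-key": "my-access-key",
    "gravitino.bypass.s3.secret-key": "my-secret-key",
    "gravitino.bypass.s3.endpoint": "http://127.0.0.1:9000",
    "auth_type": "simple",
}
```

If any of the three required keys is missing, the extraction fails fast, mirroring the three `GravitinoRuntimeException` checks in the method above.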


fsspec.register_implementation(PROTOCOL_NAME, GravitinoVirtualFileSystem)
5 changes: 5 additions & 0 deletions clients/client-python/gravitino/filesystem/gvfs_config.py
@@ -35,3 +35,8 @@ class GVFSConfig:
GVFS_FILESYSTEM_BY_PASS = "gravitino.bypass"
GVFS_FILESYSTEM_BY_PASS_GCS = "gravitino.bypass.gcs."
GVFS_FILESYSTEM_KEY_FILE = "service-account-key-path"

GVFS_FILESYSTEM_BY_PASS_S3 = "gravitino.bypass.s3."
Contributor:

I was wondering why we need a "gravitino.bypass.s3." prefix for the GCS and AWS configurations. The configuration keys mentioned above are essential for a client to work; it may not be good to hide them behind a bypass prefix. What do you think?

Contributor Author:

OK, let me think a bit.

Contributor Author:

I have removed the gravitino.bypass prefix and optimized the key names.

GVFS_FILESYSTEM_S3_ACCESS_KEY = "access-key"
GVFS_FILESYSTEM_S3_SECRET_KEY = "secret-key"
GVFS_FILESYSTEM_S3_ENDPOINT = "endpoint"
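With these constants, a client composes the full option keys by concatenating the prefix and the short key names, as the integration test's `setUp` below does. A small sketch (constant values mirror this diff; the credential values are hypothetical):

```python
# Values mirror the GVFSConfig constants added in this diff.
GVFS_FILESYSTEM_BY_PASS_S3 = "gravitino.bypass.s3."
GVFS_FILESYSTEM_S3_ACCESS_KEY = "access-key"
GVFS_FILESYSTEM_S3_SECRET_KEY = "secret-key"
GVFS_FILESYSTEM_S3_ENDPOINT = "endpoint"

# Hypothetical credentials for illustration only.
options = {
    f"{GVFS_FILESYSTEM_BY_PASS_S3}{GVFS_FILESYSTEM_S3_ACCESS_KEY}": "my-access-key",
    f"{GVFS_FILESYSTEM_BY_PASS_S3}{GVFS_FILESYSTEM_S3_SECRET_KEY}": "my-secret-key",
    f"{GVFS_FILESYSTEM_BY_PASS_S3}{GVFS_FILESYSTEM_S3_ENDPOINT}": "http://127.0.0.1:9000",
}
```

The resulting keys are `gravitino.bypass.s3.access-key`, `gravitino.bypass.s3.secret-key`, and `gravitino.bypass.s3.endpoint`, which is exactly the shape `_get_s3_filesystem` expects before stripping the prefix.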
166 changes: 166 additions & 0 deletions clients/client-python/tests/integration/test_gvfs_with_s3.py
@@ -0,0 +1,166 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

import logging
import os
from random import randint
import unittest

from fsspec.implementations.arrow import ArrowFSWrapper
from pyarrow.fs import S3FileSystem

from tests.integration.test_gvfs_with_hdfs import TestGvfsWithHDFS
from gravitino import (
gvfs,
GravitinoClient,
Catalog,
Fileset,
)
from gravitino.exceptions.base import GravitinoRuntimeException
from gravitino.filesystem.gvfs_config import GVFSConfig

logger = logging.getLogger(__name__)


@unittest.skip("This test requires an S3 service account")
class TestGvfsWithS3(TestGvfsWithHDFS):
# Before running this test, please make sure aws-bundle-x.jar has been
# copied to the $GRAVITINO_HOME/catalogs/hadoop/libs/ directory
s3_access_key = "your_access_key"
s3_secret_key = "your_secret_key"
s3_endpoint = "your_endpoint"
bucket_name = "your_bucket_name"

metalake_name: str = "TestGvfsWithS3_metalake" + str(randint(1, 10000))

def setUp(self):
self.options = {
f"{GVFSConfig.GVFS_FILESYSTEM_BY_PASS_S3}{GVFSConfig.GVFS_FILESYSTEM_S3_ACCESS_KEY}": self.s3_access_key,
f"{GVFSConfig.GVFS_FILESYSTEM_BY_PASS_S3}{GVFSConfig.GVFS_FILESYSTEM_S3_SECRET_KEY}": self.s3_secret_key,
f"{GVFSConfig.GVFS_FILESYSTEM_BY_PASS_S3}{GVFSConfig.GVFS_FILESYSTEM_S3_ENDPOINT}": self.s3_endpoint,
}

def tearDown(self):
self.options = {}

@classmethod
def setUpClass(cls):
cls._get_gravitino_home()

cls.hadoop_conf_path = f"{cls.gravitino_home}/catalogs/hadoop/conf/hadoop.conf"
# restart the server
cls.restart_server()
# create entity
cls._init_test_entities()

@classmethod
def tearDownClass(cls):
cls._clean_test_data()
# reset server conf in case other ITs (like HDFS) have changed it and failed
# to reset it
cls._reset_conf(cls.config, cls.hadoop_conf_path)
# restart server
cls.restart_server()

# clear all config in the conf_path
@classmethod
def _reset_conf(cls, config, conf_path):
logger.info("Reset %s.", conf_path)
if not os.path.exists(conf_path):
raise GravitinoRuntimeException(f"Conf file is not found at `{conf_path}`.")
filtered_lines = []
with open(conf_path, mode="r", encoding="utf-8") as file:
origin_lines = file.readlines()

for line in origin_lines:
line = line.strip()
if line.startswith("#"):
# append annotations directly
filtered_lines.append(line + "\n")

with open(conf_path, mode="w", encoding="utf-8") as file:
for line in filtered_lines:
file.write(line)

@classmethod
def _init_test_entities(cls):
cls.gravitino_admin_client.create_metalake(
name=cls.metalake_name, comment="", properties={}
)
cls.gravitino_client = GravitinoClient(
uri="http://localhost:8090", metalake_name=cls.metalake_name
)

cls.config = {}
cls.conf = {}
catalog = cls.gravitino_client.create_catalog(
name=cls.catalog_name,
catalog_type=Catalog.Type.FILESET,
provider=cls.catalog_provider,
comment="",
properties={
"filesystem-providers": "s3",
"gravitino.bypass.fs.s3a.access.key": cls.s3_access_key,
Contributor:

Also, for the server side, maybe we should clearly define some configurations instead of using "gravitino.bypass." for everything. I have to think a bit about this; can you please also think about it from the user's side?

Contributor Author (yuqi1129, Oct 22, 2024):

@jerryshao
I will use this #5220 to optimize it and won't change it in this PR.

"gravitino.bypass.fs.s3a.secret.key": cls.s3_secret_key,
"gravitino.bypass.fs.s3a.endpoint": cls.s3_endpoint,
},
)
catalog.as_schemas().create_schema(
schema_name=cls.schema_name, comment="", properties={}
)

cls.fileset_storage_location: str = (
f"s3a://{cls.bucket_name}/{cls.catalog_name}/{cls.schema_name}/{cls.fileset_name}"
)
cls.fileset_gvfs_location = (
f"gvfs://fileset/{cls.catalog_name}/{cls.schema_name}/{cls.fileset_name}"
)
catalog.as_fileset_catalog().create_fileset(
ident=cls.fileset_ident,
fileset_type=Fileset.Type.MANAGED,
comment=cls.fileset_comment,
storage_location=cls.fileset_storage_location,
properties=cls.fileset_properties,
)

arrow_s3_fs = S3FileSystem(
access_key=cls.s3_access_key,
secret_key=cls.s3_secret_key,
endpoint_override=cls.s3_endpoint,
)
cls.fs = ArrowFSWrapper(arrow_s3_fs)

def test_modified(self):
modified_dir = self.fileset_gvfs_location + "/test_modified"
modified_actual_dir = self.fileset_storage_location + "/test_modified"
fs = gvfs.GravitinoVirtualFileSystem(
server_uri="http://localhost:8090",
metalake_name=self.metalake_name,
options=self.options,
**self.conf,
)
self.fs.mkdir(modified_actual_dir)
self.assertTrue(self.fs.exists(modified_actual_dir))
self.assertTrue(fs.exists(modified_dir))

self.assertIsNone(fs.modified(modified_dir))

# create a file under the dir 'modified_dir'.
file_path = modified_dir + "/test.txt"
fs.touch(file_path)
self.assertTrue(fs.exists(file_path))
self.assertIsNotNone(fs.modified(file_path))