This repository has been archived by the owner on Jun 23, 2022. It is now read-only.

feat(bulk_load): add bulk load manager for meta #986

Merged
merged 5 commits into from
Dec 24, 2021

Conversation

GiantKing
Contributor

No description provided.

@@ -101,6 +111,8 @@ void bulk_load_service::on_start_bulk_load(start_bulk_load_rpc rpc)
request.file_provider_type,
request.remote_root_path);

// clean old bulk load result
Contributor

clean --> clear or clean up

@@ -234,21 +246,31 @@ void bulk_load_service::create_app_bulk_load_dir(const std::string &app_name,
ainfo.cluster_name = req.cluster_name;
ainfo.file_provider_type = req.file_provider_type;
ainfo.remote_root_path = req.remote_root_path;
blob value = dsn::json::json_forwarder<app_bulk_load_info>::encode(ainfo);
ainfo.is_ever_ingesting = false;
ainfo.bulk_load_err = ERR_OK;

Contributor

auto bulk_load_path = get_app_bulk_load_path(app_id);
_meta_svc->get_meta_storage()->delete_node_recursively(
    bulk_load_path, [this, rpc, ainfo]() {
        ddebug_f("remove app({}) bulk load dir {} succeed", ainfo.app_name, bulk_load_path);
......

Contributor Author

`delete_node_recursively` needs an r-value, so I can't pass `bulk_load_path` into the lambda.

src/common/bulk_load.thrift (resolved)
src/common/bulk_load.thrift (resolved)
src/meta/meta_bulk_load_service.cpp (outdated, resolved)
foreverneverer previously approved these changes Dec 20, 2021

4 participants