diff --git a/docs/resources/obs_bucket.md b/docs/resources/obs_bucket.md
index 4367ab15f46..25cea9c5000 100644
--- a/docs/resources/obs_bucket.md
+++ b/docs/resources/obs_bucket.md
@@ -2,7 +2,7 @@
subcategory: "Object Storage Service (OBS)"
---
-# huaweicloud\_obs\_bucket
+# huaweicloud_obs_bucket
Provides an OBS bucket resource.
@@ -151,9 +151,11 @@ The following arguments are supported:
* The name cannot be an IP address.
* If the name contains any periods (.), a security certificate verification message may appear when you access the bucket or its objects by entering a domain name.
-* `storage_class` - (Optional, String) Specifies the storage class of the bucket. OBS provides three storage classes: "STANDARD", "WARM" (Infrequent Access) and "COLD" (Archive). Defaults to `STANDARD`.
+* `storage_class` - (Optional, String) Specifies the storage class of the bucket. OBS provides three storage classes:
+ "STANDARD", "WARM" (Infrequent Access) and "COLD" (Archive). Defaults to `STANDARD`.
-* `acl` - (Optional, String) Specifies the ACL policy for a bucket. The predefined common policies are as follows: "private", "public-read", "public-read-write" and "log-delivery-write". Defaults to `private`.
+* `acl` - (Optional, String) Specifies the ACL policy for a bucket. The predefined common policies are as follows:
+ "private", "public-read", "public-read-write" and "log-delivery-write". Defaults to `private`.
* `policy` - (Optional, String) Specifies the text of the bucket policy in JSON format. For more information about
obs format bucket policy, see the [Developer Guide](https://support.huaweicloud.com/intl/en-us/devg-obs/obs_06_0048.html).
@@ -167,17 +169,29 @@ The following arguments are supported:
* `logging` - (Optional, Map) Settings for bucket logging (documented below).
-* `quota` - (Optional, Int) Specifies bucket storage quota. Must be a positive integer in the unit of byte. The maximum storage quota is 263 – 1 bytes. The default bucket storage quota is 0, indicating that the bucket storage quota is not limited.
+* `quota` - (Optional, Int) Specifies the bucket storage quota. Must be a positive integer in bytes.
+  The maximum storage quota is 2^63 – 1 bytes. The default bucket storage quota is 0, indicating that
+  the bucket storage quota is not limited.
* `website` - (Optional, List) A website object (documented below).
* `cors_rule` - (Optional, List) A rule of Cross-Origin Resource Sharing (documented below).
* `lifecycle_rule` - (Optional, List) A configuration of object lifecycle management (documented below).
-* `force_destroy` - (Optional, Bool) A boolean that indicates all objects should be deleted from the bucket so that the bucket can be destroyed without error. Default to `false`.
+* `force_destroy` - (Optional, Bool) A boolean that indicates all objects should be deleted from the bucket
+  so that the bucket can be destroyed without error. Defaults to `false`.
-* `region` - (Optional, String, ForceNew) Specified the region where this bucket will be created. If not specified, used the region by the provider.
+* `region` - (Optional, String, ForceNew) Specifies the region where this bucket will be created.
+  If not specified, the region configured in the provider is used. Changing this will create a new bucket.
-* `enterprise_project_id` - (Optional, String, ForceNew) The enterprise project id of the OBS bucket. Changing this creates a OBS bucket.
+* `multi_az` - (Optional, Bool, ForceNew) Whether to enable the multi-AZ mode for the bucket.
+ When the multi-AZ mode is enabled, data in the bucket is duplicated and stored in multiple AZs.
+
+ -> **NOTE:** Once a bucket is created, you cannot enable or disable the multi-AZ mode.
+ Changing this will create a new bucket, but the name of a deleted bucket can be reused
+ for another bucket at least 30 minutes after the deletion. Exercise caution when changing this field.
+
+* `enterprise_project_id` - (Optional, String, ForceNew) Specifies the enterprise project id of the OBS bucket.
+ Changing this will create a new bucket.
The `logging` object supports the following:
@@ -187,22 +201,25 @@ The `logging` object supports the following:
The `website` object supports the following:
-* `index_document` - (Required, String) Unless using `redirect_all_requests_to`. Specifies the default homepage of the static website, only HTML web pages are supported.
+* `index_document` - (Required, String) Specifies the default homepage of the static website; only HTML
+  web pages are supported. This parameter is required unless `redirect_all_requests_to` is specified.
OBS only allows files such as `index.html` in the root directory of a bucket to function as the default homepage.
That is to say, do not set the default homepage with a multi-level directory structure (for example, /page/index.html).
* `error_document` - (Optional, String) Specifies the error page returned when an error occurs during static website access.
Only HTML, JPG, PNG, BMP, and WEBP files under the root directory are supported.
-* `redirect_all_requests_to` - (Optional, String) A hostname to redirect all website requests for this bucket to. Hostname can optionally be prefixed with a protocol (`http://` or `https://`) to use when redirecting requests. The default is the protocol that is used in the original request.
+* `redirect_all_requests_to` - (Optional, String) A hostname to redirect all website requests for this bucket to.
+ Hostname can optionally be prefixed with a protocol (`http://` or `https://`) to use when redirecting requests.
+ The default is the protocol that is used in the original request.
-* `routing_rules` - (Optional, String) A JSON or XML format containing routing rules describing redirect behavior and when redirects are applied.
- Each rule contains a `Condition` and a `Redirect` as shown in the following table:
+* `routing_rules` - (Optional, String) A JSON or XML format containing routing rules describing redirect behavior and
+ when redirects are applied. Each rule contains a `Condition` and a `Redirect` as shown in the following table:
-Parameter | Key
--|-
-Condition | KeyPrefixEquals, HttpErrorCodeReturnedEquals
-Redirect | Protocol, HostName, ReplaceKeyPrefixWith, ReplaceKeyWith, HttpRedirectCode
+ Parameter | Key
+ --- | ---
+ Condition | KeyPrefixEquals, HttpErrorCodeReturnedEquals
+ Redirect | Protocol, HostName, ReplaceKeyPrefixWith, ReplaceKeyWith, HttpRedirectCode
The `cors_rule` object supports the following:
@@ -228,7 +245,8 @@ The `lifecycle_rule` object supports the following:
* `prefix` - (Optional, String) Object key prefix identifying one or more objects to which the rule applies.
If omitted, all objects in the bucket will be managed by the lifecycle rule.
- The prefix cannot start or end with a slash (/), cannot have consecutive slashes (/), and cannot contain the following special characters: \:*?"<>|.
+ The prefix cannot start or end with a slash (/), cannot have consecutive slashes (/), and
+ cannot contain the following special characters: \:*?"<>|.
* `expiration` - (Optional, List) Specifies a period when objects that have been last updated are automatically deleted. (documented below).
* `transition` - (Optional, List) Specifies a period when objects that have been last updated are automatically transitioned to `WARM` or `COLD` storage class (documented below).
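The prefix constraints documented above (no leading or trailing slash, no consecutive slashes, none of the characters \:*?"<>|) can be expressed as a small check. This helper is purely illustrative and is not part of the provider:

```go
package main

import (
	"fmt"
	"strings"
)

// validLifecyclePrefix applies the documented constraints on a lifecycle
// rule prefix: it must not start or end with a slash, must not contain
// consecutive slashes, and must not contain any of \:*?"<>|.
func validLifecyclePrefix(prefix string) bool {
	if strings.HasPrefix(prefix, "/") || strings.HasSuffix(prefix, "/") {
		return false
	}
	if strings.Contains(prefix, "//") {
		return false
	}
	return !strings.ContainsAny(prefix, `\:*?"<>|`)
}

func main() {
	fmt.Println(validLifecyclePrefix("path1/logs")) // true
	fmt.Println(validLifecyclePrefix("/path1"))     // false: leading slash
	fmt.Println(validLifecyclePrefix("a//b"))       // false: consecutive slashes
}
```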
diff --git a/go.mod b/go.mod
index bd91d1a45bb..a4acbe42756 100644
--- a/go.mod
+++ b/go.mod
@@ -6,7 +6,7 @@ require (
github.com/hashicorp/errwrap v1.0.0
github.com/hashicorp/go-multierror v1.0.0
github.com/hashicorp/terraform-plugin-sdk v1.16.0
- github.com/huaweicloud/golangsdk v0.0.0-20210528023633-c90ae4249a71
+ github.com/huaweicloud/golangsdk v0.0.0-20210602080359-3d6e5cdfc40f
github.com/jen20/awspolicyequivalence v1.1.0
github.com/smartystreets/goconvey v0.0.0-20190222223459-a17d461953aa // indirect
github.com/stretchr/testify v1.4.0
diff --git a/go.sum b/go.sum
index bcce0a69555..71a9aa6de5b 100644
--- a/go.sum
+++ b/go.sum
@@ -208,6 +208,8 @@ github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d h1:kJCB4vdITiW1eC1
github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d/go.mod h1:+NfK9FKeTrX5uv1uIXGdwYDTeHna2qgaIlx54MXqjAM=
github.com/huaweicloud/golangsdk v0.0.0-20210528023633-c90ae4249a71 h1:o2s9CcW277XbOQp0EolTCWHrNdBUHCjuu0aKtAedF/k=
github.com/huaweicloud/golangsdk v0.0.0-20210528023633-c90ae4249a71/go.mod h1:fcOI5u+0f62JtJd7zkCch/Z57BNC6bhqb32TKuiF4r0=
+github.com/huaweicloud/golangsdk v0.0.0-20210602080359-3d6e5cdfc40f h1:7FSmwn+mnDmezxuOjfDotwmyxQCQOU1waZDLORl7kGc=
+github.com/huaweicloud/golangsdk v0.0.0-20210602080359-3d6e5cdfc40f/go.mod h1:fcOI5u+0f62JtJd7zkCch/Z57BNC6bhqb32TKuiF4r0=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.9 h1:UauaLniWCFHWd+Jp9oCEkTBj8VO/9DKg3PV3VCNMDIg=
github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
diff --git a/huaweicloud/resource_huaweicloud_obs_bucket.go b/huaweicloud/resource_huaweicloud_obs_bucket.go
index 2a75b5c1f25..ae79145b4a8 100644
--- a/huaweicloud/resource_huaweicloud_obs_bucket.go
+++ b/huaweicloud/resource_huaweicloud_obs_bucket.go
@@ -264,12 +264,17 @@ func ResourceObsBucket() *schema.Resource {
Computed: true,
ForceNew: true,
},
-
+ "multi_az": {
+ Type: schema.TypeBool,
+ Optional: true,
+ Computed: true,
+ ForceNew: true,
+ },
"enterprise_project_id": {
Type: schema.TypeString,
Optional: true,
- ForceNew: true,
Computed: true,
+ ForceNew: true,
},
"bucket_domain_name": {
@@ -298,8 +303,11 @@ func resourceObsBucketCreate(d *schema.ResourceData, meta interface{}) error {
Epid: GetEnterpriseProjectID(d, config),
}
opts.Location = region
- log.Printf("[DEBUG] OBS bucket create opts: %#v", opts)
+ if _, ok := d.GetOk("multi_az"); ok {
+ opts.AvailableZone = "3az"
+ }
+ log.Printf("[DEBUG] OBS bucket create opts: %#v", opts)
_, err = obsClient.CreateBucket(opts)
if err != nil {
return getObsError("Error creating bucket", bucket, err)
@@ -422,8 +430,8 @@ func resourceObsBucketRead(d *schema.ResourceData, meta interface{}) error {
return err
}
- // Read enterprise project id
- if err := setObsBucketEnterpriseProjectID(obsClient, d); err != nil {
+ // Read enterprise project id and multi_az
+ if err := setObsBucketMetadata(obsClient, d); err != nil {
return err
}
@@ -922,7 +930,7 @@ func setObsBucketStorageClass(obsClient *obs.ObsClient, d *schema.ResourceData)
return nil
}
-func setObsBucketEnterpriseProjectID(obsClient *obs.ObsClient, d *schema.ResourceData) error {
+func setObsBucketMetadata(obsClient *obs.ObsClient, d *schema.ResourceData) error {
bucket := d.Id()
input := &obs.GetBucketMetadataInput{
Bucket: bucket,
@@ -932,9 +940,15 @@ func setObsBucketEnterpriseProjectID(obsClient *obs.ObsClient, d *schema.Resourc
return getObsError("Error getting metadata of OBS bucket", bucket, err)
}
- epsId := string(output.Epid)
- log.Printf("[DEBUG] getting enterprise project id of OBS bucket %s: %s", bucket, epsId)
- d.Set("enterprise_project_id", epsId)
+ epsID := string(output.Epid)
+ log.Printf("[DEBUG] getting enterprise project id of OBS bucket %s: %s", bucket, epsID)
+ d.Set("enterprise_project_id", epsID)
+
+	d.Set("multi_az", output.AvailableZone == "3az")
return nil
}
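The read path above derives the boolean `multi_az` attribute from the availability-zone string in the bucket metadata. Reduced to a standalone sketch (the `"3az"` sentinel comes from the code above; the helper name is illustrative):

```go
package main

import "fmt"

// multiAZEnabled reports whether a bucket is in multi-AZ mode, using the
// same rule as setObsBucketMetadata: OBS returns "3az" in the bucket
// metadata's AvailableZone field when multi-AZ is enabled.
func multiAZEnabled(availableZone string) bool {
	return availableZone == "3az"
}

func main() {
	fmt.Println(multiAZEnabled("3az")) // true: multi-AZ bucket
	fmt.Println(multiAZEnabled(""))    // false: default single-AZ bucket
}
```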
diff --git a/huaweicloud/resource_huaweicloud_obs_bucket_test.go b/huaweicloud/resource_huaweicloud_obs_bucket_test.go
index 3c7ae86b796..8bd6e90e617 100644
--- a/huaweicloud/resource_huaweicloud_obs_bucket_test.go
+++ b/huaweicloud/resource_huaweicloud_obs_bucket_test.go
@@ -23,26 +23,24 @@ func TestAccObsBucket_basic(t *testing.T) {
Config: testAccObsBucket_basic(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckObsBucketExists(resourceName),
- resource.TestCheckResourceAttr(
- resourceName, "bucket", testAccObsBucketName(rInt)),
- resource.TestCheckResourceAttr(
- resourceName, "bucket_domain_name", testAccObsBucketDomainName(rInt)),
- resource.TestCheckResourceAttr(
- resourceName, "acl", "private"),
- resource.TestCheckResourceAttr(
- resourceName, "storage_class", "STANDARD"),
- resource.TestCheckResourceAttr(
- resourceName, "region", HW_REGION_NAME),
+ resource.TestCheckResourceAttr(resourceName, "bucket", testAccObsBucketName(rInt)),
+ resource.TestCheckResourceAttr(resourceName, "bucket_domain_name", testAccObsBucketDomainName(rInt)),
+ resource.TestCheckResourceAttr(resourceName, "acl", "private"),
+ resource.TestCheckResourceAttr(resourceName, "storage_class", "STANDARD"),
+ resource.TestCheckResourceAttr(resourceName, "multi_az", "false"),
+ resource.TestCheckResourceAttr(resourceName, "region", HW_REGION_NAME),
+ resource.TestCheckResourceAttr(resourceName, "tags.foo", "bar"),
+ resource.TestCheckResourceAttr(resourceName, "tags.key", "value"),
),
},
{
Config: testAccObsBucket_basic_update(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckObsBucketExists(resourceName),
- resource.TestCheckResourceAttr(
- resourceName, "acl", "public-read"),
- resource.TestCheckResourceAttr(
- resourceName, "storage_class", "WARM"),
+ resource.TestCheckResourceAttr(resourceName, "acl", "public-read"),
+ resource.TestCheckResourceAttr(resourceName, "storage_class", "WARM"),
+ resource.TestCheckResourceAttr(resourceName, "tags.owner", "terraform"),
+ resource.TestCheckResourceAttr(resourceName, "tags.key", "value1"),
),
},
},
@@ -72,7 +70,7 @@ func TestAccObsBucket_withEpsId(t *testing.T) {
})
}
-func TestAccObsBucket_tags(t *testing.T) {
+func TestAccObsBucket_multiAZ(t *testing.T) {
rInt := acctest.RandInt()
resourceName := "huaweicloud_obs_bucket.bucket"
@@ -82,14 +80,14 @@ func TestAccObsBucket_tags(t *testing.T) {
CheckDestroy: testAccCheckObsBucketDestroy,
Steps: []resource.TestStep{
{
- Config: testAccObsBucketConfigWithTags(rInt),
+ Config: testAccObsBucketConfigMultiAZ(rInt),
Check: resource.ComposeTestCheckFunc(
- resource.TestCheckResourceAttr(
- resourceName, "tags.name", testAccObsBucketName(rInt)),
- resource.TestCheckResourceAttr(
- resourceName, "tags.foo", "bar"),
- resource.TestCheckResourceAttr(
- resourceName, "tags.key1", "value1"),
+ testAccCheckObsBucketExists(resourceName),
+ resource.TestCheckResourceAttr(resourceName, "bucket", testAccObsBucketName(rInt)),
+ resource.TestCheckResourceAttr(resourceName, "acl", "private"),
+ resource.TestCheckResourceAttr(resourceName, "storage_class", "STANDARD"),
+ resource.TestCheckResourceAttr(resourceName, "multi_az", "true"),
+ resource.TestCheckResourceAttr(resourceName, "tags.multi_az", "3az"),
),
},
},
@@ -127,7 +125,7 @@ func TestAccObsBucket_versioning(t *testing.T) {
func TestAccObsBucket_logging(t *testing.T) {
rInt := acctest.RandInt()
- target_bucket := fmt.Sprintf("tf-test-log-bucket-%d", rInt)
+ targetBucket := fmt.Sprintf("tf-test-log-bucket-%d", rInt)
resourceName := "huaweicloud_obs_bucket.bucket"
resource.ParallelTest(t, resource.TestCase{
@@ -139,7 +137,7 @@ func TestAccObsBucket_logging(t *testing.T) {
Config: testAccObsBucketConfigWithLogging(rInt),
Check: resource.ComposeTestCheckFunc(
testAccCheckObsBucketExists(resourceName),
- testAccCheckObsBucketLogging(resourceName, target_bucket, "log/"),
+ testAccCheckObsBucketLogging(resourceName, targetBucket, "log/"),
),
},
},
@@ -354,55 +352,65 @@ func testAccObsBucketDomainName(randInt int) string {
func testAccObsBucket_basic(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- storage_class = "STANDARD"
- acl = "private"
+ bucket = "tf-test-bucket-%d"
+ storage_class = "STANDARD"
+ acl = "private"
+
+ tags = {
+ foo = "bar"
+ key = "value"
+ }
}
`, randInt)
}
-func testAccObsBucket_epsId(randInt int) string {
+func testAccObsBucket_basic_update(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- storage_class = "STANDARD"
- acl = "private"
- enterprise_project_id = "%s"
+ bucket = "tf-test-bucket-%d"
+ storage_class = "WARM"
+ acl = "public-read"
+
+ tags = {
+ owner = "terraform"
+ key = "value1"
+ }
}
-`, randInt, HW_ENTERPRISE_PROJECT_ID_TEST)
+`, randInt)
}
-func testAccObsBucket_basic_update(randInt int) string {
+func testAccObsBucket_epsId(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- storage_class = "WARM"
- acl = "public-read"
+ bucket = "tf-test-bucket-%d"
+ storage_class = "STANDARD"
+ acl = "private"
+ enterprise_project_id = "%s"
}
-`, randInt)
+`, randInt, HW_ENTERPRISE_PROJECT_ID_TEST)
}
-func testAccObsBucketConfigWithTags(randInt int) string {
+func testAccObsBucketConfigMultiAZ(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- acl = "private"
+ bucket = "tf-test-bucket-%d"
+ acl = "private"
+ multi_az = true
- tags = {
- name = "tf-test-bucket-%d"
- foo = "bar"
- key1 = "value1"
- }
+ tags = {
+ key = "value"
+ multi_az = "3az"
+ }
}
-`, randInt, randInt)
+`, randInt)
}
func testAccObsBucketConfigWithVersioning(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- acl = "private"
- versioning = true
+ bucket = "tf-test-bucket-%d"
+ acl = "private"
+ versioning = true
}
`, randInt)
}
@@ -410,9 +418,9 @@ resource "huaweicloud_obs_bucket" "bucket" {
func testAccObsBucketConfigWithDisableVersioning(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- acl = "private"
- versioning = false
+ bucket = "tf-test-bucket-%d"
+ acl = "private"
+ versioning = false
}
`, randInt)
}
@@ -420,18 +428,19 @@ resource "huaweicloud_obs_bucket" "bucket" {
func testAccObsBucketConfigWithLogging(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "log_bucket" {
- bucket = "tf-test-log-bucket-%d"
- acl = "log-delivery-write"
- force_destroy = "true"
+ bucket = "tf-test-log-bucket-%d"
+ acl = "log-delivery-write"
+ force_destroy = true
}
+
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- acl = "private"
+ bucket = "tf-test-bucket-%d"
+ acl = "private"
- logging {
- target_bucket = huaweicloud_obs_bucket.log_bucket.id
- target_prefix = "log/"
- }
+ logging {
+ target_bucket = huaweicloud_obs_bucket.log_bucket.id
+ target_prefix = "log/"
+ }
}
`, randInt, randInt)
}
@@ -439,9 +448,9 @@ resource "huaweicloud_obs_bucket" "bucket" {
func testAccObsBucketConfigWithQuota(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- acl = "private"
- quota = 1000000000
+ bucket = "tf-test-bucket-%d"
+ acl = "private"
+ quota = 1000000000
}
`, randInt)
}
@@ -449,55 +458,54 @@ resource "huaweicloud_obs_bucket" "bucket" {
func testAccObsBucketConfigWithLifecycle(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- acl = "private"
- versioning = true
-
- lifecycle_rule {
- name = "rule1"
- prefix = "path1/"
- enabled = true
-
- expiration {
- days = 365
- }
- }
- lifecycle_rule {
- name = "rule2"
- prefix = "path2/"
- enabled = true
-
- expiration {
- days = 365
- }
-
- transition {
- days = 30
- storage_class = "WARM"
- }
- transition {
- days = 180
- storage_class = "COLD"
- }
- }
- lifecycle_rule {
- name = "rule3"
- prefix = "path3/"
- enabled = true
-
- noncurrent_version_expiration {
- days = 365
- }
-
- noncurrent_version_transition {
- days = 60
- storage_class = "WARM"
- }
- noncurrent_version_transition {
- days = 180
- storage_class = "COLD"
- }
- }
+ bucket = "tf-test-bucket-%d"
+ acl = "private"
+ versioning = true
+
+ lifecycle_rule {
+ name = "rule1"
+ prefix = "path1/"
+ enabled = true
+
+ expiration {
+ days = 365
+ }
+ }
+ lifecycle_rule {
+ name = "rule2"
+ prefix = "path2/"
+ enabled = true
+
+ expiration {
+ days = 365
+ }
+
+ transition {
+ days = 30
+ storage_class = "WARM"
+ }
+ transition {
+ days = 180
+ storage_class = "COLD"
+ }
+ }
+ lifecycle_rule {
+ name = "rule3"
+ prefix = "path3/"
+ enabled = true
+
+ noncurrent_version_expiration {
+ days = 365
+ }
+ noncurrent_version_transition {
+ days = 60
+ storage_class = "WARM"
+ }
+ noncurrent_version_transition {
+ days = 180
+ storage_class = "COLD"
+ }
+ }
}
`, randInt)
}
@@ -505,23 +513,23 @@ resource "huaweicloud_obs_bucket" "bucket" {
func testAccObsBucketWebsiteConfigWithRoutingRules(randInt int) string {
return fmt.Sprintf(`
resource "huaweicloud_obs_bucket" "bucket" {
- bucket = "tf-test-bucket-%d"
- acl = "public-read"
+ bucket = "tf-test-bucket-%d"
+ acl = "public-read"
- website {
- index_document = "index.html"
- error_document = "error.html"
-    routing_rules = <
+		if len(parmas) > 1 {
+ query = strings.Split(parmas[1], "&")
+ for _, value := range query {
+ if strings.HasPrefix(value, HEADER_STS_TOKEN_AMZ+"=") || strings.HasPrefix(value, HEADER_STS_TOKEN_OBS+"=") {
+ if value[len(HEADER_STS_TOKEN_AMZ)+1:] != "" {
+ securityToken = []string{value[len(HEADER_STS_TOKEN_AMZ)+1:]}
+ isSecurityToken = true
+ }
+ }
+ }
+ }
+ }
+ logStringToSign := stringToSign
+ if isSecurityToken && len(securityToken) > 0 {
+ logStringToSign = strings.Replace(logStringToSign, securityToken[0], "******", -1)
+ }
+ doLog(LEVEL_DEBUG, "The v2 auth stringToSign:\n%s", logStringToSign)
return stringToSign
}
-func v2Auth(ak, sk, method, canonicalizedUrl string, headers map[string][]string, isObs bool) map[string]string {
- stringToSign := getV2StringToSign(method, canonicalizedUrl, headers, isObs)
+func v2Auth(ak, sk, method, canonicalizedURL string, headers map[string][]string, isObs bool) map[string]string {
+ stringToSign := getV2StringToSign(method, canonicalizedURL, headers, isObs)
return map[string]string{"Signature": Base64Encode(HmacSha1([]byte(sk), []byte(stringToSign)))}
}
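Both signing paths in this hunk mask the security token before the debug log is emitted, so credentials never reach the log output in plain text. The masking step in isolation looks like this (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// maskSecurityToken replaces every occurrence of the token in a log line
// with "******", mirroring the strings.Replace call used before
// doLog(LEVEL_DEBUG, ...) in the v2 and v4 signers.
func maskSecurityToken(logLine, token string) string {
	if token == "" {
		return logLine
	}
	return strings.Replace(logLine, token, "******", -1)
}

func main() {
	line := "GET /bucket?x-amz-security-token=SECRET123"
	fmt.Println(maskSecurityToken(line, "SECRET123"))
	// GET /bucket?x-amz-security-token=******
}
```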
@@ -295,13 +332,13 @@ func getCredential(ak, region, shortDate string) (string, string) {
return fmt.Sprintf("%s/%s", ak, scope), scope
}
-func getV4StringToSign(method, canonicalizedUrl, queryUrl, scope, longDate, payload string, signedHeaders []string, headers map[string][]string) string {
+func getV4StringToSign(method, canonicalizedURL, queryURL, scope, longDate, payload string, signedHeaders []string, headers map[string][]string) string {
canonicalRequest := make([]string, 0, 10+len(signedHeaders)*4)
canonicalRequest = append(canonicalRequest, method)
canonicalRequest = append(canonicalRequest, "\n")
- canonicalRequest = append(canonicalRequest, canonicalizedUrl)
+ canonicalRequest = append(canonicalRequest, canonicalizedURL)
canonicalRequest = append(canonicalRequest, "\n")
- canonicalRequest = append(canonicalRequest, queryUrl)
+ canonicalRequest = append(canonicalRequest, queryURL)
canonicalRequest = append(canonicalRequest, "\n")
for _, signedHeader := range signedHeaders {
@@ -320,7 +357,28 @@ func getV4StringToSign(method, canonicalizedUrl, queryUrl, scope, longDate, payl
_canonicalRequest := strings.Join(canonicalRequest, "")
- doLog(LEVEL_DEBUG, "The v4 auth canonicalRequest:\n%s", _canonicalRequest)
+ var isSecurityToken bool
+ var securityToken []string
+ if securityToken, isSecurityToken = headers[HEADER_STS_TOKEN_OBS]; !isSecurityToken {
+ securityToken, isSecurityToken = headers[HEADER_STS_TOKEN_AMZ]
+ }
+ var query []string
+ if !isSecurityToken {
+ query = strings.Split(queryURL, "&")
+ for _, value := range query {
+ if strings.HasPrefix(value, HEADER_STS_TOKEN_AMZ+"=") || strings.HasPrefix(value, HEADER_STS_TOKEN_OBS+"=") {
+ if value[len(HEADER_STS_TOKEN_AMZ)+1:] != "" {
+ securityToken = []string{value[len(HEADER_STS_TOKEN_AMZ)+1:]}
+ isSecurityToken = true
+ }
+ }
+ }
+ }
+ logCanonicalRequest := _canonicalRequest
+ if isSecurityToken && len(securityToken) > 0 {
+ logCanonicalRequest = strings.Replace(logCanonicalRequest, securityToken[0], "******", -1)
+ }
+ doLog(LEVEL_DEBUG, "The v4 auth canonicalRequest:\n%s", logCanonicalRequest)
stringToSign := make([]string, 0, 7)
stringToSign = append(stringToSign, V4_HASH_PREFIX)
@@ -362,11 +420,12 @@ func getSignature(stringToSign, sk, region, shortDate string) string {
return Hex(HmacSha256(key, []byte(stringToSign)))
}
-func V4Auth(ak, sk, region, method, canonicalizedUrl, queryUrl string, headers map[string][]string) map[string]string {
- return v4Auth(ak, sk, region, method, canonicalizedUrl, queryUrl, headers)
+// V4Auth is a wrapper for v4Auth
+func V4Auth(ak, sk, region, method, canonicalizedURL, queryURL string, headers map[string][]string) map[string]string {
+ return v4Auth(ak, sk, region, method, canonicalizedURL, queryURL, headers)
}
-func v4Auth(ak, sk, region, method, canonicalizedUrl, queryUrl string, headers map[string][]string) map[string]string {
+func v4Auth(ak, sk, region, method, canonicalizedURL, queryURL string, headers map[string][]string) map[string]string {
var t time.Time
if val, ok := headers[HEADER_DATE_AMZ]; ok {
var err error
@@ -402,11 +461,11 @@ func v4Auth(ak, sk, region, method, canonicalizedUrl, queryUrl string, headers m
credential, scope := getCredential(ak, region, shortDate)
- payload := EMPTY_CONTENT_SHA256
+ payload := UNSIGNED_PAYLOAD
if val, ok := headers[HEADER_CONTENT_SHA256_AMZ]; ok {
payload = val[0]
}
- stringToSign := getV4StringToSign(method, canonicalizedUrl, queryUrl, scope, longDate, payload, signedHeaders, _headers)
+ stringToSign := getV4StringToSign(method, canonicalizedURL, queryURL, scope, longDate, payload, signedHeaders, _headers)
signature := getSignature(stringToSign, sk, region, shortDate)
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/client.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/client.go
index cf4aaf2832e..5bf04452740 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/client.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/client.go
@@ -22,13 +22,18 @@ import (
"strings"
)
+// ObsClient defines OBS client.
type ObsClient struct {
conf *config
httpClient *http.Client
}
+// New creates a new ObsClient instance.
func New(ak, sk, endpoint string, configurers ...configurer) (*ObsClient, error) {
- conf := &config{securityProvider: &securityProvider{ak: ak, sk: sk}, endpoint: endpoint}
+ conf := &config{endpoint: endpoint}
+ conf.securityProviders = make([]securityProvider, 0, 3)
+ conf.securityProviders = append(conf.securityProviders, NewBasicSecurityProvider(ak, sk, ""))
+
conf.maxRetryCount = -1
conf.maxRedirectCount = -1
for _, configurer := range configurers {
@@ -45,7 +50,7 @@ func New(ak, sk, endpoint string, configurers ...configurer) (*ObsClient, error)
if isWarnLogEnabled() {
info := make([]string, 3)
- info[0] = fmt.Sprintf("[OBS SDK Version=%s", obs_sdk_version)
+ info[0] = fmt.Sprintf("[OBS SDK Version=%s", obsSdkVersion)
info[1] = fmt.Sprintf("Endpoint=%s", conf.endpoint)
accessMode := "Virtual Hosting"
if conf.pathStyle {
@@ -59,67 +64,100 @@ func New(ak, sk, endpoint string, configurers ...configurer) (*ObsClient, error)
return obsClient, nil
}
+// Refresh refreshes ak, sk and securityToken for obsClient.
func (obsClient ObsClient) Refresh(ak, sk, securityToken string) {
- sp := &securityProvider{ak: strings.TrimSpace(ak), sk: strings.TrimSpace(sk), securityToken: strings.TrimSpace(securityToken)}
- obsClient.conf.securityProvider = sp
+ for _, sp := range obsClient.conf.securityProviders {
+ if bsp, ok := sp.(*BasicSecurityProvider); ok {
+ bsp.refresh(strings.TrimSpace(ak), strings.TrimSpace(sk), strings.TrimSpace(securityToken))
+ break
+ }
+ }
+}
+
+func (obsClient ObsClient) getSecurity() securityHolder {
+ if obsClient.conf.securityProviders != nil {
+ for _, sp := range obsClient.conf.securityProviders {
+ if sp == nil {
+ continue
+ }
+ sh := sp.getSecurity()
+ if sh.ak != "" && sh.sk != "" {
+ return sh
+ }
+ }
+ }
+ return emptySecurityHolder
}
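getSecurity above walks a chain of credential providers and returns the first one holding both an access key and a secret key. The same pattern, reduced to a self-contained sketch (the type names here are stand-ins for the SDK's unexported interfaces):

```go
package main

import "fmt"

// credentials is a stand-in for the SDK's securityHolder.
type credentials struct {
	ak, sk, securityToken string
}

// credentialProvider is a stand-in for the SDK's securityProvider interface.
type credentialProvider interface {
	getSecurity() credentials
}

type staticProvider struct{ c credentials }

func (p staticProvider) getSecurity() credentials { return p.c }

// firstValid returns the first provider result that has both an access key
// and a secret key, or empty credentials if none qualifies.
func firstValid(chain []credentialProvider) credentials {
	for _, p := range chain {
		if p == nil {
			continue
		}
		if c := p.getSecurity(); c.ak != "" && c.sk != "" {
			return c
		}
	}
	return credentials{}
}

func main() {
	chain := []credentialProvider{
		staticProvider{}, // empty: skipped
		staticProvider{credentials{ak: "AK", sk: "SK"}},
	}
	fmt.Println(firstValid(chain).ak) // AK
}
```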
+// Close closes ObsClient.
func (obsClient ObsClient) Close() {
obsClient.httpClient = nil
obsClient.conf.transport.CloseIdleConnections()
obsClient.conf = nil
- SyncLog()
}
-func (obsClient ObsClient) ListBuckets(input *ListBucketsInput) (output *ListBucketsOutput, err error) {
+// ListBuckets lists buckets.
+//
+// You can use this API to obtain the bucket list. In the list, bucket names are displayed in lexicographical order.
+func (obsClient ObsClient) ListBuckets(input *ListBucketsInput, extensions ...extensionOptions) (output *ListBucketsOutput, err error) {
if input == nil {
input = &ListBucketsInput{}
}
output = &ListBucketsOutput{}
- err = obsClient.doActionWithoutBucket("ListBuckets", HTTP_GET, input, output)
+ err = obsClient.doActionWithoutBucket("ListBuckets", HTTP_GET, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) CreateBucket(input *CreateBucketInput) (output *BaseModel, err error) {
+// CreateBucket creates a bucket.
+//
+// You can use this API to create a bucket and name it as you specify. The created bucket name must be unique in OBS.
+func (obsClient ObsClient) CreateBucket(input *CreateBucketInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("CreateBucketInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("CreateBucket", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("CreateBucket", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) DeleteBucket(bucketName string) (output *BaseModel, err error) {
+// DeleteBucket deletes a bucket.
+//
+// You can use this API to delete a bucket. The bucket to be deleted must be empty
+// (containing no objects, noncurrent object versions, or part fragments).
+func (obsClient ObsClient) DeleteBucket(bucketName string, extensions ...extensionOptions) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doActionWithBucket("DeleteBucket", HTTP_DELETE, bucketName, defaultSerializable, output)
+ err = obsClient.doActionWithBucket("DeleteBucket", HTTP_DELETE, bucketName, defaultSerializable, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) SetBucketStoragePolicy(input *SetBucketStoragePolicyInput) (output *BaseModel, err error) {
+// SetBucketStoragePolicy sets bucket storage class.
+//
+// You can use this API to set the storage class for a bucket.
+func (obsClient ObsClient) SetBucketStoragePolicy(input *SetBucketStoragePolicyInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketStoragePolicyInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketStoragePolicy", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketStoragePolicy", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) getBucketStoragePolicyS3(bucketName string) (output *GetBucketStoragePolicyOutput, err error) {
+func (obsClient ObsClient) getBucketStoragePolicyS3(bucketName string, extensions []extensionOptions) (output *GetBucketStoragePolicyOutput, err error) {
output = &GetBucketStoragePolicyOutput{}
var outputS3 *getBucketStoragePolicyOutputS3
outputS3 = &getBucketStoragePolicyOutputS3{}
- err = obsClient.doActionWithBucket("GetBucketStoragePolicy", HTTP_GET, bucketName, newSubResourceSerial(SubResourceStoragePolicy), outputS3)
+ err = obsClient.doActionWithBucket("GetBucketStoragePolicy", HTTP_GET, bucketName, newSubResourceSerial(SubResourceStoragePolicy), outputS3, extensions)
if err != nil {
output = nil
return
@@ -129,11 +167,11 @@ func (obsClient ObsClient) getBucketStoragePolicyS3(bucketName string) (output *
return
}
-func (obsClient ObsClient) getBucketStoragePolicyObs(bucketName string) (output *GetBucketStoragePolicyOutput, err error) {
+func (obsClient ObsClient) getBucketStoragePolicyObs(bucketName string, extensions []extensionOptions) (output *GetBucketStoragePolicyOutput, err error) {
output = &GetBucketStoragePolicyOutput{}
var outputObs *getBucketStoragePolicyOutputObs
outputObs = &getBucketStoragePolicyOutputObs{}
- err = obsClient.doActionWithBucket("GetBucketStoragePolicy", HTTP_GET, bucketName, newSubResourceSerial(SubResourceStorageClass), outputObs)
+ err = obsClient.doActionWithBucket("GetBucketStoragePolicy", HTTP_GET, bucketName, newSubResourceSerial(SubResourceStorageClass), outputObs, extensions)
if err != nil {
output = nil
return
@@ -142,90 +180,151 @@ func (obsClient ObsClient) getBucketStoragePolicyObs(bucketName string) (output
output.StorageClass = outputObs.StorageClass
return
}
-func (obsClient ObsClient) GetBucketStoragePolicy(bucketName string) (output *GetBucketStoragePolicyOutput, err error) {
+
+// GetBucketStoragePolicy gets the bucket storage class.
+//
+// You can use this API to obtain the storage class of a bucket.
+func (obsClient ObsClient) GetBucketStoragePolicy(bucketName string, extensions ...extensionOptions) (output *GetBucketStoragePolicyOutput, err error) {
if obsClient.conf.signature == SignatureObs {
- return obsClient.getBucketStoragePolicyObs(bucketName)
+ return obsClient.getBucketStoragePolicyObs(bucketName, extensions)
}
- return obsClient.getBucketStoragePolicyS3(bucketName)
+ return obsClient.getBucketStoragePolicyS3(bucketName, extensions)
}
-func (obsClient ObsClient) ListObjects(input *ListObjectsInput) (output *ListObjectsOutput, err error) {
+// ListObjects lists objects in a bucket.
+//
+// You can use this API to list objects in a bucket. By default, a maximum of 1000 objects are listed.
+func (obsClient ObsClient) ListObjects(input *ListObjectsInput, extensions ...extensionOptions) (output *ListObjectsOutput, err error) {
if input == nil {
return nil, errors.New("ListObjectsInput is nil")
}
output = &ListObjectsOutput{}
- err = obsClient.doActionWithBucket("ListObjects", HTTP_GET, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("ListObjects", HTTP_GET, input.Bucket, input, output, extensions)
if err != nil {
output = nil
} else {
if location, ok := output.ResponseHeaders[HEADER_BUCKET_REGION]; ok {
output.Location = location[0]
}
+ if output.EncodingType == "url" {
+ err = decodeListObjectsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get ListObjectsOutput with error: %v.", err)
+ output = nil
+ }
+ }
}
return
}
-func (obsClient ObsClient) ListVersions(input *ListVersionsInput) (output *ListVersionsOutput, err error) {
+// ListVersions lists the object versions in a bucket.
+//
+// You can use this API to list the object versions in a bucket. By default, a maximum of 1000 versions are listed.
+func (obsClient ObsClient) ListVersions(input *ListVersionsInput, extensions ...extensionOptions) (output *ListVersionsOutput, err error) {
if input == nil {
return nil, errors.New("ListVersionsInput is nil")
}
output = &ListVersionsOutput{}
- err = obsClient.doActionWithBucket("ListVersions", HTTP_GET, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("ListVersions", HTTP_GET, input.Bucket, input, output, extensions)
if err != nil {
output = nil
} else {
if location, ok := output.ResponseHeaders[HEADER_BUCKET_REGION]; ok {
output.Location = location[0]
}
+ if output.EncodingType == "url" {
+ err = decodeListVersionsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get ListVersionsOutput with error: %v.", err)
+ output = nil
+ }
+ }
}
return
}
-func (obsClient ObsClient) ListMultipartUploads(input *ListMultipartUploadsInput) (output *ListMultipartUploadsOutput, err error) {
+// ListMultipartUploads lists the multipart uploads.
+//
+// You can use this API to list the multipart uploads that have been initialized but not yet combined or aborted in a specified bucket.
+func (obsClient ObsClient) ListMultipartUploads(input *ListMultipartUploadsInput, extensions ...extensionOptions) (output *ListMultipartUploadsOutput, err error) {
if input == nil {
return nil, errors.New("ListMultipartUploadsInput is nil")
}
output = &ListMultipartUploadsOutput{}
- err = obsClient.doActionWithBucket("ListMultipartUploads", HTTP_GET, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("ListMultipartUploads", HTTP_GET, input.Bucket, input, output, extensions)
if err != nil {
output = nil
+ } else if output.EncodingType == "url" {
+ err = decodeListMultipartUploadsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get ListMultipartUploadsOutput with error: %v.", err)
+ output = nil
+ }
}
return
}
-func (obsClient ObsClient) SetBucketQuota(input *SetBucketQuotaInput) (output *BaseModel, err error) {
+// SetBucketQuota sets the bucket quota.
+//
+// You can use this API to set the bucket quota. A bucket quota must be expressed in bytes and the maximum value is 2^63-1.
+func (obsClient ObsClient) SetBucketQuota(input *SetBucketQuotaInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketQuotaInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketQuota", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketQuota", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketQuota(bucketName string) (output *GetBucketQuotaOutput, err error) {
+// GetBucketQuota gets the bucket quota.
+//
+// You can use this API to obtain the bucket quota. Value 0 indicates that no upper limit is set for the bucket quota.
+func (obsClient ObsClient) GetBucketQuota(bucketName string, extensions ...extensionOptions) (output *GetBucketQuotaOutput, err error) {
output = &GetBucketQuotaOutput{}
- err = obsClient.doActionWithBucket("GetBucketQuota", HTTP_GET, bucketName, newSubResourceSerial(SubResourceQuota), output)
+ err = obsClient.doActionWithBucket("GetBucketQuota", HTTP_GET, bucketName, newSubResourceSerial(SubResourceQuota), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) HeadBucket(bucketName string) (output *BaseModel, err error) {
+// HeadBucket checks whether a bucket exists.
+//
+// You can use this API to check whether a bucket exists.
+func (obsClient ObsClient) HeadBucket(bucketName string, extensions ...extensionOptions) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doActionWithBucket("HeadBucket", HTTP_HEAD, bucketName, defaultSerializable, output)
+ err = obsClient.doActionWithBucket("HeadBucket", HTTP_HEAD, bucketName, defaultSerializable, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketMetadata(input *GetBucketMetadataInput) (output *GetBucketMetadataOutput, err error) {
+// HeadObject checks whether an object exists.
+//
+// You can use this API to check whether an object exists.
+func (obsClient ObsClient) HeadObject(input *HeadObjectInput, extensions ...extensionOptions) (output *BaseModel, err error) {
+ if input == nil {
+ return nil, errors.New("HeadObjectInput is nil")
+ }
+ output = &BaseModel{}
+ err = obsClient.doActionWithBucketAndKey("HeadObject", HTTP_HEAD, input.Bucket, input.Key, input, output, extensions)
+ if err != nil {
+ output = nil
+ }
+ return
+}
+
+// GetBucketMetadata gets the metadata of a bucket.
+//
+// You can use this API to send a HEAD request to a bucket to obtain the bucket
+// metadata such as the storage class and CORS rules (if set).
+func (obsClient ObsClient) GetBucketMetadata(input *GetBucketMetadataInput, extensions ...extensionOptions) (output *GetBucketMetadataOutput, err error) {
output = &GetBucketMetadataOutput{}
- err = obsClient.doActionWithBucket("GetBucketMetadata", HTTP_HEAD, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("GetBucketMetadata", HTTP_HEAD, input.Bucket, input, output, extensions)
if err != nil {
output = nil
} else {
@@ -234,9 +333,10 @@ func (obsClient ObsClient) GetBucketMetadata(input *GetBucketMetadataInput) (out
return
}
-func (obsClient ObsClient) SetObjectMetadata(input *SetObjectMetadataInput) (output *SetObjectMetadataOutput, err error) {
+// SetObjectMetadata sets object metadata.
+func (obsClient ObsClient) SetObjectMetadata(input *SetObjectMetadataInput, extensions ...extensionOptions) (output *SetObjectMetadataOutput, err error) {
output = &SetObjectMetadataOutput{}
- err = obsClient.doActionWithBucketAndKey("SetObjectMetadata", HTTP_PUT, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("SetObjectMetadata", HTTP_PUT, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
} else {
@@ -245,20 +345,24 @@ func (obsClient ObsClient) SetObjectMetadata(input *SetObjectMetadataInput) (out
return
}
-func (obsClient ObsClient) GetBucketStorageInfo(bucketName string) (output *GetBucketStorageInfoOutput, err error) {
+// GetBucketStorageInfo gets storage information about a bucket.
+//
+// You can use this API to obtain storage information about a bucket, including the
+// bucket size and number of objects in the bucket.
+func (obsClient ObsClient) GetBucketStorageInfo(bucketName string, extensions ...extensionOptions) (output *GetBucketStorageInfoOutput, err error) {
output = &GetBucketStorageInfoOutput{}
- err = obsClient.doActionWithBucket("GetBucketStorageInfo", HTTP_GET, bucketName, newSubResourceSerial(SubResourceStorageInfo), output)
+ err = obsClient.doActionWithBucket("GetBucketStorageInfo", HTTP_GET, bucketName, newSubResourceSerial(SubResourceStorageInfo), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) getBucketLocationS3(bucketName string) (output *GetBucketLocationOutput, err error) {
+func (obsClient ObsClient) getBucketLocationS3(bucketName string, extensions []extensionOptions) (output *GetBucketLocationOutput, err error) {
output = &GetBucketLocationOutput{}
var outputS3 *getBucketLocationOutputS3
outputS3 = &getBucketLocationOutputS3{}
- err = obsClient.doActionWithBucket("GetBucketLocation", HTTP_GET, bucketName, newSubResourceSerial(SubResourceLocation), outputS3)
+ err = obsClient.doActionWithBucket("GetBucketLocation", HTTP_GET, bucketName, newSubResourceSerial(SubResourceLocation), outputS3, extensions)
if err != nil {
output = nil
} else {
@@ -267,11 +371,11 @@ func (obsClient ObsClient) getBucketLocationS3(bucketName string) (output *GetBu
}
return
}
-func (obsClient ObsClient) getBucketLocationObs(bucketName string) (output *GetBucketLocationOutput, err error) {
+func (obsClient ObsClient) getBucketLocationObs(bucketName string, extensions []extensionOptions) (output *GetBucketLocationOutput, err error) {
output = &GetBucketLocationOutput{}
var outputObs *getBucketLocationOutputObs
outputObs = &getBucketLocationOutputObs{}
- err = obsClient.doActionWithBucket("GetBucketLocation", HTTP_GET, bucketName, newSubResourceSerial(SubResourceLocation), outputObs)
+ err = obsClient.doActionWithBucket("GetBucketLocation", HTTP_GET, bucketName, newSubResourceSerial(SubResourceLocation), outputObs, extensions)
if err != nil {
output = nil
} else {
@@ -280,29 +384,36 @@ func (obsClient ObsClient) getBucketLocationObs(bucketName string) (output *GetB
}
return
}
-func (obsClient ObsClient) GetBucketLocation(bucketName string) (output *GetBucketLocationOutput, err error) {
+
+// GetBucketLocation gets the location of a bucket.
+//
+// You can use this API to obtain the bucket location.
+func (obsClient ObsClient) GetBucketLocation(bucketName string, extensions ...extensionOptions) (output *GetBucketLocationOutput, err error) {
if obsClient.conf.signature == SignatureObs {
- return obsClient.getBucketLocationObs(bucketName)
+ return obsClient.getBucketLocationObs(bucketName, extensions)
}
- return obsClient.getBucketLocationS3(bucketName)
+ return obsClient.getBucketLocationS3(bucketName, extensions)
}
-func (obsClient ObsClient) SetBucketAcl(input *SetBucketAclInput) (output *BaseModel, err error) {
+// SetBucketAcl sets the bucket ACL.
+//
+// You can use this API to set the ACL for a bucket.
+func (obsClient ObsClient) SetBucketAcl(input *SetBucketAclInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketAclInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketAcl", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketAcl", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) getBucketAclObs(bucketName string) (output *GetBucketAclOutput, err error) {
+func (obsClient ObsClient) getBucketACLObs(bucketName string, extensions []extensionOptions) (output *GetBucketAclOutput, err error) {
output = &GetBucketAclOutput{}
- var outputObs *getBucketAclOutputObs
- outputObs = &getBucketAclOutputObs{}
- err = obsClient.doActionWithBucket("GetBucketAcl", HTTP_GET, bucketName, newSubResourceSerial(SubResourceAcl), outputObs)
+ var outputObs *getBucketACLOutputObs
+ outputObs = &getBucketACLOutputObs{}
+ err = obsClient.doActionWithBucket("GetBucketAcl", HTTP_GET, bucketName, newSubResourceSerial(SubResourceAcl), outputObs, extensions)
if err != nil {
output = nil
} else {
@@ -323,237 +434,307 @@ func (obsClient ObsClient) getBucketAclObs(bucketName string) (output *GetBucket
}
return
}
-func (obsClient ObsClient) GetBucketAcl(bucketName string) (output *GetBucketAclOutput, err error) {
+
+// GetBucketAcl gets the bucket ACL.
+//
+// You can use this API to obtain a bucket ACL.
+func (obsClient ObsClient) GetBucketAcl(bucketName string, extensions ...extensionOptions) (output *GetBucketAclOutput, err error) {
output = &GetBucketAclOutput{}
if obsClient.conf.signature == SignatureObs {
- return obsClient.getBucketAclObs(bucketName)
+ return obsClient.getBucketACLObs(bucketName, extensions)
}
- err = obsClient.doActionWithBucket("GetBucketAcl", HTTP_GET, bucketName, newSubResourceSerial(SubResourceAcl), output)
+ err = obsClient.doActionWithBucket("GetBucketAcl", HTTP_GET, bucketName, newSubResourceSerial(SubResourceAcl), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) SetBucketPolicy(input *SetBucketPolicyInput) (output *BaseModel, err error) {
+// SetBucketPolicy sets the bucket policy.
+//
+// You can use this API to set a bucket policy. If the bucket already has a policy, the
+// policy will be overwritten by the one specified in this request.
+func (obsClient ObsClient) SetBucketPolicy(input *SetBucketPolicyInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketPolicy is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketPolicy", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketPolicy", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketPolicy(bucketName string) (output *GetBucketPolicyOutput, err error) {
+// GetBucketPolicy gets the bucket policy.
+//
+// You can use this API to obtain the policy of a bucket.
+func (obsClient ObsClient) GetBucketPolicy(bucketName string, extensions ...extensionOptions) (output *GetBucketPolicyOutput, err error) {
output = &GetBucketPolicyOutput{}
- err = obsClient.doActionWithBucketV2("GetBucketPolicy", HTTP_GET, bucketName, newSubResourceSerial(SubResourcePolicy), output)
+ err = obsClient.doActionWithBucketV2("GetBucketPolicy", HTTP_GET, bucketName, newSubResourceSerial(SubResourcePolicy), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) DeleteBucketPolicy(bucketName string) (output *BaseModel, err error) {
+// DeleteBucketPolicy deletes the bucket policy.
+//
+// You can use this API to delete the policy of a bucket.
+func (obsClient ObsClient) DeleteBucketPolicy(bucketName string, extensions ...extensionOptions) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doActionWithBucket("DeleteBucketPolicy", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourcePolicy), output)
+ err = obsClient.doActionWithBucket("DeleteBucketPolicy", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourcePolicy), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) SetBucketCors(input *SetBucketCorsInput) (output *BaseModel, err error) {
+// SetBucketCors sets CORS rules for a bucket.
+//
+// You can use this API to set CORS rules for a bucket to allow client browsers to send cross-origin requests.
+func (obsClient ObsClient) SetBucketCors(input *SetBucketCorsInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketCorsInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketCors", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketCors", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketCors(bucketName string) (output *GetBucketCorsOutput, err error) {
+// GetBucketCors gets CORS rules of a bucket.
+//
+// You can use this API to obtain the CORS rules of a specified bucket.
+func (obsClient ObsClient) GetBucketCors(bucketName string, extensions ...extensionOptions) (output *GetBucketCorsOutput, err error) {
output = &GetBucketCorsOutput{}
- err = obsClient.doActionWithBucket("GetBucketCors", HTTP_GET, bucketName, newSubResourceSerial(SubResourceCors), output)
+ err = obsClient.doActionWithBucket("GetBucketCors", HTTP_GET, bucketName, newSubResourceSerial(SubResourceCors), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) DeleteBucketCors(bucketName string) (output *BaseModel, err error) {
+// DeleteBucketCors deletes CORS rules of a bucket.
+//
+// You can use this API to delete the CORS rules of a specified bucket.
+func (obsClient ObsClient) DeleteBucketCors(bucketName string, extensions ...extensionOptions) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doActionWithBucket("DeleteBucketCors", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourceCors), output)
+ err = obsClient.doActionWithBucket("DeleteBucketCors", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourceCors), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) SetBucketVersioning(input *SetBucketVersioningInput) (output *BaseModel, err error) {
+// SetBucketVersioning sets the versioning status for a bucket.
+//
+// You can use this API to set the versioning status for a bucket.
+func (obsClient ObsClient) SetBucketVersioning(input *SetBucketVersioningInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketVersioningInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketVersioning", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketVersioning", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketVersioning(bucketName string) (output *GetBucketVersioningOutput, err error) {
+// GetBucketVersioning gets the versioning status of a bucket.
+//
+// You can use this API to obtain the versioning status of a bucket.
+func (obsClient ObsClient) GetBucketVersioning(bucketName string, extensions ...extensionOptions) (output *GetBucketVersioningOutput, err error) {
output = &GetBucketVersioningOutput{}
- err = obsClient.doActionWithBucket("GetBucketVersioning", HTTP_GET, bucketName, newSubResourceSerial(SubResourceVersioning), output)
+ err = obsClient.doActionWithBucket("GetBucketVersioning", HTTP_GET, bucketName, newSubResourceSerial(SubResourceVersioning), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) SetBucketWebsiteConfiguration(input *SetBucketWebsiteConfigurationInput) (output *BaseModel, err error) {
+// SetBucketWebsiteConfiguration sets website hosting for a bucket.
+//
+// You can use this API to set website hosting for a bucket.
+func (obsClient ObsClient) SetBucketWebsiteConfiguration(input *SetBucketWebsiteConfigurationInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketWebsiteConfigurationInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketWebsiteConfiguration", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketWebsiteConfiguration", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketWebsiteConfiguration(bucketName string) (output *GetBucketWebsiteConfigurationOutput, err error) {
+// GetBucketWebsiteConfiguration gets the website hosting settings of a bucket.
+//
+// You can use this API to obtain the website hosting settings of a bucket.
+func (obsClient ObsClient) GetBucketWebsiteConfiguration(bucketName string, extensions ...extensionOptions) (output *GetBucketWebsiteConfigurationOutput, err error) {
output = &GetBucketWebsiteConfigurationOutput{}
- err = obsClient.doActionWithBucket("GetBucketWebsiteConfiguration", HTTP_GET, bucketName, newSubResourceSerial(SubResourceWebsite), output)
+ err = obsClient.doActionWithBucket("GetBucketWebsiteConfiguration", HTTP_GET, bucketName, newSubResourceSerial(SubResourceWebsite), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) DeleteBucketWebsiteConfiguration(bucketName string) (output *BaseModel, err error) {
+// DeleteBucketWebsiteConfiguration deletes the website hosting settings of a bucket.
+//
+// You can use this API to delete the website hosting settings of a bucket.
+func (obsClient ObsClient) DeleteBucketWebsiteConfiguration(bucketName string, extensions ...extensionOptions) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doActionWithBucket("DeleteBucketWebsiteConfiguration", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourceWebsite), output)
+ err = obsClient.doActionWithBucket("DeleteBucketWebsiteConfiguration", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourceWebsite), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) SetBucketLoggingConfiguration(input *SetBucketLoggingConfigurationInput) (output *BaseModel, err error) {
+// SetBucketLoggingConfiguration sets the logging configuration for a bucket.
+//
+// You can use this API to configure access logging for a bucket.
+func (obsClient ObsClient) SetBucketLoggingConfiguration(input *SetBucketLoggingConfigurationInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketLoggingConfigurationInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketLoggingConfiguration", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketLoggingConfiguration", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketLoggingConfiguration(bucketName string) (output *GetBucketLoggingConfigurationOutput, err error) {
+// GetBucketLoggingConfiguration gets the logging settings of a bucket.
+//
+// You can use this API to obtain the access logging settings of a bucket.
+func (obsClient ObsClient) GetBucketLoggingConfiguration(bucketName string, extensions ...extensionOptions) (output *GetBucketLoggingConfigurationOutput, err error) {
output = &GetBucketLoggingConfigurationOutput{}
- err = obsClient.doActionWithBucket("GetBucketLoggingConfiguration", HTTP_GET, bucketName, newSubResourceSerial(SubResourceLogging), output)
+ err = obsClient.doActionWithBucket("GetBucketLoggingConfiguration", HTTP_GET, bucketName, newSubResourceSerial(SubResourceLogging), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) SetBucketLifecycleConfiguration(input *SetBucketLifecycleConfigurationInput) (output *BaseModel, err error) {
+// SetBucketLifecycleConfiguration sets lifecycle rules for a bucket.
+//
+// You can use this API to set lifecycle rules for a bucket, to periodically transition
+// the storage classes of objects and to delete objects in the bucket.
+func (obsClient ObsClient) SetBucketLifecycleConfiguration(input *SetBucketLifecycleConfigurationInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketLifecycleConfigurationInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketLifecycleConfiguration", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketLifecycleConfiguration", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketLifecycleConfiguration(bucketName string) (output *GetBucketLifecycleConfigurationOutput, err error) {
+// GetBucketLifecycleConfiguration gets lifecycle rules of a bucket.
+//
+// You can use this API to obtain the lifecycle rules of a bucket.
+func (obsClient ObsClient) GetBucketLifecycleConfiguration(bucketName string, extensions ...extensionOptions) (output *GetBucketLifecycleConfigurationOutput, err error) {
output = &GetBucketLifecycleConfigurationOutput{}
- err = obsClient.doActionWithBucket("GetBucketLifecycleConfiguration", HTTP_GET, bucketName, newSubResourceSerial(SubResourceLifecycle), output)
+ err = obsClient.doActionWithBucket("GetBucketLifecycleConfiguration", HTTP_GET, bucketName, newSubResourceSerial(SubResourceLifecycle), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) DeleteBucketLifecycleConfiguration(bucketName string) (output *BaseModel, err error) {
+// DeleteBucketLifecycleConfiguration deletes lifecycle rules of a bucket.
+//
+// You can use this API to delete all lifecycle rules of a bucket.
+func (obsClient ObsClient) DeleteBucketLifecycleConfiguration(bucketName string, extensions ...extensionOptions) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doActionWithBucket("DeleteBucketLifecycleConfiguration", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourceLifecycle), output)
+ err = obsClient.doActionWithBucket("DeleteBucketLifecycleConfiguration", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourceLifecycle), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) SetBucketTagging(input *SetBucketTaggingInput) (output *BaseModel, err error) {
+// SetBucketTagging sets bucket tags.
+//
+// You can use this API to set bucket tags.
+func (obsClient ObsClient) SetBucketTagging(input *SetBucketTaggingInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketTaggingInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketTagging", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketTagging", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketTagging(bucketName string) (output *GetBucketTaggingOutput, err error) {
+// GetBucketTagging gets bucket tags.
+//
+// You can use this API to obtain the tags of a specified bucket.
+func (obsClient ObsClient) GetBucketTagging(bucketName string, extensions ...extensionOptions) (output *GetBucketTaggingOutput, err error) {
output = &GetBucketTaggingOutput{}
- err = obsClient.doActionWithBucket("GetBucketTagging", HTTP_GET, bucketName, newSubResourceSerial(SubResourceTagging), output)
+ err = obsClient.doActionWithBucket("GetBucketTagging", HTTP_GET, bucketName, newSubResourceSerial(SubResourceTagging), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) DeleteBucketTagging(bucketName string) (output *BaseModel, err error) {
+// DeleteBucketTagging deletes bucket tags.
+//
+// You can use this API to delete the tags of a specified bucket.
+func (obsClient ObsClient) DeleteBucketTagging(bucketName string, extensions ...extensionOptions) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doActionWithBucket("DeleteBucketTagging", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourceTagging), output)
+ err = obsClient.doActionWithBucket("DeleteBucketTagging", HTTP_DELETE, bucketName, newSubResourceSerial(SubResourceTagging), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) SetBucketNotification(input *SetBucketNotificationInput) (output *BaseModel, err error) {
+// SetBucketNotification sets event notification for a bucket.
+//
+// You can use this API to configure event notification for a bucket. You will be notified of all
+// specified operations performed on the bucket.
+func (obsClient ObsClient) SetBucketNotification(input *SetBucketNotificationInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetBucketNotificationInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucket("SetBucketNotification", HTTP_PUT, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("SetBucketNotification", HTTP_PUT, input.Bucket, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetBucketNotification(bucketName string) (output *GetBucketNotificationOutput, err error) {
+// GetBucketNotification gets event notification settings of a bucket.
+//
+// You can use this API to obtain the event notification configuration of a bucket.
+func (obsClient ObsClient) GetBucketNotification(bucketName string, extensions ...extensionOptions) (output *GetBucketNotificationOutput, err error) {
if obsClient.conf.signature != SignatureObs {
- return obsClient.getBucketNotificationS3(bucketName)
+ return obsClient.getBucketNotificationS3(bucketName, extensions)
}
output = &GetBucketNotificationOutput{}
- err = obsClient.doActionWithBucket("GetBucketNotification", HTTP_GET, bucketName, newSubResourceSerial(SubResourceNotification), output)
+ err = obsClient.doActionWithBucket("GetBucketNotification", HTTP_GET, bucketName, newSubResourceSerial(SubResourceNotification), output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) getBucketNotificationS3(bucketName string) (output *GetBucketNotificationOutput, err error) {
+func (obsClient ObsClient) getBucketNotificationS3(bucketName string, extensions []extensionOptions) (output *GetBucketNotificationOutput, err error) {
outputS3 := &getBucketNotificationOutputS3{}
- err = obsClient.doActionWithBucket("GetBucketNotification", HTTP_GET, bucketName, newSubResourceSerial(SubResourceNotification), outputS3)
+ err = obsClient.doActionWithBucket("GetBucketNotification", HTTP_GET, bucketName, newSubResourceSerial(SubResourceNotification), outputS3, extensions)
if err != nil {
return nil, err
}
@@ -578,12 +759,15 @@ func (obsClient ObsClient) getBucketNotificationS3(bucketName string) (output *G
return
}
-func (obsClient ObsClient) DeleteObject(input *DeleteObjectInput) (output *DeleteObjectOutput, err error) {
+// DeleteObject deletes an object.
+//
+// You can use this API to delete an object from a specified bucket.
+func (obsClient ObsClient) DeleteObject(input *DeleteObjectInput, extensions ...extensionOptions) (output *DeleteObjectOutput, err error) {
if input == nil {
return nil, errors.New("DeleteObjectInput is nil")
}
output = &DeleteObjectOutput{}
- err = obsClient.doActionWithBucketAndKey("DeleteObject", HTTP_DELETE, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("DeleteObject", HTTP_DELETE, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
} else {
@@ -592,64 +776,83 @@ func (obsClient ObsClient) DeleteObject(input *DeleteObjectInput) (output *Delet
return
}
-func (obsClient ObsClient) DeleteObjects(input *DeleteObjectsInput) (output *DeleteObjectsOutput, err error) {
+// DeleteObjects deletes objects in a batch.
+//
+// You can use this API to batch delete objects from a specified bucket.
+func (obsClient ObsClient) DeleteObjects(input *DeleteObjectsInput, extensions ...extensionOptions) (output *DeleteObjectsOutput, err error) {
if input == nil {
return nil, errors.New("DeleteObjectsInput is nil")
}
output = &DeleteObjectsOutput{}
- err = obsClient.doActionWithBucket("DeleteObjects", HTTP_POST, input.Bucket, input, output)
+ err = obsClient.doActionWithBucket("DeleteObjects", HTTP_POST, input.Bucket, input, output, extensions)
if err != nil {
output = nil
+ } else if output.EncodingType == "url" {
+ err = decodeDeleteObjectsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get DeleteObjectsOutput with error: %v.", err)
+ output = nil
+ }
}
return
}
-func (obsClient ObsClient) SetObjectAcl(input *SetObjectAclInput) (output *BaseModel, err error) {
+// SetObjectAcl sets ACL for an object.
+//
+// You can use this API to set the ACL for an object in a specified bucket.
+func (obsClient ObsClient) SetObjectAcl(input *SetObjectAclInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("SetObjectAclInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucketAndKey("SetObjectAcl", HTTP_PUT, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("SetObjectAcl", HTTP_PUT, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetObjectAcl(input *GetObjectAclInput) (output *GetObjectAclOutput, err error) {
+// GetObjectAcl gets the ACL of an object.
+//
+// You can use this API to obtain the ACL of an object in a specified bucket.
+func (obsClient ObsClient) GetObjectAcl(input *GetObjectAclInput, extensions ...extensionOptions) (output *GetObjectAclOutput, err error) {
if input == nil {
return nil, errors.New("GetObjectAclInput is nil")
}
output = &GetObjectAclOutput{}
- err = obsClient.doActionWithBucketAndKey("GetObjectAcl", HTTP_GET, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("GetObjectAcl", HTTP_GET, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
} else {
- if versionId, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
- output.VersionId = versionId[0]
+ if versionID, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
+ output.VersionId = versionID[0]
}
}
return
}
-func (obsClient ObsClient) RestoreObject(input *RestoreObjectInput) (output *BaseModel, err error) {
+// RestoreObject restores an object.
+func (obsClient ObsClient) RestoreObject(input *RestoreObjectInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("RestoreObjectInput is nil")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucketAndKey("RestoreObject", HTTP_POST, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("RestoreObject", HTTP_POST, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) GetObjectMetadata(input *GetObjectMetadataInput) (output *GetObjectMetadataOutput, err error) {
+// GetObjectMetadata gets object metadata.
+//
+// You can use this API to send a HEAD request to an object in a specified bucket to obtain its metadata.
+func (obsClient ObsClient) GetObjectMetadata(input *GetObjectMetadataInput, extensions ...extensionOptions) (output *GetObjectMetadataOutput, err error) {
if input == nil {
return nil, errors.New("GetObjectMetadataInput is nil")
}
output = &GetObjectMetadataOutput{}
- err = obsClient.doActionWithBucketAndKey("GetObjectMetadata", HTTP_HEAD, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("GetObjectMetadata", HTTP_HEAD, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
} else {
@@ -658,12 +861,15 @@ func (obsClient ObsClient) GetObjectMetadata(input *GetObjectMetadataInput) (out
return
}
-func (obsClient ObsClient) GetObject(input *GetObjectInput) (output *GetObjectOutput, err error) {
+// GetObject downloads an object.
+//
+// You can use this API to download an object in a specified bucket.
+func (obsClient ObsClient) GetObject(input *GetObjectInput, extensions ...extensionOptions) (output *GetObjectOutput, err error) {
if input == nil {
return nil, errors.New("GetObjectInput is nil")
}
output = &GetObjectOutput{}
- err = obsClient.doActionWithBucketAndKey("GetObject", HTTP_GET, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("GetObject", HTTP_GET, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
} else {
@@ -672,17 +878,17 @@ func (obsClient ObsClient) GetObject(input *GetObjectInput) (output *GetObjectOu
return
}
-func (obsClient ObsClient) PutObject(input *PutObjectInput) (output *PutObjectOutput, err error) {
+// PutObject uploads an object to the specified bucket.
+func (obsClient ObsClient) PutObject(input *PutObjectInput, extensions ...extensionOptions) (output *PutObjectOutput, err error) {
if input == nil {
return nil, errors.New("PutObjectInput is nil")
}
if input.ContentType == "" && input.Key != "" {
- if contentType, ok := mime_types[strings.ToLower(input.Key[strings.LastIndex(input.Key, ".")+1:])]; ok {
+ if contentType, ok := mimeTypes[strings.ToLower(input.Key[strings.LastIndex(input.Key, ".")+1:])]; ok {
input.ContentType = contentType
}
}
-
output = &PutObjectOutput{}
var repeatable bool
if input.Body != nil {
@@ -692,9 +898,9 @@ func (obsClient ObsClient) PutObject(input *PutObjectInput) (output *PutObjectOu
}
}
if repeatable {
- err = obsClient.doActionWithBucketAndKey("PutObject", HTTP_PUT, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("PutObject", HTTP_PUT, input.Bucket, input.Key, input, output, extensions)
} else {
- err = obsClient.doActionWithBucketAndKeyUnRepeatable("PutObject", HTTP_PUT, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKeyUnRepeatable("PutObject", HTTP_PUT, input.Bucket, input.Key, input, output, extensions)
}
if err != nil {
output = nil
@@ -704,7 +910,25 @@ func (obsClient ObsClient) PutObject(input *PutObjectInput) (output *PutObjectOu
return
}
-func (obsClient ObsClient) PutFile(input *PutFileInput) (output *PutObjectOutput, err error) {
+func (obsClient ObsClient) getContentType(input *PutObjectInput, sourceFile string) (contentType string) {
+ if contentType, ok := mimeTypes[strings.ToLower(input.Key[strings.LastIndex(input.Key, ".")+1:])]; ok {
+ return contentType
+ }
+ if contentType, ok := mimeTypes[strings.ToLower(sourceFile[strings.LastIndex(sourceFile, ".")+1:])]; ok {
+ return contentType
+ }
+ return
+}
+
+func (obsClient ObsClient) isGetContentType(input *PutObjectInput) bool {
+ if input.ContentType == "" && input.Key != "" {
+ return true
+ }
+ return false
+}
+
+// PutFile uploads a file to the specified bucket.
+func (obsClient ObsClient) PutFile(input *PutFileInput, extensions ...extensionOptions) (output *PutObjectOutput, err error) {
if input == nil {
return nil, errors.New("PutFileInput is nil")
}
@@ -712,14 +936,21 @@ func (obsClient ObsClient) PutFile(input *PutFileInput) (output *PutObjectOutput
var body io.Reader
sourceFile := strings.TrimSpace(input.SourceFile)
if sourceFile != "" {
- fd, err := os.Open(sourceFile)
- if err != nil {
+ fd, _err := os.Open(sourceFile)
+ if _err != nil {
+ err = _err
return nil, err
}
- defer fd.Close()
+ defer func() {
+ errMsg := fd.Close()
+ if errMsg != nil {
+ doLog(LEVEL_WARN, "Failed to close file with reason: %v", errMsg)
+ }
+ }()
- stat, err := fd.Stat()
- if err != nil {
+ stat, _err := fd.Stat()
+ if _err != nil {
+ err = _err
return nil, err
}
fileReaderWrapper := &fileReaderWrapper{filePath: sourceFile}
@@ -739,16 +970,12 @@ func (obsClient ObsClient) PutFile(input *PutFileInput) (output *PutObjectOutput
_input.PutObjectBasicInput = input.PutObjectBasicInput
_input.Body = body
- if _input.ContentType == "" && _input.Key != "" {
- if contentType, ok := mime_types[strings.ToLower(_input.Key[strings.LastIndex(_input.Key, ".")+1:])]; ok {
- _input.ContentType = contentType
- } else if contentType, ok := mime_types[strings.ToLower(sourceFile[strings.LastIndex(sourceFile, ".")+1:])]; ok {
- _input.ContentType = contentType
- }
+ if obsClient.isGetContentType(_input) {
+ _input.ContentType = obsClient.getContentType(_input, sourceFile)
}
output = &PutObjectOutput{}
- err = obsClient.doActionWithBucketAndKey("PutFile", HTTP_PUT, _input.Bucket, _input.Key, _input, output)
+ err = obsClient.doActionWithBucketAndKey("PutFile", HTTP_PUT, _input.Bucket, _input.Key, _input, output, extensions)
if err != nil {
output = nil
} else {
@@ -757,7 +984,10 @@ func (obsClient ObsClient) PutFile(input *PutFileInput) (output *PutObjectOutput
return
}
-func (obsClient ObsClient) CopyObject(input *CopyObjectInput) (output *CopyObjectOutput, err error) {
+// CopyObject creates a copy for an existing object.
+//
+// You can use this API to create a copy for an object in a specified bucket.
+func (obsClient ObsClient) CopyObject(input *CopyObjectInput, extensions ...extensionOptions) (output *CopyObjectOutput, err error) {
if input == nil {
return nil, errors.New("CopyObjectInput is nil")
}
@@ -770,7 +1000,7 @@ func (obsClient ObsClient) CopyObject(input *CopyObjectInput) (output *CopyObjec
}
output = &CopyObjectOutput{}
- err = obsClient.doActionWithBucketAndKey("CopyObject", HTTP_PUT, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("CopyObject", HTTP_PUT, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
} else {
@@ -779,7 +1009,8 @@ func (obsClient ObsClient) CopyObject(input *CopyObjectInput) (output *CopyObjec
return
}
-func (obsClient ObsClient) AbortMultipartUpload(input *AbortMultipartUploadInput) (output *BaseModel, err error) {
+// AbortMultipartUpload aborts a multipart upload in a specified bucket by using the multipart upload ID.
+func (obsClient ObsClient) AbortMultipartUpload(input *AbortMultipartUploadInput, extensions ...extensionOptions) (output *BaseModel, err error) {
if input == nil {
return nil, errors.New("AbortMultipartUploadInput is nil")
}
@@ -787,35 +1018,48 @@ func (obsClient ObsClient) AbortMultipartUpload(input *AbortMultipartUploadInput
return nil, errors.New("UploadId is empty")
}
output = &BaseModel{}
- err = obsClient.doActionWithBucketAndKey("AbortMultipartUpload", HTTP_DELETE, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("AbortMultipartUpload", HTTP_DELETE, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
}
return
}
-func (obsClient ObsClient) InitiateMultipartUpload(input *InitiateMultipartUploadInput) (output *InitiateMultipartUploadOutput, err error) {
+// InitiateMultipartUpload initializes a multipart upload.
+func (obsClient ObsClient) InitiateMultipartUpload(input *InitiateMultipartUploadInput, extensions ...extensionOptions) (output *InitiateMultipartUploadOutput, err error) {
if input == nil {
return nil, errors.New("InitiateMultipartUploadInput is nil")
}
if input.ContentType == "" && input.Key != "" {
- if contentType, ok := mime_types[strings.ToLower(input.Key[strings.LastIndex(input.Key, ".")+1:])]; ok {
+ if contentType, ok := mimeTypes[strings.ToLower(input.Key[strings.LastIndex(input.Key, ".")+1:])]; ok {
input.ContentType = contentType
}
}
output = &InitiateMultipartUploadOutput{}
- err = obsClient.doActionWithBucketAndKey("InitiateMultipartUpload", HTTP_POST, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("InitiateMultipartUpload", HTTP_POST, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
} else {
ParseInitiateMultipartUploadOutput(output)
+ if output.EncodingType == "url" {
+ err = decodeInitiateMultipartUploadOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get InitiateMultipartUploadOutput with error: %v.", err)
+ output = nil
+ }
+ }
}
return
}
-func (obsClient ObsClient) UploadPart(_input *UploadPartInput) (output *UploadPartOutput, err error) {
+// UploadPart uploads a part to a specified bucket by using a specified multipart upload ID.
+//
+// After a multipart upload is initialized, you can use this API to upload a part to a specified bucket
+// by using the multipart upload ID. Each part except the last must be between 100 KB and 5 GB in size;
+// the last part can range from 0 to 5 GB. Part numbers range from 1 to 10000.
+func (obsClient ObsClient) UploadPart(_input *UploadPartInput, extensions ...extensionOptions) (output *UploadPartOutput, err error) {
if _input == nil {
return nil, errors.New("UploadPartInput is nil")
}
@@ -844,14 +1088,21 @@ func (obsClient ObsClient) UploadPart(_input *UploadPartInput) (output *UploadPa
input.Body = &readerWrapper{reader: input.Body, totalCount: input.PartSize}
}
} else if sourceFile := strings.TrimSpace(input.SourceFile); sourceFile != "" {
- fd, err := os.Open(sourceFile)
- if err != nil {
+ fd, _err := os.Open(sourceFile)
+ if _err != nil {
+ err = _err
return nil, err
}
- defer fd.Close()
+ defer func() {
+ errMsg := fd.Close()
+ if errMsg != nil {
+ doLog(LEVEL_WARN, "Failed to close file with reason: %v", errMsg)
+ }
+ }()
- stat, err := fd.Stat()
- if err != nil {
+ stat, _err := fd.Stat()
+ if _err != nil {
+ err = _err
return nil, err
}
fileSize := stat.Size()
@@ -873,9 +1124,9 @@ func (obsClient ObsClient) UploadPart(_input *UploadPartInput) (output *UploadPa
repeatable = true
}
if repeatable {
- err = obsClient.doActionWithBucketAndKey("UploadPart", HTTP_PUT, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("UploadPart", HTTP_PUT, input.Bucket, input.Key, input, output, extensions)
} else {
- err = obsClient.doActionWithBucketAndKeyUnRepeatable("UploadPart", HTTP_PUT, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKeyUnRepeatable("UploadPart", HTTP_PUT, input.Bucket, input.Key, input, output, extensions)
}
if err != nil {
output = nil
@@ -886,7 +1137,8 @@ func (obsClient ObsClient) UploadPart(_input *UploadPartInput) (output *UploadPa
return
}
-func (obsClient ObsClient) CompleteMultipartUpload(input *CompleteMultipartUploadInput) (output *CompleteMultipartUploadOutput, err error) {
+// CompleteMultipartUpload combines the uploaded parts in a specified bucket by using the multipart upload ID.
+func (obsClient ObsClient) CompleteMultipartUpload(input *CompleteMultipartUploadInput, extensions ...extensionOptions) (output *CompleteMultipartUploadOutput, err error) {
if input == nil {
return nil, errors.New("CompleteMultipartUploadInput is nil")
}
@@ -899,16 +1151,24 @@ func (obsClient ObsClient) CompleteMultipartUpload(input *CompleteMultipartUploa
sort.Sort(parts)
output = &CompleteMultipartUploadOutput{}
- err = obsClient.doActionWithBucketAndKey("CompleteMultipartUpload", HTTP_POST, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("CompleteMultipartUpload", HTTP_POST, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
} else {
ParseCompleteMultipartUploadOutput(output)
+ if output.EncodingType == "url" {
+ err = decodeCompleteMultipartUploadOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get CompleteMultipartUploadOutput with error: %v.", err)
+ output = nil
+ }
+ }
}
return
}
-func (obsClient ObsClient) ListParts(input *ListPartsInput) (output *ListPartsOutput, err error) {
+// ListParts lists the uploaded parts in a bucket by using the multipart upload ID.
+func (obsClient ObsClient) ListParts(input *ListPartsInput, extensions ...extensionOptions) (output *ListPartsOutput, err error) {
if input == nil {
return nil, errors.New("ListPartsInput is nil")
}
@@ -916,14 +1176,23 @@ func (obsClient ObsClient) ListParts(input *ListPartsInput) (output *ListPartsOu
return nil, errors.New("UploadId is empty")
}
output = &ListPartsOutput{}
- err = obsClient.doActionWithBucketAndKey("ListParts", HTTP_GET, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("ListParts", HTTP_GET, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
+ } else if output.EncodingType == "url" {
+ err = decodeListPartsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get ListPartsOutput with error: %v.", err)
+ output = nil
+ }
}
return
}
-func (obsClient ObsClient) CopyPart(input *CopyPartInput) (output *CopyPartOutput, err error) {
+// CopyPart copies a part to a specified bucket by using a specified multipart upload ID.
+//
+// After a multipart upload is initialized, you can use this API to copy a part to a specified bucket by using the multipart upload ID.
+func (obsClient ObsClient) CopyPart(input *CopyPartInput, extensions ...extensionOptions) (output *CopyPartOutput, err error) {
if input == nil {
return nil, errors.New("CopyPartInput is nil")
}
@@ -938,7 +1207,7 @@ func (obsClient ObsClient) CopyPart(input *CopyPartInput) (output *CopyPartOutpu
}
output = &CopyPartOutput{}
- err = obsClient.doActionWithBucketAndKey("CopyPart", HTTP_PUT, input.Bucket, input.Key, input, output)
+ err = obsClient.doActionWithBucketAndKey("CopyPart", HTTP_PUT, input.Bucket, input.Key, input, output, extensions)
if err != nil {
output = nil
} else {
@@ -947,3 +1216,159 @@ func (obsClient ObsClient) CopyPart(input *CopyPartInput) (output *CopyPartOutpu
}
return
}
+
+// SetBucketRequestPayment sets the requester-pays setting for a bucket.
+func (obsClient ObsClient) SetBucketRequestPayment(input *SetBucketRequestPaymentInput, extensions ...extensionOptions) (output *BaseModel, err error) {
+ if input == nil {
+ return nil, errors.New("SetBucketRequestPaymentInput is nil")
+ }
+ output = &BaseModel{}
+ err = obsClient.doActionWithBucket("SetBucketRequestPayment", HTTP_PUT, input.Bucket, input, output, extensions)
+ if err != nil {
+ output = nil
+ }
+ return
+}
+
+// GetBucketRequestPayment gets the requester-pays setting of a bucket.
+func (obsClient ObsClient) GetBucketRequestPayment(bucketName string, extensions ...extensionOptions) (output *GetBucketRequestPaymentOutput, err error) {
+ output = &GetBucketRequestPaymentOutput{}
+ err = obsClient.doActionWithBucket("GetBucketRequestPayment", HTTP_GET, bucketName, newSubResourceSerial(SubResourceRequestPayment), output, extensions)
+ if err != nil {
+ output = nil
+ }
+ return
+}
+
+// UploadFile uploads a file in resumable mode.
+//
+// This API is an encapsulated and enhanced version of multipart upload, and aims to eliminate large file
+// upload failures caused by poor network conditions and program breakdowns.
+func (obsClient ObsClient) UploadFile(input *UploadFileInput, extensions ...extensionOptions) (output *CompleteMultipartUploadOutput, err error) {
+ if input.EnableCheckpoint && input.CheckpointFile == "" {
+ input.CheckpointFile = input.UploadFile + ".uploadfile_record"
+ }
+
+ if input.TaskNum <= 0 {
+ input.TaskNum = 1
+ }
+ if input.PartSize < MIN_PART_SIZE {
+ input.PartSize = MIN_PART_SIZE
+ } else if input.PartSize > MAX_PART_SIZE {
+ input.PartSize = MAX_PART_SIZE
+ }
+
+ output, err = obsClient.resumeUpload(input, extensions)
+ return
+}
+
+// DownloadFile downloads a file in resumable mode.
+//
+// This API is an encapsulated and enhanced version of partial download, and aims to eliminate large file
+// download failures caused by poor network conditions and program breakdowns.
+func (obsClient ObsClient) DownloadFile(input *DownloadFileInput, extensions ...extensionOptions) (output *GetObjectMetadataOutput, err error) {
+ if input.DownloadFile == "" {
+ input.DownloadFile = input.Key
+ }
+
+ if input.EnableCheckpoint && input.CheckpointFile == "" {
+ input.CheckpointFile = input.DownloadFile + ".downloadfile_record"
+ }
+
+ if input.TaskNum <= 0 {
+ input.TaskNum = 1
+ }
+ if input.PartSize <= 0 {
+ input.PartSize = DEFAULT_PART_SIZE
+ }
+
+ output, err = obsClient.resumeDownload(input, extensions)
+ return
+}
+
+// SetBucketFetchPolicy sets the bucket fetch policy.
+//
+// You can use this API to set a bucket fetch policy.
+func (obsClient ObsClient) SetBucketFetchPolicy(input *SetBucketFetchPolicyInput, extensions ...extensionOptions) (output *BaseModel, err error) {
+ if input == nil {
+ return nil, errors.New("SetBucketFetchPolicyInput is nil")
+ }
+ if strings.TrimSpace(string(input.Status)) == "" {
+ return nil, errors.New("Fetch policy status is empty")
+ }
+ if strings.TrimSpace(input.Agency) == "" {
+ return nil, errors.New("Fetch policy agency is empty")
+ }
+ output = &BaseModel{}
+ err = obsClient.doActionWithBucketAndKey("SetBucketFetchPolicy", HTTP_PUT, input.Bucket, string(objectKeyExtensionPolicy), input, output, extensions)
+ if err != nil {
+ output = nil
+ }
+ return
+}
+
+// GetBucketFetchPolicy gets the bucket fetch policy.
+//
+// You can use this API to obtain the fetch policy of a bucket.
+func (obsClient ObsClient) GetBucketFetchPolicy(input *GetBucketFetchPolicyInput, extensions ...extensionOptions) (output *GetBucketFetchPolicyOutput, err error) {
+ if input == nil {
+ return nil, errors.New("GetBucketFetchPolicyInput is nil")
+ }
+ output = &GetBucketFetchPolicyOutput{}
+ err = obsClient.doActionWithBucketAndKeyV2("GetBucketFetchPolicy", HTTP_GET, input.Bucket, string(objectKeyExtensionPolicy), input, output, extensions)
+ if err != nil {
+ output = nil
+ }
+ return
+}
+
+// DeleteBucketFetchPolicy deletes the bucket fetch policy.
+//
+// You can use this API to delete the fetch policy of a bucket.
+func (obsClient ObsClient) DeleteBucketFetchPolicy(input *DeleteBucketFetchPolicyInput, extensions ...extensionOptions) (output *BaseModel, err error) {
+ if input == nil {
+ return nil, errors.New("DeleteBucketFetchPolicyInput is nil")
+ }
+ output = &BaseModel{}
+ err = obsClient.doActionWithBucketAndKey("DeleteBucketFetchPolicy", HTTP_DELETE, input.Bucket, string(objectKeyExtensionPolicy), input, output, extensions)
+ if err != nil {
+ output = nil
+ }
+ return
+}
+
+// SetBucketFetchJob sets the bucket fetch job.
+//
+// You can use this API to set a bucket fetch job.
+func (obsClient ObsClient) SetBucketFetchJob(input *SetBucketFetchJobInput, extensions ...extensionOptions) (output *SetBucketFetchJobOutput, err error) {
+ if input == nil {
+ return nil, errors.New("SetBucketFetchJobInput is nil")
+ }
+ if strings.TrimSpace(input.URL) == "" {
+ return nil, errors.New("URL is empty")
+ }
+ output = &SetBucketFetchJobOutput{}
+ err = obsClient.doActionWithBucketAndKeyV2("SetBucketFetchJob", HTTP_POST, input.Bucket, string(objectKeyAsyncFetchJob), input, output, extensions)
+ if err != nil {
+ output = nil
+ }
+ return
+}
+
+// GetBucketFetchJob gets the bucket fetch job.
+//
+// You can use this API to obtain the fetch job of a bucket.
+func (obsClient ObsClient) GetBucketFetchJob(input *GetBucketFetchJobInput, extensions ...extensionOptions) (output *GetBucketFetchJobOutput, err error) {
+ if input == nil {
+ return nil, errors.New("GetBucketFetchJobInput is nil")
+ }
+ if strings.TrimSpace(input.JobID) == "" {
+ return nil, errors.New("JobID is empty")
+ }
+ output = &GetBucketFetchJobOutput{}
+ err = obsClient.doActionWithBucketAndKeyV2("GetBucketFetchJob", HTTP_GET, input.Bucket, string(objectKeyAsyncFetchJob)+"/"+input.JobID, input, output, extensions)
+ if err != nil {
+ output = nil
+ }
+ return
+}
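The recurring change in the file above is a new variadic `extensions ...extensionOptions` parameter threaded from every public API method into the internal `doActionWithBucket*` helpers. That option-threading pattern can be sketched in isolation (all names below are illustrative stand-ins, not the SDK's actual types):

```go
package main

import "fmt"

// extensionOption mirrors the variadic option pattern used in the diff:
// each option mutates a per-request header map before the request is sent.
type extensionOption func(headers map[string]string)

// withRequestHeader is an illustrative option constructor.
func withRequestHeader(key, value string) extensionOption {
	return func(headers map[string]string) {
		headers[key] = value
	}
}

// doAction stands in for doActionWithBucket: it applies every extension
// option to the request headers, then the real client would issue the call.
func doAction(action string, extensions ...extensionOption) map[string]string {
	headers := map[string]string{}
	for _, ext := range extensions {
		ext(headers)
	}
	return headers
}

func main() {
	// Callers that pass no extensions are unaffected, which is why the
	// variadic form is backward compatible with the old signatures.
	h := doAction("GetObject", withRequestHeader("x-example-header", "requester"))
	fmt.Println(h["x-example-header"])
}
```

Because the parameter is variadic, existing call sites compile unchanged while new callers can attach per-request behavior without widening every method signature again.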
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/conf.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/conf.go
index 78b777cbd1f..1448d03ef39 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/conf.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/conf.go
@@ -27,12 +27,6 @@ import (
"time"
)
-type securityProvider struct {
- ak string
- sk string
- securityToken string
-}
-
type urlHolder struct {
scheme string
host string
@@ -40,44 +34,58 @@ type urlHolder struct {
}
type config struct {
- securityProvider *securityProvider
- urlHolder *urlHolder
- endpoint string
- signature SignatureType
- pathStyle bool
- region string
- connectTimeout int
- socketTimeout int
- headerTimeout int
- idleConnTimeout int
- finalTimeout int
- maxRetryCount int
- proxyUrl string
- maxConnsPerHost int
- sslVerify bool
- pemCerts []byte
- transport *http.Transport
- ctx context.Context
- cname bool
- maxRedirectCount int
+ securityProviders []securityProvider
+ urlHolder *urlHolder
+ pathStyle bool
+ cname bool
+ sslVerify bool
+ endpoint string
+ signature SignatureType
+ region string
+ connectTimeout int
+ socketTimeout int
+ headerTimeout int
+ idleConnTimeout int
+ finalTimeout int
+ maxRetryCount int
+ proxyURL string
+ maxConnsPerHost int
+ pemCerts []byte
+ transport *http.Transport
+ ctx context.Context
+ maxRedirectCount int
+ userAgent string
+ enableCompression bool
}
func (conf config) String() string {
return fmt.Sprintf("[endpoint:%s, signature:%s, pathStyle:%v, region:%s"+
"\nconnectTimeout:%d, socketTimeout:%dheaderTimeout:%d, idleConnTimeout:%d"+
- "\nmaxRetryCount:%d, maxConnsPerHost:%d, sslVerify:%v, proxyUrl:%s, maxRedirectCount:%d]",
+ "\nmaxRetryCount:%d, maxConnsPerHost:%d, sslVerify:%v, maxRedirectCount:%d]",
conf.endpoint, conf.signature, conf.pathStyle, conf.region,
conf.connectTimeout, conf.socketTimeout, conf.headerTimeout, conf.idleConnTimeout,
- conf.maxRetryCount, conf.maxConnsPerHost, conf.sslVerify, conf.proxyUrl, conf.maxRedirectCount,
+ conf.maxRetryCount, conf.maxConnsPerHost, conf.sslVerify, conf.maxRedirectCount,
)
}
type configurer func(conf *config)
+func WithSecurityProviders(sps ...securityProvider) configurer {
+ return func(conf *config) {
+ for _, sp := range sps {
+ if sp != nil {
+ conf.securityProviders = append(conf.securityProviders, sp)
+ }
+ }
+ }
+}
+
+// WithSslVerify is a wrapper for WithSslVerifyAndPemCerts.
func WithSslVerify(sslVerify bool) configurer {
return WithSslVerifyAndPemCerts(sslVerify, nil)
}
+// WithSslVerifyAndPemCerts is a configurer for ObsClient to set conf.sslVerify and conf.pemCerts.
func WithSslVerifyAndPemCerts(sslVerify bool, pemCerts []byte) configurer {
return func(conf *config) {
conf.sslVerify = sslVerify
@@ -85,100 +93,171 @@ func WithSslVerifyAndPemCerts(sslVerify bool, pemCerts []byte) configurer {
}
}
+// WithHeaderTimeout is a configurer for ObsClient to set the timeout period of obtaining the response headers.
func WithHeaderTimeout(headerTimeout int) configurer {
return func(conf *config) {
conf.headerTimeout = headerTimeout
}
}
-func WithProxyUrl(proxyUrl string) configurer {
+// WithProxyUrl is a configurer for ObsClient to set HTTP proxy.
+func WithProxyUrl(proxyURL string) configurer {
return func(conf *config) {
- conf.proxyUrl = proxyUrl
+ conf.proxyURL = proxyURL
}
}
+// WithMaxConnections is a configurer for ObsClient to set the maximum number of idle HTTP connections.
func WithMaxConnections(maxConnsPerHost int) configurer {
return func(conf *config) {
conf.maxConnsPerHost = maxConnsPerHost
}
}
+// WithPathStyle is a configurer for ObsClient.
func WithPathStyle(pathStyle bool) configurer {
return func(conf *config) {
conf.pathStyle = pathStyle
}
}
+// WithSignature is a configurer for ObsClient.
func WithSignature(signature SignatureType) configurer {
return func(conf *config) {
conf.signature = signature
}
}
+// WithRegion is a configurer for ObsClient.
func WithRegion(region string) configurer {
return func(conf *config) {
conf.region = region
}
}
+// WithConnectTimeout is a configurer for ObsClient to set timeout period for establishing
+// an http/https connection, in seconds.
func WithConnectTimeout(connectTimeout int) configurer {
return func(conf *config) {
conf.connectTimeout = connectTimeout
}
}
+// WithSocketTimeout is a configurer for ObsClient to set the timeout duration for transmitting data at
+// the socket layer, in seconds.
func WithSocketTimeout(socketTimeout int) configurer {
return func(conf *config) {
conf.socketTimeout = socketTimeout
}
}
+// WithIdleConnTimeout is a configurer for ObsClient to set the timeout period of an idle HTTP connection
+// in the connection pool, in seconds.
func WithIdleConnTimeout(idleConnTimeout int) configurer {
return func(conf *config) {
conf.idleConnTimeout = idleConnTimeout
}
}
+// WithMaxRetryCount is a configurer for ObsClient to set the maximum number of retries when an HTTP/HTTPS connection is abnormal.
func WithMaxRetryCount(maxRetryCount int) configurer {
return func(conf *config) {
conf.maxRetryCount = maxRetryCount
}
}
+// WithSecurityToken is a configurer for ObsClient to set the security token in the temporary access keys.
func WithSecurityToken(securityToken string) configurer {
return func(conf *config) {
- conf.securityProvider.securityToken = securityToken
+ for _, sp := range conf.securityProviders {
+ if bsp, ok := sp.(*BasicSecurityProvider); ok {
+ sh := bsp.getSecurity()
+ bsp.refresh(sh.ak, sh.sk, securityToken)
+ break
+ }
+ }
}
}
+// WithHttpTransport is a configurer for ObsClient to set the customized http Transport.
func WithHttpTransport(transport *http.Transport) configurer {
return func(conf *config) {
conf.transport = transport
}
}
+// WithRequestContext is a configurer for ObsClient to set the context for each HTTP request.
func WithRequestContext(ctx context.Context) configurer {
return func(conf *config) {
conf.ctx = ctx
}
}
+// WithCustomDomainName is a configurer for ObsClient.
func WithCustomDomainName(cname bool) configurer {
return func(conf *config) {
conf.cname = cname
}
}
+// WithMaxRedirectCount is a configurer for ObsClient to set the maximum number of times that the request is redirected.
func WithMaxRedirectCount(maxRedirectCount int) configurer {
return func(conf *config) {
conf.maxRedirectCount = maxRedirectCount
}
}
+// WithUserAgent is a configurer for ObsClient to set the User-Agent.
+func WithUserAgent(userAgent string) configurer {
+ return func(conf *config) {
+ conf.userAgent = userAgent
+ }
+}
+
+// WithEnableCompression is a configurer for ObsClient to set the Transport.DisableCompression.
+func WithEnableCompression(enableCompression bool) configurer {
+ return func(conf *config) {
+ conf.enableCompression = enableCompression
+ }
+}
+
+func (conf *config) prepareConfig() {
+ if conf.connectTimeout <= 0 {
+ conf.connectTimeout = DEFAULT_CONNECT_TIMEOUT
+ }
+
+ if conf.socketTimeout <= 0 {
+ conf.socketTimeout = DEFAULT_SOCKET_TIMEOUT
+ }
+
+ conf.finalTimeout = conf.socketTimeout * 10
+
+ if conf.headerTimeout <= 0 {
+ conf.headerTimeout = DEFAULT_HEADER_TIMEOUT
+ }
+
+ if conf.idleConnTimeout < 0 {
+ conf.idleConnTimeout = DEFAULT_IDLE_CONN_TIMEOUT
+ }
+
+ if conf.maxRetryCount < 0 {
+ conf.maxRetryCount = DEFAULT_MAX_RETRY_COUNT
+ }
+
+ if conf.maxConnsPerHost <= 0 {
+ conf.maxConnsPerHost = DEFAULT_MAX_CONN_PER_HOST
+ }
+
+ if conf.maxRedirectCount < 0 {
+ conf.maxRedirectCount = DEFAULT_MAX_REDIRECT_COUNT
+ }
+
+ if conf.pathStyle && conf.signature == SignatureObs {
+ conf.signature = SignatureV2
+ }
+}
+
func (conf *config) initConfigWithDefault() error {
- conf.securityProvider.ak = strings.TrimSpace(conf.securityProvider.ak)
- conf.securityProvider.sk = strings.TrimSpace(conf.securityProvider.sk)
- conf.securityProvider.securityToken = strings.TrimSpace(conf.securityProvider.securityToken)
conf.endpoint = strings.TrimSpace(conf.endpoint)
if conf.endpoint == "" {
return errors.New("endpoint is not set")
@@ -205,7 +284,7 @@ func (conf *config) initConfigWithDefault() error {
urlHolder.scheme = "http"
address = conf.endpoint[len("http://"):]
} else {
- urlHolder.scheme = "http"
+ urlHolder.scheme = "https"
address = conf.endpoint
}
@@ -235,37 +314,8 @@ func (conf *config) initConfigWithDefault() error {
conf.region = DEFAULT_REGION
}
- if conf.connectTimeout <= 0 {
- conf.connectTimeout = DEFAULT_CONNECT_TIMEOUT
- }
-
- if conf.socketTimeout <= 0 {
- conf.socketTimeout = DEFAULT_SOCKET_TIMEOUT
- }
-
- conf.finalTimeout = conf.socketTimeout * 10
-
- if conf.headerTimeout <= 0 {
- conf.headerTimeout = DEFAULT_HEADER_TIMEOUT
- }
-
- if conf.idleConnTimeout < 0 {
- conf.idleConnTimeout = DEFAULT_IDLE_CONN_TIMEOUT
- }
-
- if conf.maxRetryCount < 0 {
- conf.maxRetryCount = DEFAULT_MAX_RETRY_COUNT
- }
-
- if conf.maxConnsPerHost <= 0 {
- conf.maxConnsPerHost = DEFAULT_MAX_CONN_PER_HOST
- }
-
- if conf.maxRedirectCount < 0 {
- conf.maxRedirectCount = DEFAULT_MAX_REDIRECT_COUNT
- }
-
- conf.proxyUrl = strings.TrimSpace(conf.proxyUrl)
+ conf.prepareConfig()
+ conf.proxyURL = strings.TrimSpace(conf.proxyURL)
return nil
}
@@ -285,12 +335,12 @@ func (conf *config) getTransport() error {
IdleConnTimeout: time.Second * time.Duration(conf.idleConnTimeout),
}
- if conf.proxyUrl != "" {
- proxyUrl, err := url.Parse(conf.proxyUrl)
+ if conf.proxyURL != "" {
+ proxyURL, err := url.Parse(conf.proxyURL)
if err != nil {
return err
}
- conf.transport.Proxy = http.ProxyURL(proxyUrl)
+ conf.transport.Proxy = http.ProxyURL(proxyURL)
}
tlsConfig := &tls.Config{InsecureSkipVerify: !conf.sslVerify}
@@ -301,6 +351,7 @@ func (conf *config) getTransport() error {
}
conf.transport.TLSClientConfig = tlsConfig
+ conf.transport.DisableCompression = !conf.enableCompression
}
return nil
@@ -310,70 +361,88 @@ func checkRedirectFunc(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
+// DummyQueryEscape returns the input string unchanged.
func DummyQueryEscape(s string) string {
return s
}
-func (conf *config) formatUrls(bucketName, objectKey string, params map[string]string, escape bool) (requestUrl string, canonicalizedUrl string) {
-
+func (conf *config) prepareBaseURL(bucketName string) (requestURL string, canonicalizedURL string) {
urlHolder := conf.urlHolder
if conf.cname {
- requestUrl = fmt.Sprintf("%s://%s:%d", urlHolder.scheme, urlHolder.host, urlHolder.port)
+ requestURL = fmt.Sprintf("%s://%s:%d", urlHolder.scheme, urlHolder.host, urlHolder.port)
if conf.signature == "v4" {
- canonicalizedUrl = "/"
+ canonicalizedURL = "/"
} else {
- canonicalizedUrl = "/" + urlHolder.host + "/"
+ canonicalizedURL = "/" + urlHolder.host + "/"
}
} else {
if bucketName == "" {
- requestUrl = fmt.Sprintf("%s://%s:%d", urlHolder.scheme, urlHolder.host, urlHolder.port)
- canonicalizedUrl = "/"
+ requestURL = fmt.Sprintf("%s://%s:%d", urlHolder.scheme, urlHolder.host, urlHolder.port)
+ canonicalizedURL = "/"
} else {
if conf.pathStyle {
- requestUrl = fmt.Sprintf("%s://%s:%d/%s", urlHolder.scheme, urlHolder.host, urlHolder.port, bucketName)
- canonicalizedUrl = "/" + bucketName
+ requestURL = fmt.Sprintf("%s://%s:%d/%s", urlHolder.scheme, urlHolder.host, urlHolder.port, bucketName)
+ canonicalizedURL = "/" + bucketName
} else {
- requestUrl = fmt.Sprintf("%s://%s.%s:%d", urlHolder.scheme, bucketName, urlHolder.host, urlHolder.port)
+ requestURL = fmt.Sprintf("%s://%s.%s:%d", urlHolder.scheme, bucketName, urlHolder.host, urlHolder.port)
if conf.signature == "v2" || conf.signature == "OBS" {
- canonicalizedUrl = "/" + bucketName + "/"
+ canonicalizedURL = "/" + bucketName + "/"
} else {
- canonicalizedUrl = "/"
+ canonicalizedURL = "/"
}
}
}
}
- var escapeFunc func(s string) string
- if escape {
- escapeFunc = url.QueryEscape
- } else {
- escapeFunc = DummyQueryEscape
- }
+ return
+}
- if objectKey != "" {
- var encodeObjectKey string
- if escape {
- tempKey := []rune(objectKey)
- result := make([]string, 0, len(tempKey))
- for _, value := range tempKey {
- if string(value) == "/" {
- result = append(result, string(value))
+func (conf *config) prepareObjectKey(escape bool, objectKey string, escapeFunc func(s string) string) (encodeObjectKey string) {
+ if escape {
+ tempKey := []rune(objectKey)
+ result := make([]string, 0, len(tempKey))
+ for _, value := range tempKey {
+ if string(value) == "/" {
+ result = append(result, string(value))
+ } else {
+ if string(value) == " " {
+ result = append(result, url.PathEscape(string(value)))
} else {
result = append(result, url.QueryEscape(string(value)))
}
}
- encodeObjectKey = strings.Join(result, "")
- } else {
- encodeObjectKey = escapeFunc(objectKey)
}
- requestUrl += "/" + encodeObjectKey
- if !strings.HasSuffix(canonicalizedUrl, "/") {
- canonicalizedUrl += "/"
+ encodeObjectKey = strings.Join(result, "")
+ } else {
+ encodeObjectKey = escapeFunc(objectKey)
+ }
+ return
+}
+
+func (conf *config) prepareEscapeFunc(escape bool) (escapeFunc func(s string) string) {
+ if escape {
+ return url.QueryEscape
+ }
+ return DummyQueryEscape
+}
+
+func (conf *config) formatUrls(bucketName, objectKey string, params map[string]string, escape bool) (requestURL string, canonicalizedURL string) {
+
+ requestURL, canonicalizedURL = conf.prepareBaseURL(bucketName)
+ var escapeFunc func(s string) string
+ escapeFunc = conf.prepareEscapeFunc(escape)
+
+ if objectKey != "" {
+ var encodeObjectKey string
+ encodeObjectKey = conf.prepareObjectKey(escape, objectKey, escapeFunc)
+ requestURL += "/" + encodeObjectKey
+ if !strings.HasSuffix(canonicalizedURL, "/") {
+ canonicalizedURL += "/"
}
- canonicalizedUrl += encodeObjectKey
+ canonicalizedURL += encodeObjectKey
}
keys := make([]string, 0, len(params))
- for key, _ := range params {
+ for key := range params {
keys = append(keys, strings.TrimSpace(key))
}
sort.Strings(keys)
@@ -381,25 +450,25 @@ func (conf *config) formatUrls(bucketName, objectKey string, params map[string]s
for index, key := range keys {
if index == 0 {
- requestUrl += "?"
+ requestURL += "?"
} else {
- requestUrl += "&"
+ requestURL += "&"
}
_key := url.QueryEscape(key)
- requestUrl += _key
+ requestURL += _key
_value := params[key]
if conf.signature == "v4" {
- requestUrl += "=" + url.QueryEscape(_value)
+ requestURL += "=" + url.QueryEscape(_value)
} else {
if _value != "" {
- requestUrl += "=" + url.QueryEscape(_value)
+ requestURL += "=" + url.QueryEscape(_value)
_value = "=" + _value
} else {
_value = ""
}
lowerKey := strings.ToLower(key)
- _, ok := allowed_resource_parameter_names[lowerKey]
+ _, ok := allowedResourceParameterNames[lowerKey]
prefixHeader := HEADER_PREFIX
isObs := conf.signature == SignatureObs
if isObs {
@@ -408,11 +477,11 @@ func (conf *config) formatUrls(bucketName, objectKey string, params map[string]s
ok = ok || strings.HasPrefix(lowerKey, prefixHeader)
if ok {
if i == 0 {
- canonicalizedUrl += "?"
+ canonicalizedURL += "?"
} else {
- canonicalizedUrl += "&"
+ canonicalizedURL += "&"
}
- canonicalizedUrl += getQueryUrl(_key, _value)
+ canonicalizedURL += getQueryURL(_key, _value)
i++
}
}
@@ -420,9 +489,9 @@ func (conf *config) formatUrls(bucketName, objectKey string, params map[string]s
return
}
-func getQueryUrl(key, value string) string {
- queryUrl := ""
- queryUrl += key
- queryUrl += value
- return queryUrl
+func getQueryURL(key, value string) string {
+ queryURL := ""
+ queryURL += key
+ queryURL += value
+ return queryURL
}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/const.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/const.go
index 63f952b85e0..4418c2560e8 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/const.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/const.go
@@ -13,8 +13,8 @@
package obs
const (
- obs_sdk_version = "3.19.11"
- USER_AGENT = "obs-sdk-go/" + obs_sdk_version
+ obsSdkVersion = "3.21.1"
+ USER_AGENT = "obs-sdk-go/" + obsSdkVersion
HEADER_PREFIX = "x-amz-"
HEADER_PREFIX_META = "x-amz-meta-"
HEADER_PREFIX_OBS = "x-obs-"
@@ -71,6 +71,8 @@ const (
HEADER_CACHE_CONTROL = "cache-control"
HEADER_CONTENT_DISPOSITION = "content-disposition"
HEADER_CONTENT_ENCODING = "content-encoding"
+ HEADER_AZ_REDUNDANCY = "az-redundancy"
+ headerOefMarker = "oef-marker"
HEADER_ETAG = "etag"
HEADER_LASTMODIFIED = "last-modified"
@@ -103,6 +105,8 @@ const (
HEADER_SUCCESS_ACTION_REDIRECT = "success_action_redirect"
+ headerFSFileInterface = "fs-file-interface"
+
HEADER_DATE_CAMEL = "Date"
HEADER_HOST_CAMEL = "Host"
HEADER_HOST = "host"
@@ -174,47 +178,32 @@ const (
HTTP_DELETE = "DELETE"
HTTP_HEAD = "HEAD"
HTTP_OPTIONS = "OPTIONS"
+
+ REQUEST_PAYER = "request-payer"
+ MULTI_AZ = "3az"
+
+ MAX_PART_SIZE = 5 * 1024 * 1024 * 1024
+ MIN_PART_SIZE = 100 * 1024
+ DEFAULT_PART_SIZE = 9 * 1024 * 1024
+ MAX_PART_NUM = 10000
)
+// SignatureType defines type of signature
type SignatureType string
const (
- SignatureV2 SignatureType = "v2"
- SignatureV4 SignatureType = "v4"
+ // SignatureV2 signature type v2
+ SignatureV2 SignatureType = "v2"
+ // SignatureV4 signature type v4
+ SignatureV4 SignatureType = "v4"
+ // SignatureObs signature type OBS
SignatureObs SignatureType = "OBS"
)
var (
- interested_headers = []string{"content-md5", "content-type", "date"}
-
- allowed_response_http_header_metadata_names = map[string]bool{
- "content-type": true,
- "content-md5": true,
- "content-length": true,
- "content-language": true,
- "expires": true,
- "origin": true,
- "cache-control": true,
- "content-disposition": true,
- "content-encoding": true,
- "x-default-storage-class": true,
- "location": true,
- "date": true,
- "etag": true,
- "host": true,
- "last-modified": true,
- "content-range": true,
- "x-reserved": true,
- "x-reserved-indicator": true,
- "access-control-allow-origin": true,
- "access-control-allow-headers": true,
- "access-control-max-age": true,
- "access-control-allow-methods": true,
- "access-control-expose-headers": true,
- "connection": true,
- }
+ interestedHeaders = []string{"content-md5", "content-type", "date"}
- allowed_request_http_header_metadata_names = map[string]bool{
+ allowedRequestHTTPHeaderMetadataNames = map[string]bool{
"content-type": true,
"content-md5": true,
"content-length": true,
@@ -240,9 +229,10 @@ var (
"content-range": true,
}
- allowed_resource_parameter_names = map[string]bool{
+ allowedResourceParameterNames = map[string]bool{
"acl": true,
"backtosource": true,
+ "metadata": true,
"policy": true,
"torrent": true,
"logging": true,
@@ -279,9 +269,10 @@ var (
"x-oss-process": true,
"x-image-save-bucket": true,
"x-image-save-object": true,
+ "ignore-sign-in-query": true,
}
- mime_types = map[string]string{
+ mimeTypes = map[string]string{
"001": "application/x-001",
"301": "application/x-301",
"323": "text/h323",
@@ -447,14 +438,14 @@ var (
"mp2": "audio/mp2",
"mp2v": "video/mpeg",
"mp3": "audio/mp3",
- "mp4": "video/mpeg4",
+ "mp4": "video/mp4",
"mp4a": "audio/mp4",
"mp4v": "video/mp4",
"mpa": "video/x-mpg",
"mpd": "application/vnd.ms-project",
- "mpe": "video/x-mpeg",
- "mpeg": "video/mpg",
- "mpg": "video/mpg",
+ "mpe": "video/mpeg",
+ "mpeg": "video/mpeg",
+ "mpg": "video/mpeg",
"mpg4": "video/mp4",
"mpga": "audio/rn-mpeg",
"mpp": "application/vnd.ms-project",
@@ -661,6 +652,7 @@ var (
}
)
+// HttpMethodType defines the HTTP method type
type HttpMethodType string
const (
@@ -672,30 +664,83 @@ const (
HttpMethodOptions HttpMethodType = HTTP_OPTIONS
)
+// SubResourceType defines the subResource value
type SubResourceType string
const (
+ // SubResourceStoragePolicy subResource value: storagePolicy
SubResourceStoragePolicy SubResourceType = "storagePolicy"
- SubResourceStorageClass SubResourceType = "storageClass"
- SubResourceQuota SubResourceType = "quota"
- SubResourceStorageInfo SubResourceType = "storageinfo"
- SubResourceLocation SubResourceType = "location"
- SubResourceAcl SubResourceType = "acl"
- SubResourcePolicy SubResourceType = "policy"
- SubResourceCors SubResourceType = "cors"
- SubResourceVersioning SubResourceType = "versioning"
- SubResourceWebsite SubResourceType = "website"
- SubResourceLogging SubResourceType = "logging"
- SubResourceLifecycle SubResourceType = "lifecycle"
- SubResourceNotification SubResourceType = "notification"
- SubResourceTagging SubResourceType = "tagging"
- SubResourceDelete SubResourceType = "delete"
- SubResourceVersions SubResourceType = "versions"
- SubResourceUploads SubResourceType = "uploads"
- SubResourceRestore SubResourceType = "restore"
- SubResourceMetadata SubResourceType = "metadata"
+
+ // SubResourceStorageClass subResource value: storageClass
+ SubResourceStorageClass SubResourceType = "storageClass"
+
+ // SubResourceQuota subResource value: quota
+ SubResourceQuota SubResourceType = "quota"
+
+ // SubResourceStorageInfo subResource value: storageinfo
+ SubResourceStorageInfo SubResourceType = "storageinfo"
+
+ // SubResourceLocation subResource value: location
+ SubResourceLocation SubResourceType = "location"
+
+ // SubResourceAcl subResource value: acl
+ SubResourceAcl SubResourceType = "acl"
+
+ // SubResourcePolicy subResource value: policy
+ SubResourcePolicy SubResourceType = "policy"
+
+ // SubResourceCors subResource value: cors
+ SubResourceCors SubResourceType = "cors"
+
+ // SubResourceVersioning subResource value: versioning
+ SubResourceVersioning SubResourceType = "versioning"
+
+ // SubResourceWebsite subResource value: website
+ SubResourceWebsite SubResourceType = "website"
+
+ // SubResourceLogging subResource value: logging
+ SubResourceLogging SubResourceType = "logging"
+
+ // SubResourceLifecycle subResource value: lifecycle
+ SubResourceLifecycle SubResourceType = "lifecycle"
+
+ // SubResourceNotification subResource value: notification
+ SubResourceNotification SubResourceType = "notification"
+
+ // SubResourceTagging subResource value: tagging
+ SubResourceTagging SubResourceType = "tagging"
+
+ // SubResourceDelete subResource value: delete
+ SubResourceDelete SubResourceType = "delete"
+
+ // SubResourceVersions subResource value: versions
+ SubResourceVersions SubResourceType = "versions"
+
+ // SubResourceUploads subResource value: uploads
+ SubResourceUploads SubResourceType = "uploads"
+
+ // SubResourceRestore subResource value: restore
+ SubResourceRestore SubResourceType = "restore"
+
+ // SubResourceMetadata subResource value: metadata
+ SubResourceMetadata SubResourceType = "metadata"
+
+ // SubResourceRequestPayment subResource value: requestPayment
+ SubResourceRequestPayment SubResourceType = "requestPayment"
)
+// objectKeyType defines the objectKey value
+type objectKeyType string
+
+const (
+ // objectKeyExtensionPolicy objectKey value: v1/extension_policy
+ objectKeyExtensionPolicy objectKeyType = "v1/extension_policy"
+
+ // objectKeyAsyncFetchJob objectKey value: v1/async-fetch/jobs
+ objectKeyAsyncFetchJob objectKeyType = "v1/async-fetch/jobs"
+)
+
+// AclType defines bucket/object acl type
type AclType string
const (
@@ -710,86 +755,194 @@ const (
AclPublicReadWriteDelivery AclType = "public-read-write-delivered"
)
+// StorageClassType defines bucket storage class
type StorageClassType string
const (
+	// StorageClassStandard storage class: STANDARD
StorageClassStandard StorageClassType = "STANDARD"
- StorageClassWarm StorageClassType = "WARM"
- StorageClassCold StorageClassType = "COLD"
+
+	// StorageClassWarm storage class: WARM
+ StorageClassWarm StorageClassType = "WARM"
+
+	// StorageClassCold storage class: COLD
+ StorageClassCold StorageClassType = "COLD"
+
+ storageClassStandardIA StorageClassType = "STANDARD_IA"
+ storageClassGlacier StorageClassType = "GLACIER"
)
+// PermissionType defines permission type
type PermissionType string
const (
- PermissionRead PermissionType = "READ"
- PermissionWrite PermissionType = "WRITE"
- PermissionReadAcp PermissionType = "READ_ACP"
- PermissionWriteAcp PermissionType = "WRITE_ACP"
+ // PermissionRead permission type: READ
+ PermissionRead PermissionType = "READ"
+
+ // PermissionWrite permission type: WRITE
+ PermissionWrite PermissionType = "WRITE"
+
+ // PermissionReadAcp permission type: READ_ACP
+ PermissionReadAcp PermissionType = "READ_ACP"
+
+ // PermissionWriteAcp permission type: WRITE_ACP
+ PermissionWriteAcp PermissionType = "WRITE_ACP"
+
+ // PermissionFullControl permission type: FULL_CONTROL
PermissionFullControl PermissionType = "FULL_CONTROL"
)
+// GranteeType defines grantee type
type GranteeType string
const (
+ // GranteeGroup grantee type: Group
GranteeGroup GranteeType = "Group"
- GranteeUser GranteeType = "CanonicalUser"
+
+ // GranteeUser grantee type: CanonicalUser
+ GranteeUser GranteeType = "CanonicalUser"
)
+// GroupUriType defines grantee uri type
type GroupUriType string
const (
- GroupAllUsers GroupUriType = "AllUsers"
+ // GroupAllUsers grantee uri type: AllUsers
+ GroupAllUsers GroupUriType = "AllUsers"
+
+ // GroupAuthenticatedUsers grantee uri type: AuthenticatedUsers
GroupAuthenticatedUsers GroupUriType = "AuthenticatedUsers"
- GroupLogDelivery GroupUriType = "LogDelivery"
+
+ // GroupLogDelivery grantee uri type: LogDelivery
+ GroupLogDelivery GroupUriType = "LogDelivery"
)
+// VersioningStatusType defines bucket version status
type VersioningStatusType string
const (
- VersioningStatusEnabled VersioningStatusType = "Enabled"
+ // VersioningStatusEnabled version status: Enabled
+ VersioningStatusEnabled VersioningStatusType = "Enabled"
+
+ // VersioningStatusSuspended version status: Suspended
VersioningStatusSuspended VersioningStatusType = "Suspended"
)
+// ProtocolType defines protocol type
type ProtocolType string
const (
- ProtocolHttp ProtocolType = "http"
+	// ProtocolHttp protocol type: http
+ ProtocolHttp ProtocolType = "http"
+
+	// ProtocolHttps protocol type: https
ProtocolHttps ProtocolType = "https"
)
+// RuleStatusType defines lifecycle rule status
type RuleStatusType string
const (
- RuleStatusEnabled RuleStatusType = "Enabled"
+ // RuleStatusEnabled rule status: Enabled
+ RuleStatusEnabled RuleStatusType = "Enabled"
+
+ // RuleStatusDisabled rule status: Disabled
RuleStatusDisabled RuleStatusType = "Disabled"
)
+// RestoreTierType defines restore options
type RestoreTierType string
const (
+ // RestoreTierExpedited restore options: Expedited
RestoreTierExpedited RestoreTierType = "Expedited"
- RestoreTierStandard RestoreTierType = "Standard"
- RestoreTierBulk RestoreTierType = "Bulk"
+
+ // RestoreTierStandard restore options: Standard
+ RestoreTierStandard RestoreTierType = "Standard"
+
+ // RestoreTierBulk restore options: Bulk
+ RestoreTierBulk RestoreTierType = "Bulk"
)
+// MetadataDirectiveType defines metadata operation indicator
type MetadataDirectiveType string
const (
- CopyMetadata MetadataDirectiveType = "COPY"
- ReplaceNew MetadataDirectiveType = "REPLACE_NEW"
+ // CopyMetadata metadata operation: COPY
+ CopyMetadata MetadataDirectiveType = "COPY"
+
+ // ReplaceNew metadata operation: REPLACE_NEW
+ ReplaceNew MetadataDirectiveType = "REPLACE_NEW"
+
+ // ReplaceMetadata metadata operation: REPLACE
ReplaceMetadata MetadataDirectiveType = "REPLACE"
)
+// EventType defines bucket notification type of events
type EventType string
const (
- ObjectCreatedAll EventType = "ObjectCreated:*"
- ObjectCreatedPut EventType = "ObjectCreated:Put"
+ // ObjectCreatedAll type of events: ObjectCreated:*
+ ObjectCreatedAll EventType = "ObjectCreated:*"
+
+ // ObjectCreatedPut type of events: ObjectCreated:Put
+ ObjectCreatedPut EventType = "ObjectCreated:Put"
+
+ // ObjectCreatedPost type of events: ObjectCreated:Post
ObjectCreatedPost EventType = "ObjectCreated:Post"
- ObjectCreatedCopy EventType = "ObjectCreated:Copy"
+ // ObjectCreatedCopy type of events: ObjectCreated:Copy
+ ObjectCreatedCopy EventType = "ObjectCreated:Copy"
+
+ // ObjectCreatedCompleteMultipartUpload type of events: ObjectCreated:CompleteMultipartUpload
ObjectCreatedCompleteMultipartUpload EventType = "ObjectCreated:CompleteMultipartUpload"
- ObjectRemovedAll EventType = "ObjectRemoved:*"
- ObjectRemovedDelete EventType = "ObjectRemoved:Delete"
- ObjectRemovedDeleteMarkerCreated EventType = "ObjectRemoved:DeleteMarkerCreated"
+
+ // ObjectRemovedAll type of events: ObjectRemoved:*
+ ObjectRemovedAll EventType = "ObjectRemoved:*"
+
+ // ObjectRemovedDelete type of events: ObjectRemoved:Delete
+ ObjectRemovedDelete EventType = "ObjectRemoved:Delete"
+
+ // ObjectRemovedDeleteMarkerCreated type of events: ObjectRemoved:DeleteMarkerCreated
+ ObjectRemovedDeleteMarkerCreated EventType = "ObjectRemoved:DeleteMarkerCreated"
+)
+
+// PayerType defines type of payer
+type PayerType string
+
+const (
+ // BucketOwnerPayer type of payer: BucketOwner
+ BucketOwnerPayer PayerType = "BucketOwner"
+
+ // RequesterPayer type of payer: Requester
+ RequesterPayer PayerType = "Requester"
+
+ // Requester header for requester-Pays
+ Requester PayerType = "requester"
+)
+
+// FetchPolicyStatusType defines type of fetch policy status
+type FetchPolicyStatusType string
+
+const (
+ // FetchStatusOpen type of status: open
+ FetchStatusOpen FetchPolicyStatusType = "open"
+
+ // FetchStatusClosed type of status: closed
+ FetchStatusClosed FetchPolicyStatusType = "closed"
+)
+
+// AvailableZoneType defines type of az redundancy
+type AvailableZoneType string
+
+const (
+ AvailableZoneMultiAz AvailableZoneType = "3az"
+)
+
+// FSStatusType defines type of file system status
+type FSStatusType string
+
+const (
+ FSStatusEnabled FSStatusType = "Enabled"
+ FSStatusDisabled FSStatusType = "Disabled"
)
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/convert.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/convert.go
index c0d5149e60a..fd42a641332 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/convert.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/convert.go
@@ -14,10 +14,12 @@ package obs
import (
"bytes"
+ "encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
+ "net/url"
"reflect"
"strings"
"time"
@@ -37,6 +39,7 @@ func cleanHeaderPrefix(header http.Header) map[string][]string {
return responseHeaders
}
+// ParseStringToEventType converts string value to EventType value and returns it
func ParseStringToEventType(value string) (ret EventType) {
switch value {
case "ObjectCreated:*", "s3:ObjectCreated:*":
@@ -61,6 +64,7 @@ func ParseStringToEventType(value string) (ret EventType) {
return
}
+// ParseStringToStorageClassType converts string value to StorageClassType value and returns it
func ParseStringToStorageClassType(value string) (ret StorageClassType) {
switch value {
case "STANDARD":
@@ -75,15 +79,24 @@ func ParseStringToStorageClassType(value string) (ret StorageClassType) {
return
}
-func convertGrantToXml(grant Grant, isObs bool, isBucket bool) string {
- xml := make([]string, 0, 4)
- if !isObs {
- xml = append(xml, fmt.Sprintf("<Grantee xsi:type=\"%s\">", grant.Grantee.Type))
+func prepareGrantURI(grant Grant) string {
+ if grant.Grantee.URI == GroupAllUsers || grant.Grantee.URI == GroupAuthenticatedUsers {
+ return fmt.Sprintf("<URI>%s%s</URI>", "http://acs.amazonaws.com/groups/global/", grant.Grantee.URI)
+ }
+ if grant.Grantee.URI == GroupLogDelivery {
+ return fmt.Sprintf("<URI>%s%s</URI>", "http://acs.amazonaws.com/groups/s3/", grant.Grantee.URI)
}
+ return fmt.Sprintf("<URI>%s</URI>", grant.Grantee.URI)
+}
+
+func convertGrantToXML(grant Grant, isObs bool, isBucket bool) string {
+ xml := make([]string, 0, 4)
if grant.Grantee.Type == GranteeUser {
if isObs {
xml = append(xml, "<Grantee>")
+ } else {
+ xml = append(xml, fmt.Sprintf("<Grantee xsi:type=\"%s\">", grant.Grantee.Type))
}
if grant.Grantee.ID != "" {
granteeID := XmlTranscoding(grant.Grantee.ID)
@@ -96,13 +109,8 @@ func convertGrantToXml(grant Grant, isObs bool, isBucket bool) string {
xml = append(xml, "</Grantee>")
} else {
if !isObs {
- if grant.Grantee.URI == GroupAllUsers || grant.Grantee.URI == GroupAuthenticatedUsers {
- xml = append(xml, fmt.Sprintf("<URI>%s%s</URI>", "http://acs.amazonaws.com/groups/global/", grant.Grantee.URI))
- } else if grant.Grantee.URI == GroupLogDelivery {
- xml = append(xml, fmt.Sprintf("<URI>%s%s</URI>", "http://acs.amazonaws.com/groups/s3/", grant.Grantee.URI))
- } else {
- xml = append(xml, fmt.Sprintf("<URI>%s</URI>", grant.Grantee.URI))
- }
+ xml = append(xml, fmt.Sprintf("<Grantee xsi:type=\"%s\">", grant.Grantee.Type))
+ xml = append(xml, prepareGrantURI(grant))
xml = append(xml, "</Grantee>")
} else if grant.Grantee.URI == GroupAllUsers {
xml = append(xml, "<Grantee>")
@@ -121,6 +129,14 @@ func convertGrantToXml(grant Grant, isObs bool, isBucket bool) string {
return strings.Join(xml, "")
}
+func hasLoggingTarget(input BucketLoggingStatus) bool {
+ if input.TargetBucket != "" || input.TargetPrefix != "" || len(input.TargetGrants) > 0 {
+ return true
+ }
+ return false
+}
+
+// ConvertLoggingStatusToXml converts BucketLoggingStatus value to XML data and returns it
func ConvertLoggingStatusToXml(input BucketLoggingStatus, returnMd5 bool, isObs bool) (data string, md5 string) {
grantsLength := len(input.TargetGrants)
xml := make([]string, 0, 8+grantsLength)
@@ -130,7 +146,7 @@ func ConvertLoggingStatusToXml(input BucketLoggingStatus, returnMd5 bool, isObs
agency := XmlTranscoding(input.Agency)
xml = append(xml, fmt.Sprintf("<Agency>%s</Agency>", agency))
}
- if input.TargetBucket != "" || input.TargetPrefix != "" || grantsLength > 0 {
+ if hasLoggingTarget(input) {
xml = append(xml, "<LoggingEnabled>")
if input.TargetBucket != "" {
xml = append(xml, fmt.Sprintf("<TargetBucket>%s</TargetBucket>", input.TargetBucket))
@@ -142,7 +158,7 @@ func ConvertLoggingStatusToXml(input BucketLoggingStatus, returnMd5 bool, isObs
if grantsLength > 0 {
xml = append(xml, "<TargetGrants>")
for _, grant := range input.TargetGrants {
- xml = append(xml, convertGrantToXml(grant, isObs, false))
+ xml = append(xml, convertGrantToXML(grant, isObs, false))
}
xml = append(xml, "</TargetGrants>")
}
@@ -157,6 +173,7 @@ func ConvertLoggingStatusToXml(input BucketLoggingStatus, returnMd5 bool, isObs
return
}
+// ConvertAclToXml converts AccessControlPolicy value to XML data and returns it
func ConvertAclToXml(input AccessControlPolicy, returnMd5 bool, isObs bool) (data string, md5 string) {
xml := make([]string, 0, 4+len(input.Grants))
ownerID := XmlTranscoding(input.Owner.ID)
@@ -172,7 +189,7 @@ func ConvertAclToXml(input AccessControlPolicy, returnMd5 bool, isObs bool) (dat
xml = append(xml, "</Owner><AccessControlList>")
}
for _, grant := range input.Grants {
- xml = append(xml, convertGrantToXml(grant, isObs, false))
+ xml = append(xml, convertGrantToXML(grant, isObs, false))
}
xml = append(xml, "</AccessControlList></AccessControlPolicy>")
data = strings.Join(xml, "")
@@ -182,7 +199,7 @@ func ConvertAclToXml(input AccessControlPolicy, returnMd5 bool, isObs bool) (dat
return
}
-func convertBucketAclToXml(input AccessControlPolicy, returnMd5 bool, isObs bool) (data string, md5 string) {
+func convertBucketACLToXML(input AccessControlPolicy, returnMd5 bool, isObs bool) (data string, md5 string) {
xml := make([]string, 0, 4+len(input.Grants))
ownerID := XmlTranscoding(input.Owner.ID)
xml = append(xml, fmt.Sprintf("<AccessControlPolicy><Owner><ID>%s</ID>", ownerID))
@@ -194,7 +211,7 @@ func convertBucketAclToXml(input AccessControlPolicy, returnMd5 bool, isObs bool
xml = append(xml, "</Owner><AccessControlList>")
for _, grant := range input.Grants {
- xml = append(xml, convertGrantToXml(grant, isObs, true))
+ xml = append(xml, convertGrantToXML(grant, isObs, true))
}
xml = append(xml, "</AccessControlList></AccessControlPolicy>")
data = strings.Join(xml, "")
@@ -204,7 +221,7 @@ func convertBucketAclToXml(input AccessControlPolicy, returnMd5 bool, isObs bool
return
}
-func convertConditionToXml(condition Condition) string {
+func convertConditionToXML(condition Condition) string {
xml := make([]string, 0, 2)
if condition.KeyPrefixEquals != "" {
keyPrefixEquals := XmlTranscoding(condition.KeyPrefixEquals)
@@ -219,6 +236,40 @@ func convertConditionToXml(condition Condition) string {
return ""
}
+func prepareRoutingRule(input BucketWebsiteConfiguration) string {
+ xml := make([]string, 0, len(input.RoutingRules)*10)
+ for _, routingRule := range input.RoutingRules {
+ xml = append(xml, "<RoutingRule>")
+ xml = append(xml, "<Redirect>")
+ if routingRule.Redirect.Protocol != "" {
+ xml = append(xml, fmt.Sprintf("<Protocol>%s</Protocol>", routingRule.Redirect.Protocol))
+ }
+ if routingRule.Redirect.HostName != "" {
+ xml = append(xml, fmt.Sprintf("<HostName>%s</HostName>", routingRule.Redirect.HostName))
+ }
+ if routingRule.Redirect.ReplaceKeyPrefixWith != "" {
+ replaceKeyPrefixWith := XmlTranscoding(routingRule.Redirect.ReplaceKeyPrefixWith)
+ xml = append(xml, fmt.Sprintf("<ReplaceKeyPrefixWith>%s</ReplaceKeyPrefixWith>", replaceKeyPrefixWith))
+ }
+
+ if routingRule.Redirect.ReplaceKeyWith != "" {
+ replaceKeyWith := XmlTranscoding(routingRule.Redirect.ReplaceKeyWith)
+ xml = append(xml, fmt.Sprintf("<ReplaceKeyWith>%s</ReplaceKeyWith>", replaceKeyWith))
+ }
+ if routingRule.Redirect.HttpRedirectCode != "" {
+ xml = append(xml, fmt.Sprintf("<HttpRedirectCode>%s</HttpRedirectCode>", routingRule.Redirect.HttpRedirectCode))
+ }
+ xml = append(xml, "</Redirect>")
+
+ if ret := convertConditionToXML(routingRule.Condition); ret != "" {
+ xml = append(xml, ret)
+ }
+ xml = append(xml, "</RoutingRule>")
+ }
+ return strings.Join(xml, "")
+}
+
+// ConvertWebsiteConfigurationToXml converts BucketWebsiteConfiguration value to XML data and returns it
func ConvertWebsiteConfigurationToXml(input BucketWebsiteConfiguration, returnMd5 bool) (data string, md5 string) {
routingRuleLength := len(input.RoutingRules)
xml := make([]string, 0, 6+routingRuleLength*10)
@@ -241,34 +292,7 @@ func ConvertWebsiteConfigurationToXml(input BucketWebsiteConfiguration, returnMd
}
if routingRuleLength > 0 {
xml = append(xml, "<RoutingRules>")
- for _, routingRule := range input.RoutingRules {
- xml = append(xml, "<RoutingRule>")
- xml = append(xml, "<Redirect>")
- if routingRule.Redirect.Protocol != "" {
- xml = append(xml, fmt.Sprintf("<Protocol>%s</Protocol>", routingRule.Redirect.Protocol))
- }
- if routingRule.Redirect.HostName != "" {
- xml = append(xml, fmt.Sprintf("<HostName>%s</HostName>", routingRule.Redirect.HostName))
- }
- if routingRule.Redirect.ReplaceKeyPrefixWith != "" {
- replaceKeyPrefixWith := XmlTranscoding(routingRule.Redirect.ReplaceKeyPrefixWith)
- xml = append(xml, fmt.Sprintf("<ReplaceKeyPrefixWith>%s</ReplaceKeyPrefixWith>", replaceKeyPrefixWith))
- }
-
- if routingRule.Redirect.ReplaceKeyWith != "" {
- replaceKeyWith := XmlTranscoding(routingRule.Redirect.ReplaceKeyWith)
- xml = append(xml, fmt.Sprintf("<ReplaceKeyWith>%s</ReplaceKeyWith>", replaceKeyWith))
- }
- if routingRule.Redirect.HttpRedirectCode != "" {
- xml = append(xml, fmt.Sprintf("<HttpRedirectCode>%s</HttpRedirectCode>", routingRule.Redirect.HttpRedirectCode))
- }
- xml = append(xml, "</Redirect>")
-
- if ret := convertConditionToXml(routingRule.Condition); ret != "" {
- xml = append(xml, ret)
- }
- xml = append(xml, "")
- }
+ xml = append(xml, prepareRoutingRule(input))
xml = append(xml, "</RoutingRules>")
}
}
@@ -281,7 +305,7 @@ func ConvertWebsiteConfigurationToXml(input BucketWebsiteConfiguration, returnMd
return
}
-func convertTransitionsToXml(transitions []Transition, isObs bool) string {
+func convertTransitionsToXML(transitions []Transition, isObs bool) string {
if length := len(transitions); length > 0 {
xml := make([]string, 0, length)
for _, transition := range transitions {
@@ -294,10 +318,10 @@ func convertTransitionsToXml(transitions []Transition, isObs bool) string {
if temp != "" {
if !isObs {
storageClass := string(transition.StorageClass)
- if transition.StorageClass == "WARM" {
- storageClass = "STANDARD_IA"
- } else if transition.StorageClass == "COLD" {
- storageClass = "GLACIER"
+ if transition.StorageClass == StorageClassWarm {
+ storageClass = string(storageClassStandardIA)
+ } else if transition.StorageClass == StorageClassCold {
+ storageClass = string(storageClassGlacier)
}
xml = append(xml, fmt.Sprintf("%s<StorageClass>%s</StorageClass></Transition>", temp, storageClass))
} else {
@@ -310,7 +334,7 @@ func convertTransitionsToXml(transitions []Transition, isObs bool) string {
return ""
}
-func convertExpirationToXml(expiration Expiration) string {
+func convertExpirationToXML(expiration Expiration) string {
if expiration.Days > 0 {
return fmt.Sprintf("<Expiration><Days>%d</Days></Expiration>", expiration.Days)
} else if !expiration.Date.IsZero() {
@@ -318,17 +342,17 @@ func convertExpirationToXml(expiration Expiration) string {
}
return ""
}
-func convertNoncurrentVersionTransitionsToXml(noncurrentVersionTransitions []NoncurrentVersionTransition, isObs bool) string {
+func convertNoncurrentVersionTransitionsToXML(noncurrentVersionTransitions []NoncurrentVersionTransition, isObs bool) string {
if length := len(noncurrentVersionTransitions); length > 0 {
xml := make([]string, 0, length)
for _, noncurrentVersionTransition := range noncurrentVersionTransitions {
if noncurrentVersionTransition.NoncurrentDays > 0 {
storageClass := string(noncurrentVersionTransition.StorageClass)
if !isObs {
- if storageClass == "WARM" {
- storageClass = "STANDARD_IA"
- } else if storageClass == "COLD" {
- storageClass = "GLACIER"
+ if storageClass == string(StorageClassWarm) {
+ storageClass = string(storageClassStandardIA)
+ } else if storageClass == string(StorageClassCold) {
+ storageClass = string(storageClassGlacier)
}
}
xml = append(xml, fmt.Sprintf("<NoncurrentVersionTransition><NoncurrentDays>%d</NoncurrentDays>"+
@@ -340,13 +364,14 @@ func convertNoncurrentVersionTransitionsToXml(noncurrentVersionTransitions []Non
}
return ""
}
-func convertNoncurrentVersionExpirationToXml(noncurrentVersionExpiration NoncurrentVersionExpiration) string {
+func convertNoncurrentVersionExpirationToXML(noncurrentVersionExpiration NoncurrentVersionExpiration) string {
if noncurrentVersionExpiration.NoncurrentDays > 0 {
return fmt.Sprintf("<NoncurrentVersionExpiration><NoncurrentDays>%d</NoncurrentDays></NoncurrentVersionExpiration>", noncurrentVersionExpiration.NoncurrentDays)
}
return ""
}
+// ConvertLifecyleConfigurationToXml converts BucketLifecyleConfiguration value to XML data and returns it
func ConvertLifecyleConfigurationToXml(input BucketLifecyleConfiguration, returnMd5 bool, isObs bool) (data string, md5 string) {
xml := make([]string, 0, 2+len(input.LifecycleRules)*9)
xml = append(xml, "<LifecycleConfiguration>")
@@ -359,16 +384,16 @@ func ConvertLifecyleConfigurationToXml(input BucketLifecyleConfiguration, return
lifecyleRulePrefix := XmlTranscoding(lifecyleRule.Prefix)
xml = append(xml, fmt.Sprintf("<Prefix>%s</Prefix>", lifecyleRulePrefix))
xml = append(xml, fmt.Sprintf("<Status>%s</Status>", lifecyleRule.Status))
- if ret := convertTransitionsToXml(lifecyleRule.Transitions, isObs); ret != "" {
+ if ret := convertTransitionsToXML(lifecyleRule.Transitions, isObs); ret != "" {
xml = append(xml, ret)
}
- if ret := convertExpirationToXml(lifecyleRule.Expiration); ret != "" {
+ if ret := convertExpirationToXML(lifecyleRule.Expiration); ret != "" {
xml = append(xml, ret)
}
- if ret := convertNoncurrentVersionTransitionsToXml(lifecyleRule.NoncurrentVersionTransitions, isObs); ret != "" {
+ if ret := convertNoncurrentVersionTransitionsToXML(lifecyleRule.NoncurrentVersionTransitions, isObs); ret != "" {
xml = append(xml, ret)
}
- if ret := convertNoncurrentVersionExpirationToXml(lifecyleRule.NoncurrentVersionExpiration); ret != "" {
+ if ret := convertNoncurrentVersionExpirationToXML(lifecyleRule.NoncurrentVersionExpiration); ret != "" {
xml = append(xml, ret)
}
xml = append(xml, "</Rule>")
@@ -381,7 +406,7 @@ func ConvertLifecyleConfigurationToXml(input BucketLifecyleConfiguration, return
return
}
-func converntFilterRulesToXml(filterRules []FilterRule, isObs bool) string {
+func converntFilterRulesToXML(filterRules []FilterRule, isObs bool) string {
if length := len(filterRules); length > 0 {
xml := make([]string, 0, length*4)
for _, filterRule := range filterRules {
@@ -398,14 +423,13 @@ func converntFilterRulesToXml(filterRules []FilterRule, isObs bool) string {
}
if !isObs {
return fmt.Sprintf("<Filter><S3Key>%s</S3Key></Filter>", strings.Join(xml, ""))
- } else {
- return fmt.Sprintf("<Filter><Object>%s</Object></Filter>", strings.Join(xml, ""))
}
+ return fmt.Sprintf("<Filter><Object>%s</Object></Filter>", strings.Join(xml, ""))
}
return ""
}
-func converntEventsToXml(events []EventType, isObs bool) string {
+func converntEventsToXML(events []EventType, isObs bool) string {
if length := len(events); length > 0 {
xml := make([]string, 0, length)
if !isObs {
@@ -422,7 +446,7 @@ func converntEventsToXml(events []EventType, isObs bool) string {
return ""
}
-func converntConfigureToXml(topicConfiguration TopicConfiguration, xmlElem string, isObs bool) string {
+func converntConfigureToXML(topicConfiguration TopicConfiguration, xmlElem string, isObs bool) string {
xml := make([]string, 0, 6)
xml = append(xml, xmlElem)
if topicConfiguration.ID != "" {
@@ -432,10 +456,10 @@ func converntConfigureToXml(topicConfiguration TopicConfiguration, xmlElem strin
topicConfigurationTopic := XmlTranscoding(topicConfiguration.Topic)
xml = append(xml, fmt.Sprintf("<Topic>%s</Topic>", topicConfigurationTopic))
- if ret := converntEventsToXml(topicConfiguration.Events, isObs); ret != "" {
+ if ret := converntEventsToXML(topicConfiguration.Events, isObs); ret != "" {
xml = append(xml, ret)
}
- if ret := converntFilterRulesToXml(topicConfiguration.FilterRules, isObs); ret != "" {
+ if ret := converntFilterRulesToXML(topicConfiguration.FilterRules, isObs); ret != "" {
xml = append(xml, ret)
}
tempElem := xmlElem[0:1] + "/" + xmlElem[1:]
@@ -443,6 +467,7 @@ func converntConfigureToXml(topicConfiguration TopicConfiguration, xmlElem strin
return strings.Join(xml, "")
}
+// ConverntObsRestoreToXml converts RestoreObjectInput value to XML data and returns it
func ConverntObsRestoreToXml(restoreObjectInput RestoreObjectInput) string {
xml := make([]string, 0, 2)
xml = append(xml, fmt.Sprintf("<RestoreRequest><Days>%d</Days>", restoreObjectInput.Days))
@@ -454,11 +479,12 @@ func ConverntObsRestoreToXml(restoreObjectInput RestoreObjectInput) string {
return data
}
+// ConvertNotificationToXml converts BucketNotification value to XML data and returns it
func ConvertNotificationToXml(input BucketNotification, returnMd5 bool, isObs bool) (data string, md5 string) {
xml := make([]string, 0, 2+len(input.TopicConfigurations)*6)
xml = append(xml, "<NotificationConfiguration>")
for _, topicConfiguration := range input.TopicConfigurations {
- ret := converntConfigureToXml(topicConfiguration, "<TopicConfiguration>", isObs)
+ ret := converntConfigureToXML(topicConfiguration, "<TopicConfiguration>", isObs)
xml = append(xml, ret)
}
xml = append(xml, "</NotificationConfiguration>")
@@ -469,6 +495,7 @@ func ConvertNotificationToXml(input BucketNotification, returnMd5 bool, isObs bo
return
}
+// ConvertCompleteMultipartUploadInputToXml converts CompleteMultipartUploadInput value to XML data and returns it
func ConvertCompleteMultipartUploadInputToXml(input CompleteMultipartUploadInput, returnMd5 bool) (data string, md5 string) {
xml := make([]string, 0, 2+len(input.Parts)*4)
xml = append(xml, "<CompleteMultipartUpload>")
@@ -486,6 +513,31 @@ func ConvertCompleteMultipartUploadInputToXml(input CompleteMultipartUploadInput
return
}
+func convertDeleteObjectsToXML(input DeleteObjectsInput) (data string, md5 string) {
+ xml := make([]string, 0, 4+len(input.Objects)*4)
+ xml = append(xml, "<Delete>")
+ if input.Quiet {
+ xml = append(xml, fmt.Sprintf("<Quiet>%t</Quiet>", input.Quiet))
+ }
+ if input.EncodingType != "" {
+ encodingType := XmlTranscoding(input.EncodingType)
+ xml = append(xml, fmt.Sprintf("<EncodingType>%s</EncodingType>", encodingType))
+ }
+ for _, obj := range input.Objects {
+ xml = append(xml, "<Object>")
+ xml = append(xml, fmt.Sprintf("<Key>%s</Key>", XmlTranscoding(obj.Key)))
+ xml = append(xml, "</Object>")
+ }
+ xml = append(xml, "</Delete>")
+ data = strings.Join(xml, "")
+ md5 = Base64Md5([]byte(data))
+ return
+}
+
func parseSseHeader(responseHeaders map[string][]string) (sseHeader ISseHeader) {
if ret, ok := responseHeaders[HEADER_SSEC_ENCRYPTION]; ok {
sseCHeader := SseCHeader{Encryption: ret[0]}
@@ -505,7 +557,26 @@ func parseSseHeader(responseHeaders map[string][]string) (sseHeader ISseHeader)
return
}
-func ParseGetObjectMetadataOutput(output *GetObjectMetadataOutput) {
+func parseCorsHeader(output BaseModel) (AllowOrigin, AllowHeader, AllowMethod, ExposeHeader string, MaxAgeSeconds int) {
+ if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_ALLOW_ORIGIN]; ok {
+ AllowOrigin = ret[0]
+ }
+ if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_ALLOW_HEADERS]; ok {
+ AllowHeader = ret[0]
+ }
+ if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_MAX_AGE]; ok {
+ MaxAgeSeconds = StringToInt(ret[0], 0)
+ }
+ if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_ALLOW_METHODS]; ok {
+ AllowMethod = ret[0]
+ }
+ if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_EXPOSE_HEADERS]; ok {
+ ExposeHeader = ret[0]
+ }
+ return
+}
+
+func parseUnCommonHeader(output *GetObjectMetadataOutput) {
if ret, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
output.VersionId = ret[0]
}
@@ -519,11 +590,17 @@ func ParseGetObjectMetadataOutput(output *GetObjectMetadataOutput) {
output.Restore = ret[0]
}
if ret, ok := output.ResponseHeaders[HEADER_OBJECT_TYPE]; ok {
- output.Restore = ret[0]
+ output.ObjectType = ret[0]
}
if ret, ok := output.ResponseHeaders[HEADER_NEXT_APPEND_POSITION]; ok {
- output.Restore = ret[0]
+ output.NextAppendPosition = ret[0]
}
+}
+
+// ParseGetObjectMetadataOutput sets GetObjectMetadataOutput field values with response headers
+func ParseGetObjectMetadataOutput(output *GetObjectMetadataOutput) {
+ output.AllowOrigin, output.AllowHeader, output.AllowMethod, output.ExposeHeader, output.MaxAgeSeconds = parseCorsHeader(output.BaseModel)
+ parseUnCommonHeader(output)
if ret, ok := output.ResponseHeaders[HEADER_STORAGE_CLASS2]; ok {
output.StorageClass = ParseStringToStorageClassType(ret[0])
}
@@ -533,21 +610,6 @@ func ParseGetObjectMetadataOutput(output *GetObjectMetadataOutput) {
if ret, ok := output.ResponseHeaders[HEADER_CONTENT_TYPE]; ok {
output.ContentType = ret[0]
}
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_ALLOW_ORIGIN]; ok {
- output.AllowOrigin = ret[0]
- }
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_ALLOW_HEADERS]; ok {
- output.AllowHeader = ret[0]
- }
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_MAX_AGE]; ok {
- output.MaxAgeSeconds = StringToInt(ret[0], 0)
- }
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_ALLOW_METHODS]; ok {
- output.AllowMethod = ret[0]
- }
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_EXPOSE_HEADERS]; ok {
- output.ExposeHeader = ret[0]
- }
output.SseHeader = parseSseHeader(output.ResponseHeaders)
if ret, ok := output.ResponseHeaders[HEADER_LASTMODIFIED]; ok {
@@ -573,6 +635,7 @@ func ParseGetObjectMetadataOutput(output *GetObjectMetadataOutput) {
}
+// ParseCopyObjectOutput sets CopyObjectOutput field values with response headers
func ParseCopyObjectOutput(output *CopyObjectOutput) {
if ret, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
output.VersionId = ret[0]
@@ -583,6 +646,7 @@ func ParseCopyObjectOutput(output *CopyObjectOutput) {
}
}
+// ParsePutObjectOutput sets PutObjectOutput field values with response headers
func ParsePutObjectOutput(output *PutObjectOutput) {
if ret, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
output.VersionId = ret[0]
@@ -596,10 +660,12 @@ func ParsePutObjectOutput(output *PutObjectOutput) {
}
}
+// ParseInitiateMultipartUploadOutput sets InitiateMultipartUploadOutput field values with response headers
func ParseInitiateMultipartUploadOutput(output *InitiateMultipartUploadOutput) {
output.SseHeader = parseSseHeader(output.ResponseHeaders)
}
+// ParseUploadPartOutput sets UploadPartOutput field values with response headers
func ParseUploadPartOutput(output *UploadPartOutput) {
output.SseHeader = parseSseHeader(output.ResponseHeaders)
if ret, ok := output.ResponseHeaders[HEADER_ETAG]; ok {
@@ -607,6 +673,7 @@ func ParseUploadPartOutput(output *UploadPartOutput) {
}
}
+// ParseCompleteMultipartUploadOutput sets CompleteMultipartUploadOutput field values with response headers
func ParseCompleteMultipartUploadOutput(output *CompleteMultipartUploadOutput) {
output.SseHeader = parseSseHeader(output.ResponseHeaders)
if ret, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
@@ -614,11 +681,14 @@ func ParseCompleteMultipartUploadOutput(output *CompleteMultipartUploadOutput) {
}
}
+// ParseCopyPartOutput sets CopyPartOutput field values with response headers
func ParseCopyPartOutput(output *CopyPartOutput) {
output.SseHeader = parseSseHeader(output.ResponseHeaders)
}
+// ParseGetBucketMetadataOutput sets GetBucketMetadataOutput field values with response headers
func ParseGetBucketMetadataOutput(output *GetBucketMetadataOutput) {
+ output.AllowOrigin, output.AllowHeader, output.AllowMethod, output.ExposeHeader, output.MaxAgeSeconds = parseCorsHeader(output.BaseModel)
if ret, ok := output.ResponseHeaders[HEADER_STORAGE_CLASS]; ok {
output.StorageClass = ParseStringToStorageClassType(ret[0])
} else if ret, ok := output.ResponseHeaders[HEADER_STORAGE_CLASS2]; ok {
@@ -632,26 +702,35 @@ func ParseGetBucketMetadataOutput(output *GetBucketMetadataOutput) {
} else if ret, ok := output.ResponseHeaders[HEADER_BUCKET_LOCATION_OBS]; ok {
output.Location = ret[0]
}
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_ALLOW_ORIGIN]; ok {
- output.AllowOrigin = ret[0]
+ if ret, ok := output.ResponseHeaders[HEADER_EPID_HEADERS]; ok {
+ output.Epid = ret[0]
}
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_ALLOW_HEADERS]; ok {
- output.AllowHeader = ret[0]
+ if ret, ok := output.ResponseHeaders[HEADER_AZ_REDUNDANCY]; ok {
+ output.AvailableZone = ret[0]
}
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_MAX_AGE]; ok {
- output.MaxAgeSeconds = StringToInt(ret[0], 0)
+ if ret, ok := output.ResponseHeaders[headerFSFileInterface]; ok {
+ output.FSStatus = parseStringToFSStatusType(ret[0])
+ } else {
+ output.FSStatus = FSStatusDisabled
}
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_ALLOW_METHODS]; ok {
- output.AllowMethod = ret[0]
+}
+
+func parseContentHeader(output *SetObjectMetadataOutput) {
+ if ret, ok := output.ResponseHeaders[HEADER_CONTENT_DISPOSITION]; ok {
+ output.ContentDisposition = ret[0]
}
- if ret, ok := output.ResponseHeaders[HEADER_ACCESS_CONRTOL_EXPOSE_HEADERS]; ok {
- output.ExposeHeader = ret[0]
+ if ret, ok := output.ResponseHeaders[HEADER_CONTENT_ENCODING]; ok {
+ output.ContentEncoding = ret[0]
}
- if ret, ok := output.ResponseHeaders[HEADER_EPID_HEADERS]; ok {
- output.Epid = ret[0]
+ if ret, ok := output.ResponseHeaders[HEADER_CONTENT_LANGUAGE]; ok {
+ output.ContentLanguage = ret[0]
+ }
+ if ret, ok := output.ResponseHeaders[HEADER_CONTENT_TYPE]; ok {
+ output.ContentType = ret[0]
}
}
+// ParseSetObjectMetadataOutput sets SetObjectMetadataOutput field values with response headers
func ParseSetObjectMetadataOutput(output *SetObjectMetadataOutput) {
if ret, ok := output.ResponseHeaders[HEADER_STORAGE_CLASS]; ok {
output.StorageClass = ParseStringToStorageClassType(ret[0])
@@ -664,18 +743,7 @@ func ParseSetObjectMetadataOutput(output *SetObjectMetadataOutput) {
if ret, ok := output.ResponseHeaders[HEADER_CACHE_CONTROL]; ok {
output.CacheControl = ret[0]
}
- if ret, ok := output.ResponseHeaders[HEADER_CONTENT_DISPOSITION]; ok {
- output.ContentDisposition = ret[0]
- }
- if ret, ok := output.ResponseHeaders[HEADER_CONTENT_ENCODING]; ok {
- output.ContentEncoding = ret[0]
- }
- if ret, ok := output.ResponseHeaders[HEADER_CONTENT_LANGUAGE]; ok {
- output.ContentLanguage = ret[0]
- }
- if ret, ok := output.ResponseHeaders[HEADER_CONTENT_TYPE]; ok {
- output.ContentType = ret[0]
- }
+ parseContentHeader(output)
if ret, ok := output.ResponseHeaders[HEADER_EXPIRES]; ok {
output.Expires = ret[0]
}
@@ -693,9 +761,11 @@ func ParseSetObjectMetadataOutput(output *SetObjectMetadataOutput) {
}
}
}
+
+// ParseDeleteObjectOutput sets DeleteObjectOutput field values with response headers
func ParseDeleteObjectOutput(output *DeleteObjectOutput) {
- if versionId, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
- output.VersionId = versionId[0]
+ if versionID, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
+ output.VersionId = versionID[0]
}
if deleteMarker, ok := output.ResponseHeaders[HEADER_DELETE_MARKER]; ok {
@@ -703,6 +773,7 @@ func ParseDeleteObjectOutput(output *DeleteObjectOutput) {
}
}
+// ParseGetObjectOutput sets GetObjectOutput field values with response headers
func ParseGetObjectOutput(output *GetObjectOutput) {
ParseGetObjectMetadataOutput(&output.GetObjectMetadataOutput)
if ret, ok := output.ResponseHeaders[HEADER_DELETE_MARKER]; ok {
@@ -725,6 +796,7 @@ func ParseGetObjectOutput(output *GetObjectOutput) {
}
}
+// ConvertRequestToIoReaderV2 converts req to XML data
func ConvertRequestToIoReaderV2(req interface{}) (io.Reader, string, error) {
data, err := TransToXml(req)
if err == nil {
@@ -736,6 +808,7 @@ func ConvertRequestToIoReaderV2(req interface{}) (io.Reader, string, error) {
return nil, "", err
}
+// ConvertRequestToIoReader converts req to XML data
func ConvertRequestToIoReader(req interface{}) (io.Reader, error) {
body, err := TransToXml(req)
if err == nil {
@@ -747,26 +820,41 @@ func ConvertRequestToIoReader(req interface{}) (io.Reader, error) {
return nil, err
}
+func parseBucketPolicyOutput(s reflect.Type, baseModel IBaseModel, body []byte) {
+ for i := 0; i < s.NumField(); i++ {
+ if s.Field(i).Tag == "json:\"body\"" {
+ reflect.ValueOf(baseModel).Elem().FieldByName(s.Field(i).Name).SetString(string(body))
+ break
+ }
+ }
+}
+
+// ParseResponseToBaseModel gets response from OBS
func ParseResponseToBaseModel(resp *http.Response, baseModel IBaseModel, xmlResult bool, isObs bool) (err error) {
readCloser, ok := baseModel.(IReadCloser)
if !ok {
- defer resp.Body.Close()
- body, err := ioutil.ReadAll(resp.Body)
+ defer func() {
+ errMsg := resp.Body.Close()
+ if errMsg != nil {
+ doLog(LEVEL_WARN, "Failed to close response body")
+ }
+ }()
+ var body []byte
+ body, err = ioutil.ReadAll(resp.Body)
if err == nil && len(body) > 0 {
if xmlResult {
err = ParseXml(body, baseModel)
- if err != nil {
- doLog(LEVEL_ERROR, "Unmarshal error: %v", err)
- }
} else {
s := reflect.TypeOf(baseModel).Elem()
- for i := 0; i < s.NumField(); i++ {
- if s.Field(i).Tag == "json:\"body\"" {
- reflect.ValueOf(baseModel).Elem().FieldByName(s.Field(i).Name).SetString(string(body))
- break
- }
+ if reflect.TypeOf(baseModel).Elem().Name() == "GetBucketPolicyOutput" {
+ parseBucketPolicyOutput(s, baseModel, body)
+ } else {
+ err = parseJSON(body, baseModel)
}
}
+ if err != nil {
+ doLog(LEVEL_ERROR, "Unmarshal error: %v", err)
+ }
}
} else {
readCloser.setReadCloser(resp.Body)
@@ -776,17 +864,200 @@ func ParseResponseToBaseModel(resp *http.Response, baseModel IBaseModel, xmlResu
responseHeaders := cleanHeaderPrefix(resp.Header)
baseModel.setResponseHeaders(responseHeaders)
if values, ok := responseHeaders[HEADER_REQUEST_ID]; ok {
- baseModel.setRequestId(values[0])
+ baseModel.setRequestID(values[0])
}
return
}
+// ParseResponseToObsError gets obsError from OBS
func ParseResponseToObsError(resp *http.Response, isObs bool) error {
+ isJson := false
+ if contentType, ok := resp.Header[HEADER_CONTENT_TYPE_CAML]; ok {
+ jsonType, _ := mimeTypes["json"]
+ isJson = contentType[0] == jsonType
+ }
obsError := ObsError{}
- respError := ParseResponseToBaseModel(resp, &obsError, true, isObs)
+ respError := ParseResponseToBaseModel(resp, &obsError, !isJson, isObs)
if respError != nil {
doLog(LEVEL_WARN, "Parse response to BaseModel with error: %v", respError)
}
obsError.Status = resp.Status
return obsError
}
+
+// convertFetchPolicyToJSON converts SetBucketFetchPolicyInput into json format
+func convertFetchPolicyToJSON(input SetBucketFetchPolicyInput) (data string, err error) {
+ fetch := map[string]SetBucketFetchPolicyInput{"fetch": input}
+ json, err := json.Marshal(fetch)
+ if err != nil {
+ return "", err
+ }
+ data = string(json)
+ return
+}
+
+// convertFetchJobToJSON converts SetBucketFetchJobInput into json format
+func convertFetchJobToJSON(input SetBucketFetchJobInput) (data string, err error) {
+ objectHeaders := make(map[string]string)
+ for key, value := range input.ObjectHeaders {
+ if value != "" {
+ _key := strings.ToLower(key)
+ if !strings.HasPrefix(key, HEADER_PREFIX_OBS) {
+ _key = HEADER_PREFIX_META_OBS + _key
+ }
+ objectHeaders[_key] = value
+ }
+ }
+ input.ObjectHeaders = objectHeaders
+ json, err := json.Marshal(input)
+ if err != nil {
+ return "", err
+ }
+ data = string(json)
+ return
+}
+
+func parseStringToFSStatusType(value string) (ret FSStatusType) {
+ switch value {
+ case "Enabled":
+ ret = FSStatusEnabled
+ case "Disabled":
+ ret = FSStatusDisabled
+ default:
+ ret = ""
+ }
+ return
+}
+
+func decodeListObjectsOutput(output *ListObjectsOutput) (err error) {
+ output.Delimiter, err = url.QueryUnescape(output.Delimiter)
+ if err != nil {
+ return
+ }
+ output.Marker, err = url.QueryUnescape(output.Marker)
+ if err != nil {
+ return
+ }
+ output.NextMarker, err = url.QueryUnescape(output.NextMarker)
+ if err != nil {
+ return
+ }
+ output.Prefix, err = url.QueryUnescape(output.Prefix)
+ if err != nil {
+ return
+ }
+ for index, value := range output.CommonPrefixes {
+ output.CommonPrefixes[index], err = url.QueryUnescape(value)
+ if err != nil {
+ return
+ }
+ }
+ for index, content := range output.Contents {
+ output.Contents[index].Key, err = url.QueryUnescape(content.Key)
+ if err != nil {
+ return
+ }
+ }
+ return
+}
+
+func decodeListVersionsOutput(output *ListVersionsOutput) (err error) {
+ output.Delimiter, err = url.QueryUnescape(output.Delimiter)
+ if err != nil {
+ return
+ }
+ output.KeyMarker, err = url.QueryUnescape(output.KeyMarker)
+ if err != nil {
+ return
+ }
+ output.NextKeyMarker, err = url.QueryUnescape(output.NextKeyMarker)
+ if err != nil {
+ return
+ }
+ output.Prefix, err = url.QueryUnescape(output.Prefix)
+ if err != nil {
+ return
+ }
+ for index, version := range output.Versions {
+ output.Versions[index].Key, err = url.QueryUnescape(version.Key)
+ if err != nil {
+ return
+ }
+ }
+ for index, deleteMarker := range output.DeleteMarkers {
+ output.DeleteMarkers[index].Key, err = url.QueryUnescape(deleteMarker.Key)
+ if err != nil {
+ return
+ }
+ }
+ for index, value := range output.CommonPrefixes {
+ output.CommonPrefixes[index], err = url.QueryUnescape(value)
+ if err != nil {
+ return
+ }
+ }
+ return
+}
+
+func decodeDeleteObjectsOutput(output *DeleteObjectsOutput) (err error) {
+ for index, object := range output.Deleteds {
+ output.Deleteds[index].Key, err = url.QueryUnescape(object.Key)
+ if err != nil {
+ return
+ }
+ }
+ for index, object := range output.Errors {
+ output.Errors[index].Key, err = url.QueryUnescape(object.Key)
+ if err != nil {
+ return
+ }
+ }
+ return
+}
+
+func decodeListMultipartUploadsOutput(output *ListMultipartUploadsOutput) (err error) {
+ output.Delimiter, err = url.QueryUnescape(output.Delimiter)
+ if err != nil {
+ return
+ }
+ output.Prefix, err = url.QueryUnescape(output.Prefix)
+ if err != nil {
+ return
+ }
+ output.KeyMarker, err = url.QueryUnescape(output.KeyMarker)
+ if err != nil {
+ return
+ }
+ output.NextKeyMarker, err = url.QueryUnescape(output.NextKeyMarker)
+ if err != nil {
+ return
+ }
+ for index, value := range output.CommonPrefixes {
+ output.CommonPrefixes[index], err = url.QueryUnescape(value)
+ if err != nil {
+ return
+ }
+ }
+ for index, upload := range output.Uploads {
+ output.Uploads[index].Key, err = url.QueryUnescape(upload.Key)
+ if err != nil {
+ return
+ }
+ }
+ return
+}
+
+func decodeListPartsOutput(output *ListPartsOutput) (err error) {
+ output.Key, err = url.QueryUnescape(output.Key)
+ return
+}
+
+func decodeInitiateMultipartUploadOutput(output *InitiateMultipartUploadOutput) (err error) {
+ output.Key, err = url.QueryUnescape(output.Key)
+ return
+}
+
+func decodeCompleteMultipartUploadOutput(output *CompleteMultipartUploadOutput) (err error) {
+ output.Key, err = url.QueryUnescape(output.Key)
+ return
+}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/error.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/error.go
index eff74e61155..eb8ab280992 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/error.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/error.go
@@ -17,12 +17,13 @@ import (
"fmt"
)
+// ObsError defines error response from OBS
type ObsError struct {
BaseModel
Status string
XMLName xml.Name `xml:"Error"`
- Code string `xml:"Code"`
- Message string `xml:"Message"`
+ Code string `xml:"Code" json:"code"`
+ Message string `xml:"Message" json:"message"`
Resource string `xml:"Resource"`
HostId string `xml:"HostId"`
}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/extension.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/extension.go
new file mode 100644
index 00000000000..aa4ab02a385
--- /dev/null
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/extension.go
@@ -0,0 +1,36 @@
+// Copyright 2019 Huawei Technologies Co.,Ltd.
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
+// this file except in compliance with the License. You may obtain a copy of the
+// License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software distributed
+// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
+// CONDITIONS OF ANY KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations under the License.
+
+package obs
+
+import (
+ "fmt"
+ "strings"
+)
+
+type extensionOptions interface{}
+type extensionHeaders func(headers map[string][]string, isObs bool) error
+
+func setHeaderPrefix(key string, value string) extensionHeaders {
+ return func(headers map[string][]string, isObs bool) error {
+ if strings.TrimSpace(value) == "" {
+ return fmt.Errorf("set header %s with empty value", key)
+ }
+ setHeaders(headers, key, []string{value}, isObs)
+ return nil
+ }
+}
+
+// WithReqPaymentHeader sets header for requester-pays
+func WithReqPaymentHeader(requester PayerType) extensionHeaders {
+ return setHeaderPrefix(REQUEST_PAYER, string(requester))
+}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/http.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/http.go
index de73e4912bc..4464c722ef3 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/http.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/http.go
@@ -15,6 +15,7 @@ package obs
import (
"bytes"
"errors"
+ "fmt"
"io"
"math/rand"
"net"
@@ -34,7 +35,7 @@ func prepareHeaders(headers map[string][]string, meta bool, isObs bool) map[stri
continue
}
_key := strings.ToLower(key)
- if _, ok := allowed_request_http_header_metadata_names[_key]; !ok && !strings.HasPrefix(key, HEADER_PREFIX) && !strings.HasPrefix(key, HEADER_PREFIX_OBS) {
+ if _, ok := allowedRequestHTTPHeaderMetadataNames[_key]; !ok && !strings.HasPrefix(key, HEADER_PREFIX) && !strings.HasPrefix(key, HEADER_PREFIX_OBS) {
if !meta {
continue
}
@@ -52,43 +53,53 @@ func prepareHeaders(headers map[string][]string, meta bool, isObs bool) map[stri
return _headers
}
-func (obsClient ObsClient) doActionWithoutBucket(action, method string, input ISerializable, output IBaseModel) error {
- return obsClient.doAction(action, method, "", "", input, output, true, true)
+func (obsClient ObsClient) doActionWithoutBucket(action, method string, input ISerializable, output IBaseModel, extensions []extensionOptions) error {
+ return obsClient.doAction(action, method, "", "", input, output, true, true, extensions)
}
-func (obsClient ObsClient) doActionWithBucketV2(action, method, bucketName string, input ISerializable, output IBaseModel) error {
+func (obsClient ObsClient) doActionWithBucketV2(action, method, bucketName string, input ISerializable, output IBaseModel, extensions []extensionOptions) error {
if strings.TrimSpace(bucketName) == "" && !obsClient.conf.cname {
return errors.New("Bucket is empty")
}
- return obsClient.doAction(action, method, bucketName, "", input, output, false, true)
+ return obsClient.doAction(action, method, bucketName, "", input, output, false, true, extensions)
}
-func (obsClient ObsClient) doActionWithBucket(action, method, bucketName string, input ISerializable, output IBaseModel) error {
+func (obsClient ObsClient) doActionWithBucket(action, method, bucketName string, input ISerializable, output IBaseModel, extensions []extensionOptions) error {
if strings.TrimSpace(bucketName) == "" && !obsClient.conf.cname {
return errors.New("Bucket is empty")
}
- return obsClient.doAction(action, method, bucketName, "", input, output, true, true)
+ return obsClient.doAction(action, method, bucketName, "", input, output, true, true, extensions)
}
-func (obsClient ObsClient) doActionWithBucketAndKey(action, method, bucketName, objectKey string, input ISerializable, output IBaseModel) error {
- return obsClient._doActionWithBucketAndKey(action, method, bucketName, objectKey, input, output, true)
+func (obsClient ObsClient) doActionWithBucketAndKey(action, method, bucketName, objectKey string, input ISerializable, output IBaseModel, extensions []extensionOptions) error {
+ return obsClient._doActionWithBucketAndKey(action, method, bucketName, objectKey, input, output, true, extensions)
}
-func (obsClient ObsClient) doActionWithBucketAndKeyUnRepeatable(action, method, bucketName, objectKey string, input ISerializable, output IBaseModel) error {
- return obsClient._doActionWithBucketAndKey(action, method, bucketName, objectKey, input, output, false)
+func (obsClient ObsClient) doActionWithBucketAndKeyV2(action, method, bucketName, objectKey string, input ISerializable, output IBaseModel, extensions []extensionOptions) error {
+ if strings.TrimSpace(bucketName) == "" && !obsClient.conf.cname {
+ return errors.New("Bucket is empty")
+ }
+ if strings.TrimSpace(objectKey) == "" {
+ return errors.New("Key is empty")
+ }
+ return obsClient.doAction(action, method, bucketName, objectKey, input, output, false, true, extensions)
}
-func (obsClient ObsClient) _doActionWithBucketAndKey(action, method, bucketName, objectKey string, input ISerializable, output IBaseModel, repeatable bool) error {
+func (obsClient ObsClient) doActionWithBucketAndKeyUnRepeatable(action, method, bucketName, objectKey string, input ISerializable, output IBaseModel, extensions []extensionOptions) error {
+ return obsClient._doActionWithBucketAndKey(action, method, bucketName, objectKey, input, output, false, extensions)
+}
+
+func (obsClient ObsClient) _doActionWithBucketAndKey(action, method, bucketName, objectKey string, input ISerializable, output IBaseModel, repeatable bool, extensions []extensionOptions) error {
if strings.TrimSpace(bucketName) == "" && !obsClient.conf.cname {
return errors.New("Bucket is empty")
}
if strings.TrimSpace(objectKey) == "" {
return errors.New("Key is empty")
}
- return obsClient.doAction(action, method, bucketName, objectKey, input, output, true, repeatable)
+ return obsClient.doAction(action, method, bucketName, objectKey, input, output, true, repeatable, extensions)
}
-func (obsClient ObsClient) doAction(action, method, bucketName, objectKey string, input ISerializable, output IBaseModel, xmlResult bool, repeatable bool) error {
+func (obsClient ObsClient) doAction(action, method, bucketName, objectKey string, input ISerializable, output IBaseModel, xmlResult bool, repeatable bool, extensions []extensionOptions) error {
var resp *http.Response
var respError error
@@ -99,6 +110,7 @@ func (obsClient ObsClient) doAction(action, method, bucketName, objectKey string
if err != nil {
return err
}
+
if params == nil {
params = make(map[string]string)
}
@@ -107,19 +119,30 @@ func (obsClient ObsClient) doAction(action, method, bucketName, objectKey string
headers = make(map[string][]string)
}
+ for _, extension := range extensions {
+ if extensionHeader, ok := extension.(extensionHeaders); ok {
+ _err := extensionHeader(headers, obsClient.conf.signature == SignatureObs)
+ if _err != nil {
+ doLog(LEVEL_INFO, fmt.Sprintf("set header with error: %v", _err))
+ }
+ } else {
+ doLog(LEVEL_INFO, "Unsupported extensionOptions")
+ }
+ }
+
switch method {
case HTTP_GET:
- resp, respError = obsClient.doHttpGet(bucketName, objectKey, params, headers, data, repeatable)
+ resp, respError = obsClient.doHTTPGet(bucketName, objectKey, params, headers, data, repeatable)
case HTTP_POST:
- resp, respError = obsClient.doHttpPost(bucketName, objectKey, params, headers, data, repeatable)
+ resp, respError = obsClient.doHTTPPost(bucketName, objectKey, params, headers, data, repeatable)
case HTTP_PUT:
- resp, respError = obsClient.doHttpPut(bucketName, objectKey, params, headers, data, repeatable)
+ resp, respError = obsClient.doHTTPPut(bucketName, objectKey, params, headers, data, repeatable)
case HTTP_DELETE:
- resp, respError = obsClient.doHttpDelete(bucketName, objectKey, params, headers, data, repeatable)
+ resp, respError = obsClient.doHTTPDelete(bucketName, objectKey, params, headers, data, repeatable)
case HTTP_HEAD:
- resp, respError = obsClient.doHttpHead(bucketName, objectKey, params, headers, data, repeatable)
+ resp, respError = obsClient.doHTTPHead(bucketName, objectKey, params, headers, data, repeatable)
case HTTP_OPTIONS:
- resp, respError = obsClient.doHttpOptions(bucketName, objectKey, params, headers, data, repeatable)
+ resp, respError = obsClient.doHTTPOptions(bucketName, objectKey, params, headers, data, repeatable)
default:
respError = errors.New("Unexpect http method error")
}
@@ -139,72 +162,45 @@ func (obsClient ObsClient) doAction(action, method, bucketName, objectKey string
return respError
}
-func (obsClient ObsClient) doHttpGet(bucketName, objectKey string, params map[string]string,
+func (obsClient ObsClient) doHTTPGet(bucketName, objectKey string, params map[string]string,
headers map[string][]string, data interface{}, repeatable bool) (*http.Response, error) {
- return obsClient.doHttp(HTTP_GET, bucketName, objectKey, params, prepareHeaders(headers, false, obsClient.conf.signature == SignatureObs), data, repeatable)
+ return obsClient.doHTTP(HTTP_GET, bucketName, objectKey, params, prepareHeaders(headers, false, obsClient.conf.signature == SignatureObs), data, repeatable)
}
-func (obsClient ObsClient) doHttpHead(bucketName, objectKey string, params map[string]string,
+func (obsClient ObsClient) doHTTPHead(bucketName, objectKey string, params map[string]string,
headers map[string][]string, data interface{}, repeatable bool) (*http.Response, error) {
- return obsClient.doHttp(HTTP_HEAD, bucketName, objectKey, params, prepareHeaders(headers, false, obsClient.conf.signature == SignatureObs), data, repeatable)
+ return obsClient.doHTTP(HTTP_HEAD, bucketName, objectKey, params, prepareHeaders(headers, false, obsClient.conf.signature == SignatureObs), data, repeatable)
}
-func (obsClient ObsClient) doHttpOptions(bucketName, objectKey string, params map[string]string,
+func (obsClient ObsClient) doHTTPOptions(bucketName, objectKey string, params map[string]string,
headers map[string][]string, data interface{}, repeatable bool) (*http.Response, error) {
- return obsClient.doHttp(HTTP_OPTIONS, bucketName, objectKey, params, prepareHeaders(headers, false, obsClient.conf.signature == SignatureObs), data, repeatable)
+ return obsClient.doHTTP(HTTP_OPTIONS, bucketName, objectKey, params, prepareHeaders(headers, false, obsClient.conf.signature == SignatureObs), data, repeatable)
}
-func (obsClient ObsClient) doHttpDelete(bucketName, objectKey string, params map[string]string,
+func (obsClient ObsClient) doHTTPDelete(bucketName, objectKey string, params map[string]string,
headers map[string][]string, data interface{}, repeatable bool) (*http.Response, error) {
- return obsClient.doHttp(HTTP_DELETE, bucketName, objectKey, params, prepareHeaders(headers, false, obsClient.conf.signature == SignatureObs), data, repeatable)
+ return obsClient.doHTTP(HTTP_DELETE, bucketName, objectKey, params, prepareHeaders(headers, false, obsClient.conf.signature == SignatureObs), data, repeatable)
}
-func (obsClient ObsClient) doHttpPut(bucketName, objectKey string, params map[string]string,
+func (obsClient ObsClient) doHTTPPut(bucketName, objectKey string, params map[string]string,
headers map[string][]string, data interface{}, repeatable bool) (*http.Response, error) {
- return obsClient.doHttp(HTTP_PUT, bucketName, objectKey, params, prepareHeaders(headers, true, obsClient.conf.signature == SignatureObs), data, repeatable)
+ return obsClient.doHTTP(HTTP_PUT, bucketName, objectKey, params, prepareHeaders(headers, true, obsClient.conf.signature == SignatureObs), data, repeatable)
}
-func (obsClient ObsClient) doHttpPost(bucketName, objectKey string, params map[string]string,
+func (obsClient ObsClient) doHTTPPost(bucketName, objectKey string, params map[string]string,
headers map[string][]string, data interface{}, repeatable bool) (*http.Response, error) {
- return obsClient.doHttp(HTTP_POST, bucketName, objectKey, params, prepareHeaders(headers, true, obsClient.conf.signature == SignatureObs), data, repeatable)
+ return obsClient.doHTTP(HTTP_POST, bucketName, objectKey, params, prepareHeaders(headers, true, obsClient.conf.signature == SignatureObs), data, repeatable)
}
-func (obsClient ObsClient) doHttpWithSignedUrl(action, method string, signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader, output IBaseModel, xmlResult bool) (respError error) {
- req, err := http.NewRequest(method, signedUrl, data)
- if err != nil {
- return err
- }
- if obsClient.conf.ctx != nil {
- req = req.WithContext(obsClient.conf.ctx)
- }
- var resp *http.Response
-
- doLog(LEVEL_INFO, "Do %s with signedUrl %s...", action, signedUrl)
-
- req.Header = actualSignedRequestHeaders
- if value, ok := req.Header[HEADER_HOST_CAMEL]; ok {
- req.Host = value[0]
- delete(req.Header, HEADER_HOST_CAMEL)
- } else if value, ok := req.Header[HEADER_HOST]; ok {
- req.Host = value[0]
- delete(req.Header, HEADER_HOST)
- }
-
- if value, ok := req.Header[HEADER_CONTENT_LENGTH_CAMEL]; ok {
- req.ContentLength = StringToInt64(value[0], -1)
- delete(req.Header, HEADER_CONTENT_LENGTH_CAMEL)
- } else if value, ok := req.Header[HEADER_CONTENT_LENGTH]; ok {
- req.ContentLength = StringToInt64(value[0], -1)
- delete(req.Header, HEADER_CONTENT_LENGTH)
- }
-
- req.Header[HEADER_USER_AGENT_CAMEL] = []string{USER_AGENT}
- start := GetCurrentTimestamp()
- resp, err = obsClient.httpClient.Do(req)
- if isInfoLogEnabled() {
- doLog(LEVEL_INFO, "Do http request cost %d ms", (GetCurrentTimestamp() - start))
+func prepareAgentHeader(clientUserAgent string) string {
+ userAgent := USER_AGENT
+ if clientUserAgent != "" {
+ userAgent = clientUserAgent
}
+ return userAgent
+}
+func (obsClient ObsClient) getSignedURLResponse(action string, output IBaseModel, xmlResult bool, resp *http.Response, err error, start int64) (respError error) {
var msg interface{}
if err != nil {
respError = err
@@ -232,22 +228,71 @@ func (obsClient ObsClient) doHttpWithSignedUrl(action, method string, signedUrl
if isDebugLogEnabled() {
doLog(LEVEL_DEBUG, "End method %s, obsclient cost %d ms", action, (GetCurrentTimestamp() - start))
}
-
return
}
-func (obsClient ObsClient) doHttp(method, bucketName, objectKey string, params map[string]string,
- headers map[string][]string, data interface{}, repeatable bool) (resp *http.Response, respError error) {
+func (obsClient ObsClient) doHTTPWithSignedURL(action, method string, signedURL string, actualSignedRequestHeaders http.Header, data io.Reader, output IBaseModel, xmlResult bool) (respError error) {
+ req, err := http.NewRequest(method, signedURL, data)
+ if err != nil {
+ return err
+ }
+ if obsClient.conf.ctx != nil {
+ req = req.WithContext(obsClient.conf.ctx)
+ }
+ var resp *http.Response
- bucketName = strings.TrimSpace(bucketName)
+ var isSecurityToken bool
+ var securityToken string
+ var query []string
+	urlParts := strings.Split(signedURL, "?")
+	if len(urlParts) > 1 {
+		query = strings.Split(urlParts[1], "&")
+ for _, value := range query {
+			// Slice by the matched prefix rather than assuming both
+			// security-token header names have the same length.
+			for _, name := range []string{HEADER_STS_TOKEN_AMZ, HEADER_STS_TOKEN_OBS} {
+				if strings.HasPrefix(value, name+"=") && value[len(name)+1:] != "" {
+					securityToken = value[len(name)+1:]
+					isSecurityToken = true
+				}
+			}
+ }
+ }
+ logSignedURL := signedURL
+ if isSecurityToken {
+ logSignedURL = strings.Replace(logSignedURL, securityToken, "******", -1)
+ }
+ doLog(LEVEL_INFO, "Do %s with signedUrl %s...", action, logSignedURL)
- method = strings.ToUpper(method)
+ req.Header = actualSignedRequestHeaders
+ if value, ok := req.Header[HEADER_HOST_CAMEL]; ok {
+ req.Host = value[0]
+ delete(req.Header, HEADER_HOST_CAMEL)
+ } else if value, ok := req.Header[HEADER_HOST]; ok {
+ req.Host = value[0]
+ delete(req.Header, HEADER_HOST)
+ }
- var redirectUrl string
- var requestUrl string
- maxRetryCount := obsClient.conf.maxRetryCount
- maxRedirectCount := obsClient.conf.maxRedirectCount
+ if value, ok := req.Header[HEADER_CONTENT_LENGTH_CAMEL]; ok {
+ req.ContentLength = StringToInt64(value[0], -1)
+ delete(req.Header, HEADER_CONTENT_LENGTH_CAMEL)
+ } else if value, ok := req.Header[HEADER_CONTENT_LENGTH]; ok {
+ req.ContentLength = StringToInt64(value[0], -1)
+ delete(req.Header, HEADER_CONTENT_LENGTH)
+ }
+
+ userAgent := prepareAgentHeader(obsClient.conf.userAgent)
+ req.Header[HEADER_USER_AGENT_CAMEL] = []string{userAgent}
+ start := GetCurrentTimestamp()
+ resp, err = obsClient.httpClient.Do(req)
+ if isInfoLogEnabled() {
+ doLog(LEVEL_INFO, "Do http request cost %d ms", (GetCurrentTimestamp() - start))
+ }
+
+ respError = obsClient.getSignedURLResponse(action, output, xmlResult, resp, err, start)
+ return
+}
+
+func prepareData(headers map[string][]string, data interface{}) (io.Reader, error) {
var _data io.Reader
if data != nil {
if dataStr, ok := data.(string); ok {
@@ -265,74 +310,197 @@ func (obsClient ObsClient) doHttp(method, bucketName, objectKey string, params m
return nil, errors.New("Data is not a valid io.Reader")
}
}
+ return _data, nil
+}
- var lastRequest *http.Request
- redirectFlag := false
- for i, redirectCount := 0, 0; i <= maxRetryCount; i++ {
- if redirectUrl != "" {
- if !redirectFlag {
- parsedRedirectUrl, err := url.Parse(redirectUrl)
- if err != nil {
- return nil, err
- }
- requestUrl, _ = obsClient.doAuth(method, bucketName, objectKey, params, headers, parsedRedirectUrl.Host)
- if parsedRequestUrl, err := url.Parse(requestUrl); err != nil {
- return nil, err
- } else if parsedRequestUrl.RawQuery != "" && parsedRedirectUrl.RawQuery == "" {
- redirectUrl += "?" + parsedRequestUrl.RawQuery
- }
+func (obsClient ObsClient) getRequest(redirectURL, requestURL string, redirectFlag bool, _data io.Reader, method,
+ bucketName, objectKey string, params map[string]string, headers map[string][]string) (*http.Request, error) {
+ if redirectURL != "" {
+ if !redirectFlag {
+ parsedRedirectURL, err := url.Parse(redirectURL)
+ if err != nil {
+ return nil, err
}
- requestUrl = redirectUrl
- } else {
- var err error
- requestUrl, err = obsClient.doAuth(method, bucketName, objectKey, params, headers, "")
+ requestURL, err = obsClient.doAuth(method, bucketName, objectKey, params, headers, parsedRedirectURL.Host)
if err != nil {
return nil, err
}
+ if parsedRequestURL, err := url.Parse(requestURL); err != nil {
+ return nil, err
+ } else if parsedRequestURL.RawQuery != "" && parsedRedirectURL.RawQuery == "" {
+ redirectURL += "?" + parsedRequestURL.RawQuery
+ }
}
-
- req, err := http.NewRequest(method, requestUrl, _data)
- if obsClient.conf.ctx != nil {
- req = req.WithContext(obsClient.conf.ctx)
- }
+ requestURL = redirectURL
+ } else {
+ var err error
+ requestURL, err = obsClient.doAuth(method, bucketName, objectKey, params, headers, "")
if err != nil {
return nil, err
}
- doLog(LEVEL_DEBUG, "Do request with url [%s] and method [%s]", requestUrl, method)
+ }
- if isDebugLogEnabled() {
- auth := headers[HEADER_AUTH_CAMEL]
- delete(headers, HEADER_AUTH_CAMEL)
- doLog(LEVEL_DEBUG, "Request headers: %v", headers)
- headers[HEADER_AUTH_CAMEL] = auth
- }
+ req, err := http.NewRequest(method, requestURL, _data)
+ if obsClient.conf.ctx != nil {
+ req = req.WithContext(obsClient.conf.ctx)
+ }
+ if err != nil {
+ return nil, err
+ }
+ doLog(LEVEL_DEBUG, "Do request with url [%s] and method [%s]", requestURL, method)
+ return req, nil
+}
- for key, value := range headers {
- if key == HEADER_HOST_CAMEL {
- req.Host = value[0]
- delete(headers, key)
- } else if key == HEADER_CONTENT_LENGTH_CAMEL {
- req.ContentLength = StringToInt64(value[0], -1)
- delete(headers, key)
+func logHeaders(headers map[string][]string, signature SignatureType) {
+ if isDebugLogEnabled() {
+ auth := headers[HEADER_AUTH_CAMEL]
+ delete(headers, HEADER_AUTH_CAMEL)
+
+ var isSecurityToken bool
+ var securityToken []string
+ if securityToken, isSecurityToken = headers[HEADER_STS_TOKEN_AMZ]; isSecurityToken {
+ headers[HEADER_STS_TOKEN_AMZ] = []string{"******"}
+ } else if securityToken, isSecurityToken = headers[HEADER_STS_TOKEN_OBS]; isSecurityToken {
+ headers[HEADER_STS_TOKEN_OBS] = []string{"******"}
+ }
+ doLog(LEVEL_DEBUG, "Request headers: %v", headers)
+ headers[HEADER_AUTH_CAMEL] = auth
+ if isSecurityToken {
+ if signature == SignatureObs {
+ headers[HEADER_STS_TOKEN_OBS] = securityToken
} else {
- req.Header[key] = value
+ headers[HEADER_STS_TOKEN_AMZ] = securityToken
}
}
+ }
+}
+
+func prepareReq(headers map[string][]string, req, lastRequest *http.Request, clientUserAgent string) *http.Request {
+ for key, value := range headers {
+ if key == HEADER_HOST_CAMEL {
+ req.Host = value[0]
+ delete(headers, key)
+ } else if key == HEADER_CONTENT_LENGTH_CAMEL {
+ req.ContentLength = StringToInt64(value[0], -1)
+ delete(headers, key)
+ } else {
+ req.Header[key] = value
+ }
+ }
+
+ lastRequest = req
+
+ userAgent := prepareAgentHeader(clientUserAgent)
+ req.Header[HEADER_USER_AGENT_CAMEL] = []string{userAgent}
+
+ if lastRequest != nil {
+ req.Host = lastRequest.Host
+ req.ContentLength = lastRequest.ContentLength
+ }
+ return lastRequest
+}
- lastRequest = req
+func canNotRetry(repeatable bool, statusCode int) bool {
+	return !repeatable || (statusCode >= 400 && statusCode < 500) || statusCode == 304
+}
- req.Header[HEADER_USER_AGENT_CAMEL] = []string{USER_AGENT}
+func isRedirectErr(location string, redirectCount, maxRedirectCount int) bool {
+	return location != "" && redirectCount < maxRedirectCount
+}
- if lastRequest != nil {
- req.Host = lastRequest.Host
- req.ContentLength = lastRequest.ContentLength
+func setRedirectFlag(statusCode int, method string) bool {
+	return statusCode == 302 && method == HTTP_GET
+}
+
+func prepareRetry(resp *http.Response, headers map[string][]string, _data io.Reader, msg interface{}) (io.Reader, *http.Response, error) {
+ if resp != nil {
+ _err := resp.Body.Close()
+ checkAndLogErr(_err, LEVEL_WARN, "Failed to close resp body")
+ resp = nil
+ }
+	delete(headers, HEADER_AUTH_CAMEL)
+	doLog(LEVEL_WARN, "Failed to send request with reason: %v, will try again", msg)
+ if r, ok := _data.(*strings.Reader); ok {
+ _, err := r.Seek(0, 0)
+ if err != nil {
+ return nil, nil, err
+ }
+ } else if r, ok := _data.(*bytes.Reader); ok {
+ _, err := r.Seek(0, 0)
+ if err != nil {
+ return nil, nil, err
+ }
+ } else if r, ok := _data.(*fileReaderWrapper); ok {
+ fd, err := os.Open(r.filePath)
+ if err != nil {
+ return nil, nil, err
+ }
+ fileReaderWrapper := &fileReaderWrapper{filePath: r.filePath}
+ fileReaderWrapper.mark = r.mark
+ fileReaderWrapper.reader = fd
+ fileReaderWrapper.totalCount = r.totalCount
+ _data = fileReaderWrapper
+ _, err = fd.Seek(r.mark, 0)
+ if err != nil {
+ errMsg := fd.Close()
+ checkAndLogErr(errMsg, LEVEL_WARN, "Failed to close with reason: %v", errMsg)
+ return nil, nil, err
+ }
+ } else if r, ok := _data.(*readerWrapper); ok {
+ _, err := r.seek(0, 0)
+ if err != nil {
+ return nil, nil, err
+ }
+ }
+ return _data, resp, nil
+}
+
+func (obsClient ObsClient) doHTTP(method, bucketName, objectKey string, params map[string]string,
+ headers map[string][]string, data interface{}, repeatable bool) (resp *http.Response, respError error) {
+
+ bucketName = strings.TrimSpace(bucketName)
+
+ method = strings.ToUpper(method)
+
+ var redirectURL string
+ var requestURL string
+ maxRetryCount := obsClient.conf.maxRetryCount
+ maxRedirectCount := obsClient.conf.maxRedirectCount
+
+ _data, _err := prepareData(headers, data)
+ if _err != nil {
+ return nil, _err
+ }
+
+ var lastRequest *http.Request
+ redirectFlag := false
+ for i, redirectCount := 0, 0; i <= maxRetryCount; i++ {
+ req, err := obsClient.getRequest(redirectURL, requestURL, redirectFlag, _data,
+ method, bucketName, objectKey, params, headers)
+ if err != nil {
+ return nil, err
}
+ logHeaders(headers, obsClient.conf.signature)
+
+ lastRequest = prepareReq(headers, req, lastRequest, obsClient.conf.userAgent)
+
start := GetCurrentTimestamp()
resp, err = obsClient.httpClient.Do(req)
- if isInfoLogEnabled() {
- doLog(LEVEL_INFO, "Do http request cost %d ms", (GetCurrentTimestamp() - start))
- }
+ doLog(LEVEL_INFO, "Do http request cost %d ms", (GetCurrentTimestamp() - start))
var msg interface{}
if err != nil {
@@ -345,23 +513,21 @@ func (obsClient ObsClient) doHttp(method, bucketName, objectKey string, params m
} else {
doLog(LEVEL_DEBUG, "Response headers: %v", resp.Header)
if resp.StatusCode < 300 {
+ respError = nil
break
- } else if !repeatable || (resp.StatusCode >= 400 && resp.StatusCode < 500) || resp.StatusCode == 304 {
+ } else if canNotRetry(repeatable, resp.StatusCode) {
respError = ParseResponseToObsError(resp, obsClient.conf.signature == SignatureObs)
resp = nil
break
} else if resp.StatusCode >= 300 && resp.StatusCode < 400 {
- if location := resp.Header.Get(HEADER_LOCATION_CAMEL); location != "" && redirectCount < maxRedirectCount {
- redirectUrl = location
- doLog(LEVEL_WARN, "Redirect request to %s", redirectUrl)
+ location := resp.Header.Get(HEADER_LOCATION_CAMEL)
+ if isRedirectErr(location, redirectCount, maxRedirectCount) {
+ redirectURL = location
+ doLog(LEVEL_WARN, "Redirect request to %s", redirectURL)
msg = resp.Status
maxRetryCount++
redirectCount++
- if resp.StatusCode == 302 && method == HTTP_GET {
- redirectFlag = true
- } else {
- redirectFlag = false
- }
+ redirectFlag = setRedirectFlag(resp.StatusCode, method)
} else {
respError = ParseResponseToObsError(resp, obsClient.conf.signature == SignatureObs)
resp = nil
@@ -372,46 +538,16 @@ func (obsClient ObsClient) doHttp(method, bucketName, objectKey string, params m
}
}
if i != maxRetryCount {
- if resp != nil {
- _err := resp.Body.Close()
- if _err != nil {
- doLog(LEVEL_WARN, "Failed to close resp body with reason: %v", _err)
- }
- resp = nil
- }
- if _, ok := headers[HEADER_AUTH_CAMEL]; ok {
- delete(headers, HEADER_AUTH_CAMEL)
+ _data, resp, err = prepareRetry(resp, headers, _data, msg)
+ if err != nil {
+ return nil, err
}
- doLog(LEVEL_WARN, "Failed to send request with reason:%v, will try again", msg)
- if r, ok := _data.(*strings.Reader); ok {
- _, err := r.Seek(0, 0)
- if err != nil {
- return nil, err
- }
- } else if r, ok := _data.(*bytes.Reader); ok {
- _, err := r.Seek(0, 0)
- if err != nil {
- return nil, err
- }
- } else if r, ok := _data.(*fileReaderWrapper); ok {
- fd, err := os.Open(r.filePath)
- if err != nil {
- return nil, err
- }
- defer fd.Close()
- fileReaderWrapper := &fileReaderWrapper{filePath: r.filePath}
- fileReaderWrapper.mark = r.mark
- fileReaderWrapper.reader = fd
- fileReaderWrapper.totalCount = r.totalCount
- _data = fileReaderWrapper
- _, err = fd.Seek(r.mark, 0)
- if err != nil {
- return nil, err
- }
- } else if r, ok := _data.(*readerWrapper); ok {
- _, err := r.seek(0, 0)
- if err != nil {
- return nil, err
+ if r, ok := _data.(*fileReaderWrapper); ok {
+ if _fd, _ok := r.reader.(*os.File); _ok {
+ defer func() {
+ errMsg := _fd.Close()
+ checkAndLogErr(errMsg, LEVEL_WARN, "Failed to close with reason: %v", errMsg)
+ }()
}
}
time.Sleep(time.Duration(float64(i+2) * rand.Float64() * float64(time.Second)))
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/log.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/log.go
index f411180b525..781ac2768e0 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/log.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/log.go
@@ -13,7 +13,6 @@
package obs
import (
- "errors"
"fmt"
"log"
"os"
@@ -23,6 +22,7 @@ import (
"sync"
)
+// Level defines the log level
type Level int
const (
@@ -84,7 +84,10 @@ func (lw *loggerWrapper) doInit() {
func (lw *loggerWrapper) rotate() {
stat, err := lw.fd.Stat()
if err != nil {
- lw.fd.Close()
+ _err := lw.fd.Close()
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to close file with reason: %v", _err)
+ }
panic(err)
}
if stat.Size() >= logConf.maxLogSize {
@@ -92,7 +95,10 @@ func (lw *loggerWrapper) rotate() {
if _err != nil {
panic(err)
}
- lw.fd.Close()
+ _err = lw.fd.Close()
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to close file with reason: %v", _err)
+ }
if lw.index > logConf.backups {
lw.index = 1
}
@@ -100,9 +106,9 @@ func (lw *loggerWrapper) rotate() {
if _err != nil {
panic(err)
}
- lw.index += 1
+ lw.index++
- fd, err := os.OpenFile(lw.fullPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
+ fd, err := os.OpenFile(lw.fullPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0600)
if err != nil {
panic(err)
}
@@ -134,7 +140,10 @@ func (lw *loggerWrapper) doWrite() {
msg, ok := <-lw.ch
if !ok {
lw.doFlush()
- lw.fd.Close()
+ _err := lw.fd.Close()
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to close file with reason: %v", _err)
+ }
break
}
if len(lw.queue) >= lw.cacheCount {
@@ -155,7 +164,7 @@ func (lw *loggerWrapper) Printf(format string, v ...interface{}) {
var consoleLogger *log.Logger
var fileLogger *loggerWrapper
-var lock *sync.RWMutex = new(sync.RWMutex)
+var lock = new(sync.RWMutex)
func isDebugLogEnabled() bool {
return logConf.level <= LEVEL_DEBUG
@@ -182,10 +191,12 @@ func reset() {
logConf = getDefaultLogConf()
}
+// InitLog enables logging with the default cache count
func InitLog(logFullPath string, maxLogSize int64, backups int, level Level, logToConsole bool) error {
return InitLogWithCacheCnt(logFullPath, maxLogSize, backups, level, logToConsole, 50)
}
+// InitLogWithCacheCnt enables logging with the given cache count
func InitLogWithCacheCnt(logFullPath string, maxLogSize int64, backups int, level Level, logToConsole bool, cacheCnt int) error {
lock.Lock()
defer lock.Unlock()
@@ -203,32 +214,19 @@ func InitLogWithCacheCnt(logFullPath string, maxLogSize int64, backups int, leve
_fullPath += ".log"
}
- stat, err := os.Stat(_fullPath)
- if err == nil && stat.IsDir() {
- return errors.New(fmt.Sprintf("logFullPath:[%s] is a directory", _fullPath))
- } else if err := os.MkdirAll(filepath.Dir(_fullPath), os.ModePerm); err != nil {
- return err
- }
-
- fd, err := os.OpenFile(_fullPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
+ stat, fd, err := initLogFile(_fullPath)
if err != nil {
return err
}
- if stat == nil {
- stat, err = os.Stat(_fullPath)
- if err != nil {
- fd.Close()
- return err
- }
- }
-
prefix := stat.Name() + "."
index := 1
+	var timeIndex int64
walkFunc := func(path string, info os.FileInfo, err error) error {
if err == nil {
if name := info.Name(); strings.HasPrefix(name, prefix) {
- if i := StringToInt(name[len(prefix):], 0); i >= index {
+ if i := StringToInt(name[len(prefix):], 0); i >= index && info.ModTime().Unix() >= timeIndex {
+ timeIndex = info.ModTime().Unix()
index = i + 1
}
}
@@ -237,7 +235,10 @@ func InitLogWithCacheCnt(logFullPath string, maxLogSize int64, backups int, leve
}
if err = filepath.Walk(filepath.Dir(_fullPath), walkFunc); err != nil {
- fd.Close()
+ _err := fd.Close()
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to close file with reason: %v", _err)
+ }
return err
}
@@ -257,6 +258,33 @@ func InitLogWithCacheCnt(logFullPath string, maxLogSize int64, backups int, leve
return nil
}
+func initLogFile(_fullPath string) (os.FileInfo, *os.File, error) {
+ stat, err := os.Stat(_fullPath)
+ if err == nil && stat.IsDir() {
+ return nil, nil, fmt.Errorf("logFullPath:[%s] is a directory", _fullPath)
+ } else if err = os.MkdirAll(filepath.Dir(_fullPath), os.ModePerm); err != nil {
+ return nil, nil, err
+ }
+
+ fd, err := os.OpenFile(_fullPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0600)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ if stat == nil {
+ stat, err = os.Stat(_fullPath)
+ if err != nil {
+ _err := fd.Close()
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to close file with reason: %v", _err)
+ }
+ return nil, nil, err
+ }
+ }
+ return stat, fd, nil
+}
+
+// CloseLog disables logging and flushes cached entries to the log files
func CloseLog() {
if logEnabled() {
lock.Lock()
@@ -265,13 +293,11 @@ func CloseLog() {
}
}
-func SyncLog() {
-}
-
func logEnabled() bool {
return consoleLogger != nil || fileLogger != nil
}
+// DoLog writes log messages to the logger
func DoLog(level Level, format string, v ...interface{}) {
doLog(level, format, v...)
}
@@ -296,3 +322,9 @@ func doLog(level Level, format string, v ...interface{}) {
}
}
}
+
+func checkAndLogErr(err error, level Level, format string, v ...interface{}) {
+ if err != nil {
+ doLog(level, format, v...)
+ }
+}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/model.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/model.go
index 04596657bb2..2303f31bbed 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/model.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/model.go
@@ -19,35 +19,42 @@ import (
"time"
)
+// BaseModel defines the common fields of an OBS response
type BaseModel struct {
StatusCode int `xml:"-"`
- RequestId string `xml:"RequestId"`
+ RequestId string `xml:"RequestId" json:"request_id"`
ResponseHeaders map[string][]string `xml:"-"`
}
+// Bucket defines bucket properties
type Bucket struct {
XMLName xml.Name `xml:"Bucket"`
Name string `xml:"Name"`
CreationDate time.Time `xml:"CreationDate"`
Location string `xml:"Location"`
+ BucketType string `xml:"BucketType,omitempty"`
}
+// Owner defines owner properties
type Owner struct {
XMLName xml.Name `xml:"Owner"`
ID string `xml:"ID"`
DisplayName string `xml:"DisplayName,omitempty"`
}
+// Initiator defines initiator properties
type Initiator struct {
XMLName xml.Name `xml:"Initiator"`
ID string `xml:"ID"`
DisplayName string `xml:"DisplayName,omitempty"`
}
+// ListBucketsInput is the input parameter of ListBuckets function
type ListBucketsInput struct {
QueryLocation bool
}
+// ListBucketsOutput is the result of ListBuckets function
type ListBucketsOutput struct {
BaseModel
XMLName xml.Name `xml:"ListAllMyBucketsResult"`
@@ -60,11 +67,13 @@ type bucketLocationObs struct {
Location string `xml:",chardata"`
}
+// BucketLocation defines bucket location configuration
type BucketLocation struct {
XMLName xml.Name `xml:"CreateBucketConfiguration"`
Location string `xml:"LocationConstraint,omitempty"`
}
+// CreateBucketInput is the input parameter of CreateBucket function
type CreateBucketInput struct {
BucketLocation
Bucket string `xml:"-"`
@@ -78,13 +87,17 @@ type CreateBucketInput struct {
GrantReadDeliveredId string `xml:"-"`
GrantFullControlDeliveredId string `xml:"-"`
Epid string `xml:"-"`
+ AvailableZone string `xml:"-"`
+ IsFSFileInterface bool `xml:"-"`
}
+// BucketStoragePolicy defines the bucket storage class
type BucketStoragePolicy struct {
XMLName xml.Name `xml:"StoragePolicy"`
StorageClass StorageClassType `xml:"DefaultStorageClass"`
}
+// SetBucketStoragePolicyInput is the input parameter of SetBucketStoragePolicy function
type SetBucketStoragePolicyInput struct {
Bucket string `xml:"-"`
BucketStoragePolicy
@@ -95,6 +108,7 @@ type getBucketStoragePolicyOutputS3 struct {
BucketStoragePolicy
}
+// GetBucketStoragePolicyOutput is the result of GetBucketStoragePolicy function
type GetBucketStoragePolicyOutput struct {
BaseModel
StorageClass string
@@ -109,20 +123,24 @@ type getBucketStoragePolicyOutputObs struct {
bucketStoragePolicyObs
}
+// ListObjsInput defines parameters for listing objects
type ListObjsInput struct {
Prefix string
MaxKeys int
Delimiter string
Origin string
RequestHeader string
+ EncodingType string
}
+// ListObjectsInput is the input parameter of ListObjects function
type ListObjectsInput struct {
ListObjsInput
Bucket string
Marker string
}
+// Content defines the object content properties
type Content struct {
XMLName xml.Name `xml:"Contents"`
Owner Owner `xml:"Owner"`
@@ -133,6 +151,7 @@ type Content struct {
StorageClass StorageClassType `xml:"StorageClass"`
}
+// ListObjectsOutput is the result of ListObjects function
type ListObjectsOutput struct {
BaseModel
XMLName xml.Name `xml:"ListBucketResult"`
@@ -146,8 +165,10 @@ type ListObjectsOutput struct {
Contents []Content `xml:"Contents"`
CommonPrefixes []string `xml:"CommonPrefixes>Prefix"`
Location string `xml:"-"`
+ EncodingType string `xml:"EncodingType,omitempty"`
}
+// ListVersionsInput is the input parameter of ListVersions function
type ListVersionsInput struct {
ListObjsInput
Bucket string
@@ -155,6 +176,7 @@ type ListVersionsInput struct {
VersionIdMarker string
}
+// Version defines the properties of versioning objects
type Version struct {
DeleteMarker
XMLName xml.Name `xml:"Version"`
@@ -162,6 +184,7 @@ type Version struct {
Size int64 `xml:"Size"`
}
+// DeleteMarker defines the properties of versioning delete markers
type DeleteMarker struct {
XMLName xml.Name `xml:"DeleteMarker"`
Key string `xml:"Key"`
@@ -172,6 +195,7 @@ type DeleteMarker struct {
StorageClass StorageClassType `xml:"StorageClass"`
}
+// ListVersionsOutput is the result of ListVersions function
type ListVersionsOutput struct {
BaseModel
XMLName xml.Name `xml:"ListVersionsResult"`
@@ -188,8 +212,10 @@ type ListVersionsOutput struct {
DeleteMarkers []DeleteMarker `xml:"DeleteMarker"`
CommonPrefixes []string `xml:"CommonPrefixes>Prefix"`
Location string `xml:"-"`
+ EncodingType string `xml:"EncodingType,omitempty"`
}
+// ListMultipartUploadsInput is the input parameter of ListMultipartUploads function
type ListMultipartUploadsInput struct {
Bucket string
Prefix string
@@ -197,8 +223,10 @@ type ListMultipartUploadsInput struct {
Delimiter string
KeyMarker string
UploadIdMarker string
+ EncodingType string
}
+// Upload defines multipart upload properties
type Upload struct {
XMLName xml.Name `xml:"Upload"`
Key string `xml:"Key"`
@@ -209,6 +237,7 @@ type Upload struct {
Initiator Initiator `xml:"Initiator"`
}
+// ListMultipartUploadsOutput is the result of ListMultipartUploads function
type ListMultipartUploadsOutput struct {
BaseModel
XMLName xml.Name `xml:"ListMultipartUploadsResult"`
@@ -223,23 +252,28 @@ type ListMultipartUploadsOutput struct {
Prefix string `xml:"Prefix"`
Uploads []Upload `xml:"Upload"`
CommonPrefixes []string `xml:"CommonPrefixes>Prefix"`
+ EncodingType string `xml:"EncodingType,omitempty"`
}
+// BucketQuota defines bucket quota configuration
type BucketQuota struct {
XMLName xml.Name `xml:"Quota"`
Quota int64 `xml:"StorageQuota"`
}
+// SetBucketQuotaInput is the input parameter of SetBucketQuota function
type SetBucketQuotaInput struct {
Bucket string `xml:"-"`
BucketQuota
}
+// GetBucketQuotaOutput is the result of GetBucketQuota function
type GetBucketQuotaOutput struct {
BaseModel
BucketQuota
}
+// GetBucketStorageInfoOutput is the result of GetBucketStorageInfo function
type GetBucketStorageInfoOutput struct {
BaseModel
XMLName xml.Name `xml:"GetBucketStorageInfoResult"`
@@ -255,11 +289,14 @@ type getBucketLocationOutputObs struct {
BaseModel
bucketLocationObs
}
+
+// GetBucketLocationOutput is the result of GetBucketLocation function
type GetBucketLocationOutput struct {
BaseModel
Location string `xml:"-"`
}
+// Grantee defines grantee properties
type Grantee struct {
XMLName xml.Name `xml:"Grantee"`
Type GranteeType `xml:"type,attr"`
@@ -276,6 +313,7 @@ type granteeObs struct {
Canned string `xml:"Canned,omitempty"`
}
+// Grant defines grant properties
type Grant struct {
XMLName xml.Name `xml:"Grant"`
Grantee Grantee `xml:"Grantee"`
@@ -289,6 +327,7 @@ type grantObs struct {
Delivered bool `xml:"Delivered"`
}
+// AccessControlPolicy defines access control policy properties
type AccessControlPolicy struct {
XMLName xml.Name `xml:"AccessControlPolicy"`
Owner Owner `xml:"Owner"`
@@ -302,32 +341,37 @@ type accessControlPolicyObs struct {
Grants []grantObs `xml:"AccessControlList>Grant"`
}
+// GetBucketAclOutput is the result of GetBucketAcl function
type GetBucketAclOutput struct {
BaseModel
AccessControlPolicy
}
-type getBucketAclOutputObs struct {
+type getBucketACLOutputObs struct {
BaseModel
accessControlPolicyObs
}
+// SetBucketAclInput is the input parameter of SetBucketAcl function
type SetBucketAclInput struct {
Bucket string `xml:"-"`
ACL AclType `xml:"-"`
AccessControlPolicy
}
+// SetBucketPolicyInput is the input parameter of SetBucketPolicy function
type SetBucketPolicyInput struct {
Bucket string
Policy string
}
+// GetBucketPolicyOutput is the result of GetBucketPolicy function
type GetBucketPolicyOutput struct {
BaseModel
Policy string `json:"body"`
}
+// CorsRule defines the CORS rules
type CorsRule struct {
XMLName xml.Name `xml:"CORSRule"`
ID string `xml:"ID,omitempty"`
@@ -338,50 +382,60 @@ type CorsRule struct {
ExposeHeader []string `xml:"ExposeHeader,omitempty"`
}
+// BucketCors defines the bucket CORS configuration
type BucketCors struct {
XMLName xml.Name `xml:"CORSConfiguration"`
CorsRules []CorsRule `xml:"CORSRule"`
}
+// SetBucketCorsInput is the input parameter of SetBucketCors function
type SetBucketCorsInput struct {
Bucket string `xml:"-"`
BucketCors
}
+// GetBucketCorsOutput is the result of GetBucketCors function
type GetBucketCorsOutput struct {
BaseModel
BucketCors
}
+// BucketVersioningConfiguration defines the versioning configuration
type BucketVersioningConfiguration struct {
XMLName xml.Name `xml:"VersioningConfiguration"`
Status VersioningStatusType `xml:"Status"`
}
+// SetBucketVersioningInput is the input parameter of SetBucketVersioning function
type SetBucketVersioningInput struct {
Bucket string `xml:"-"`
BucketVersioningConfiguration
}
+// GetBucketVersioningOutput is the result of GetBucketVersioning function
type GetBucketVersioningOutput struct {
BaseModel
BucketVersioningConfiguration
}
+// IndexDocument defines the default page configuration
type IndexDocument struct {
Suffix string `xml:"Suffix"`
}
+// ErrorDocument defines the error page configuration
type ErrorDocument struct {
Key string `xml:"Key,omitempty"`
}
+// Condition defines condition in RoutingRule
type Condition struct {
XMLName xml.Name `xml:"Condition"`
KeyPrefixEquals string `xml:"KeyPrefixEquals,omitempty"`
HttpErrorCodeReturnedEquals string `xml:"HttpErrorCodeReturnedEquals,omitempty"`
}
+// Redirect defines redirect in RoutingRule
type Redirect struct {
XMLName xml.Name `xml:"Redirect"`
Protocol ProtocolType `xml:"Protocol,omitempty"`
@@ -391,18 +445,21 @@ type Redirect struct {
HttpRedirectCode string `xml:"HttpRedirectCode,omitempty"`
}
+// RoutingRule defines routing rules
type RoutingRule struct {
XMLName xml.Name `xml:"RoutingRule"`
Condition Condition `xml:"Condition,omitempty"`
Redirect Redirect `xml:"Redirect"`
}
+// RedirectAllRequestsTo defines redirect in BucketWebsiteConfiguration
type RedirectAllRequestsTo struct {
XMLName xml.Name `xml:"RedirectAllRequestsTo"`
Protocol ProtocolType `xml:"Protocol,omitempty"`
HostName string `xml:"HostName"`
}
+// BucketWebsiteConfiguration defines the bucket website configuration
type BucketWebsiteConfiguration struct {
XMLName xml.Name `xml:"WebsiteConfiguration"`
RedirectAllRequestsTo RedirectAllRequestsTo `xml:"RedirectAllRequestsTo,omitempty"`
@@ -411,22 +468,26 @@ type BucketWebsiteConfiguration struct {
RoutingRules []RoutingRule `xml:"RoutingRules>RoutingRule,omitempty"`
}
+// SetBucketWebsiteConfigurationInput is the input parameter of SetBucketWebsiteConfiguration function
type SetBucketWebsiteConfigurationInput struct {
Bucket string `xml:"-"`
BucketWebsiteConfiguration
}
+// GetBucketWebsiteConfigurationOutput is the result of GetBucketWebsiteConfiguration function
type GetBucketWebsiteConfigurationOutput struct {
BaseModel
BucketWebsiteConfiguration
}
+// GetBucketMetadataInput is the input parameter of GetBucketMetadata function
type GetBucketMetadataInput struct {
Bucket string
Origin string
RequestHeader string
}
+// SetObjectMetadataInput is the input parameter of SetObjectMetadata function
type SetObjectMetadataInput struct {
Bucket string
Key string
@@ -443,6 +504,7 @@ type SetObjectMetadataInput struct {
Metadata map[string]string
}
+// SetObjectMetadataOutput is the result of SetObjectMetadata function
type SetObjectMetadataOutput struct {
BaseModel
MetadataDirective MetadataDirectiveType
@@ -457,6 +519,7 @@ type SetObjectMetadataOutput struct {
Metadata map[string]string
}
+// GetBucketMetadataOutput is the result of GetBucketMetadata function
type GetBucketMetadataOutput struct {
BaseModel
StorageClass StorageClassType
@@ -468,8 +531,11 @@ type GetBucketMetadataOutput struct {
MaxAgeSeconds int
ExposeHeader string
Epid string
+ AvailableZone string
+ FSStatus FSStatusType
}
+// BucketLoggingStatus defines the bucket logging configuration
type BucketLoggingStatus struct {
XMLName xml.Name `xml:"BucketLoggingStatus"`
Agency string `xml:"Agency,omitempty"`
@@ -478,16 +544,19 @@ type BucketLoggingStatus struct {
TargetGrants []Grant `xml:"LoggingEnabled>TargetGrants>Grant,omitempty"`
}
+// SetBucketLoggingConfigurationInput is the input parameter of SetBucketLoggingConfiguration function
type SetBucketLoggingConfigurationInput struct {
Bucket string `xml:"-"`
BucketLoggingStatus
}
+// GetBucketLoggingConfigurationOutput is the result of GetBucketLoggingConfiguration function
type GetBucketLoggingConfigurationOutput struct {
BaseModel
BucketLoggingStatus
}
+// Transition defines transition property in LifecycleRule
type Transition struct {
XMLName xml.Name `xml:"Transition"`
Date time.Time `xml:"Date,omitempty"`
@@ -495,23 +564,27 @@ type Transition struct {
StorageClass StorageClassType `xml:"StorageClass"`
}
+// Expiration defines expiration property in LifecycleRule
type Expiration struct {
XMLName xml.Name `xml:"Expiration"`
Date time.Time `xml:"Date,omitempty"`
Days int `xml:"Days,omitempty"`
}
+// NoncurrentVersionTransition defines noncurrentVersion transition property in LifecycleRule
type NoncurrentVersionTransition struct {
XMLName xml.Name `xml:"NoncurrentVersionTransition"`
NoncurrentDays int `xml:"NoncurrentDays"`
StorageClass StorageClassType `xml:"StorageClass"`
}
+// NoncurrentVersionExpiration defines noncurrentVersion expiration property in LifecycleRule
type NoncurrentVersionExpiration struct {
XMLName xml.Name `xml:"NoncurrentVersionExpiration"`
NoncurrentDays int `xml:"NoncurrentDays"`
}
+// LifecycleRule defines lifecycle rule
type LifecycleRule struct {
ID string `xml:"ID,omitempty"`
Prefix string `xml:"Prefix"`
@@ -522,48 +595,57 @@ type LifecycleRule struct {
NoncurrentVersionExpiration NoncurrentVersionExpiration `xml:"NoncurrentVersionExpiration,omitempty"`
}
+// BucketLifecyleConfiguration defines the bucket lifecycle configuration
type BucketLifecyleConfiguration struct {
XMLName xml.Name `xml:"LifecycleConfiguration"`
LifecycleRules []LifecycleRule `xml:"Rule"`
}
+// SetBucketLifecycleConfigurationInput is the input parameter of SetBucketLifecycleConfiguration function
type SetBucketLifecycleConfigurationInput struct {
Bucket string `xml:"-"`
BucketLifecyleConfiguration
}
+// GetBucketLifecycleConfigurationOutput is the result of GetBucketLifecycleConfiguration function
type GetBucketLifecycleConfigurationOutput struct {
BaseModel
BucketLifecyleConfiguration
}
+// Tag defines tag property in BucketTagging
type Tag struct {
XMLName xml.Name `xml:"Tag"`
Key string `xml:"Key"`
Value string `xml:"Value"`
}
+// BucketTagging defines the bucket tag configuration
type BucketTagging struct {
XMLName xml.Name `xml:"Tagging"`
Tags []Tag `xml:"TagSet>Tag"`
}
+// SetBucketTaggingInput is the input parameter of SetBucketTagging function
type SetBucketTaggingInput struct {
Bucket string `xml:"-"`
BucketTagging
}
+// GetBucketTaggingOutput is the result of GetBucketTagging function
type GetBucketTaggingOutput struct {
BaseModel
BucketTagging
}
+// FilterRule defines filter rule in TopicConfiguration
type FilterRule struct {
XMLName xml.Name `xml:"FilterRule"`
Name string `xml:"Name,omitempty"`
Value string `xml:"Value,omitempty"`
}
+// TopicConfiguration defines the topic configuration
type TopicConfiguration struct {
XMLName xml.Name `xml:"TopicConfiguration"`
ID string `xml:"Id,omitempty"`
@@ -572,11 +654,13 @@ type TopicConfiguration struct {
FilterRules []FilterRule `xml:"Filter>Object>FilterRule"`
}
+// BucketNotification defines the bucket notification configuration
type BucketNotification struct {
XMLName xml.Name `xml:"NotificationConfiguration"`
TopicConfigurations []TopicConfiguration `xml:"TopicConfiguration"`
}
+// SetBucketNotificationInput is the input parameter of SetBucketNotification function
type SetBucketNotificationInput struct {
Bucket string `xml:"-"`
BucketNotification
@@ -600,36 +684,43 @@ type getBucketNotificationOutputS3 struct {
bucketNotificationS3
}
+// GetBucketNotificationOutput is the result of GetBucketNotification function
type GetBucketNotificationOutput struct {
BaseModel
BucketNotification
}
+// DeleteObjectInput is the input parameter of DeleteObject function
type DeleteObjectInput struct {
Bucket string
Key string
VersionId string
}
+// DeleteObjectOutput is the result of DeleteObject function
type DeleteObjectOutput struct {
BaseModel
VersionId string
DeleteMarker bool
}
+// ObjectToDelete defines the object property in DeleteObjectsInput
type ObjectToDelete struct {
XMLName xml.Name `xml:"Object"`
Key string `xml:"Key"`
VersionId string `xml:"VersionId,omitempty"`
}
+// DeleteObjectsInput is the input parameter of DeleteObjects function
type DeleteObjectsInput struct {
- Bucket string `xml:"-"`
- XMLName xml.Name `xml:"Delete"`
- Quiet bool `xml:"Quiet,omitempty"`
- Objects []ObjectToDelete `xml:"Object"`
+ Bucket string `xml:"-"`
+ XMLName xml.Name `xml:"Delete"`
+ Quiet bool `xml:"Quiet,omitempty"`
+ Objects []ObjectToDelete `xml:"Object"`
+ EncodingType string `xml:"EncodingType"`
}
+// Deleted defines the deleted property in DeleteObjectsOutput
type Deleted struct {
XMLName xml.Name `xml:"Deleted"`
Key string `xml:"Key"`
@@ -638,6 +729,7 @@ type Deleted struct {
DeleteMarkerVersionId string `xml:"DeleteMarkerVersionId"`
}
+// Error defines the error property in DeleteObjectsOutput
type Error struct {
XMLName xml.Name `xml:"Error"`
Key string `xml:"Key"`
@@ -646,13 +738,16 @@ type Error struct {
Message string `xml:"Message"`
}
+// DeleteObjectsOutput is the result of DeleteObjects function
type DeleteObjectsOutput struct {
BaseModel
- XMLName xml.Name `xml:"DeleteResult"`
- Deleteds []Deleted `xml:"Deleted"`
- Errors []Error `xml:"Error"`
+ XMLName xml.Name `xml:"DeleteResult"`
+ Deleteds []Deleted `xml:"Deleted"`
+ Errors []Error `xml:"Error"`
+ EncodingType string `xml:"EncodingType,omitempty"`
}
+// SetObjectAclInput is the input parameter of SetObjectAcl function
type SetObjectAclInput struct {
Bucket string `xml:"-"`
Key string `xml:"-"`
@@ -661,18 +756,21 @@ type SetObjectAclInput struct {
AccessControlPolicy
}
+// GetObjectAclInput is the input parameter of GetObjectAcl function
type GetObjectAclInput struct {
Bucket string
Key string
VersionId string
}
+// GetObjectAclOutput is the result of GetObjectAcl function
type GetObjectAclOutput struct {
BaseModel
VersionId string
AccessControlPolicy
}
+// RestoreObjectInput is the input parameter of RestoreObject function
type RestoreObjectInput struct {
Bucket string `xml:"-"`
Key string `xml:"-"`
@@ -682,23 +780,27 @@ type RestoreObjectInput struct {
Tier RestoreTierType `xml:"GlacierJobParameters>Tier,omitempty"`
}
+// ISseHeader defines the sse encryption header
type ISseHeader interface {
GetEncryption() string
GetKey() string
}
+// SseKmsHeader defines the SseKms header
type SseKmsHeader struct {
Encryption string
Key string
isObs bool
}
+// SseCHeader defines the SseC header
type SseCHeader struct {
Encryption string
Key string
KeyMD5 string
}
+// GetObjectMetadataInput is the input parameter of GetObjectMetadata function
type GetObjectMetadataInput struct {
Bucket string
Key string
@@ -708,6 +810,7 @@ type GetObjectMetadataInput struct {
SseHeader ISseHeader
}
+// GetObjectMetadataOutput is the result of GetObjectMetadata function
type GetObjectMetadataOutput struct {
BaseModel
VersionId string
@@ -730,6 +833,7 @@ type GetObjectMetadataOutput struct {
Metadata map[string]string
}
+// GetObjectInput is the input parameter of GetObject function
type GetObjectInput struct {
GetObjectMetadataInput
IfMatch string
@@ -747,6 +851,7 @@ type GetObjectInput struct {
ResponseExpires string
}
+// GetObjectOutput is the result of GetObject function
type GetObjectOutput struct {
GetObjectMetadataOutput
DeleteMarker bool
@@ -758,6 +863,7 @@ type GetObjectOutput struct {
Body io.ReadCloser
}
+// ObjectOperationInput defines the object operation properties
type ObjectOperationInput struct {
Bucket string
Key string
@@ -773,6 +879,7 @@ type ObjectOperationInput struct {
Metadata map[string]string
}
+// PutObjectBasicInput defines the basic object operation properties
type PutObjectBasicInput struct {
ObjectOperationInput
ContentType string
@@ -780,16 +887,19 @@ type PutObjectBasicInput struct {
ContentLength int64
}
+// PutObjectInput is the input parameter of PutObject function
type PutObjectInput struct {
PutObjectBasicInput
Body io.Reader
}
+// PutFileInput is the input parameter of PutFile function
type PutFileInput struct {
PutObjectBasicInput
SourceFile string
}
+// PutObjectOutput is the result of PutObject function
type PutObjectOutput struct {
BaseModel
VersionId string
@@ -798,6 +908,7 @@ type PutObjectOutput struct {
ETag string
}
+// CopyObjectInput is the input parameter of CopyObject function
type CopyObjectInput struct {
ObjectOperationInput
CopySourceBucket string
@@ -818,6 +929,7 @@ type CopyObjectInput struct {
SuccessActionRedirect string
}
+// CopyObjectOutput is the result of CopyObject function
type CopyObjectOutput struct {
BaseModel
CopySourceVersionId string `xml:"-"`
@@ -828,26 +940,32 @@ type CopyObjectOutput struct {
ETag string `xml:"ETag"`
}
+// AbortMultipartUploadInput is the input parameter of AbortMultipartUpload function
type AbortMultipartUploadInput struct {
Bucket string
Key string
UploadId string
}
+// InitiateMultipartUploadInput is the input parameter of InitiateMultipartUpload function
type InitiateMultipartUploadInput struct {
ObjectOperationInput
- ContentType string
+ ContentType string
+ EncodingType string
}
+// InitiateMultipartUploadOutput is the result of InitiateMultipartUpload function
type InitiateMultipartUploadOutput struct {
BaseModel
- XMLName xml.Name `xml:"InitiateMultipartUploadResult"`
- Bucket string `xml:"Bucket"`
- Key string `xml:"Key"`
- UploadId string `xml:"UploadId"`
- SseHeader ISseHeader
+ XMLName xml.Name `xml:"InitiateMultipartUploadResult"`
+ Bucket string `xml:"Bucket"`
+ Key string `xml:"Key"`
+ UploadId string `xml:"UploadId"`
+ SseHeader ISseHeader
+ EncodingType string `xml:"EncodingType,omitempty"`
}
+// UploadPartInput is the input parameter of UploadPart function
type UploadPartInput struct {
Bucket string
Key string
@@ -861,6 +979,7 @@ type UploadPartInput struct {
PartSize int64
}
+// UploadPartOutput is the result of UploadPart function
type UploadPartOutput struct {
BaseModel
PartNumber int
@@ -868,6 +987,7 @@ type UploadPartOutput struct {
SseHeader ISseHeader
}
+// Part defines the part properties
type Part struct {
XMLName xml.Name `xml:"Part"`
PartNumber int `xml:"PartNumber"`
@@ -876,33 +996,40 @@ type Part struct {
Size int64 `xml:"Size,omitempty"`
}
+// CompleteMultipartUploadInput is the input parameter of CompleteMultipartUpload function
type CompleteMultipartUploadInput struct {
- Bucket string `xml:"-"`
- Key string `xml:"-"`
- UploadId string `xml:"-"`
- XMLName xml.Name `xml:"CompleteMultipartUpload"`
- Parts []Part `xml:"Part"`
+ Bucket string `xml:"-"`
+ Key string `xml:"-"`
+ UploadId string `xml:"-"`
+ XMLName xml.Name `xml:"CompleteMultipartUpload"`
+ Parts []Part `xml:"Part"`
+ EncodingType string `xml:"-"`
}
+// CompleteMultipartUploadOutput is the result of CompleteMultipartUpload function
type CompleteMultipartUploadOutput struct {
BaseModel
- VersionId string `xml:"-"`
- SseHeader ISseHeader `xml:"-"`
- XMLName xml.Name `xml:"CompleteMultipartUploadResult"`
- Location string `xml:"Location"`
- Bucket string `xml:"Bucket"`
- Key string `xml:"Key"`
- ETag string `xml:"ETag"`
+ VersionId string `xml:"-"`
+ SseHeader ISseHeader `xml:"-"`
+ XMLName xml.Name `xml:"CompleteMultipartUploadResult"`
+ Location string `xml:"Location"`
+ Bucket string `xml:"Bucket"`
+ Key string `xml:"Key"`
+ ETag string `xml:"ETag"`
+ EncodingType string `xml:"EncodingType,omitempty"`
}
+// ListPartsInput is the input parameter of ListParts function
type ListPartsInput struct {
Bucket string
Key string
UploadId string
MaxParts int
PartNumberMarker int
+ EncodingType string
}
+// ListPartsOutput is the result of ListParts function
type ListPartsOutput struct {
BaseModel
XMLName xml.Name `xml:"ListPartsResult"`
@@ -917,8 +1044,10 @@ type ListPartsOutput struct {
Initiator Initiator `xml:"Initiator"`
Owner Owner `xml:"Owner"`
Parts []Part `xml:"Part"`
+ EncodingType string `xml:"EncodingType,omitempty"`
}
+// CopyPartInput is the input parameter of CopyPart function
type CopyPartInput struct {
Bucket string
Key string
@@ -933,6 +1062,7 @@ type CopyPartInput struct {
SourceSseHeader ISseHeader
}
+// CopyPartOutput is the result of CopyPart function
type CopyPartOutput struct {
BaseModel
XMLName xml.Name `xml:"CopyPartResult"`
@@ -942,6 +1072,7 @@ type CopyPartOutput struct {
SseHeader ISseHeader `xml:"-"`
}
+// CreateSignedUrlInput is the input parameter of CreateSignedUrl function
type CreateSignedUrlInput struct {
Method HttpMethodType
Bucket string
@@ -952,11 +1083,13 @@ type CreateSignedUrlInput struct {
QueryParams map[string]string
}
+// CreateSignedUrlOutput is the result of CreateSignedUrl function
type CreateSignedUrlOutput struct {
SignedUrl string
ActualSignedRequestHeaders http.Header
}
+// CreateBrowserBasedSignatureInput is the input parameter of CreateBrowserBasedSignature function
type CreateBrowserBasedSignatureInput struct {
Bucket string
Key string
@@ -964,6 +1097,7 @@ type CreateBrowserBasedSignatureInput struct {
FormParams map[string]string
}
+// CreateBrowserBasedSignatureOutput is the result of CreateBrowserBasedSignature function
type CreateBrowserBasedSignatureOutput struct {
OriginPolicy string
Policy string
@@ -972,3 +1106,148 @@ type CreateBrowserBasedSignatureOutput struct {
Date string
Signature string
}
+
+// HeadObjectInput is the input parameter of HeadObject function
+type HeadObjectInput struct {
+ Bucket string
+ Key string
+ VersionId string
+}
+
+// BucketPayer defines the request payment configuration
+type BucketPayer struct {
+ XMLName xml.Name `xml:"RequestPaymentConfiguration"`
+ Payer PayerType `xml:"Payer"`
+}
+
+// SetBucketRequestPaymentInput is the input parameter of SetBucketRequestPayment function
+type SetBucketRequestPaymentInput struct {
+ Bucket string `xml:"-"`
+ BucketPayer
+}
+
+// GetBucketRequestPaymentOutput is the result of GetBucketRequestPayment function
+type GetBucketRequestPaymentOutput struct {
+ BaseModel
+ BucketPayer
+}
+
+// UploadFileInput is the input parameter of UploadFile function
+type UploadFileInput struct {
+ ObjectOperationInput
+ ContentType string
+ UploadFile string
+ PartSize int64
+ TaskNum int
+ EnableCheckpoint bool
+ CheckpointFile string
+ EncodingType string
+}
+
+// DownloadFileInput is the input parameter of DownloadFile function
+type DownloadFileInput struct {
+ GetObjectMetadataInput
+ IfMatch string
+ IfNoneMatch string
+ IfModifiedSince time.Time
+ IfUnmodifiedSince time.Time
+ DownloadFile string
+ PartSize int64
+ TaskNum int
+ EnableCheckpoint bool
+ CheckpointFile string
+}
+
+// SetBucketFetchPolicyInput is the input parameter of SetBucketFetchPolicy function
+type SetBucketFetchPolicyInput struct {
+ Bucket string
+ Status FetchPolicyStatusType `json:"status"`
+ Agency string `json:"agency"`
+}
+
+// GetBucketFetchPolicyInput is the input parameter of GetBucketFetchPolicy function
+type GetBucketFetchPolicyInput struct {
+ Bucket string
+}
+
+// GetBucketFetchPolicyOutput is the result of GetBucketFetchPolicy function
+type GetBucketFetchPolicyOutput struct {
+ BaseModel
+ FetchResponse `json:"fetch"`
+}
+
+// FetchResponse defines the response fetch policy configuration
+type FetchResponse struct {
+ Status FetchPolicyStatusType `json:"status"`
+ Agency string `json:"agency"`
+}
+
+// DeleteBucketFetchPolicyInput is the input parameter of DeleteBucketFetchPolicy function
+type DeleteBucketFetchPolicyInput struct {
+ Bucket string
+}
+
+// SetBucketFetchJobInput is the input parameter of SetBucketFetchJob function
+type SetBucketFetchJobInput struct {
+ Bucket string `json:"bucket"`
+ URL string `json:"url"`
+ Host string `json:"host,omitempty"`
+ Key string `json:"key,omitempty"`
+ Md5 string `json:"md5,omitempty"`
+ CallBackURL string `json:"callbackurl,omitempty"`
+ CallBackBody string `json:"callbackbody,omitempty"`
+ CallBackBodyType string `json:"callbackbodytype,omitempty"`
+ CallBackHost string `json:"callbackhost,omitempty"`
+ FileType string `json:"file_type,omitempty"`
+ IgnoreSameKey bool `json:"ignore_same_key,omitempty"`
+ ObjectHeaders map[string]string `json:"objectheaders,omitempty"`
+ Etag string `json:"etag,omitempty"`
+ TrustName string `json:"trustname,omitempty"`
+}
+
+// SetBucketFetchJobOutput is the result of SetBucketFetchJob function
+type SetBucketFetchJobOutput struct {
+ BaseModel
+ SetBucketFetchJobResponse
+}
+
+// SetBucketFetchJobResponse defines the response SetBucketFetchJob configuration
+type SetBucketFetchJobResponse struct {
+ ID string `json:"id"`
+ Wait int `json:"Wait"`
+}
+
+// GetBucketFetchJobInput is the input parameter of GetBucketFetchJob function
+type GetBucketFetchJobInput struct {
+ Bucket string
+ JobID string
+}
+
+// GetBucketFetchJobOutput is the result of GetBucketFetchJob function
+type GetBucketFetchJobOutput struct {
+ BaseModel
+ GetBucketFetchJobResponse
+}
+
+// GetBucketFetchJobResponse defines the response fetch job configuration
+type GetBucketFetchJobResponse struct {
+ Err string `json:"err"`
+ Code string `json:"code"`
+ Status string `json:"status"`
+ Job JobResponse `json:"job"`
+}
+
+// JobResponse defines the response job configuration
+type JobResponse struct {
+ Bucket string `json:"bucket"`
+ URL string `json:"url"`
+ Host string `json:"host"`
+ Key string `json:"key"`
+ Md5 string `json:"md5"`
+ CallBackURL string `json:"callbackurl"`
+ CallBackBody string `json:"callbackbody"`
+ CallBackBodyType string `json:"callbackbodytype"`
+ CallBackHost string `json:"callbackhost"`
+ FileType string `json:"file_type"`
+ IgnoreSameKey bool `json:"ignore_same_key"`
+}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/pool.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/pool.go
new file mode 100644
index 00000000000..1755c96ac51
--- /dev/null
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/pool.go
@@ -0,0 +1,542 @@
+// Copyright 2019 Huawei Technologies Co.,Ltd.
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
+// this file except in compliance with the License. You may obtain a copy of the
+// License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software distributed
+// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
+// CONDITIONS OF ANY KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations under the License.
+
+//nolint:structcheck, unused
+package obs
+
+import (
+ "errors"
+ "fmt"
+ "runtime"
+ "sync"
+ "sync/atomic"
+ "time"
+)
+
+// Future defines interface with function: Get
+type Future interface {
+ Get() interface{}
+}
+
+// FutureResult implements Future and carries the task result
+type FutureResult struct {
+ result interface{}
+ resultChan chan interface{}
+ lock sync.Mutex
+}
+
+type panicResult struct {
+ presult interface{}
+}
+
+func (f *FutureResult) checkPanic() interface{} {
+ if r, ok := f.result.(panicResult); ok {
+ panic(r.presult)
+ }
+ return f.result
+}
+
+// Get gets the task result
+func (f *FutureResult) Get() interface{} {
+ if f.resultChan == nil {
+ return f.checkPanic()
+ }
+ f.lock.Lock()
+ defer f.lock.Unlock()
+ if f.resultChan == nil {
+ return f.checkPanic()
+ }
+
+ f.result = <-f.resultChan
+ close(f.resultChan)
+ f.resultChan = nil
+ return f.checkPanic()
+}
+
+// Task defines interface with function: Run
+type Task interface {
+ Run() interface{}
+}
+
+type funcWrapper struct {
+ f func() interface{}
+}
+
+func (fw *funcWrapper) Run() interface{} {
+ if fw.f != nil {
+ return fw.f()
+ }
+ return nil
+}
+
+type taskWrapper struct {
+ t Task
+ f *FutureResult
+}
+
+func (tw *taskWrapper) Run() interface{} {
+ if tw.t != nil {
+ return tw.t.Run()
+ }
+ return nil
+}
+
+type signalTask struct {
+ id string
+}
+
+func (signalTask) Run() interface{} {
+ return nil
+}
+
+type worker struct {
+ name string
+ taskQueue chan Task
+ wg *sync.WaitGroup
+ pool *RoutinePool
+}
+
+func runTask(t Task) {
+ if tw, ok := t.(*taskWrapper); ok {
+ defer func() {
+ if r := recover(); r != nil {
+ tw.f.resultChan <- panicResult{
+ presult: r,
+ }
+ }
+ }()
+ ret := t.Run()
+ tw.f.resultChan <- ret
+ } else {
+ t.Run()
+ }
+}
+
+func (*worker) runTask(t Task) {
+ runTask(t)
+}
+
+func (w *worker) start() {
+ go func() {
+ defer func() {
+ if w.wg != nil {
+ w.wg.Done()
+ }
+ }()
+ for {
+ task, ok := <-w.taskQueue
+ if !ok {
+ break
+ }
+ w.pool.AddCurrentWorkingCnt(1)
+ w.runTask(task)
+ w.pool.AddCurrentWorkingCnt(-1)
+ if w.pool.autoTuneWorker(w) {
+ break
+ }
+ }
+ }()
+}
+
+func (w *worker) release() {
+ w.taskQueue = nil
+ w.wg = nil
+ w.pool = nil
+}
+
+// Pool defines coroutine pool interface
+type Pool interface {
+ ShutDown()
+ Submit(t Task) (Future, error)
+ SubmitFunc(f func() interface{}) (Future, error)
+ Execute(t Task)
+ ExecuteFunc(f func() interface{})
+ GetMaxWorkerCnt() int64
+ AddMaxWorkerCnt(value int64) int64
+ GetCurrentWorkingCnt() int64
+ AddCurrentWorkingCnt(value int64) int64
+ GetWorkerCnt() int64
+ AddWorkerCnt(value int64) int64
+ EnableAutoTune()
+}
+
+type basicPool struct {
+ maxWorkerCnt int64
+ workerCnt int64
+ currentWorkingCnt int64
+ isShutDown int32
+}
+
+// ErrTaskInvalid will be returned if the task is nil
+var ErrTaskInvalid = errors.New("Task is nil")
+
+func (pool *basicPool) GetCurrentWorkingCnt() int64 {
+ return atomic.LoadInt64(&pool.currentWorkingCnt)
+}
+
+func (pool *basicPool) AddCurrentWorkingCnt(value int64) int64 {
+ return atomic.AddInt64(&pool.currentWorkingCnt, value)
+}
+
+func (pool *basicPool) GetWorkerCnt() int64 {
+ return atomic.LoadInt64(&pool.workerCnt)
+}
+
+func (pool *basicPool) AddWorkerCnt(value int64) int64 {
+ return atomic.AddInt64(&pool.workerCnt, value)
+}
+
+func (pool *basicPool) GetMaxWorkerCnt() int64 {
+ return atomic.LoadInt64(&pool.maxWorkerCnt)
+}
+
+func (pool *basicPool) AddMaxWorkerCnt(value int64) int64 {
+ return atomic.AddInt64(&pool.maxWorkerCnt, value)
+}
+
+func (pool *basicPool) CompareAndSwapCurrentWorkingCnt(oldValue, newValue int64) bool {
+ return atomic.CompareAndSwapInt64(&pool.currentWorkingCnt, oldValue, newValue)
+}
+
+// EnableAutoTune is a no-op for basicPool; RoutinePool overrides it
+func (pool *basicPool) EnableAutoTune() {
+}
+
+// RoutinePool defines the coroutine pool struct
+type RoutinePool struct {
+ basicPool
+ taskQueue chan Task
+ dispatchQueue chan Task
+ workers map[string]*worker
+ cacheCnt int
+ wg *sync.WaitGroup
+ lock *sync.Mutex
+ shutDownWg *sync.WaitGroup
+ autoTune int32
+}
+
+// ErrSubmitTimeout will be returned if submit task timeout when calling SubmitWithTimeout function
+var ErrSubmitTimeout = errors.New("Submit task timeout")
+
+// ErrPoolShutDown will be returned if RoutinePool is shutdown
+var ErrPoolShutDown = errors.New("RoutinePool is shutdown")
+
+// ErrTaskReject will be returned if submit task is rejected
+var ErrTaskReject = errors.New("Submit task is rejected")
+
+var closeQueue = signalTask{id: "closeQueue"}
+
+// NewRoutinePool creates a RoutinePool instance
+func NewRoutinePool(maxWorkerCnt, cacheCnt int) Pool {
+ if maxWorkerCnt <= 0 {
+ maxWorkerCnt = runtime.NumCPU()
+ }
+
+ pool := &RoutinePool{
+ cacheCnt: cacheCnt,
+ wg: new(sync.WaitGroup),
+ lock: new(sync.Mutex),
+ shutDownWg: new(sync.WaitGroup),
+ autoTune: 0,
+ }
+ pool.isShutDown = 0
+ pool.maxWorkerCnt += int64(maxWorkerCnt)
+ if pool.cacheCnt <= 0 {
+ pool.taskQueue = make(chan Task)
+ } else {
+ pool.taskQueue = make(chan Task, pool.cacheCnt)
+ }
+ pool.workers = make(map[string]*worker, pool.maxWorkerCnt)
+ // dispatchQueue must not have length
+ pool.dispatchQueue = make(chan Task)
+ pool.dispatcher()
+
+ return pool
+}
+
+// EnableAutoTune sets the autoTune enabled
+func (pool *RoutinePool) EnableAutoTune() {
+ atomic.StoreInt32(&pool.autoTune, 1)
+}
+
+func (pool *RoutinePool) checkStatus(t Task) error {
+ if t == nil {
+ return ErrTaskInvalid
+ }
+
+ if atomic.LoadInt32(&pool.isShutDown) == 1 {
+ return ErrPoolShutDown
+ }
+ return nil
+}
+
+func (pool *RoutinePool) dispatcher() {
+ pool.shutDownWg.Add(1)
+ go func() {
+ for {
+ task, ok := <-pool.dispatchQueue
+ if !ok {
+ break
+ }
+
+ if task == closeQueue {
+ close(pool.taskQueue)
+ pool.shutDownWg.Done()
+ continue
+ }
+
+ if pool.GetWorkerCnt() < pool.GetMaxWorkerCnt() {
+ pool.addWorker()
+ }
+
+ pool.taskQueue <- task
+ }
+ }()
+}
+
+// AddMaxWorkerCnt sets the maxWorkerCnt field's value and returns it
+func (pool *RoutinePool) AddMaxWorkerCnt(value int64) int64 {
+ if atomic.LoadInt32(&pool.autoTune) == 1 {
+ return pool.basicPool.AddMaxWorkerCnt(value)
+ }
+ return pool.GetMaxWorkerCnt()
+}
+
+func (pool *RoutinePool) addWorker() {
+ if atomic.LoadInt32(&pool.autoTune) == 1 {
+ pool.lock.Lock()
+ defer pool.lock.Unlock()
+ }
+ w := &worker{}
+	w.name = fmt.Sprintf("worker-%d", len(pool.workers))
+ w.taskQueue = pool.taskQueue
+ w.wg = pool.wg
+ pool.AddWorkerCnt(1)
+ w.pool = pool
+ pool.workers[w.name] = w
+ pool.wg.Add(1)
+ w.start()
+}
+
+func (pool *RoutinePool) autoTuneWorker(w *worker) bool {
+ if atomic.LoadInt32(&pool.autoTune) == 0 {
+ return false
+ }
+
+ if w == nil {
+ return false
+ }
+
+ workerCnt := pool.GetWorkerCnt()
+ maxWorkerCnt := pool.GetMaxWorkerCnt()
+ if workerCnt > maxWorkerCnt && atomic.CompareAndSwapInt64(&pool.workerCnt, workerCnt, workerCnt-1) {
+ pool.lock.Lock()
+ defer pool.lock.Unlock()
+ delete(pool.workers, w.name)
+ w.wg.Done()
+ w.release()
+ return true
+ }
+
+ return false
+}
+
+// ExecuteFunc creates a funcWrapper instance with the specified function and calls the Execute function
+func (pool *RoutinePool) ExecuteFunc(f func() interface{}) {
+ fw := &funcWrapper{
+ f: f,
+ }
+ pool.Execute(fw)
+}
+
+// Execute pushes the specified task to the dispatchQueue
+func (pool *RoutinePool) Execute(t Task) {
+ if t != nil {
+ pool.dispatchQueue <- t
+ }
+}
+
+// SubmitFunc creates a funcWrapper instance with the specified function and calls the Submit function
+func (pool *RoutinePool) SubmitFunc(f func() interface{}) (Future, error) {
+ fw := &funcWrapper{
+ f: f,
+ }
+ return pool.Submit(fw)
+}
+
+// Submit pushes the specified task to the dispatchQueue, and returns the FutureResult and error info
+func (pool *RoutinePool) Submit(t Task) (Future, error) {
+ if err := pool.checkStatus(t); err != nil {
+ return nil, err
+ }
+ f := &FutureResult{}
+ f.resultChan = make(chan interface{}, 1)
+ tw := &taskWrapper{
+ t: t,
+ f: f,
+ }
+ pool.dispatchQueue <- tw
+ return f, nil
+}
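Submit pairs each task with a FutureResult backed by a one-element channel: the worker sends the result, and Get blocks until it arrives. A standalone sketch of that submit/future pattern (invented names, not the obs API):

```go
package main

import "fmt"

// future is a one-shot result holder: Get blocks on the buffered channel
// until the producing goroutine delivers a value.
type future struct{ ch chan interface{} }

func (f future) Get() interface{} { return <-f.ch }

// submit runs the task in its own goroutine and hands back a future,
// analogous to RoutinePool.Submit wrapping a Task in a taskWrapper.
func submit(task func() interface{}) future {
	f := future{ch: make(chan interface{}, 1)}
	go func() { f.ch <- task() }()
	return f
}

func main() {
	f := submit(func() interface{} { return 21 * 2 })
	fmt.Println(f.Get()) // 42
}
```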
+
+// SubmitWithTimeout pushes the specified task to the dispatchQueue, and returns the FutureResult and error info.
+// Also takes a timeout in milliseconds, and returns ErrSubmitTimeout if the submission doesn't complete within that time.
+func (pool *RoutinePool) SubmitWithTimeout(t Task, timeout int64) (Future, error) {
+ if timeout <= 0 {
+ return pool.Submit(t)
+ }
+ if err := pool.checkStatus(t); err != nil {
+ return nil, err
+ }
+ timeoutChan := make(chan bool, 1)
+ go func() {
+		time.Sleep(time.Millisecond * time.Duration(timeout))
+ timeoutChan <- true
+ close(timeoutChan)
+ }()
+
+ f := &FutureResult{}
+ f.resultChan = make(chan interface{}, 1)
+ tw := &taskWrapper{
+ t: t,
+ f: f,
+ }
+ select {
+ case pool.dispatchQueue <- tw:
+ return f, nil
+	case <-timeoutChan:
+		return nil, ErrSubmitTimeout
+ }
+}
+
+func (pool *RoutinePool) beforeCloseDispatchQueue() {
+ if !atomic.CompareAndSwapInt32(&pool.isShutDown, 0, 1) {
+ return
+ }
+ pool.dispatchQueue <- closeQueue
+ pool.wg.Wait()
+}
+
+func (pool *RoutinePool) doCloseDispatchQueue() {
+ close(pool.dispatchQueue)
+ pool.shutDownWg.Wait()
+}
+
+// ShutDown closes the RoutinePool instance
+func (pool *RoutinePool) ShutDown() {
+ pool.beforeCloseDispatchQueue()
+ pool.doCloseDispatchQueue()
+ for _, w := range pool.workers {
+ w.release()
+ }
+ pool.workers = nil
+ pool.taskQueue = nil
+ pool.dispatchQueue = nil
+}
+
+// NoChanPool defines a goroutine pool that caps concurrency with tokens instead of a dispatch channel
+type NoChanPool struct {
+ basicPool
+ wg *sync.WaitGroup
+ tokens chan interface{}
+}
+
+// NewNochanPool creates a new NoChanPool instance
+func NewNochanPool(maxWorkerCnt int) Pool {
+ if maxWorkerCnt <= 0 {
+ maxWorkerCnt = runtime.NumCPU()
+ }
+
+ pool := &NoChanPool{
+ wg: new(sync.WaitGroup),
+ tokens: make(chan interface{}, maxWorkerCnt),
+ }
+ pool.isShutDown = 0
+ pool.AddMaxWorkerCnt(int64(maxWorkerCnt))
+
+ for i := 0; i < maxWorkerCnt; i++ {
+ pool.tokens <- struct{}{}
+ }
+
+ return pool
+}
+
+func (pool *NoChanPool) acquire() {
+ <-pool.tokens
+}
+
+func (pool *NoChanPool) release() {
+	pool.tokens <- struct{}{}
+}
+
+func (pool *NoChanPool) execute(t Task) {
+ pool.wg.Add(1)
+ go func() {
+ pool.acquire()
+ defer func() {
+ pool.release()
+ pool.wg.Done()
+ }()
+ runTask(t)
+ }()
+}
+
+// ShutDown closes the NoChanPool instance
+func (pool *NoChanPool) ShutDown() {
+ if !atomic.CompareAndSwapInt32(&pool.isShutDown, 0, 1) {
+ return
+ }
+ pool.wg.Wait()
+}
+
+// Execute executes the specified task
+func (pool *NoChanPool) Execute(t Task) {
+ if t != nil {
+ pool.execute(t)
+ }
+}
+
+// ExecuteFunc creates a funcWrapper instance with the specified function and calls the Execute function
+func (pool *NoChanPool) ExecuteFunc(f func() interface{}) {
+ fw := &funcWrapper{
+ f: f,
+ }
+ pool.Execute(fw)
+}
+
+// Submit executes the specified task, and returns the FutureResult and error info
+func (pool *NoChanPool) Submit(t Task) (Future, error) {
+ if t == nil {
+ return nil, ErrTaskInvalid
+ }
+
+ f := &FutureResult{}
+ f.resultChan = make(chan interface{}, 1)
+ tw := &taskWrapper{
+ t: t,
+ f: f,
+ }
+
+ pool.execute(tw)
+ return f, nil
+}
+
+// SubmitFunc creates a funcWrapper instance with the specified function and calls the Submit function
+func (pool *NoChanPool) SubmitFunc(f func() interface{}) (Future, error) {
+ fw := &funcWrapper{
+ f: f,
+ }
+ return pool.Submit(fw)
+}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/provider.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/provider.go
new file mode 100644
index 00000000000..2e485c1134b
--- /dev/null
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/provider.go
@@ -0,0 +1,243 @@
+// Copyright 2019 Huawei Technologies Co.,Ltd.
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
+// this file except in compliance with the License. You may obtain a copy of the
+// License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software distributed
+// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
+// CONDITIONS OF ANY KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations under the License.
+
+package obs
+
+import (
+ "encoding/json"
+ "io/ioutil"
+ "math/rand"
+ "net"
+ "net/http"
+ "os"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "time"
+)
+
+const (
+ accessKeyEnv = "OBS_ACCESS_KEY_ID"
+ securityKeyEnv = "OBS_SECRET_ACCESS_KEY"
+ securityTokenEnv = "OBS_SECURITY_TOKEN"
+ ecsRequestURL = "http://169.254.169.254/openstack/latest/securitykey"
+)
+
+type securityHolder struct {
+ ak string
+ sk string
+ securityToken string
+}
+
+var emptySecurityHolder = securityHolder{}
+
+type securityProvider interface {
+ getSecurity() securityHolder
+}
+
+// BasicSecurityProvider holds a static AK/SK/securityToken in an atomic.Value
+type BasicSecurityProvider struct {
+ val atomic.Value
+}
+
+func (bsp *BasicSecurityProvider) getSecurity() securityHolder {
+ if sh, ok := bsp.val.Load().(securityHolder); ok {
+ return sh
+ }
+ return emptySecurityHolder
+}
+
+func (bsp *BasicSecurityProvider) refresh(ak, sk, securityToken string) {
+ bsp.val.Store(securityHolder{ak: strings.TrimSpace(ak), sk: strings.TrimSpace(sk), securityToken: strings.TrimSpace(securityToken)})
+}
+
+// NewBasicSecurityProvider creates a BasicSecurityProvider with the given credentials
+func NewBasicSecurityProvider(ak, sk, securityToken string) *BasicSecurityProvider {
+ bsp := &BasicSecurityProvider{}
+ bsp.refresh(ak, sk, securityToken)
+ return bsp
+}
+
+// EnvSecurityProvider reads credentials once from environment variables
+type EnvSecurityProvider struct {
+ sh securityHolder
+ suffix string
+ once sync.Once
+}
+
+func (esp *EnvSecurityProvider) getSecurity() securityHolder {
+	// ensure this runs only once
+ esp.once.Do(func() {
+ esp.sh = securityHolder{
+ ak: strings.TrimSpace(os.Getenv(accessKeyEnv + esp.suffix)),
+ sk: strings.TrimSpace(os.Getenv(securityKeyEnv + esp.suffix)),
+ securityToken: strings.TrimSpace(os.Getenv(securityTokenEnv + esp.suffix)),
+ }
+ })
+
+ return esp.sh
+}
+
+// NewEnvSecurityProvider creates an EnvSecurityProvider; a non-empty suffix is appended to the environment variable names
+func NewEnvSecurityProvider(suffix string) *EnvSecurityProvider {
+ if suffix != "" {
+ suffix = "_" + suffix
+ }
+ esp := &EnvSecurityProvider{
+ suffix: suffix,
+ }
+ return esp
+}
+
+type TemporarySecurityHolder struct {
+ securityHolder
+ expireDate time.Time
+}
+
+var emptyTemporarySecurityHolder = TemporarySecurityHolder{}
+
+// EcsSecurityProvider fetches temporary credentials from the ECS metadata service and caches them until they expire
+type EcsSecurityProvider struct {
+ val atomic.Value
+ lock sync.Mutex
+ httpClient *http.Client
+ prefetch int32
+ retryCount int
+}
+
+func (ecsSp *EcsSecurityProvider) loadTemporarySecurityHolder() (TemporarySecurityHolder, bool) {
+ if sh := ecsSp.val.Load(); sh == nil {
+ return emptyTemporarySecurityHolder, false
+ } else if _sh, ok := sh.(TemporarySecurityHolder); !ok {
+ return emptyTemporarySecurityHolder, false
+ } else {
+ return _sh, true
+ }
+}
+
+func (ecsSp *EcsSecurityProvider) getAndSetSecurityWithOutLock() securityHolder {
+ _sh := TemporarySecurityHolder{}
+ _sh.expireDate = time.Now().Add(time.Minute * 5)
+ retryCount := 0
+ for {
+ if req, err := http.NewRequest("GET", ecsRequestURL, nil); err == nil {
+ start := GetCurrentTimestamp()
+ res, err := ecsSp.httpClient.Do(req)
+		if err == nil {
+			data, _err := ioutil.ReadAll(res.Body)
+			_ = res.Body.Close() // close the body so the connection is not leaked
+			if _err == nil {
+ temp := &struct {
+ Credential struct {
+ AK string `json:"access,omitempty"`
+ SK string `json:"secret,omitempty"`
+ SecurityToken string `json:"securitytoken,omitempty"`
+ ExpireDate time.Time `json:"expires_at,omitempty"`
+ } `json:"credential"`
+ }{}
+
+					doLog(LEVEL_DEBUG, "Get the json data from ecs succeeded")
+
+ if jsonErr := json.Unmarshal(data, temp); jsonErr == nil {
+ _sh.ak = temp.Credential.AK
+ _sh.sk = temp.Credential.SK
+ _sh.securityToken = temp.Credential.SecurityToken
+ _sh.expireDate = temp.Credential.ExpireDate.Add(time.Minute * -1)
+
+					doLog(LEVEL_INFO, "Get security from ecs succeeded, AK:xxxx, SK:xxxx, SecurityToken:xxxx, ExpireDate %s", _sh.expireDate)
+
+					doLog(LEVEL_INFO, "Get security from ecs succeeded, cost %d ms", (GetCurrentTimestamp() - start))
+ break
+ } else {
+ err = jsonErr
+ }
+ } else {
+ err = _err
+ }
+ }
+
+		doLog(LEVEL_WARN, "Attempt to get security from ecs failed, cost %d ms, err %s", (GetCurrentTimestamp() - start), err.Error())
+ }
+
+ if retryCount >= ecsSp.retryCount {
+			doLog(LEVEL_WARN, "Attempts to get security from ecs failed and exceeded the max retry count")
+ break
+ }
+ sleepTime := float64(retryCount+2) * rand.Float64()
+ if sleepTime > 10 {
+ sleepTime = 10
+ }
+ time.Sleep(time.Duration(sleepTime * float64(time.Second)))
+ retryCount++
+ }
+
+ ecsSp.val.Store(_sh)
+ return _sh.securityHolder
+}
+
+func (ecsSp *EcsSecurityProvider) getAndSetSecurity() securityHolder {
+ ecsSp.lock.Lock()
+ defer ecsSp.lock.Unlock()
+ tsh, succeed := ecsSp.loadTemporarySecurityHolder()
+ if !succeed || time.Now().After(tsh.expireDate) {
+ return ecsSp.getAndSetSecurityWithOutLock()
+ }
+ return tsh.securityHolder
+}
+
+func (ecsSp *EcsSecurityProvider) getSecurity() securityHolder {
+ if tsh, succeed := ecsSp.loadTemporarySecurityHolder(); succeed {
+ if time.Now().Before(tsh.expireDate) {
+			// not expired yet
+			if time.Now().Add(time.Minute*5).After(tsh.expireDate) && atomic.CompareAndSwapInt32(&ecsSp.prefetch, 0, 1) {
+				// expiring within five minutes: prefetch new credentials
+ sh := ecsSp.getAndSetSecurityWithOutLock()
+ atomic.CompareAndSwapInt32(&ecsSp.prefetch, 1, 0)
+ return sh
+ }
+ return tsh.securityHolder
+ }
+ return ecsSp.getAndSetSecurity()
+ }
+
+ return ecsSp.getAndSetSecurity()
+}
+
+func getInternalTransport() *http.Transport {
+ timeout := 10
+ transport := &http.Transport{
+ Dial: func(network, addr string) (net.Conn, error) {
+ start := GetCurrentTimestamp()
+ conn, err := (&net.Dialer{
+ Timeout: time.Second * time.Duration(timeout),
+ Resolver: net.DefaultResolver,
+ }).Dial(network, addr)
+
+ if isInfoLogEnabled() {
+ doLog(LEVEL_INFO, "Do http dial cost %d ms", (GetCurrentTimestamp() - start))
+ }
+ if err != nil {
+ return nil, err
+ }
+ return getConnDelegate(conn, timeout, timeout*10), nil
+ },
+ MaxIdleConns: 10,
+ MaxIdleConnsPerHost: 10,
+ ResponseHeaderTimeout: time.Second * time.Duration(timeout),
+ IdleConnTimeout: time.Second * time.Duration(DEFAULT_IDLE_CONN_TIMEOUT),
+ DisableCompression: true,
+ }
+
+ return transport
+}
+
+// NewEcsSecurityProvider creates an EcsSecurityProvider with the given retry count
+func NewEcsSecurityProvider(retryCount int) *EcsSecurityProvider {
+ ecsSp := &EcsSecurityProvider{
+ retryCount: retryCount,
+ }
+ ecsSp.httpClient = &http.Client{Transport: getInternalTransport(), CheckRedirect: checkRedirectFunc}
+ return ecsSp
+}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/temporary.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/temporary.go
index 67893e8469f..20d9a29803b 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/temporary.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/temporary.go
@@ -22,6 +22,7 @@ import (
"time"
)
+// CreateSignedUrl creates signed url with the specified CreateSignedUrlInput, and returns the CreateSignedUrlOutput and error
func (obsClient ObsClient) CreateSignedUrl(input *CreateSignedUrlInput) (output *CreateSignedUrlOutput, err error) {
if input == nil {
return nil, errors.New("CreateSignedUrlInput is nil")
@@ -45,18 +46,30 @@ func (obsClient ObsClient) CreateSignedUrl(input *CreateSignedUrlInput) (output
input.Expires = 300
}
- requestUrl, err := obsClient.doAuthTemporary(string(input.Method), input.Bucket, input.Key, params, headers, int64(input.Expires))
+ requestURL, err := obsClient.doAuthTemporary(string(input.Method), input.Bucket, input.Key, params, headers, int64(input.Expires))
if err != nil {
return nil, err
}
output = &CreateSignedUrlOutput{
- SignedUrl: requestUrl,
+ SignedUrl: requestURL,
ActualSignedRequestHeaders: headers,
}
return
}
+// isSecurityToken adds the security token, if present, to the request params
+func (obsClient ObsClient) isSecurityToken(params map[string]string, sh securityHolder) {
+ if sh.securityToken != "" {
+ if obsClient.conf.signature == SignatureObs {
+ params[HEADER_STS_TOKEN_OBS] = sh.securityToken
+ } else {
+ params[HEADER_STS_TOKEN_AMZ] = sh.securityToken
+ }
+ }
+}
+
+// CreateBrowserBasedSignature gets the browser based signature with the specified CreateBrowserBasedSignatureInput,
+// and returns the CreateBrowserBasedSignatureOutput and error
func (obsClient ObsClient) CreateBrowserBasedSignature(input *CreateBrowserBasedSignatureInput) (output *CreateBrowserBasedSignatureOutput, err error) {
if input == nil {
return nil, errors.New("CreateBrowserBasedSignatureInput is nil")
@@ -70,26 +83,23 @@ func (obsClient ObsClient) CreateBrowserBasedSignature(input *CreateBrowserBased
date := time.Now().UTC()
shortDate := date.Format(SHORT_DATE_FORMAT)
longDate := date.Format(LONG_DATE_FORMAT)
+ sh := obsClient.getSecurity()
- credential, _ := getCredential(obsClient.conf.securityProvider.ak, obsClient.conf.region, shortDate)
+ credential, _ := getCredential(sh.ak, obsClient.conf.region, shortDate)
if input.Expires <= 0 {
input.Expires = 300
}
expiration := date.Add(time.Second * time.Duration(input.Expires)).Format(ISO8601_DATE_FORMAT)
- params[PARAM_ALGORITHM_AMZ_CAMEL] = V4_HASH_PREFIX
- params[PARAM_CREDENTIAL_AMZ_CAMEL] = credential
- params[PARAM_DATE_AMZ_CAMEL] = longDate
-
- if obsClient.conf.securityProvider.securityToken != "" {
- if obsClient.conf.signature == SignatureObs {
- params[HEADER_STS_TOKEN_OBS] = obsClient.conf.securityProvider.securityToken
- } else {
- params[HEADER_STS_TOKEN_AMZ] = obsClient.conf.securityProvider.securityToken
- }
+ if obsClient.conf.signature == SignatureV4 {
+ params[PARAM_ALGORITHM_AMZ_CAMEL] = V4_HASH_PREFIX
+ params[PARAM_CREDENTIAL_AMZ_CAMEL] = credential
+ params[PARAM_DATE_AMZ_CAMEL] = longDate
}
+ obsClient.isSecurityToken(params, sh)
+
matchAnyBucket := true
matchAnyKey := true
count := 5
@@ -126,7 +136,12 @@ func (obsClient ObsClient) CreateBrowserBasedSignature(input *CreateBrowserBased
originPolicy := strings.Join(originPolicySlice, "")
policy := Base64Encode([]byte(originPolicy))
- signature := getSignature(policy, obsClient.conf.securityProvider.sk, obsClient.conf.region, shortDate)
+ var signature string
+ if obsClient.conf.signature == SignatureV4 {
+ signature = getSignature(policy, sh.sk, obsClient.conf.region, shortDate)
+ } else {
+ signature = Base64Encode(HmacSha1([]byte(sh.sk), []byte(policy)))
+ }
output = &CreateBrowserBasedSignatureOutput{
OriginPolicy: originPolicy,
@@ -139,116 +154,159 @@ func (obsClient ObsClient) CreateBrowserBasedSignature(input *CreateBrowserBased
return
}
+// ListBucketsWithSignedUrl lists buckets with the specified signed url and signed request headers
func (obsClient ObsClient) ListBucketsWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *ListBucketsOutput, err error) {
output = &ListBucketsOutput{}
- err = obsClient.doHttpWithSignedUrl("ListBuckets", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("ListBuckets", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// CreateBucketWithSignedUrl creates bucket with the specified signed url and signed request headers and data
func (obsClient ObsClient) CreateBucketWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("CreateBucket", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("CreateBucket", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// DeleteBucketWithSignedUrl deletes bucket with the specified signed url and signed request headers
func (obsClient ObsClient) DeleteBucketWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("DeleteBucket", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("DeleteBucket", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketStoragePolicyWithSignedUrl sets bucket storage class with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketStoragePolicyWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketStoragePolicy", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketStoragePolicy", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketStoragePolicyWithSignedUrl gets bucket storage class with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketStoragePolicyWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketStoragePolicyOutput, err error) {
output = &GetBucketStoragePolicyOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketStoragePolicy", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketStoragePolicy", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// ListObjectsWithSignedUrl lists objects in a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) ListObjectsWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *ListObjectsOutput, err error) {
output = &ListObjectsOutput{}
- err = obsClient.doHttpWithSignedUrl("ListObjects", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("ListObjects", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
if location, ok := output.ResponseHeaders[HEADER_BUCKET_REGION]; ok {
output.Location = location[0]
}
+ if output.EncodingType == "url" {
+ err = decodeListObjectsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get ListObjectsOutput with error: %v.", err)
+ output = nil
+ }
+ }
}
return
}
+// ListVersionsWithSignedUrl lists versioning objects in a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) ListVersionsWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *ListVersionsOutput, err error) {
output = &ListVersionsOutput{}
- err = obsClient.doHttpWithSignedUrl("ListVersions", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("ListVersions", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
if location, ok := output.ResponseHeaders[HEADER_BUCKET_REGION]; ok {
output.Location = location[0]
}
+ if output.EncodingType == "url" {
+ err = decodeListVersionsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get ListVersionsOutput with error: %v.", err)
+ output = nil
+ }
+ }
}
return
}
+// ListMultipartUploadsWithSignedUrl lists the multipart uploads that are initialized but not combined or aborted in a
+// specified bucket with the specified signed url and signed request headers
func (obsClient ObsClient) ListMultipartUploadsWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *ListMultipartUploadsOutput, err error) {
output = &ListMultipartUploadsOutput{}
- err = obsClient.doHttpWithSignedUrl("ListMultipartUploads", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("ListMultipartUploads", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
+ } else if output.EncodingType == "url" {
+ err = decodeListMultipartUploadsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get ListMultipartUploadsOutput with error: %v.", err)
+ output = nil
+ }
}
return
}
+// SetBucketQuotaWithSignedUrl sets the bucket quota with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketQuotaWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketQuota", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketQuota", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketQuotaWithSignedUrl gets the bucket quota with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketQuotaWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketQuotaOutput, err error) {
output = &GetBucketQuotaOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketQuota", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketQuota", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// HeadBucketWithSignedUrl checks whether a bucket exists with the specified signed url and signed request headers
func (obsClient ObsClient) HeadBucketWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("HeadBucket", HTTP_HEAD, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("HeadBucket", HTTP_HEAD, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ if err != nil {
+ output = nil
+ }
+ return
+}
+
+// HeadObjectWithSignedUrl checks whether an object exists with the specified signed url and signed request headers
+func (obsClient ObsClient) HeadObjectWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *BaseModel, err error) {
+ output = &BaseModel{}
+ err = obsClient.doHTTPWithSignedURL("HeadObject", HTTP_HEAD, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketMetadataWithSignedUrl gets the metadata of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketMetadataWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketMetadataOutput, err error) {
output = &GetBucketMetadataOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketMetadata", HTTP_HEAD, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketMetadata", HTTP_HEAD, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
@@ -257,234 +315,260 @@ func (obsClient ObsClient) GetBucketMetadataWithSignedUrl(signedUrl string, actu
return
}
+// GetBucketStorageInfoWithSignedUrl gets storage information about a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketStorageInfoWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketStorageInfoOutput, err error) {
output = &GetBucketStorageInfoOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketStorageInfo", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketStorageInfo", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketLocationWithSignedUrl gets the location of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketLocationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketLocationOutput, err error) {
output = &GetBucketLocationOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketLocation", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketLocation", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketAclWithSignedUrl sets the bucket ACL with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketAclWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketAcl", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketAcl", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketAclWithSignedUrl gets the bucket ACL with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketAclWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketAclOutput, err error) {
output = &GetBucketAclOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketAcl", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketAcl", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketPolicyWithSignedUrl sets the bucket policy with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketPolicyWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketPolicy", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketPolicy", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketPolicyWithSignedUrl gets the bucket policy with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketPolicyWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketPolicyOutput, err error) {
output = &GetBucketPolicyOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketPolicy", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, false)
+ err = obsClient.doHTTPWithSignedURL("GetBucketPolicy", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, false)
if err != nil {
output = nil
}
return
}
+// DeleteBucketPolicyWithSignedUrl deletes the bucket policy with the specified signed url and signed request headers
func (obsClient ObsClient) DeleteBucketPolicyWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("DeleteBucketPolicy", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("DeleteBucketPolicy", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketCorsWithSignedUrl sets CORS rules for a bucket with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketCorsWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketCors", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketCors", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketCorsWithSignedUrl gets CORS rules of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketCorsWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketCorsOutput, err error) {
output = &GetBucketCorsOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketCors", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketCors", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// DeleteBucketCorsWithSignedUrl deletes CORS rules of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) DeleteBucketCorsWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("DeleteBucketCors", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("DeleteBucketCors", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketVersioningWithSignedUrl sets the versioning status for a bucket with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketVersioningWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketVersioning", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketVersioning", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketVersioningWithSignedUrl gets the versioning status of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketVersioningWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketVersioningOutput, err error) {
output = &GetBucketVersioningOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketVersioning", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketVersioning", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketWebsiteConfigurationWithSignedUrl sets website hosting for a bucket with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketWebsiteConfigurationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketWebsiteConfiguration", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketWebsiteConfiguration", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketWebsiteConfigurationWithSignedUrl gets the website hosting settings of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketWebsiteConfigurationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketWebsiteConfigurationOutput, err error) {
output = &GetBucketWebsiteConfigurationOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketWebsiteConfiguration", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketWebsiteConfiguration", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// DeleteBucketWebsiteConfigurationWithSignedUrl deletes the website hosting settings of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) DeleteBucketWebsiteConfigurationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("DeleteBucketWebsiteConfiguration", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("DeleteBucketWebsiteConfiguration", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketLoggingConfigurationWithSignedUrl sets the bucket logging with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketLoggingConfigurationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketLoggingConfiguration", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketLoggingConfiguration", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketLoggingConfigurationWithSignedUrl gets the logging settings of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketLoggingConfigurationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketLoggingConfigurationOutput, err error) {
output = &GetBucketLoggingConfigurationOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketLoggingConfiguration", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketLoggingConfiguration", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketLifecycleConfigurationWithSignedUrl sets lifecycle rules for a bucket with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketLifecycleConfigurationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketLifecycleConfiguration", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketLifecycleConfiguration", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketLifecycleConfigurationWithSignedUrl gets lifecycle rules of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketLifecycleConfigurationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketLifecycleConfigurationOutput, err error) {
output = &GetBucketLifecycleConfigurationOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketLifecycleConfiguration", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketLifecycleConfiguration", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// DeleteBucketLifecycleConfigurationWithSignedUrl deletes lifecycle rules of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) DeleteBucketLifecycleConfigurationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("DeleteBucketLifecycleConfiguration", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("DeleteBucketLifecycleConfiguration", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketTaggingWithSignedUrl sets bucket tags with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketTaggingWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketTagging", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketTagging", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketTaggingWithSignedUrl gets bucket tags with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketTaggingWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketTaggingOutput, err error) {
output = &GetBucketTaggingOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketTagging", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketTagging", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// DeleteBucketTaggingWithSignedUrl deletes bucket tags with the specified signed url and signed request headers
func (obsClient ObsClient) DeleteBucketTaggingWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("DeleteBucketTagging", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("DeleteBucketTagging", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// SetBucketNotificationWithSignedUrl sets event notification for a bucket with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetBucketNotificationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetBucketNotification", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetBucketNotification", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetBucketNotificationWithSignedUrl gets event notification settings of a bucket with the specified signed url and signed request headers
func (obsClient ObsClient) GetBucketNotificationWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketNotificationOutput, err error) {
output = &GetBucketNotificationOutput{}
- err = obsClient.doHttpWithSignedUrl("GetBucketNotification", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetBucketNotification", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// DeleteObjectWithSignedUrl deletes an object with the specified signed url and signed request headers
func (obsClient ObsClient) DeleteObjectWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *DeleteObjectOutput, err error) {
output = &DeleteObjectOutput{}
- err = obsClient.doHttpWithSignedUrl("DeleteObject", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("DeleteObject", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
@@ -493,49 +577,60 @@ func (obsClient ObsClient) DeleteObjectWithSignedUrl(signedUrl string, actualSig
return
}
+// DeleteObjectsWithSignedUrl deletes objects in a batch with the specified signed url and signed request headers and data
func (obsClient ObsClient) DeleteObjectsWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *DeleteObjectsOutput, err error) {
output = &DeleteObjectsOutput{}
- err = obsClient.doHttpWithSignedUrl("DeleteObjects", HTTP_POST, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("DeleteObjects", HTTP_POST, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
+ } else if output.EncodingType == "url" {
+ err = decodeDeleteObjectsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get DeleteObjectsOutput with error: %v.", err)
+ output = nil
+ }
}
return
}
+// SetObjectAclWithSignedUrl sets ACL for an object with the specified signed url and signed request headers and data
func (obsClient ObsClient) SetObjectAclWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("SetObjectAcl", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("SetObjectAcl", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetObjectAclWithSignedUrl gets the ACL of an object with the specified signed url and signed request headers
func (obsClient ObsClient) GetObjectAclWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetObjectAclOutput, err error) {
output = &GetObjectAclOutput{}
- err = obsClient.doHttpWithSignedUrl("GetObjectAcl", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetObjectAcl", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
- if versionId, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
- output.VersionId = versionId[0]
+ if versionID, ok := output.ResponseHeaders[HEADER_VERSION_ID]; ok {
+ output.VersionId = versionID[0]
}
}
return
}
+// RestoreObjectWithSignedUrl restores an object with the specified signed url and signed request headers and data
func (obsClient ObsClient) RestoreObjectWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("RestoreObject", HTTP_POST, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("RestoreObject", HTTP_POST, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
}
return
}
+// GetObjectMetadataWithSignedUrl gets object metadata with the specified signed url and signed request headers
func (obsClient ObsClient) GetObjectMetadataWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetObjectMetadataOutput, err error) {
output = &GetObjectMetadataOutput{}
- err = obsClient.doHttpWithSignedUrl("GetObjectMetadata", HTTP_HEAD, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetObjectMetadata", HTTP_HEAD, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
@@ -544,9 +639,10 @@ func (obsClient ObsClient) GetObjectMetadataWithSignedUrl(signedUrl string, actu
return
}
+// GetObjectWithSignedUrl downloads an object with the specified signed url and signed request headers
func (obsClient ObsClient) GetObjectWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetObjectOutput, err error) {
output = &GetObjectOutput{}
- err = obsClient.doHttpWithSignedUrl("GetObject", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("GetObject", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
@@ -555,9 +651,10 @@ func (obsClient ObsClient) GetObjectWithSignedUrl(signedUrl string, actualSigned
return
}
+// PutObjectWithSignedUrl uploads an object to the specified bucket with the specified signed url and signed request headers and data
func (obsClient ObsClient) PutObjectWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *PutObjectOutput, err error) {
output = &PutObjectOutput{}
- err = obsClient.doHttpWithSignedUrl("PutObject", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("PutObject", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
} else {
@@ -566,18 +663,26 @@ func (obsClient ObsClient) PutObjectWithSignedUrl(signedUrl string, actualSigned
return
}
+// PutFileWithSignedUrl uploads a file to the specified bucket with the specified signed url and signed request headers and sourceFile path
func (obsClient ObsClient) PutFileWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, sourceFile string) (output *PutObjectOutput, err error) {
var data io.Reader
sourceFile = strings.TrimSpace(sourceFile)
if sourceFile != "" {
- fd, err := os.Open(sourceFile)
- if err != nil {
+ fd, _err := os.Open(sourceFile)
+ if _err != nil {
+ err = _err
return nil, err
}
- defer fd.Close()
-
- stat, err := fd.Stat()
- if err != nil {
+ defer func() {
+ errMsg := fd.Close()
+ if errMsg != nil {
+ doLog(LEVEL_WARN, "Failed to close file with reason: %v", errMsg)
+ }
+ }()
+
+ stat, _err := fd.Stat()
+ if _err != nil {
+ err = _err
return nil, err
}
fileReaderWrapper := &fileReaderWrapper{filePath: sourceFile}
@@ -599,7 +704,7 @@ func (obsClient ObsClient) PutFileWithSignedUrl(signedUrl string, actualSignedRe
}
output = &PutObjectOutput{}
- err = obsClient.doHttpWithSignedUrl("PutObject", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("PutObject", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
} else {
@@ -608,9 +713,10 @@ func (obsClient ObsClient) PutFileWithSignedUrl(signedUrl string, actualSignedRe
return
}
+// CopyObjectWithSignedUrl creates a copy of an existing object with the specified signed url and signed request headers
func (obsClient ObsClient) CopyObjectWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *CopyObjectOutput, err error) {
output = &CopyObjectOutput{}
- err = obsClient.doHttpWithSignedUrl("CopyObject", HTTP_PUT, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("CopyObject", HTTP_PUT, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
@@ -619,29 +725,40 @@ func (obsClient ObsClient) CopyObjectWithSignedUrl(signedUrl string, actualSigne
return
}
+// AbortMultipartUploadWithSignedUrl aborts a multipart upload in a specified bucket by using the multipart upload ID with the specified signed url and signed request headers
func (obsClient ObsClient) AbortMultipartUploadWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *BaseModel, err error) {
output = &BaseModel{}
- err = obsClient.doHttpWithSignedUrl("AbortMultipartUpload", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("AbortMultipartUpload", HTTP_DELETE, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
}
return
}
+// InitiateMultipartUploadWithSignedUrl initializes a multipart upload with the specified signed url and signed request headers
func (obsClient ObsClient) InitiateMultipartUploadWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *InitiateMultipartUploadOutput, err error) {
output = &InitiateMultipartUploadOutput{}
- err = obsClient.doHttpWithSignedUrl("InitiateMultipartUpload", HTTP_POST, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("InitiateMultipartUpload", HTTP_POST, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
ParseInitiateMultipartUploadOutput(output)
+ if output.EncodingType == "url" {
+ err = decodeInitiateMultipartUploadOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get InitiateMultipartUploadOutput with error: %v.", err)
+ output = nil
+ }
+ }
}
return
}
+// UploadPartWithSignedUrl uploads a part to a specified bucket by using a specified multipart upload ID
+// with the specified signed url and signed request headers and data
func (obsClient ObsClient) UploadPartWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *UploadPartOutput, err error) {
output = &UploadPartOutput{}
- err = obsClient.doHttpWithSignedUrl("UploadPart", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("UploadPart", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
} else {
@@ -650,29 +767,46 @@ func (obsClient ObsClient) UploadPartWithSignedUrl(signedUrl string, actualSigne
return
}
+// CompleteMultipartUploadWithSignedUrl combines the uploaded parts in a specified bucket by using the multipart upload ID
+// with the specified signed url and signed request headers and data
func (obsClient ObsClient) CompleteMultipartUploadWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *CompleteMultipartUploadOutput, err error) {
output = &CompleteMultipartUploadOutput{}
- err = obsClient.doHttpWithSignedUrl("CompleteMultipartUpload", HTTP_POST, signedUrl, actualSignedRequestHeaders, data, output, true)
+ err = obsClient.doHTTPWithSignedURL("CompleteMultipartUpload", HTTP_POST, signedUrl, actualSignedRequestHeaders, data, output, true)
if err != nil {
output = nil
} else {
ParseCompleteMultipartUploadOutput(output)
+ if output.EncodingType == "url" {
+ err = decodeCompleteMultipartUploadOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get CompleteMultipartUploadOutput with error: %v.", err)
+ output = nil
+ }
+ }
}
return
}
+// ListPartsWithSignedUrl lists the uploaded parts in a bucket by using the multipart upload ID with the specified signed url and signed request headers
func (obsClient ObsClient) ListPartsWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *ListPartsOutput, err error) {
output = &ListPartsOutput{}
- err = obsClient.doHttpWithSignedUrl("ListParts", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("ListParts", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
+ } else if output.EncodingType == "url" {
+ err = decodeListPartsOutput(output)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to get ListPartsOutput with error: %v.", err)
+ output = nil
+ }
}
return
}
+// CopyPartWithSignedUrl copies a part to a specified bucket by using a specified multipart upload ID with the specified signed url and signed request headers
func (obsClient ObsClient) CopyPartWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *CopyPartOutput, err error) {
output = &CopyPartOutput{}
- err = obsClient.doHttpWithSignedUrl("CopyPart", HTTP_PUT, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ err = obsClient.doHTTPWithSignedURL("CopyPart", HTTP_PUT, signedUrl, actualSignedRequestHeaders, nil, output, true)
if err != nil {
output = nil
} else {
@@ -680,3 +814,23 @@ func (obsClient ObsClient) CopyPartWithSignedUrl(signedUrl string, actualSignedR
}
return
}
+
+// SetBucketRequestPaymentWithSignedUrl sets the requester-pays setting for a bucket with the specified signed url and signed request headers and data
+func (obsClient ObsClient) SetBucketRequestPaymentWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header, data io.Reader) (output *BaseModel, err error) {
+ output = &BaseModel{}
+ err = obsClient.doHTTPWithSignedURL("SetBucketRequestPayment", HTTP_PUT, signedUrl, actualSignedRequestHeaders, data, output, true)
+ if err != nil {
+ output = nil
+ }
+ return
+}
+
+// GetBucketRequestPaymentWithSignedUrl gets the requester-pays setting of a bucket with the specified signed url and signed request headers
+func (obsClient ObsClient) GetBucketRequestPaymentWithSignedUrl(signedUrl string, actualSignedRequestHeaders http.Header) (output *GetBucketRequestPaymentOutput, err error) {
+ output = &GetBucketRequestPaymentOutput{}
+ err = obsClient.doHTTPWithSignedURL("GetBucketRequestPayment", HTTP_GET, signedUrl, actualSignedRequestHeaders, nil, output, true)
+ if err != nil {
+ output = nil
+ }
+ return
+}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/trait.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/trait.go
index 5ff3e3c63a9..66ee5f58155 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/trait.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/trait.go
@@ -10,16 +10,19 @@
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
+//nolint:structcheck, unused
package obs
import (
"bytes"
"fmt"
"io"
+ "net/url"
"os"
"strings"
)
+// IReadCloser defines interface with function: setReadCloser
type IReadCloser interface {
setReadCloser(body io.ReadCloser)
}
@@ -46,18 +49,21 @@ func setHeadersNext(headers map[string][]string, header string, headerNext strin
}
}
+// IBaseModel defines interface for base response model
type IBaseModel interface {
setStatusCode(statusCode int)
- setRequestId(requestId string)
+ setRequestID(requestID string)
setResponseHeaders(responseHeaders map[string][]string)
}
+// ISerializable defines interface with function: trans
type ISerializable interface {
trans(isObs bool) (map[string]string, map[string][]string, interface{}, error)
}
+// DefaultSerializable defines default serializable struct
type DefaultSerializable struct {
params map[string]string
headers map[string][]string
@@ -84,8 +90,8 @@ func (baseModel *BaseModel) setStatusCode(statusCode int) {
baseModel.StatusCode = statusCode
}
-func (baseModel *BaseModel) setRequestId(requestId string) {
- baseModel.RequestId = requestId
+func (baseModel *BaseModel) setRequestID(requestID string) {
+ baseModel.RequestId = requestID
}
func (baseModel *BaseModel) setResponseHeaders(responseHeaders map[string][]string) {
@@ -100,6 +106,30 @@ func (input ListBucketsInput) trans(isObs bool) (params map[string]string, heade
return
}
+func (input CreateBucketInput) prepareGrantHeaders(headers map[string][]string, isObs bool) {
+ if grantReadID := input.GrantReadId; grantReadID != "" {
+ setHeaders(headers, HEADER_GRANT_READ_OBS, []string{grantReadID}, isObs)
+ }
+ if grantWriteID := input.GrantWriteId; grantWriteID != "" {
+ setHeaders(headers, HEADER_GRANT_WRITE_OBS, []string{grantWriteID}, isObs)
+ }
+ if grantReadAcpID := input.GrantReadAcpId; grantReadAcpID != "" {
+ setHeaders(headers, HEADER_GRANT_READ_ACP_OBS, []string{grantReadAcpID}, isObs)
+ }
+ if grantWriteAcpID := input.GrantWriteAcpId; grantWriteAcpID != "" {
+ setHeaders(headers, HEADER_GRANT_WRITE_ACP_OBS, []string{grantWriteAcpID}, isObs)
+ }
+ if grantFullControlID := input.GrantFullControlId; grantFullControlID != "" {
+ setHeaders(headers, HEADER_GRANT_FULL_CONTROL_OBS, []string{grantFullControlID}, isObs)
+ }
+ if grantReadDeliveredID := input.GrantReadDeliveredId; grantReadDeliveredID != "" {
+ setHeaders(headers, HEADER_GRANT_READ_DELIVERED_OBS, []string{grantReadDeliveredID}, true)
+ }
+ if grantFullControlDeliveredID := input.GrantFullControlDeliveredId; grantFullControlDeliveredID != "" {
+ setHeaders(headers, HEADER_GRANT_FULL_CONTROL_DELIVERED_OBS, []string{grantFullControlDeliveredID}, true)
+ }
+}
+
func (input CreateBucketInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
headers = make(map[string][]string)
if acl := string(input.ACL); acl != "" {
@@ -107,38 +137,26 @@ func (input CreateBucketInput) trans(isObs bool) (params map[string]string, head
}
if storageClass := string(input.StorageClass); storageClass != "" {
if !isObs {
- if storageClass == "WARM" {
- storageClass = "STANDARD_IA"
- } else if storageClass == "COLD" {
- storageClass = "GLACIER"
+ if storageClass == string(StorageClassWarm) {
+ storageClass = string(storageClassStandardIA)
+ } else if storageClass == string(StorageClassCold) {
+ storageClass = string(storageClassGlacier)
}
}
setHeadersNext(headers, HEADER_STORAGE_CLASS_OBS, HEADER_STORAGE_CLASS, []string{storageClass}, isObs)
- if epid := string(input.Epid); epid != "" {
- setHeaders(headers, HEADER_EPID_HEADERS, []string{epid}, isObs)
- }
- }
- if grantReadId := string(input.GrantReadId); grantReadId != "" {
- setHeaders(headers, HEADER_GRANT_READ_OBS, []string{grantReadId}, isObs)
}
- if grantWriteId := string(input.GrantWriteId); grantWriteId != "" {
- setHeaders(headers, HEADER_GRANT_WRITE_OBS, []string{grantWriteId}, isObs)
+ if epid := input.Epid; epid != "" {
+ setHeaders(headers, HEADER_EPID_HEADERS, []string{epid}, isObs)
}
- if grantReadAcpId := string(input.GrantReadAcpId); grantReadAcpId != "" {
- setHeaders(headers, HEADER_GRANT_READ_ACP_OBS, []string{grantReadAcpId}, isObs)
+ if availableZone := input.AvailableZone; availableZone != "" {
+ setHeaders(headers, HEADER_AZ_REDUNDANCY, []string{availableZone}, isObs)
}
- if grantWriteAcpId := string(input.GrantWriteAcpId); grantWriteAcpId != "" {
- setHeaders(headers, HEADER_GRANT_WRITE_ACP_OBS, []string{grantWriteAcpId}, isObs)
- }
- if grantFullControlId := string(input.GrantFullControlId); grantFullControlId != "" {
- setHeaders(headers, HEADER_GRANT_FULL_CONTROL_OBS, []string{grantFullControlId}, isObs)
- }
- if grantReadDeliveredId := string(input.GrantReadDeliveredId); grantReadDeliveredId != "" {
- setHeaders(headers, HEADER_GRANT_READ_DELIVERED_OBS, []string{grantReadDeliveredId}, true)
- }
- if grantFullControlDeliveredId := string(input.GrantFullControlDeliveredId); grantFullControlDeliveredId != "" {
- setHeaders(headers, HEADER_GRANT_FULL_CONTROL_DELIVERED_OBS, []string{grantFullControlDeliveredId}, true)
+
+ input.prepareGrantHeaders(headers, isObs)
+ if input.IsFSFileInterface {
+ setHeaders(headers, headerFSFileInterface, []string{"Enabled"}, true)
}
+
if location := strings.TrimSpace(input.Location); location != "" {
input.Location = location
@@ -160,15 +178,15 @@ func (input SetBucketStoragePolicyInput) trans(isObs bool) (params map[string]st
xml := make([]string, 0, 1)
if !isObs {
storageClass := "STANDARD"
- if input.StorageClass == "WARM" {
- storageClass = "STANDARD_IA"
- } else if input.StorageClass == "COLD" {
- storageClass = "GLACIER"
+ if input.StorageClass == StorageClassWarm {
+ storageClass = string(storageClassStandardIA)
+ } else if input.StorageClass == StorageClassCold {
+ storageClass = string(storageClassGlacier)
}
params = map[string]string{string(SubResourceStoragePolicy): ""}
xml = append(xml, fmt.Sprintf("%s", storageClass))
} else {
- if input.StorageClass != "WARM" && input.StorageClass != "COLD" {
+ if input.StorageClass != StorageClassWarm && input.StorageClass != StorageClassCold {
input.StorageClass = StorageClassStandard
}
params = map[string]string{string(SubResourceStorageClass): ""}
@@ -189,6 +207,9 @@ func (input ListObjsInput) trans(isObs bool) (params map[string]string, headers
if input.MaxKeys > 0 {
params["max-keys"] = IntToString(input.MaxKeys)
}
+ if input.EncodingType != "" {
+ params["encoding-type"] = input.EncodingType
+ }
headers = make(map[string][]string)
if origin := strings.TrimSpace(input.Origin); origin != "" {
headers[HEADER_ORIGIN_CAMEL] = []string{origin}
@@ -242,6 +263,9 @@ func (input ListMultipartUploadsInput) trans(isObs bool) (params map[string]stri
if input.UploadIdMarker != "" {
params["upload-id-marker"] = input.UploadIdMarker
}
+ if input.EncodingType != "" {
+ params["encoding-type"] = input.EncodingType
+ }
return
}
@@ -256,7 +280,7 @@ func (input SetBucketAclInput) trans(isObs bool) (params map[string]string, head
if acl := string(input.ACL); acl != "" {
setHeaders(headers, HEADER_ACL, []string{acl}, isObs)
} else {
- data, _ = convertBucketAclToXml(input.AccessControlPolicy, false, isObs)
+ data, _ = convertBucketACLToXML(input.AccessControlPolicy, false, isObs)
}
return
}
@@ -273,7 +297,7 @@ func (input SetBucketCorsInput) trans(isObs bool) (params map[string]string, hea
if err != nil {
return
}
- headers = map[string][]string{HEADER_MD5_CAMEL: []string{md5}}
+ headers = map[string][]string{HEADER_MD5_CAMEL: {md5}}
return
}
@@ -307,7 +331,7 @@ func (input SetBucketLoggingConfigurationInput) trans(isObs bool) (params map[st
func (input SetBucketLifecycleConfigurationInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
params = map[string]string{string(SubResourceLifecycle): ""}
data, md5 := ConvertLifecyleConfigurationToXml(input.BucketLifecyleConfiguration, true, isObs)
- headers = map[string][]string{HEADER_MD5_CAMEL: []string{md5}}
+ headers = map[string][]string{HEADER_MD5_CAMEL: {md5}}
return
}
@@ -317,7 +341,7 @@ func (input SetBucketTaggingInput) trans(isObs bool) (params map[string]string,
if err != nil {
return
}
- headers = map[string][]string{HEADER_MD5_CAMEL: []string{md5}}
+ headers = map[string][]string{HEADER_MD5_CAMEL: {md5}}
return
}
@@ -337,11 +361,13 @@ func (input DeleteObjectInput) trans(isObs bool) (params map[string]string, head
func (input DeleteObjectsInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
params = map[string]string{string(SubResourceDelete): ""}
- data, md5, err := ConvertRequestToIoReaderV2(input)
- if err != nil {
- return
+ if strings.ToLower(input.EncodingType) == "url" {
+ for index, object := range input.Objects {
+ input.Objects[index].Key = url.QueryEscape(object.Key)
+ }
}
- headers = map[string][]string{HEADER_MD5_CAMEL: []string{md5}}
+ data, md5 := convertDeleteObjectsToXML(input)
+ headers = map[string][]string{HEADER_MD5_CAMEL: {md5}}
return
}
@@ -380,21 +406,23 @@ func (input RestoreObjectInput) trans(isObs bool) (params map[string]string, hea
return
}
+// GetEncryption gets the Encryption field value from SseKmsHeader
func (header SseKmsHeader) GetEncryption() string {
if header.Encryption != "" {
return header.Encryption
}
if !header.isObs {
return DEFAULT_SSE_KMS_ENCRYPTION
- } else {
- return DEFAULT_SSE_KMS_ENCRYPTION_OBS
}
+ return DEFAULT_SSE_KMS_ENCRYPTION_OBS
}
+// GetKey gets the Key field value from SseKmsHeader
func (header SseKmsHeader) GetKey() string {
return header.Key
}
+// GetEncryption gets the Encryption field value from SseCHeader
func (header SseCHeader) GetEncryption() string {
if header.Encryption != "" {
return header.Encryption
@@ -402,10 +430,12 @@ func (header SseCHeader) GetEncryption() string {
return DEFAULT_SSE_C_ENCRYPTION
}
+// GetKey gets the Key field value from SseCHeader
func (header SseCHeader) GetKey() string {
return header.Key
}
+// GetKeyMD5 gets the KeyMD5 field value from SseCHeader
func (header SseCHeader) GetKeyMD5() string {
if header.KeyMD5 != "" {
return header.KeyMD5
@@ -422,11 +452,13 @@ func setSseHeader(headers map[string][]string, sseHeader ISseHeader, sseCOnly bo
if sseCHeader, ok := sseHeader.(SseCHeader); ok {
setHeaders(headers, HEADER_SSEC_ENCRYPTION, []string{sseCHeader.GetEncryption()}, isObs)
setHeaders(headers, HEADER_SSEC_KEY, []string{sseCHeader.GetKey()}, isObs)
- setHeaders(headers, HEADER_SSEC_KEY_MD5, []string{sseCHeader.GetEncryption()}, isObs)
+ setHeaders(headers, HEADER_SSEC_KEY_MD5, []string{sseCHeader.GetKeyMD5()}, isObs)
} else if sseKmsHeader, ok := sseHeader.(SseKmsHeader); !sseCOnly && ok {
sseKmsHeader.isObs = isObs
setHeaders(headers, HEADER_SSEKMS_ENCRYPTION, []string{sseKmsHeader.GetEncryption()}, isObs)
- setHeadersNext(headers, HEADER_SSEKMS_KEY_OBS, HEADER_SSEKMS_KEY_AMZ, []string{sseKmsHeader.GetKey()}, isObs)
+ if sseKmsHeader.GetKey() != "" {
+ setHeadersNext(headers, HEADER_SSEKMS_KEY_OBS, HEADER_SSEKMS_KEY_AMZ, []string{sseKmsHeader.GetKey()}, isObs)
+ }
}
}
}
@@ -449,6 +481,35 @@ func (input GetObjectMetadataInput) trans(isObs bool) (params map[string]string,
return
}
+func (input SetObjectMetadataInput) prepareContentHeaders(headers map[string][]string) {
+ if input.ContentDisposition != "" {
+ headers[HEADER_CONTENT_DISPOSITION_CAMEL] = []string{input.ContentDisposition}
+ }
+ if input.ContentEncoding != "" {
+ headers[HEADER_CONTENT_ENCODING_CAMEL] = []string{input.ContentEncoding}
+ }
+ if input.ContentLanguage != "" {
+ headers[HEADER_CONTENT_LANGUAGE_CAMEL] = []string{input.ContentLanguage}
+ }
+
+ if input.ContentType != "" {
+ headers[HEADER_CONTENT_TYPE_CAML] = []string{input.ContentType}
+ }
+}
+
+func (input SetObjectMetadataInput) prepareStorageClass(headers map[string][]string, isObs bool) {
+ if storageClass := string(input.StorageClass); storageClass != "" {
+ if !isObs {
+ if storageClass == string(StorageClassWarm) {
+ storageClass = string(storageClassStandardIA)
+ } else if storageClass == string(StorageClassCold) {
+ storageClass = string(storageClassGlacier)
+ }
+ }
+ setHeaders(headers, HEADER_STORAGE_CLASS2, []string{storageClass}, isObs)
+ }
+}
+
func (input SetObjectMetadataInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
params = make(map[string]string)
params = map[string]string{string(SubResourceMetadata): ""}
@@ -465,35 +526,14 @@ func (input SetObjectMetadataInput) trans(isObs bool) (params map[string]string,
if input.CacheControl != "" {
headers[HEADER_CACHE_CONTROL_CAMEL] = []string{input.CacheControl}
}
- if input.ContentDisposition != "" {
- headers[HEADER_CONTENT_DISPOSITION_CAMEL] = []string{input.ContentDisposition}
- }
- if input.ContentEncoding != "" {
- headers[HEADER_CONTENT_ENCODING_CAMEL] = []string{input.ContentEncoding}
- }
- if input.ContentLanguage != "" {
- headers[HEADER_CONTENT_LANGUAGE_CAMEL] = []string{input.ContentLanguage}
- }
-
- if input.ContentType != "" {
- headers[HEADER_CONTENT_TYPE_CAML] = []string{input.ContentType}
- }
+ input.prepareContentHeaders(headers)
if input.Expires != "" {
headers[HEADER_EXPIRES_CAMEL] = []string{input.Expires}
}
if input.WebsiteRedirectLocation != "" {
setHeaders(headers, HEADER_WEBSITE_REDIRECT_LOCATION, []string{input.WebsiteRedirectLocation}, isObs)
}
- if storageClass := string(input.StorageClass); storageClass != "" {
- if !isObs {
- if storageClass == "WARM" {
- storageClass = "STANDARD_IA"
- } else if storageClass == "COLD" {
- storageClass = "GLACIER"
- }
- }
- setHeaders(headers, HEADER_STORAGE_CLASS2, []string{storageClass}, isObs)
- }
+ input.prepareStorageClass(headers, isObs)
if input.Metadata != nil {
for key, value := range input.Metadata {
key = strings.TrimSpace(key)
@@ -503,11 +543,7 @@ func (input SetObjectMetadataInput) trans(isObs bool) (params map[string]string,
return
}
-func (input GetObjectInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
- params, headers, data, err = input.GetObjectMetadataInput.trans(isObs)
- if err != nil {
- return
- }
+func (input GetObjectInput) prepareResponseParams(params map[string]string) {
if input.ResponseCacheControl != "" {
params[PARAM_RESPONSE_CACHE_CONTROL] = input.ResponseCacheControl
}
@@ -526,6 +562,14 @@ func (input GetObjectInput) trans(isObs bool) (params map[string]string, headers
if input.ResponseExpires != "" {
params[PARAM_RESPONSE_EXPIRES] = input.ResponseExpires
}
+}
+
+func (input GetObjectInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
+ params, headers, data, err = input.GetObjectMetadataInput.trans(isObs)
+ if err != nil {
+ return
+ }
+ input.prepareResponseParams(params)
if input.ImageProcess != "" {
params[PARAM_IMAGE_PROCESS] = input.ImageProcess
}
@@ -548,30 +592,34 @@ func (input GetObjectInput) trans(isObs bool) (params map[string]string, headers
return
}
+func (input ObjectOperationInput) prepareGrantHeaders(headers map[string][]string) {
+ if GrantReadID := input.GrantReadId; GrantReadID != "" {
+ setHeaders(headers, HEADER_GRANT_READ_OBS, []string{GrantReadID}, true)
+ }
+ if GrantReadAcpID := input.GrantReadAcpId; GrantReadAcpID != "" {
+ setHeaders(headers, HEADER_GRANT_READ_ACP_OBS, []string{GrantReadAcpID}, true)
+ }
+ if GrantWriteAcpID := input.GrantWriteAcpId; GrantWriteAcpID != "" {
+ setHeaders(headers, HEADER_GRANT_WRITE_ACP_OBS, []string{GrantWriteAcpID}, true)
+ }
+ if GrantFullControlID := input.GrantFullControlId; GrantFullControlID != "" {
+ setHeaders(headers, HEADER_GRANT_FULL_CONTROL_OBS, []string{GrantFullControlID}, true)
+ }
+}
+
func (input ObjectOperationInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
headers = make(map[string][]string)
params = make(map[string]string)
if acl := string(input.ACL); acl != "" {
setHeaders(headers, HEADER_ACL, []string{acl}, isObs)
}
- if GrantReadId := string(input.GrantReadId); GrantReadId != "" {
- setHeaders(headers, HEADER_GRANT_READ_OBS, []string{GrantReadId}, true)
- }
- if GrantReadAcpId := string(input.GrantReadAcpId); GrantReadAcpId != "" {
- setHeaders(headers, HEADER_GRANT_READ_ACP_OBS, []string{GrantReadAcpId}, true)
- }
- if GrantWriteAcpId := string(input.GrantWriteAcpId); GrantWriteAcpId != "" {
- setHeaders(headers, HEADER_GRANT_WRITE_ACP_OBS, []string{GrantWriteAcpId}, true)
- }
- if GrantFullControlId := string(input.GrantFullControlId); GrantFullControlId != "" {
- setHeaders(headers, HEADER_GRANT_FULL_CONTROL_OBS, []string{GrantFullControlId}, true)
- }
+ input.prepareGrantHeaders(headers)
if storageClass := string(input.StorageClass); storageClass != "" {
if !isObs {
- if storageClass == "WARM" {
- storageClass = "STANDARD_IA"
- } else if storageClass == "COLD" {
- storageClass = "GLACIER"
+ if storageClass == string(StorageClassWarm) {
+ storageClass = string(storageClassStandardIA)
+ } else if storageClass == string(StorageClassCold) {
+ storageClass = string(storageClassGlacier)
}
}
setHeaders(headers, HEADER_STORAGE_CLASS2, []string{storageClass}, isObs)
@@ -624,6 +672,42 @@ func (input PutObjectInput) trans(isObs bool) (params map[string]string, headers
return
}
+func (input CopyObjectInput) prepareReplaceHeaders(headers map[string][]string) {
+ if input.CacheControl != "" {
+ headers[HEADER_CACHE_CONTROL] = []string{input.CacheControl}
+ }
+ if input.ContentDisposition != "" {
+ headers[HEADER_CONTENT_DISPOSITION] = []string{input.ContentDisposition}
+ }
+ if input.ContentEncoding != "" {
+ headers[HEADER_CONTENT_ENCODING] = []string{input.ContentEncoding}
+ }
+ if input.ContentLanguage != "" {
+ headers[HEADER_CONTENT_LANGUAGE] = []string{input.ContentLanguage}
+ }
+ if input.ContentType != "" {
+ headers[HEADER_CONTENT_TYPE] = []string{input.ContentType}
+ }
+ if input.Expires != "" {
+ headers[HEADER_EXPIRES] = []string{input.Expires}
+ }
+}
+
+func (input CopyObjectInput) prepareCopySourceHeaders(headers map[string][]string, isObs bool) {
+ if input.CopySourceIfMatch != "" {
+ setHeaders(headers, HEADER_COPY_SOURCE_IF_MATCH, []string{input.CopySourceIfMatch}, isObs)
+ }
+ if input.CopySourceIfNoneMatch != "" {
+ setHeaders(headers, HEADER_COPY_SOURCE_IF_NONE_MATCH, []string{input.CopySourceIfNoneMatch}, isObs)
+ }
+ if !input.CopySourceIfModifiedSince.IsZero() {
+ setHeaders(headers, HEADER_COPY_SOURCE_IF_MODIFIED_SINCE, []string{FormatUtcToRfc1123(input.CopySourceIfModifiedSince)}, isObs)
+ }
+ if !input.CopySourceIfUnmodifiedSince.IsZero() {
+ setHeaders(headers, HEADER_COPY_SOURCE_IF_UNMODIFIED_SINCE, []string{FormatUtcToRfc1123(input.CopySourceIfUnmodifiedSince)}, isObs)
+ }
+}
+
func (input CopyObjectInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
params, headers, data, err = input.ObjectOperationInput.trans(isObs)
if err != nil {
@@ -643,38 +727,10 @@ func (input CopyObjectInput) trans(isObs bool) (params map[string]string, header
}
if input.MetadataDirective == ReplaceMetadata {
- if input.CacheControl != "" {
- headers[HEADER_CACHE_CONTROL] = []string{input.CacheControl}
- }
- if input.ContentDisposition != "" {
- headers[HEADER_CONTENT_DISPOSITION] = []string{input.ContentDisposition}
- }
- if input.ContentEncoding != "" {
- headers[HEADER_CONTENT_ENCODING] = []string{input.ContentEncoding}
- }
- if input.ContentLanguage != "" {
- headers[HEADER_CONTENT_LANGUAGE] = []string{input.ContentLanguage}
- }
- if input.ContentType != "" {
- headers[HEADER_CONTENT_TYPE] = []string{input.ContentType}
- }
- if input.Expires != "" {
- headers[HEADER_EXPIRES] = []string{input.Expires}
- }
+ input.prepareReplaceHeaders(headers)
}
- if input.CopySourceIfMatch != "" {
- setHeaders(headers, HEADER_COPY_SOURCE_IF_MATCH, []string{input.CopySourceIfMatch}, isObs)
- }
- if input.CopySourceIfNoneMatch != "" {
- setHeaders(headers, HEADER_COPY_SOURCE_IF_NONE_MATCH, []string{input.CopySourceIfNoneMatch}, isObs)
- }
- if !input.CopySourceIfModifiedSince.IsZero() {
- setHeaders(headers, HEADER_COPY_SOURCE_IF_MODIFIED_SINCE, []string{FormatUtcToRfc1123(input.CopySourceIfModifiedSince)}, isObs)
- }
- if !input.CopySourceIfUnmodifiedSince.IsZero() {
- setHeaders(headers, HEADER_COPY_SOURCE_IF_UNMODIFIED_SINCE, []string{FormatUtcToRfc1123(input.CopySourceIfUnmodifiedSince)}, isObs)
- }
+ input.prepareCopySourceHeaders(headers, isObs)
if input.SourceSseHeader != nil {
if sseCHeader, ok := input.SourceSseHeader.(SseCHeader); ok {
setHeaders(headers, HEADER_SSEC_COPY_SOURCE_ENCRYPTION, []string{sseCHeader.GetEncryption()}, isObs)
@@ -702,6 +758,9 @@ func (input InitiateMultipartUploadInput) trans(isObs bool) (params map[string]s
headers[HEADER_CONTENT_TYPE_CAML] = []string{input.ContentType}
}
params[string(SubResourceUploads)] = ""
+ if input.EncodingType != "" {
+ params["encoding-type"] = input.EncodingType
+ }
return
}
@@ -720,6 +779,9 @@ func (input UploadPartInput) trans(isObs bool) (params map[string]string, header
func (input CompleteMultipartUploadInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
params = map[string]string{"uploadId": input.UploadId}
+ if input.EncodingType != "" {
+ params["encoding-type"] = input.EncodingType
+ }
data, _ = ConvertCompleteMultipartUploadInputToXml(input, false)
return
}
@@ -732,6 +794,9 @@ func (input ListPartsInput) trans(isObs bool) (params map[string]string, headers
if input.PartNumberMarker > 0 {
params["part-number-marker"] = IntToString(input.PartNumberMarker)
}
+ if input.EncodingType != "" {
+ params["encoding-type"] = input.EncodingType
+ }
return
}
@@ -761,6 +826,18 @@ func (input CopyPartInput) trans(isObs bool) (params map[string]string, headers
return
}
+func (input HeadObjectInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
+ params = make(map[string]string)
+ if input.VersionId != "" {
+ params[PARAM_VERSION_ID] = input.VersionId
+ }
+ return
+}
+
+func (input SetBucketRequestPaymentInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
+ return trans(SubResourceRequestPayment, input)
+}
+
type partSlice []Part
func (parts partSlice) Len() int {
@@ -800,13 +877,13 @@ func (rw *readerWrapper) Read(p []byte) (n int, err error) {
if rw.totalCount > 0 {
n, err = rw.reader.Read(p)
readedOnce := int64(n)
- if remainCount := rw.totalCount - rw.readedCount; remainCount > readedOnce {
+ remainCount := rw.totalCount - rw.readedCount
+ if remainCount > readedOnce {
rw.readedCount += readedOnce
return n, err
- } else {
- rw.readedCount += remainCount
- return int(remainCount), io.EOF
}
+ rw.readedCount += remainCount
+ return int(remainCount), io.EOF
}
return rw.reader.Read(p)
}
@@ -815,3 +892,39 @@ type fileReaderWrapper struct {
readerWrapper
filePath string
}
+
+func (input SetBucketFetchPolicyInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
+ contentType := mimeTypes["json"]
+ headers = make(map[string][]string, 2)
+ headers[HEADER_CONTENT_TYPE] = []string{contentType}
+ setHeaders(headers, headerOefMarker, []string{"yes"}, isObs)
+ data, err = convertFetchPolicyToJSON(input)
+ return
+}
+
+func (input GetBucketFetchPolicyInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
+ headers = make(map[string][]string, 1)
+ setHeaders(headers, headerOefMarker, []string{"yes"}, isObs)
+ return
+}
+
+func (input DeleteBucketFetchPolicyInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
+ headers = make(map[string][]string, 1)
+ setHeaders(headers, headerOefMarker, []string{"yes"}, isObs)
+ return
+}
+
+func (input SetBucketFetchJobInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
+ contentType := mimeTypes["json"]
+ headers = make(map[string][]string, 2)
+ headers[HEADER_CONTENT_TYPE] = []string{contentType}
+ setHeaders(headers, headerOefMarker, []string{"yes"}, isObs)
+ data, err = convertFetchJobToJSON(input)
+ return
+}
+
+func (input GetBucketFetchJobInput) trans(isObs bool) (params map[string]string, headers map[string][]string, data interface{}, err error) {
+ headers = make(map[string][]string, 1)
+ setHeaders(headers, headerOefMarker, []string{"yes"}, isObs)
+ return
+}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/transfer.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/transfer.go
new file mode 100644
index 00000000000..ce29e6e460f
--- /dev/null
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/transfer.go
@@ -0,0 +1,874 @@
+// Copyright 2019 Huawei Technologies Co.,Ltd.
+// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
+// this file except in compliance with the License. You may obtain a copy of the
+// License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software distributed
+// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
+// CONDITIONS OF ANY KIND, either express or implied. See the License for the
+// specific language governing permissions and limitations under the License.
+
+package obs
+
+import (
+ "bufio"
+ "encoding/xml"
+ "errors"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "sync"
+ "sync/atomic"
+ "syscall"
+)
+
+var errAbort = errors.New("AbortError")
+
+// FileStatus defines the upload file properties
+type FileStatus struct {
+ XMLName xml.Name `xml:"FileInfo"`
+ LastModified int64 `xml:"LastModified"`
+ Size int64 `xml:"Size"`
+}
+
+// UploadPartInfo defines the upload part properties
+type UploadPartInfo struct {
+ XMLName xml.Name `xml:"UploadPart"`
+ PartNumber int `xml:"PartNumber"`
+ Etag string `xml:"Etag"`
+ PartSize int64 `xml:"PartSize"`
+ Offset int64 `xml:"Offset"`
+ IsCompleted bool `xml:"IsCompleted"`
+}
+
+// UploadCheckpoint defines the upload checkpoint file properties
+type UploadCheckpoint struct {
+ XMLName xml.Name `xml:"UploadFileCheckpoint"`
+ Bucket string `xml:"Bucket"`
+ Key string `xml:"Key"`
+ UploadId string `xml:"UploadId,omitempty"`
+ UploadFile string `xml:"FileUrl"`
+ FileInfo FileStatus `xml:"FileInfo"`
+ UploadParts []UploadPartInfo `xml:"UploadParts>UploadPart"`
+}
+
+func (ufc *UploadCheckpoint) isValid(bucket, key, uploadFile string, fileStat os.FileInfo) bool {
+ if ufc.Bucket != bucket || ufc.Key != key || ufc.UploadFile != uploadFile {
+ doLog(LEVEL_INFO, "Checkpoint file is invalid: the bucketName, objectKey or uploadFile was changed. Clearing the record.")
+ return false
+ }
+
+ if ufc.FileInfo.Size != fileStat.Size() || ufc.FileInfo.LastModified != fileStat.ModTime().Unix() {
+ doLog(LEVEL_INFO, "Checkpoint file is invalid: the uploadFile was changed. Clearing the record.")
+ return false
+ }
+
+ if ufc.UploadId == "" {
+ doLog(LEVEL_INFO, "UploadId is invalid. Clearing the record.")
+ return false
+ }
+
+ return true
+}
+
+type uploadPartTask struct {
+ UploadPartInput
+ obsClient *ObsClient
+ abort *int32
+ extensions []extensionOptions
+ enableCheckpoint bool
+}
+
+func (task *uploadPartTask) Run() interface{} {
+ if atomic.LoadInt32(task.abort) == 1 {
+ return errAbort
+ }
+
+ input := &UploadPartInput{}
+ input.Bucket = task.Bucket
+ input.Key = task.Key
+ input.PartNumber = task.PartNumber
+ input.UploadId = task.UploadId
+ input.SseHeader = task.SseHeader
+ input.SourceFile = task.SourceFile
+ input.Offset = task.Offset
+ input.PartSize = task.PartSize
+ extensions := task.extensions
+
+ var output *UploadPartOutput
+ var err error
+ if extensions != nil {
+ output, err = task.obsClient.UploadPart(input, extensions...)
+ } else {
+ output, err = task.obsClient.UploadPart(input)
+ }
+
+ if err == nil {
+ if output.ETag == "" {
+ doLog(LEVEL_WARN, "Got an invalid ETag value after uploading part [%d].", task.PartNumber)
+ if !task.enableCheckpoint {
+ atomic.CompareAndSwapInt32(task.abort, 0, 1)
+ doLog(LEVEL_WARN, "Task is aborted, part number is [%d]", task.PartNumber)
+ }
+ return fmt.Errorf("got an invalid etag value after uploading part [%d]", task.PartNumber)
+ }
+ return output
+ } else if obsError, ok := err.(ObsError); ok && obsError.StatusCode >= 400 && obsError.StatusCode < 500 {
+ atomic.CompareAndSwapInt32(task.abort, 0, 1)
+ doLog(LEVEL_WARN, "Task is aborted, part number is [%d]", task.PartNumber)
+ }
+ return err
+}
+
+func loadCheckpointFile(checkpointFile string, result interface{}) error {
+ ret, err := ioutil.ReadFile(checkpointFile)
+ if err != nil {
+ return err
+ }
+ if len(ret) == 0 {
+ return nil
+ }
+ return xml.Unmarshal(ret, result)
+}
+
+func updateCheckpointFile(fc interface{}, checkpointFilePath string) error {
+ result, err := xml.Marshal(fc)
+ if err != nil {
+ return err
+ }
+ err = ioutil.WriteFile(checkpointFilePath, result, 0666)
+ return err
+}
+
+func getCheckpointFile(ufc *UploadCheckpoint, uploadFileStat os.FileInfo, input *UploadFileInput, obsClient *ObsClient, extensions []extensionOptions) (needCheckpoint bool, err error) {
+ checkpointFilePath := input.CheckpointFile
+ checkpointFileStat, err := os.Stat(checkpointFilePath)
+ if err != nil {
+ doLog(LEVEL_DEBUG, fmt.Sprintf("Stat checkpoint file failed with error: [%v].", err))
+ return true, nil
+ }
+ if checkpointFileStat.IsDir() {
+ doLog(LEVEL_ERROR, "Checkpoint file cannot be a folder.")
+ return false, errors.New("checkpoint file cannot be a folder")
+ }
+ err = loadCheckpointFile(checkpointFilePath, ufc)
+ if err != nil {
+ doLog(LEVEL_WARN, fmt.Sprintf("Load checkpoint file failed with error: [%v].", err))
+ return true, nil
+ } else if !ufc.isValid(input.Bucket, input.Key, input.UploadFile, uploadFileStat) {
+ if ufc.Bucket != "" && ufc.Key != "" && ufc.UploadId != "" {
+ _err := abortTask(ufc.Bucket, ufc.Key, ufc.UploadId, obsClient, extensions)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to abort upload task [%s].", ufc.UploadId)
+ }
+ }
+ _err := os.Remove(checkpointFilePath)
+ if _err != nil {
+ doLog(LEVEL_WARN, fmt.Sprintf("Failed to remove checkpoint file with error: [%v].", _err))
+ }
+ } else {
+ return false, nil
+ }
+
+ return true, nil
+}
+
+func prepareUpload(ufc *UploadCheckpoint, uploadFileStat os.FileInfo, input *UploadFileInput, obsClient *ObsClient, extensions []extensionOptions) error {
+ initiateInput := &InitiateMultipartUploadInput{}
+ initiateInput.ObjectOperationInput = input.ObjectOperationInput
+ initiateInput.ContentType = input.ContentType
+ initiateInput.EncodingType = input.EncodingType
+ var output *InitiateMultipartUploadOutput
+ var err error
+ if extensions != nil {
+ output, err = obsClient.InitiateMultipartUpload(initiateInput, extensions...)
+ } else {
+ output, err = obsClient.InitiateMultipartUpload(initiateInput)
+ }
+ if err != nil {
+ return err
+ }
+
+ ufc.Bucket = input.Bucket
+ ufc.Key = input.Key
+ ufc.UploadFile = input.UploadFile
+ ufc.FileInfo = FileStatus{}
+ ufc.FileInfo.Size = uploadFileStat.Size()
+ ufc.FileInfo.LastModified = uploadFileStat.ModTime().Unix()
+ ufc.UploadId = output.UploadId
+
+ err = sliceFile(input.PartSize, ufc)
+ return err
+}
+
+func sliceFile(partSize int64, ufc *UploadCheckpoint) error {
+ fileSize := ufc.FileInfo.Size
+ cnt := fileSize / partSize
+ if cnt >= 10000 {
+ partSize = fileSize / 10000
+ if fileSize%10000 != 0 {
+ partSize++
+ }
+ cnt = fileSize / partSize
+ }
+ if fileSize%partSize != 0 {
+ cnt++
+ }
+
+ if partSize > MAX_PART_SIZE {
+ doLog(LEVEL_ERROR, "The source upload file is too large")
+ return errors.New("the source upload file is too large")
+ }
+
+ if cnt == 0 {
+ uploadPart := UploadPartInfo{}
+ uploadPart.PartNumber = 1
+ ufc.UploadParts = []UploadPartInfo{uploadPart}
+ } else {
+ uploadParts := make([]UploadPartInfo, 0, cnt)
+ var i int64
+ for i = 0; i < cnt; i++ {
+ uploadPart := UploadPartInfo{}
+ uploadPart.PartNumber = int(i) + 1
+ uploadPart.PartSize = partSize
+ uploadPart.Offset = i * partSize
+ uploadParts = append(uploadParts, uploadPart)
+ }
+ if value := fileSize % partSize; value != 0 {
+ uploadParts[cnt-1].PartSize = value
+ }
+ ufc.UploadParts = uploadParts
+ }
+ return nil
+}
+
+func abortTask(bucket, key, uploadID string, obsClient *ObsClient, extensions []extensionOptions) error {
+ input := &AbortMultipartUploadInput{}
+ input.Bucket = bucket
+ input.Key = key
+ input.UploadId = uploadID
+ if extensions != nil {
+ _, err := obsClient.AbortMultipartUpload(input, extensions...)
+ return err
+ }
+ _, err := obsClient.AbortMultipartUpload(input)
+ return err
+}
+
+func handleUploadFileResult(uploadPartError error, ufc *UploadCheckpoint, enableCheckpoint bool, obsClient *ObsClient, extensions []extensionOptions) error {
+ if uploadPartError != nil {
+ if enableCheckpoint {
+ return uploadPartError
+ }
+ _err := abortTask(ufc.Bucket, ufc.Key, ufc.UploadId, obsClient, extensions)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to abort task [%s].", ufc.UploadId)
+ }
+ return uploadPartError
+ }
+ return nil
+}
+
+func completeParts(ufc *UploadCheckpoint, enableCheckpoint bool, checkpointFilePath string, obsClient *ObsClient, encodingType string, extensions []extensionOptions) (output *CompleteMultipartUploadOutput, err error) {
+ completeInput := &CompleteMultipartUploadInput{}
+ completeInput.Bucket = ufc.Bucket
+ completeInput.Key = ufc.Key
+ completeInput.UploadId = ufc.UploadId
+ completeInput.EncodingType = encodingType
+ parts := make([]Part, 0, len(ufc.UploadParts))
+ for _, uploadPart := range ufc.UploadParts {
+ part := Part{}
+ part.PartNumber = uploadPart.PartNumber
+ part.ETag = uploadPart.Etag
+ parts = append(parts, part)
+ }
+ completeInput.Parts = parts
+ var completeOutput *CompleteMultipartUploadOutput
+ if extensions != nil {
+ completeOutput, err = obsClient.CompleteMultipartUpload(completeInput, extensions...)
+ } else {
+ completeOutput, err = obsClient.CompleteMultipartUpload(completeInput)
+ }
+
+ if err == nil {
+ if enableCheckpoint {
+ _err := os.Remove(checkpointFilePath)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Uploaded file successfully, but removing the checkpoint file failed with error [%v].", _err)
+ }
+ }
+ return completeOutput, err
+ }
+ if !enableCheckpoint {
+ _err := abortTask(ufc.Bucket, ufc.Key, ufc.UploadId, obsClient, extensions)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to abort task [%s].", ufc.UploadId)
+ }
+ }
+ return completeOutput, err
+}
+
+func (obsClient ObsClient) resumeUpload(input *UploadFileInput, extensions []extensionOptions) (output *CompleteMultipartUploadOutput, err error) {
+ uploadFileStat, err := os.Stat(input.UploadFile)
+ if err != nil {
+ doLog(LEVEL_ERROR, fmt.Sprintf("Failed to stat uploadFile with error: [%v].", err))
+ return nil, err
+ }
+ if uploadFileStat.IsDir() {
+ doLog(LEVEL_ERROR, "UploadFile cannot be a folder.")
+ return nil, errors.New("uploadFile cannot be a folder")
+ }
+
+ ufc := &UploadCheckpoint{}
+
+ var needCheckpoint = true
+ var checkpointFilePath = input.CheckpointFile
+ var enableCheckpoint = input.EnableCheckpoint
+ if enableCheckpoint {
+ needCheckpoint, err = getCheckpointFile(ufc, uploadFileStat, input, &obsClient, extensions)
+ if err != nil {
+ return nil, err
+ }
+ }
+ if needCheckpoint {
+ err = prepareUpload(ufc, uploadFileStat, input, &obsClient, extensions)
+ if err != nil {
+ return nil, err
+ }
+
+ if enableCheckpoint {
+ err = updateCheckpointFile(ufc, checkpointFilePath)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to update checkpoint file with error [%v].", err)
+ _err := abortTask(ufc.Bucket, ufc.Key, ufc.UploadId, &obsClient, extensions)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to abort task [%s].", ufc.UploadId)
+ }
+ return nil, err
+ }
+ }
+ }
+
+ uploadPartError := obsClient.uploadPartConcurrent(ufc, checkpointFilePath, input, extensions)
+ err = handleUploadFileResult(uploadPartError, ufc, enableCheckpoint, &obsClient, extensions)
+ if err != nil {
+ return nil, err
+ }
+
+ completeOutput, err := completeParts(ufc, enableCheckpoint, checkpointFilePath, &obsClient, input.EncodingType, extensions)
+
+ return completeOutput, err
+}
+
+func handleUploadTaskResult(result interface{}, ufc *UploadCheckpoint, partNum int, enableCheckpoint bool, checkpointFilePath string, lock *sync.Mutex) (err error) {
+ if uploadPartOutput, ok := result.(*UploadPartOutput); ok {
+ lock.Lock()
+ defer lock.Unlock()
+ ufc.UploadParts[partNum-1].Etag = uploadPartOutput.ETag
+ ufc.UploadParts[partNum-1].IsCompleted = true
+ if enableCheckpoint {
+ _err := updateCheckpointFile(ufc, checkpointFilePath)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to update checkpoint file with error [%v].", _err)
+ }
+ }
+ } else if result != errAbort {
+ if _err, ok := result.(error); ok {
+ err = _err
+ }
+ }
+ return
+}
+
+func (obsClient ObsClient) uploadPartConcurrent(ufc *UploadCheckpoint, checkpointFilePath string, input *UploadFileInput, extensions []extensionOptions) error {
+ pool := NewRoutinePool(input.TaskNum, MAX_PART_NUM)
+ var uploadPartError atomic.Value
+ var errFlag int32
+ var abort int32
+ lock := new(sync.Mutex)
+ for _, uploadPart := range ufc.UploadParts {
+ if atomic.LoadInt32(&abort) == 1 {
+ break
+ }
+ if uploadPart.IsCompleted {
+ continue
+ }
+ task := uploadPartTask{
+ UploadPartInput: UploadPartInput{
+ Bucket: ufc.Bucket,
+ Key: ufc.Key,
+ PartNumber: uploadPart.PartNumber,
+ UploadId: ufc.UploadId,
+ SseHeader: input.SseHeader,
+ SourceFile: input.UploadFile,
+ Offset: uploadPart.Offset,
+ PartSize: uploadPart.PartSize,
+ },
+ obsClient: &obsClient,
+ abort: &abort,
+ extensions: extensions,
+ enableCheckpoint: input.EnableCheckpoint,
+ }
+ pool.ExecuteFunc(func() interface{} {
+ result := task.Run()
+ err := handleUploadTaskResult(result, ufc, task.PartNumber, input.EnableCheckpoint, input.CheckpointFile, lock)
+ if err != nil && atomic.CompareAndSwapInt32(&errFlag, 0, 1) {
+ uploadPartError.Store(err)
+ }
+ return nil
+ })
+ }
+ pool.ShutDown()
+ if err, ok := uploadPartError.Load().(error); ok {
+ return err
+ }
+ return nil
+}
+
+// ObjectInfo defines download object info
+type ObjectInfo struct {
+ XMLName xml.Name `xml:"ObjectInfo"`
+ LastModified int64 `xml:"LastModified"`
+ Size int64 `xml:"Size"`
+ ETag string `xml:"ETag"`
+}
+
+// TempFileInfo defines temp download file properties
+type TempFileInfo struct {
+ XMLName xml.Name `xml:"TempFileInfo"`
+ TempFileUrl string `xml:"TempFileUrl"`
+ Size int64 `xml:"Size"`
+}
+
+// DownloadPartInfo defines download part properties
+type DownloadPartInfo struct {
+ XMLName xml.Name `xml:"DownloadPart"`
+ PartNumber int64 `xml:"PartNumber"`
+ RangeEnd int64 `xml:"RangeEnd"`
+ Offset int64 `xml:"Offset"`
+ IsCompleted bool `xml:"IsCompleted"`
+}
+
+// DownloadCheckpoint defines download checkpoint file properties
+type DownloadCheckpoint struct {
+ XMLName xml.Name `xml:"DownloadFileCheckpoint"`
+ Bucket string `xml:"Bucket"`
+ Key string `xml:"Key"`
+ VersionId string `xml:"VersionId,omitempty"`
+ DownloadFile string `xml:"FileUrl"`
+ ObjectInfo ObjectInfo `xml:"ObjectInfo"`
+ TempFileInfo TempFileInfo `xml:"TempFileInfo"`
+ DownloadParts []DownloadPartInfo `xml:"DownloadParts>DownloadPart"`
+}
+
+func (dfc *DownloadCheckpoint) isValid(input *DownloadFileInput, output *GetObjectMetadataOutput) bool {
+ if dfc.Bucket != input.Bucket || dfc.Key != input.Key || dfc.VersionId != input.VersionId || dfc.DownloadFile != input.DownloadFile {
+ doLog(LEVEL_INFO, "Checkpoint file is invalid: the bucketName, objectKey, versionId or downloadFile was changed. Clearing the record.")
+ return false
+ }
+ if dfc.ObjectInfo.LastModified != output.LastModified.Unix() || dfc.ObjectInfo.ETag != output.ETag || dfc.ObjectInfo.Size != output.ContentLength {
+ doLog(LEVEL_INFO, "Checkpoint file is invalid: the object info was changed. Clearing the record.")
+ return false
+ }
+ if dfc.TempFileInfo.Size != output.ContentLength {
+ doLog(LEVEL_INFO, "Checkpoint file is invalid: the size was changed. Clearing the record.")
+ return false
+ }
+ stat, err := os.Stat(dfc.TempFileInfo.TempFileUrl)
+ if err != nil || stat.Size() != dfc.ObjectInfo.Size {
+ doLog(LEVEL_INFO, "Checkpoint file is invalid: the temp download file was changed. Clearing the record.")
+ return false
+ }
+
+ return true
+}
+
+type downloadPartTask struct {
+ GetObjectInput
+ obsClient *ObsClient
+ extensions []extensionOptions
+ abort *int32
+ partNumber int64
+ tempFileURL string
+ enableCheckpoint bool
+}
+
+func (task *downloadPartTask) Run() interface{} {
+ if atomic.LoadInt32(task.abort) == 1 {
+ return errAbort
+ }
+ getObjectInput := &GetObjectInput{}
+ getObjectInput.GetObjectMetadataInput = task.GetObjectMetadataInput
+ getObjectInput.IfMatch = task.IfMatch
+ getObjectInput.IfNoneMatch = task.IfNoneMatch
+ getObjectInput.IfModifiedSince = task.IfModifiedSince
+ getObjectInput.IfUnmodifiedSince = task.IfUnmodifiedSince
+ getObjectInput.RangeStart = task.RangeStart
+ getObjectInput.RangeEnd = task.RangeEnd
+
+ var output *GetObjectOutput
+ var err error
+ if task.extensions != nil {
+ output, err = task.obsClient.GetObject(getObjectInput, task.extensions...)
+ } else {
+ output, err = task.obsClient.GetObject(getObjectInput)
+ }
+
+ if err == nil {
+ defer func() {
+ errMsg := output.Body.Close()
+ if errMsg != nil {
+ doLog(LEVEL_WARN, "Failed to close response body.")
+ }
+ }()
+ _err := updateDownloadFile(task.tempFileURL, task.RangeStart, output)
+ if _err != nil {
+ if !task.enableCheckpoint {
+ atomic.CompareAndSwapInt32(task.abort, 0, 1)
+ doLog(LEVEL_WARN, "Task is aborted, part number is [%d]", task.partNumber)
+ }
+ return _err
+ }
+ return output
+ } else if obsError, ok := err.(ObsError); ok && obsError.StatusCode >= 400 && obsError.StatusCode < 500 {
+ atomic.CompareAndSwapInt32(task.abort, 0, 1)
+ doLog(LEVEL_WARN, "Task is aborted, part number is [%d]", task.partNumber)
+ }
+ return err
+}
+
+func getObjectInfo(input *DownloadFileInput, obsClient *ObsClient, extensions []extensionOptions) (getObjectmetaOutput *GetObjectMetadataOutput, err error) {
+ if extensions != nil {
+ getObjectmetaOutput, err = obsClient.GetObjectMetadata(&input.GetObjectMetadataInput, extensions...)
+ } else {
+ getObjectmetaOutput, err = obsClient.GetObjectMetadata(&input.GetObjectMetadataInput)
+ }
+
+ return
+}
+
+func getDownloadCheckpointFile(dfc *DownloadCheckpoint, input *DownloadFileInput, output *GetObjectMetadataOutput) (needCheckpoint bool, err error) {
+ checkpointFilePath := input.CheckpointFile
+ checkpointFileStat, err := os.Stat(checkpointFilePath)
+ if err != nil {
+ doLog(LEVEL_DEBUG, fmt.Sprintf("Stat checkpoint file failed with error: [%v].", err))
+ return true, nil
+ }
+ if checkpointFileStat.IsDir() {
+ doLog(LEVEL_ERROR, "Checkpoint file cannot be a folder.")
+ return false, errors.New("checkpoint file cannot be a folder")
+ }
+ err = loadCheckpointFile(checkpointFilePath, dfc)
+ if err != nil {
+ doLog(LEVEL_WARN, fmt.Sprintf("Load checkpoint file failed with error: [%v].", err))
+ return true, nil
+ } else if !dfc.isValid(input, output) {
+ if dfc.TempFileInfo.TempFileUrl != "" {
+ _err := os.Remove(dfc.TempFileInfo.TempFileUrl)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to remove temp download file with error [%v].", _err)
+ }
+ }
+ _err := os.Remove(checkpointFilePath)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to remove checkpoint file with error [%v].", _err)
+ }
+ } else {
+ return false, nil
+ }
+
+ return true, nil
+}
+
+func sliceObject(objectSize, partSize int64, dfc *DownloadCheckpoint) {
+ cnt := objectSize / partSize
+ if objectSize%partSize > 0 {
+ cnt++
+ }
+
+ if cnt == 0 {
+ downloadPart := DownloadPartInfo{}
+ downloadPart.PartNumber = 1
+ dfc.DownloadParts = []DownloadPartInfo{downloadPart}
+ } else {
+ downloadParts := make([]DownloadPartInfo, 0, cnt)
+ var i int64
+ for i = 0; i < cnt; i++ {
+ downloadPart := DownloadPartInfo{}
+ downloadPart.PartNumber = i + 1
+ downloadPart.Offset = i * partSize
+ downloadPart.RangeEnd = (i+1)*partSize - 1
+ downloadParts = append(downloadParts, downloadPart)
+ }
+ dfc.DownloadParts = downloadParts
+ if value := objectSize % partSize; value > 0 {
+ dfc.DownloadParts[cnt-1].RangeEnd = dfc.ObjectInfo.Size - 1
+ }
+ }
+}
+
+func createFile(tempFileURL string, fileSize int64) error {
+ fd, err := syscall.Open(tempFileURL, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
+ if err != nil {
+ doLog(LEVEL_WARN, "Failed to open temp download file [%s].", tempFileURL)
+ return err
+ }
+ defer func() {
+ errMsg := syscall.Close(fd)
+ if errMsg != nil {
+ doLog(LEVEL_WARN, "Failed to close file with error [%v].", errMsg)
+ }
+ }()
+ err = syscall.Ftruncate(fd, fileSize)
+ if err != nil {
+ doLog(LEVEL_WARN, "Failed to create file with error [%v].", err)
+ }
+ return err
+}
+
+func prepareTempFile(tempFileURL string, fileSize int64) error {
+ parentDir := filepath.Dir(tempFileURL)
+ stat, err := os.Stat(parentDir)
+ if err != nil {
+ doLog(LEVEL_DEBUG, "Failed to stat path with error [%v].", err)
+ _err := os.MkdirAll(parentDir, os.ModePerm)
+ if _err != nil {
+ doLog(LEVEL_ERROR, "Failed to make dir with error [%v].", _err)
+ return _err
+ }
+ } else if !stat.IsDir() {
+ doLog(LEVEL_ERROR, "Cannot create folder [%s] because a file with the same name exists.", parentDir)
+ return fmt.Errorf("cannot create folder [%s] because a file with the same name exists", parentDir)
+ }
+
+ err = createFile(tempFileURL, fileSize)
+ if err == nil {
+ return nil
+ }
+ fd, err := os.OpenFile(tempFileURL, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to open temp download file [%s].", tempFileURL)
+ return err
+ }
+ defer func() {
+ errMsg := fd.Close()
+ if errMsg != nil {
+ doLog(LEVEL_WARN, "Failed to close file with error [%v].", errMsg)
+ }
+ }()
+ if fileSize > 0 {
+ _, err = fd.WriteAt([]byte("a"), fileSize-1)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to create temp download file with error [%v].", err)
+ return err
+ }
+ }
+
+ return nil
+}
+
+func handleDownloadFileResult(tempFileURL string, enableCheckpoint bool, downloadFileError error) error {
+ if downloadFileError != nil {
+ if !enableCheckpoint {
+ _err := os.Remove(tempFileURL)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to remove temp download file with error [%v].", _err)
+ }
+ }
+ return downloadFileError
+ }
+ return nil
+}
+
+func (obsClient ObsClient) resumeDownload(input *DownloadFileInput, extensions []extensionOptions) (output *GetObjectMetadataOutput, err error) {
+ getObjectmetaOutput, err := getObjectInfo(input, &obsClient, extensions)
+ if err != nil {
+ return nil, err
+ }
+
+ objectSize := getObjectmetaOutput.ContentLength
+ partSize := input.PartSize
+ dfc := &DownloadCheckpoint{}
+
+ var needCheckpoint = true
+ var checkpointFilePath = input.CheckpointFile
+ var enableCheckpoint = input.EnableCheckpoint
+ if enableCheckpoint {
+ needCheckpoint, err = getDownloadCheckpointFile(dfc, input, getObjectmetaOutput)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ if needCheckpoint {
+ dfc.Bucket = input.Bucket
+ dfc.Key = input.Key
+ dfc.VersionId = input.VersionId
+ dfc.DownloadFile = input.DownloadFile
+ dfc.ObjectInfo = ObjectInfo{}
+ dfc.ObjectInfo.LastModified = getObjectmetaOutput.LastModified.Unix()
+ dfc.ObjectInfo.Size = getObjectmetaOutput.ContentLength
+ dfc.ObjectInfo.ETag = getObjectmetaOutput.ETag
+ dfc.TempFileInfo = TempFileInfo{}
+ dfc.TempFileInfo.TempFileUrl = input.DownloadFile + ".tmp"
+ dfc.TempFileInfo.Size = getObjectmetaOutput.ContentLength
+
+ sliceObject(objectSize, partSize, dfc)
+ _err := prepareTempFile(dfc.TempFileInfo.TempFileUrl, dfc.TempFileInfo.Size)
+ if _err != nil {
+ return nil, _err
+ }
+
+ if enableCheckpoint {
+ _err := updateCheckpointFile(dfc, checkpointFilePath)
+ if _err != nil {
+ doLog(LEVEL_ERROR, "Failed to update checkpoint file with error [%v].", _err)
+ _errMsg := os.Remove(dfc.TempFileInfo.TempFileUrl)
+ if _errMsg != nil {
+ doLog(LEVEL_WARN, "Failed to remove temp download file with error [%v].", _errMsg)
+ }
+ return nil, _err
+ }
+ }
+ }
+
+ downloadFileError := obsClient.downloadFileConcurrent(input, dfc, extensions)
+ err = handleDownloadFileResult(dfc.TempFileInfo.TempFileUrl, enableCheckpoint, downloadFileError)
+ if err != nil {
+ return nil, err
+ }
+
+ err = os.Rename(dfc.TempFileInfo.TempFileUrl, input.DownloadFile)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to rename temp download file [%s] to download file [%s] with error [%v].", dfc.TempFileInfo.TempFileUrl, input.DownloadFile, err)
+ return nil, err
+ }
+ if enableCheckpoint {
+ err = os.Remove(checkpointFilePath)
+ if err != nil {
+ doLog(LEVEL_WARN, "File downloaded successfully, but removing the checkpoint file failed with error [%v].", err)
+ }
+ }
+
+ return getObjectmetaOutput, nil
+}
+
+func updateDownloadFile(filePath string, rangeStart int64, output *GetObjectOutput) error {
+ fd, err := os.OpenFile(filePath, os.O_WRONLY, 0666)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to open file [%s].", filePath)
+ return err
+ }
+ defer func() {
+ errMsg := fd.Close()
+ if errMsg != nil {
+ doLog(LEVEL_WARN, "Failed to close file with error [%v].", errMsg)
+ }
+ }()
+ _, err = fd.Seek(rangeStart, 0)
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to seek file with error [%v].", err)
+ return err
+ }
+ fileWriter := bufio.NewWriterSize(fd, 65536)
+ part := make([]byte, 8192)
+ var readErr error
+ var readCount int
+ for {
+ readCount, readErr = output.Body.Read(part)
+ if readCount > 0 {
+ wcnt, werr := fileWriter.Write(part[0:readCount])
+ if werr != nil {
+ doLog(LEVEL_ERROR, "Failed to write to file with error [%v].", werr)
+ return werr
+ }
+ if wcnt != readCount {
+ doLog(LEVEL_ERROR, "Failed to write to file [%s], expect: [%d], actual: [%d]", filePath, readCount, wcnt)
+ return fmt.Errorf("failed to write to file [%s], expect: [%d], actual: [%d]", filePath, readCount, wcnt)
+ }
+ }
+ if readErr != nil {
+ if readErr != io.EOF {
+ doLog(LEVEL_ERROR, "Failed to read response body with error [%v].", readErr)
+ return readErr
+ }
+ break
+ }
+ }
+ err = fileWriter.Flush()
+ if err != nil {
+ doLog(LEVEL_ERROR, "Failed to flush file with error [%v].", err)
+ return err
+ }
+ return nil
+}
+
+func handleDownloadTaskResult(result interface{}, dfc *DownloadCheckpoint, partNum int64, enableCheckpoint bool, checkpointFile string, lock *sync.Mutex) (err error) {
+ if _, ok := result.(*GetObjectOutput); ok {
+ lock.Lock()
+ defer lock.Unlock()
+ dfc.DownloadParts[partNum-1].IsCompleted = true
+ if enableCheckpoint {
+ _err := updateCheckpointFile(dfc, checkpointFile)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to update checkpoint file with error [%v].", _err)
+ }
+ }
+ } else if result != errAbort {
+ if _err, ok := result.(error); ok {
+ err = _err
+ }
+ }
+ return
+}
+
+func (obsClient ObsClient) downloadFileConcurrent(input *DownloadFileInput, dfc *DownloadCheckpoint, extensions []extensionOptions) error {
+ pool := NewRoutinePool(input.TaskNum, MAX_PART_NUM)
+ var downloadPartError atomic.Value
+ var errFlag int32
+ var abort int32
+ lock := new(sync.Mutex)
+ for _, downloadPart := range dfc.DownloadParts {
+ if atomic.LoadInt32(&abort) == 1 {
+ break
+ }
+ if downloadPart.IsCompleted {
+ continue
+ }
+ task := downloadPartTask{
+ GetObjectInput: GetObjectInput{
+ GetObjectMetadataInput: input.GetObjectMetadataInput,
+ IfMatch: input.IfMatch,
+ IfNoneMatch: input.IfNoneMatch,
+ IfUnmodifiedSince: input.IfUnmodifiedSince,
+ IfModifiedSince: input.IfModifiedSince,
+ RangeStart: downloadPart.Offset,
+ RangeEnd: downloadPart.RangeEnd,
+ },
+ obsClient: &obsClient,
+ extensions: extensions,
+ abort: &abort,
+ partNumber: downloadPart.PartNumber,
+ tempFileURL: dfc.TempFileInfo.TempFileUrl,
+ enableCheckpoint: input.EnableCheckpoint,
+ }
+ pool.ExecuteFunc(func() interface{} {
+ result := task.Run()
+ err := handleDownloadTaskResult(result, dfc, task.partNumber, input.EnableCheckpoint, input.CheckpointFile, lock)
+ if err != nil && atomic.CompareAndSwapInt32(&errFlag, 0, 1) {
+ downloadPartError.Store(err)
+ }
+ return nil
+ })
+ }
+ pool.ShutDown()
+ if err, ok := downloadPartError.Load().(error); ok {
+ return err
+ }
+
+ return nil
+}
diff --git a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/util.go b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/util.go
index b685b81b7cc..6ee53826c67 100644
--- a/vendor/github.com/huaweicloud/golangsdk/openstack/obs/util.go
+++ b/vendor/github.com/huaweicloud/golangsdk/openstack/obs/util.go
@@ -19,6 +19,7 @@ import (
"crypto/sha256"
"encoding/base64"
"encoding/hex"
+ "encoding/json"
"encoding/xml"
"fmt"
"net/url"
@@ -33,9 +34,12 @@ var ipRegex = regexp.MustCompile("^((2[0-4]\\d|25[0-5]|[01]?\\d\\d?)\\.){3}(2[0-
var v4AuthRegex = regexp.MustCompile("Credential=(.+?),SignedHeaders=(.+?),Signature=.+")
var regionRegex = regexp.MustCompile(".+/\\d+/(.+?)/.+")
+// StringContains replaces subStr in src with subTranscoding and returns the new string
func StringContains(src string, subStr string, subTranscoding string) string {
return strings.Replace(src, subStr, subTranscoding, -1)
}
+
+// XmlTranscoding replaces special characters with their escaped form
func XmlTranscoding(src string) string {
srcTmp := StringContains(src, "&", "&amp;")
srcTmp = StringContains(srcTmp, "<", "&lt;")
@@ -44,6 +48,8 @@ func XmlTranscoding(src string) string {
srcTmp = StringContains(srcTmp, "\"", "&quot;")
return srcTmp
}
+
+// StringToInt converts a string to an int, returning def if the conversion fails
func StringToInt(value string, def int) int {
ret, err := strconv.Atoi(value)
if err != nil {
@@ -52,6 +58,7 @@ func StringToInt(value string, def int) int {
return ret
}
+// StringToInt64 converts a string to an int64, returning def if the conversion fails
func StringToInt64(value string, def int64) int64 {
ret, err := strconv.ParseInt(value, 10, 64)
if err != nil {
@@ -60,79 +67,93 @@ func StringToInt64(value string, def int64) int64 {
return ret
}
+// IntToString converts int value to string value
func IntToString(value int) string {
return strconv.Itoa(value)
}
+// Int64ToString converts int64 value to string value
func Int64ToString(value int64) string {
return strconv.FormatInt(value, 10)
}
+// GetCurrentTimestamp gets unix time in milliseconds
func GetCurrentTimestamp() int64 {
return time.Now().UnixNano() / 1000000
}
+// FormatUtcNow gets a textual representation of the UTC format time value
func FormatUtcNow(format string) string {
return time.Now().UTC().Format(format)
}
+// FormatUtcToRfc1123 gets a textual representation of the RFC1123 format time value
func FormatUtcToRfc1123(t time.Time) string {
ret := t.UTC().Format(time.RFC1123)
return ret[:strings.LastIndex(ret, "UTC")] + "GMT"
}
+// Md5 gets the md5 value of input
func Md5(value []byte) []byte {
m := md5.New()
_, err := m.Write(value)
if err != nil {
- doLog(LEVEL_WARN, "MD5 failed to write with reason: %v", err)
+ doLog(LEVEL_WARN, "MD5 failed to write")
}
return m.Sum(nil)
}
+// HmacSha1 gets hmac sha1 value of input
func HmacSha1(key, value []byte) []byte {
mac := hmac.New(sha1.New, key)
_, err := mac.Write(value)
if err != nil {
- doLog(LEVEL_WARN, "HmacSha1 failed to write with reason: %v", err)
+ doLog(LEVEL_WARN, "HmacSha1 failed to write")
}
return mac.Sum(nil)
}
+// HmacSha256 gets hmac sha256 value of input
func HmacSha256(key, value []byte) []byte {
mac := hmac.New(sha256.New, key)
_, err := mac.Write(value)
if err != nil {
- doLog(LEVEL_WARN, "HmacSha256 failed to write with reason: %v", err)
+ doLog(LEVEL_WARN, "HmacSha256 failed to write")
}
return mac.Sum(nil)
}
+// Base64Encode wrapper of base64.StdEncoding.EncodeToString
func Base64Encode(value []byte) string {
return base64.StdEncoding.EncodeToString(value)
}
+// Base64Decode wrapper of base64.StdEncoding.DecodeString
func Base64Decode(value string) ([]byte, error) {
return base64.StdEncoding.DecodeString(value)
}
+// HexMd5 returns the md5 value of input in hexadecimal format
func HexMd5(value []byte) string {
return Hex(Md5(value))
}
+// Base64Md5 returns the md5 value of input with Base64Encode
func Base64Md5(value []byte) string {
return Base64Encode(Md5(value))
}
+// Sha256Hash returns sha256 checksum
func Sha256Hash(value []byte) []byte {
hash := sha256.New()
_, err := hash.Write(value)
if err != nil {
- doLog(LEVEL_WARN, "Sha256Hash failed to write with reason: %v", err)
+ doLog(LEVEL_WARN, "Sha256Hash failed to write")
}
return hash.Sum(nil)
}
+// ParseXml wrapper of xml.Unmarshal
func ParseXml(value []byte, result interface{}) error {
if len(value) == 0 {
return nil
@@ -140,6 +161,15 @@ func ParseXml(value []byte, result interface{}) error {
return xml.Unmarshal(value, result)
}
+// parseJSON wrapper of json.Unmarshal
+func parseJSON(value []byte, result interface{}) error {
+ if len(value) == 0 {
+ return nil
+ }
+ return json.Unmarshal(value, result)
+}
+
+// TransToXml wrapper of xml.Marshal
func TransToXml(value interface{}) ([]byte, error) {
if value == nil {
return []byte{}, nil
@@ -147,14 +177,17 @@ func TransToXml(value interface{}) ([]byte, error) {
return xml.Marshal(value)
}
+// Hex wrapper of hex.EncodeToString
func Hex(value []byte) string {
return hex.EncodeToString(value)
}
+// HexSha256 returns the Sha256Hash value of input in hexadecimal format
func HexSha256(value []byte) string {
return Hex(Sha256Hash(value))
}
+// UrlDecode wrapper of url.QueryUnescape
func UrlDecode(value string) (string, error) {
ret, err := url.QueryUnescape(value)
if err == nil {
@@ -163,21 +196,24 @@ func UrlDecode(value string) (string, error) {
return "", err
}
+// UrlDecodeWithoutError wrapper of UrlDecode
func UrlDecodeWithoutError(value string) string {
ret, err := UrlDecode(value)
if err == nil {
return ret
}
if isErrorLogEnabled() {
- doLog(LEVEL_ERROR, "Url decode error: %v", err)
+ doLog(LEVEL_ERROR, "Url decode error")
}
return ""
}
+// IsIP checks whether the value matches an IP address
func IsIP(value string) bool {
return ipRegex.MatchString(value)
}
+// UrlEncode encodes the input value
func UrlEncode(value string, chineseOnly bool) string {
if chineseOnly {
values := make([]string, 0, len(value))
@@ -253,7 +289,7 @@ func getIsObs(isTemporary bool, querys []string, headers map[string][]string) bo
}
}
} else {
- for key, _ := range headers {
+ for key := range headers {
keyPrefix := strings.ToLower(key)
if strings.HasPrefix(keyPrefix, HEADER_PREFIX) {
isObs = false
@@ -264,15 +300,23 @@ func getIsObs(isTemporary bool, querys []string, headers map[string][]string) bo
return isObs
}
-func GetV2Authorization(ak, sk, method, bucketName, objectKey, queryUrl string, headers map[string][]string) (ret map[string]string) {
+func isPathStyle(headers map[string][]string, bucketName string) bool {
+ if receivedHost, ok := headers[HEADER_HOST]; ok && len(receivedHost) > 0 && !strings.HasPrefix(receivedHost[0], bucketName+".") {
+ return true
+ }
+ return false
+}
+
+// GetV2Authorization generates the signed v2 authorization headers for a request
+func GetV2Authorization(ak, sk, method, bucketName, objectKey, queryURL string, headers map[string][]string) (ret map[string]string) {
- if strings.HasPrefix(queryUrl, "?") {
- queryUrl = queryUrl[1:]
+ if strings.HasPrefix(queryURL, "?") {
+ queryURL = queryURL[1:]
}
method = strings.ToUpper(method)
- querys := strings.Split(queryUrl, "&")
+ querys := strings.Split(queryURL, "&")
querysResult := make([]string, 0)
for _, value := range querys {
if value != "=" && len(value) != 0 {
@@ -298,11 +342,8 @@ func GetV2Authorization(ak, sk, method, bucketName, objectKey, queryUrl string,
}
}
headers = copyHeaders(headers)
- pathStyle := false
- if receviedHost, ok := headers[HEADER_HOST]; ok && len(receviedHost) > 0 && !strings.HasPrefix(receviedHost[0], bucketName+".") {
- pathStyle = true
- }
- conf := &config{securityProvider: &securityProvider{ak: ak, sk: sk},
+ pathStyle := isPathStyle(headers, bucketName)
+ conf := &config{securityProviders: []securityProvider{NewBasicSecurityProvider(ak, sk, "")},
urlHolder: &urlHolder{scheme: "https", host: "dummy", port: 443},
pathStyle: pathStyle}
conf.signature = SignatureObs
@@ -313,23 +354,18 @@ func GetV2Authorization(ak, sk, method, bucketName, objectKey, queryUrl string,
return
}
-func GetAuthorization(ak, sk, method, bucketName, objectKey, queryUrl string, headers map[string][]string) (ret map[string]string) {
-
- if strings.HasPrefix(queryUrl, "?") {
- queryUrl = queryUrl[1:]
- }
-
- method = strings.ToUpper(method)
-
- querys := strings.Split(queryUrl, "&")
+func getQuerysResult(querys []string) []string {
querysResult := make([]string, 0)
for _, value := range querys {
if value != "=" && len(value) != 0 {
querysResult = append(querysResult, value)
}
}
- params := make(map[string]string)
+ return querysResult
+}
+func getParams(querysResult []string) map[string]string {
+ params := make(map[string]string)
for _, value := range querysResult {
kv := strings.Split(value, "=")
length := len(kv)
@@ -346,6 +382,10 @@ func GetAuthorization(ak, sk, method, bucketName, objectKey, queryUrl string, he
params[key] = strings.Join(vals, "=")
}
}
+ return params
+}
+
+func getTemporaryAndSignature(params map[string]string) (bool, string) {
isTemporary := false
signature := "v2"
temporaryKeys := getTemporaryKeys()
@@ -360,51 +400,68 @@ func GetAuthorization(ak, sk, method, bucketName, objectKey, queryUrl string, he
break
}
}
+ return isTemporary, signature
+}
+
+// GetAuthorization generates the signed authorization headers for a request
+func GetAuthorization(ak, sk, method, bucketName, objectKey, queryURL string, headers map[string][]string) (ret map[string]string) {
+
+ if strings.HasPrefix(queryURL, "?") {
+ queryURL = queryURL[1:]
+ }
+
+ method = strings.ToUpper(method)
+
+ querys := strings.Split(queryURL, "&")
+ querysResult := getQuerysResult(querys)
+ params := getParams(querysResult)
+
+ isTemporary, signature := getTemporaryAndSignature(params)
+
isObs := getIsObs(isTemporary, querysResult, headers)
headers = copyHeaders(headers)
pathStyle := false
if receviedHost, ok := headers[HEADER_HOST]; ok && len(receviedHost) > 0 && !strings.HasPrefix(receviedHost[0], bucketName+".") {
pathStyle = true
}
- conf := &config{securityProvider: &securityProvider{ak: ak, sk: sk},
+ conf := &config{securityProviders: []securityProvider{NewBasicSecurityProvider(ak, sk, "")},
urlHolder: &urlHolder{scheme: "https", host: "dummy", port: 443},
pathStyle: pathStyle}
if isTemporary {
return getTemporaryAuthorization(ak, sk, method, bucketName, objectKey, signature, conf, params, headers, isObs)
- } else {
- signature, region, signedHeaders := parseHeaders(headers)
- if signature == "v4" {
- conf.signature = SignatureV4
- requestUrl, canonicalizedUrl := conf.formatUrls(bucketName, objectKey, params, false)
- parsedRequestUrl, _err := url.Parse(requestUrl)
- if _err != nil {
- doLog(LEVEL_WARN, "Failed to parse requestUrl with reason: %v", _err)
- return nil
- }
- headerKeys := strings.Split(signedHeaders, ";")
- _headers := make(map[string][]string, len(headerKeys))
- for _, headerKey := range headerKeys {
- _headers[headerKey] = headers[headerKey]
- }
- ret = v4Auth(ak, sk, region, method, canonicalizedUrl, parsedRequestUrl.RawQuery, _headers)
- ret[HEADER_AUTH_CAMEL] = fmt.Sprintf("%s Credential=%s,SignedHeaders=%s,Signature=%s", V4_HASH_PREFIX, ret["Credential"], ret["SignedHeaders"], ret["Signature"])
- } else if signature == "v2" {
- if isObs {
- conf.signature = SignatureObs
- } else {
- conf.signature = SignatureV2
- }
- _, canonicalizedUrl := conf.formatUrls(bucketName, objectKey, params, false)
- ret = v2Auth(ak, sk, method, canonicalizedUrl, headers, isObs)
- v2HashPrefix := V2_HASH_PREFIX
- if isObs {
- v2HashPrefix = OBS_HASH_PREFIX
- }
- ret[HEADER_AUTH_CAMEL] = fmt.Sprintf("%s %s:%s", v2HashPrefix, ak, ret["Signature"])
+ }
+ signature, region, signedHeaders := parseHeaders(headers)
+ if signature == "v4" {
+ conf.signature = SignatureV4
+ requestURL, canonicalizedURL := conf.formatUrls(bucketName, objectKey, params, false)
+ parsedRequestURL, _err := url.Parse(requestURL)
+ if _err != nil {
+ doLog(LEVEL_WARN, "Failed to parse requestURL")
+ return nil
+ }
+ headerKeys := strings.Split(signedHeaders, ";")
+ _headers := make(map[string][]string, len(headerKeys))
+ for _, headerKey := range headerKeys {
+ _headers[headerKey] = headers[headerKey]
}
- return
+ ret = v4Auth(ak, sk, region, method, canonicalizedURL, parsedRequestURL.RawQuery, _headers)
+ ret[HEADER_AUTH_CAMEL] = fmt.Sprintf("%s Credential=%s,SignedHeaders=%s,Signature=%s", V4_HASH_PREFIX, ret["Credential"], ret["SignedHeaders"], ret["Signature"])
+ } else if signature == "v2" {
+ if isObs {
+ conf.signature = SignatureObs
+ } else {
+ conf.signature = SignatureV2
+ }
+ _, canonicalizedURL := conf.formatUrls(bucketName, objectKey, params, false)
+ ret = v2Auth(ak, sk, method, canonicalizedURL, headers, isObs)
+ v2HashPrefix := V2_HASH_PREFIX
+ if isObs {
+ v2HashPrefix = OBS_HASH_PREFIX
+ }
+ ret[HEADER_AUTH_CAMEL] = fmt.Sprintf("%s %s:%s", v2HashPrefix, ak, ret["Signature"])
}
+ return
}
@@ -463,13 +520,13 @@ func getTemporaryAuthorization(ak, sk, method, bucketName, objectKey, signature
ret[PARAM_EXPIRES_AMZ_CAMEL] = expires
ret[PARAM_SIGNEDHEADERS_AMZ_CAMEL] = signedHeaders
- requestUrl, canonicalizedUrl := conf.formatUrls(bucketName, objectKey, params, false)
- parsedRequestUrl, _err := url.Parse(requestUrl)
+ requestURL, canonicalizedURL := conf.formatUrls(bucketName, objectKey, params, false)
+ parsedRequestURL, _err := url.Parse(requestURL)
if _err != nil {
- doLog(LEVEL_WARN, "Failed to parse requestUrl with reason: %v", _err)
+ doLog(LEVEL_WARN, "Failed to parse requestURL")
return nil
}
- stringToSign := getV4StringToSign(method, canonicalizedUrl, parsedRequestUrl.RawQuery, scope, longDate, UNSIGNED_PAYLOAD, strings.Split(signedHeaders, ";"), headers)
+ stringToSign := getV4StringToSign(method, canonicalizedURL, parsedRequestURL.RawQuery, scope, longDate, UNSIGNED_PAYLOAD, strings.Split(signedHeaders, ";"), headers)
ret[PARAM_SIGNATURE_AMZ_CAMEL] = UrlEncode(getSignature(stringToSign, sk, region, shortDate), false)
} else if signature == "v2" {
if isObs {
@@ -477,13 +534,13 @@ func getTemporaryAuthorization(ak, sk, method, bucketName, objectKey, signature
} else {
conf.signature = SignatureV2
}
- _, canonicalizedUrl := conf.formatUrls(bucketName, objectKey, params, false)
+ _, canonicalizedURL := conf.formatUrls(bucketName, objectKey, params, false)
expires, ok := params["Expires"]
if !ok {
expires = params["expires"]
}
headers[HEADER_DATE_CAMEL] = []string{expires}
- stringToSign := getV2StringToSign(method, canonicalizedUrl, headers, isObs)
+ stringToSign := getV2StringToSign(method, canonicalizedURL, headers, isObs)
ret = make(map[string]string, 3)
ret["Signature"] = UrlEncode(Base64Encode(HmacSha1([]byte(sk), []byte(stringToSign))), false)
ret["AWSAccessKeyId"] = UrlEncode(ak, false)
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 57c4d5e17d4..56c88b87851 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -267,7 +267,7 @@ github.com/hashicorp/terraform-svchost/auth
github.com/hashicorp/terraform-svchost/disco
# github.com/hashicorp/yamux v0.0.0-20181012175058-2f1d1f20f75d
github.com/hashicorp/yamux
-# github.com/huaweicloud/golangsdk v0.0.0-20210528023633-c90ae4249a71
+# github.com/huaweicloud/golangsdk v0.0.0-20210602080359-3d6e5cdfc40f
## explicit
github.com/huaweicloud/golangsdk
github.com/huaweicloud/golangsdk/internal