From 47082af2e6bfcc12465aac8cc2666ffff7ee8a8d Mon Sep 17 00:00:00 2001 From: Tamer Sherif <69483382+tasherif-msft@users.noreply.github.com> Date: Mon, 26 Jun 2023 14:23:56 -0700 Subject: [PATCH] [Datalake] Client Constructors (#21034) * Enable gocritic during linting (#20715) Enabled gocritic's evalOrder to catch dependencies on undefined behavior on return statements. Updated to latest version of golangci-lint. Fixed issue in azblob flagged by latest linter. * Cosmos DB: Enable merge support (#20716) * Adding header and value * Wiring and tests * format * Fixing value * change log * [azservicebus, azeventhubs] Stress test and logging improvement (#20710) Logging improvements: * Updating the logging to print more tracing information (per-link) in prep for the bigger release coming up. * Trimming out some of the verbose logging, seeing if I can get it a bit more reasonable. Stress tests: * Add a timestamp to the log name we generate and also default to append, not overwrite. * Use 0.5 cores, 0.5GB as our baseline. Some pods use more and I'll tune them more later. * update proxy version (#20712) Co-authored-by: Scott Beddall * Return an error when you try to send a message that's too large. (#20721) This now works just like the message batch - you'll get an ErrMessageTooLarge if you attempt to send a message that's too large for the link's configured size. NOTE: there's a patch to `internal/go-amqp/Sender.go` to match what's in go-amqp's main so it returns a programmatically useful error when the message is too large. Fixes #20647 * Changes in test that is failing in pipeline (#20693) * [azservicebus, azeventhubs] Treat 'entity full' as a fatal error (#20722) When the remote entity is full we get a resource-limit-exceeded condition. This isn't something we should keep retrying on and it's best to just abort and let the user know immediately, rather than hoping it might eventually clear out. This affected both Event Hubs and Service Bus. 
Fixes #20647 * [azservicebus/azeventhubs] Redirect stderr and stdout to tee (#20726) * Update changelog with latest features (#20730) * Update changelog with latest features Prepare for upcoming release. * bump minor version * pass along the artifact name so we can override it later (#20732) Co-authored-by: scbedd <45376673+scbedd@users.noreply.github.com> * [azeventhubs] Fixing checkpoint store race condition (#20727) The checkpoint store wasn't guarding against multiple owners claiming for the first time - fixing this by using IfNoneMatch Fixes #20717 * Fix azidentity troubleshooting guide link (#20736) * [Release] sdk/resourcemanager/paloaltonetworksngfw/armpanngfw/0.1.0 (#20437) * [Release] sdk/resourcemanager/paloaltonetworksngfw/armpanngfw/0.1.0 generation from spec commit: 85fb4ac6f8bfefd179e6c2632976a154b5c9ff04 * client factory * fix * fix * update * add sdk/resourcemanager/postgresql/armpostgresql live test (#20685) * add sdk/resourcemanager/postgresql/armpostgresql live test * update assets.json * set subscriptionId default value * format * add sdk/resourcemanager/eventhub/armeventhub live test (#20686) * add sdk/resourcemanager/eventhub/armeventhub live test * update assets * add sdk/resourcemanager/compute/armcompute live test (#20048) * add sdk/resourcemanager/compute/armcompute live test * skus filter * fix subscriptionId default value * fix * gofmt * update recording * sdk/resourcemanager/network/armnetwork live test (#20331) * sdk/resourcemanager/network/armnetwork live test * update subscriptionId default value * update recording * add sdk/resourcemanager/cosmos/armcosmos live test (#20705) * add sdk/resourcemanager/cosmos/armcosmos live test * update assets.json * update assets.json * update assets.json * update assets.json * Increment package version after release of azcore (#20740) * [azeventhubs] Improperly resetting etag in the checkpoint store (#20737) We shouldn't be resetting the etag to nil - it's what we use to enforce a "single winner" 
when doing ownership claims. The bug here was two-fold: I had bad logic in my previous claim ownership, which I fixed in a previous PR, but we need to reflect that same constraint properly in our in-memory checkpoint store for these tests. * Eng workflows sync and branch cleanup additions (#20743) Co-authored-by: James Suplizio * [azeventhubs] Latest start position can also be inclusive (ie, get the latest message) (#20744) * Update GitHubEventProcessor version and remove pull_request_review procesing (#20751) Co-authored-by: James Suplizio * Rename DisableAuthorityValidationAndInstanceDiscovery (#20746) * fix (#20707) * AzFile (#20739) * azfile: Fixing connection string parsing logic (#20798) * Fixing connection string parse logic * Update README * [azadmin] fix flaky test (#20758) * fix flaky test * charles suggestion * Prepare azidentity v1.3.0 for release (#20756) * Fix broken podman link (#20801) Co-authored-by: Wes Haggard * [azquery] update doc comments (#20755) * update doc comments * update statistics and visualization generation * prep-for-release * Fixed contribution section (#20752) Co-authored-by: Bob Tabor * [azeventhubs,azservicebus] Some API cleanup, renames (#20754) * Adding options to UpdateCheckpoint(), just for future potential expansion * Make Offset an int64, not a *int64 (it's not optional, it'll always come back with ReceivedEvents) * Adding more logging into the checkpoint store. * Point all imports at the production go-amqp * Add supporting features to enable distributed tracing (#20301) (#20708) * Add supporting features to enable distributed tracing This includes new internal pipeline policies and other supporting types. See the changelog for a full description. Added some missing doc comments. * fix linter issue * add net.peer.name trace attribute sequence custom HTTP header policy before logging policy. sequence logging policy after HTTP trace policy. keep body download policy at the end. 
* add span for iterating over pages * Restore ARM CAE support for azcore beta (#20657) This reverts commit 902097226ff3fe2fc6c3e7fc50d3478350253614. * Upgrade to stable azcore (#20808) * Increment package version after release of data/azcosmos (#20807) * Updating changelog (#20810) * Add fake package to azcore (#20711) * Add fake package to azcore This is the supporting infrastructure for the generated SDK fakes. * fix doc comment * Updating CHANGELOG.md (#20809) * changelog (#20811) * Increment package version after release of storage/azfile (#20813) * Update changelog (azblob) (#20815) * Updating CHANGELOG.md * Update the changelog with correct version * [azquery] migration guide (#20742) * migration guide * Charles feedback * Richard feedback --------- Co-authored-by: Charles Lowell <10964656+chlowell@users.noreply.github.com> * Increment package version after release of monitor/azquery (#20820) * [keyvault] prep for release (#20819) * prep for release * perf tests * update date * added sas support * small fix * query params fix * fix * added some tests * added more tests * resolved some comments * added encoding * added constructors * cleanup * cleanup * gmt * added all necessary builders * merge branch * removed path client --------- Co-authored-by: Joel Hendrix Co-authored-by: Matias Quaranta Co-authored-by: Richard Park <51494936+richardpark-msft@users.noreply.github.com> Co-authored-by: Azure SDK Bot <53356347+azure-sdk@users.noreply.github.com> Co-authored-by: Scott Beddall Co-authored-by: siminsavani-msft <77068571+siminsavani-msft@users.noreply.github.com> Co-authored-by: scbedd <45376673+scbedd@users.noreply.github.com> Co-authored-by: Charles Lowell <10964656+chlowell@users.noreply.github.com> Co-authored-by: Peng Jiahui <46921893+Alancere@users.noreply.github.com> Co-authored-by: James Suplizio Co-authored-by: Sourav Gupta <98318303+souravgupta-msft@users.noreply.github.com> Co-authored-by: gracewilcox <43627800+gracewilcox@users.noreply.github.com> 
Co-authored-by: Wes Haggard Co-authored-by: Bob Tabor Co-authored-by: Bob Tabor --- sdk/storage/azdatalake/common.go | 6 - sdk/storage/azdatalake/directory/client.go | 158 ++++++++++-- sdk/storage/azdatalake/directory/constants.go | 34 ++- sdk/storage/azdatalake/directory/models.go | 225 +++++++++++++++-- sdk/storage/azdatalake/directory/responses.go | 28 +-- sdk/storage/azdatalake/file/client.go | 162 +++++++++++-- sdk/storage/azdatalake/file/constants.go | 29 +-- sdk/storage/azdatalake/file/models.go | 228 ++++++++++++++++-- sdk/storage/azdatalake/file/responses.go | 26 +- sdk/storage/azdatalake/filesystem/client.go | 108 +++++++-- .../azdatalake/internal/base/clients.go | 74 ++++++ .../exported/shared_key_credential.go | 12 +- .../internal/generated/filesystem_client.go | 21 +- .../internal/generated/path_client.go | 19 +- .../internal/generated/service_client.go | 19 +- .../azdatalake/internal/path/client.go | 71 ------ .../azdatalake/internal/path/constants.go | 43 ---- .../azdatalake/internal/path/models.go | 227 ----------------- .../azdatalake/internal/path/responses.go | 42 ---- .../azdatalake/internal/shared/shared.go | 7 + sdk/storage/azdatalake/service/client.go | 108 +++++++-- 21 files changed, 1058 insertions(+), 589 deletions(-) delete mode 100644 sdk/storage/azdatalake/internal/path/client.go delete mode 100644 sdk/storage/azdatalake/internal/path/constants.go delete mode 100644 sdk/storage/azdatalake/internal/path/models.go delete mode 100644 sdk/storage/azdatalake/internal/path/responses.go diff --git a/sdk/storage/azdatalake/common.go b/sdk/storage/azdatalake/common.go index 03fe643423db..fb79dcc0dc7a 100644 --- a/sdk/storage/azdatalake/common.go +++ b/sdk/storage/azdatalake/common.go @@ -7,15 +7,9 @@ package azdatalake import ( - "github.com/Azure/azure-sdk-for-go/sdk/azcore" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" ) -// ClientOptions contains the optional parameters when creating a Client. 
-type ClientOptions struct { - azcore.ClientOptions -} - // AccessConditions identifies container-specific access conditions which you optionally set. type AccessConditions struct { ModifiedAccessConditions *ModifiedAccessConditions diff --git a/sdk/storage/azdatalake/directory/client.go b/sdk/storage/azdatalake/directory/client.go index f376ad4ec7b8..6aaf8bf63138 100644 --- a/sdk/storage/azdatalake/directory/client.go +++ b/sdk/storage/azdatalake/directory/client.go @@ -9,44 +9,123 @@ package directory import ( "context" "github.com/Azure/azure-sdk-for-go/sdk/azcore" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" + "strings" ) -// Client represents a URL to the Azure Datalake Storage service allowing you to manipulate datalake directories. -type Client struct { - path.Client -} +// ClientOptions contains the optional parameters when creating a Client. +type ClientOptions base.ClientOptions -// NewClient creates an instance of Client with the specified values. -// - serviceURL - the URL of the storage account e.g. https://.dfs.core.windows.net/ -// - cred - an Azure AD credential, typically obtained via the azidentity module -// - options - client options; pass nil to accept the default values -func NewClient(serviceURL string, cred azcore.TokenCredential, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil -} +// Client represents a URL to the Azure Datalake Storage service. 
+type Client base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client] + +//TODO: NewClient() // NewClientWithNoCredential creates an instance of Client with the specified values. // This is used to anonymously access a storage account or with a shared access signature (SAS) token. // - serviceURL - the URL of the storage account e.g. https://.dfs.core.windows.net/? // - options - client options; pass nil to accept the default values -func NewClientWithNoCredential(serviceURL string, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientWithNoCredential(directoryURL string, options *ClientOptions) (*Client, error) { + blobURL := strings.Replace(directoryURL, ".dfs.", ".blob.", 1) + directoryURL = strings.Replace(directoryURL, ".blob.", ".dfs.", 1) + + conOptions := shared.GetClientOptions(options) + plOpts := runtime.PipelineOptions{} + base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) + + azClient, err := azcore.NewClient(shared.DirectoryClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions) + if err != nil { + return nil, err + } + + blobClientOpts := blob.ClientOptions{ + ClientOptions: conOptions.ClientOptions, + } + blobClient, _ := blob.NewClientWithNoCredential(blobURL, &blobClientOpts) + dirClient := base.NewPathClient(directoryURL, blobURL, blobClient, azClient, nil, (*base.ClientOptions)(conOptions)) + + return (*Client)(dirClient), nil } // NewClientWithSharedKeyCredential creates an instance of Client with the specified values. // - serviceURL - the URL of the storage account e.g.
https://.dfs.core.windows.net/ // - cred - a SharedKeyCredential created with the matching storage account and access key // - options - client options; pass nil to accept the default values -func NewClientWithSharedKeyCredential(serviceURL string, cred *SharedKeyCredential, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientWithSharedKeyCredential(directoryURL string, cred *SharedKeyCredential, options *ClientOptions) (*Client, error) { + blobURL := strings.Replace(directoryURL, ".dfs.", ".blob.", 1) + directoryURL = strings.Replace(directoryURL, ".blob.", ".dfs.", 1) + + authPolicy := exported.NewSharedKeyCredPolicy(cred) + conOptions := shared.GetClientOptions(options) + plOpts := runtime.PipelineOptions{ + PerRetry: []policy.Policy{authPolicy}, + } + base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) + + azClient, err := azcore.NewClient(shared.DirectoryClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions) + if err != nil { + return nil, err + } + + blobClientOpts := blob.ClientOptions{ + ClientOptions: conOptions.ClientOptions, + } + blobSharedKeyCredential, _ := blob.NewSharedKeyCredential(cred.AccountName(), cred.AccountKey()) + blobClient, _ := blob.NewClientWithSharedKeyCredential(blobURL, blobSharedKeyCredential, &blobClientOpts) + dirClient := base.NewPathClient(directoryURL, blobURL, blobClient, azClient, nil, (*base.ClientOptions)(conOptions)) + + return (*Client)(dirClient), nil } // NewClientFromConnectionString creates an instance of Client with the specified values.
// - connectionString - a connection string for the desired storage account // - options - client options; pass nil to accept the default values -func NewClientFromConnectionString(connectionString string, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientFromConnectionString(connectionString string, options *ClientOptions) (*Client, error) { + parsed, err := shared.ParseConnectionString(connectionString) + if err != nil { + return nil, err + } + + if parsed.AccountKey != "" && parsed.AccountName != "" { + credential, err := exported.NewSharedKeyCredential(parsed.AccountName, parsed.AccountKey) + if err != nil { + return nil, err + } + return NewClientWithSharedKeyCredential(parsed.ServiceURL, credential, options) + } + + return NewClientWithNoCredential(parsed.ServiceURL, options) +} + +func (d *Client) generatedFSClientWithDFS() *generated.PathClient { + dirClientWithDFS, _, _ := base.InnerClients((*base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client])(d)) + return dirClientWithDFS +} + +func (d *Client) generatedFSClientWithBlob() *generated.PathClient { + _, dirClientWithBlob, _ := base.InnerClients((*base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client])(d)) + return dirClientWithBlob +} + +func (d *Client) blobClient() *blob.Client { + _, _, blobClient := base.InnerClients((*base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client])(d)) + return blobClient +} + +func (d *Client) sharedKey() *exported.SharedKeyCredential { + return base.SharedKeyComposite((*base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client])(d)) +} + +// URL returns the URL endpoint used by the Client object. +func (d *Client) URL() string { + return d.generatedFSClientWithDFS().Endpoint() } // Create creates a new directory (dfs1).
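All of the constructors above derive the paired endpoint by swapping the `.dfs.` and `.blob.` host segments before wiring up the inner clients: whichever flavor of URL the caller passes, the client ends up with both a dfs URL (for the datalake path operations) and a blob URL (for the embedded blob client). A minimal, standard-library-only sketch of that mapping; the helper name `blobAndDFSURLs` is ours, not the SDK's:

```go
package main

import (
	"fmt"
	"strings"
)

// blobAndDFSURLs mirrors the swap done in the constructors. Only the first
// occurrence is replaced, as the SDK does: if the input is already a .dfs.
// URL, the second Replace is a no-op, and vice versa.
func blobAndDFSURLs(u string) (dfsURL, blobURL string) {
	blobURL = strings.Replace(u, ".dfs.", ".blob.", 1)
	dfsURL = strings.Replace(u, ".blob.", ".dfs.", 1)
	return dfsURL, blobURL
}

func main() {
	dfs, blobU := blobAndDFSURLs("https://myaccount.dfs.core.windows.net/fs/dir")
	fmt.Println(dfs)   // stays a .dfs. URL
	fmt.Println(blobU) // sibling .blob. URL
}
```

Note that the swap is purely textual, so a path segment that happened to contain `.dfs.` or `.blob.` earlier in the string would also be rewritten; the SDK accepts that tradeoff.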
@@ -70,3 +149,44 @@ func (d *Client) GetProperties(ctx context.Context, options *GetPropertiesOption func (d *Client) Rename(ctx context.Context, newName string, options *RenameOptions) (RenameResponse, error) { return RenameResponse{}, nil } + +// SetAccessControl sets the owner, owning group, and permissions for a file or directory (dfs1). +func (d *Client) SetAccessControl(ctx context.Context, options *SetAccessControlOptions) (SetAccessControlResponse, error) { + return SetAccessControlResponse{}, nil +} + +// SetAccessControlRecursive sets the owner, owning group, and permissions for a file or directory (dfs1). +func (d *Client) SetAccessControlRecursive(ctx context.Context, options *SetAccessControlRecursiveOptions) (SetAccessControlRecursiveResponse, error) { + // TODO explicitly pass SetAccessControlRecursiveMode + return SetAccessControlRecursiveResponse{}, nil +} + +// UpdateAccessControlRecursive updates the owner, owning group, and permissions for a file or directory (dfs1). +func (d *Client) UpdateAccessControlRecursive(ctx context.Context, options *UpdateAccessControlRecursiveOptions) (UpdateAccessControlRecursiveResponse, error) { + // TODO explicitly pass SetAccessControlRecursiveMode + return UpdateAccessControlRecursiveResponse{}, nil +} + +// GetAccessControl gets the owner, owning group, and permissions for a file or directory (dfs1). +func (d *Client) GetAccessControl(ctx context.Context, options *GetAccessControlOptions) (GetAccessControlResponse, error) { + return GetAccessControlResponse{}, nil +} + +// RemoveAccessControlRecursive removes the owner, owning group, and permissions for a file or directory (dfs1). +func (d *Client) RemoveAccessControlRecursive(ctx context.Context, options *RemoveAccessControlRecursiveOptions) (RemoveAccessControlRecursiveResponse, error) { + // TODO explicitly pass SetAccessControlRecursiveMode + return RemoveAccessControlRecursiveResponse{}, nil +} + +// SetMetadata sets the metadata for a file or directory (blob3).
+func (d *Client) SetMetadata(ctx context.Context, options *SetMetadataOptions) (SetMetadataResponse, error) { + // TODO: call directly into blob + return SetMetadataResponse{}, nil +} + +// SetHTTPHeaders sets the HTTP headers for a file or directory (blob3). +func (d *Client) SetHTTPHeaders(ctx context.Context, httpHeaders HTTPHeaders, options *SetHTTPHeadersOptions) (SetHTTPHeadersResponse, error) { + // TODO: call formatBlobHTTPHeaders() since we want to add the blob prefix to our options before calling into blob + // TODO: call into blob + return SetHTTPHeadersResponse{}, nil +} diff --git a/sdk/storage/azdatalake/directory/constants.go b/sdk/storage/azdatalake/directory/constants.go index 70e675e9f10a..ca7d9525c6ac 100644 --- a/sdk/storage/azdatalake/directory/constants.go +++ b/sdk/storage/azdatalake/directory/constants.go @@ -7,29 +7,37 @@ package directory import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" ) -// RenameMode defines the rename mode for RenameDirectory -type RenameMode = path.RenameMode +type ResourceType = generated.PathResourceType +// TODO: consider the possibility of not exposing this and just pass it under the hood const ( - RenameModeLegacy RenameMode = path.RenameModeLegacy - RenameModePosix RenameMode = path.RenameModePosix + ResourceTypeFile ResourceType = generated.PathResourceTypeFile + ResourceTypeDirectory ResourceType = generated.PathResourceTypeDirectory ) -// SetAccessControlRecursiveMode defines the set access control recursive mode for SetAccessControlRecursive -type SetAccessControlRecursiveMode = path.SetAccessControlRecursiveMode +type RenameMode = generated.PathRenameMode +// TODO: consider the possibility of not exposing this and just pass it under the hood const ( - SetAccessControlRecursiveModeSet SetAccessControlRecursiveMode = 
path.SetAccessControlRecursiveModeSet - SetAccessControlRecursiveModeModify SetAccessControlRecursiveMode = path.SetAccessControlRecursiveModeModify - SetAccessControlRecursiveModeRemove SetAccessControlRecursiveMode = path.SetAccessControlRecursiveModeRemove + RenameModeLegacy RenameMode = generated.PathRenameModeLegacy + RenameModePosix RenameMode = generated.PathRenameModePosix ) -type EncryptionAlgorithmType = path.EncryptionAlgorithmType +type SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveMode const ( - EncryptionAlgorithmTypeNone EncryptionAlgorithmType = path.EncryptionAlgorithmTypeNone - EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = path.EncryptionAlgorithmTypeAES256 + SetAccessControlRecursiveModeSet SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeSet + SetAccessControlRecursiveModeModify SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeModify + SetAccessControlRecursiveModeRemove SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeRemove +) + +type EncryptionAlgorithmType = blob.EncryptionAlgorithmType + +const ( + EncryptionAlgorithmTypeNone EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeNone + EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeAES256 ) diff --git a/sdk/storage/azdatalake/directory/models.go b/sdk/storage/azdatalake/directory/models.go index 4be9c742e337..e349e0478da8 100644 --- a/sdk/storage/azdatalake/directory/models.go +++ b/sdk/storage/azdatalake/directory/models.go @@ -11,7 +11,6 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" "time" ) @@ -99,38 +98,218 @@ func 
(o *GetPropertiesOptions) format() *blob.GetPropertiesOptions { // ===================================== PATH IMPORTS =========================================== -// CPKInfo contains a group of parameters for client provided encryption key. -type CPKInfo = path.CPKInfo +// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation. dfs endpoint +type SetAccessControlOptions struct { + // Owner is the owner of the path. + Owner *string + // Group is the owning group of the path. + Group *string + // ACL is the access control list for the path. + ACL *string + // Permissions is the octal representation of the permissions for user, group and mask. + Permissions *string + // AccessConditions contains parameters for accessing the path. + AccessConditions *azdatalake.AccessConditions +} -// CPKScopeInfo contains a group of parameters for client provided encryption scope. -type CPKScopeInfo = path.CPKScopeInfo +func (o *SetAccessControlOptions) format() (*generated.PathClientSetAccessControlOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { + if o == nil { + return nil, nil, nil, nil + } + // call path formatter since we're hitting dfs in this operation + leaseAccessConditions, modifiedAccessConditions := shared.FormatPathAccessConditions(o.AccessConditions) + return &generated.PathClientSetAccessControlOptions{ + Owner: o.Owner, + Group: o.Group, + ACL: o.ACL, + Permissions: o.Permissions, + }, leaseAccessConditions, modifiedAccessConditions, nil +} -// HTTPHeaders contains the HTTP headers for path operations. -type HTTPHeaders = path.HTTPHeaders +// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. +type GetAccessControlOptions struct { + // UPN is the user principal name. + UPN *bool + // AccessConditions contains parameters for accessing the path. 
+ AccessConditions *azdatalake.AccessConditions +} -// SourceModifiedAccessConditions identifies the source path access conditions. -type SourceModifiedAccessConditions = path.SourceModifiedAccessConditions +func (o *GetAccessControlOptions) format() (*generated.PathClientGetPropertiesOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { + action := generated.PathGetPropertiesActionGetAccessControl + if o == nil { + return &generated.PathClientGetPropertiesOptions{ + Action: &action, + }, nil, nil, nil + } + // call path formatter since we're hitting dfs in this operation + leaseAccessConditions, modifiedAccessConditions := shared.FormatPathAccessConditions(o.AccessConditions) + return &generated.PathClientGetPropertiesOptions{ + Upn: o.UPN, + Action: &action, + }, leaseAccessConditions, modifiedAccessConditions, nil +} -// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation. -type SetAccessControlOptions = path.SetAccessControlOptions +// SetAccessControlRecursiveOptions contains the optional parameters when calling the SetAccessControlRecursive operation. TODO: Design formatter +type SetAccessControlRecursiveOptions struct { + // ACL is the access control list for the path. + ACL *string + // BatchSize is the number of paths to set access control recursively in a single call. + BatchSize *int32 + // MaxBatches is the maximum number of batches to perform the operation on. + MaxBatches *int32 + // ContinueOnFailure indicates whether to continue on failure when the operation encounters an error. + ContinueOnFailure *bool + // Marker is the continuation token to use when continuing the operation. + Marker *string +} -// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. 
-type GetAccessControlOptions = path.GetAccessControlOptions +func (o *SetAccessControlRecursiveOptions) format() (*generated.PathClientSetAccessControlRecursiveOptions, error) { + // TODO: design formatter + return nil, nil +} -// SetAccessControlRecursiveOptions contains the optional parameters when calling the SetAccessControlRecursive operation. -type SetAccessControlRecursiveOptions = path.SetAccessControlRecursiveOptions +// UpdateAccessControlRecursiveOptions contains the optional parameters when calling the UpdateAccessControlRecursive operation. TODO: Design formatter +type UpdateAccessControlRecursiveOptions struct { + // ACL is the access control list for the path. + ACL *string + // BatchSize is the number of paths to set access control recursively in a single call. + BatchSize *int32 + // MaxBatches is the maximum number of batches to perform the operation on. + MaxBatches *int32 + // ContinueOnFailure indicates whether to continue on failure when the operation encounters an error. + ContinueOnFailure *bool + // Marker is the continuation token to use when continuing the operation. + Marker *string +} -// SetMetadataOptions contains the optional parameters when calling the SetMetadata operation. -type SetMetadataOptions = path.SetMetadataOptions +func (o *UpdateAccessControlRecursiveOptions) format() (*generated.PathClientSetAccessControlRecursiveOptions, error) { + // TODO: design formatter - similar to SetAccessControlRecursiveOptions + return nil, nil +} -// SetHTTPHeadersOptions contains the optional parameters when calling the SetHTTPHeaders operation. -type SetHTTPHeadersOptions = path.SetHTTPHeadersOptions +// RemoveAccessControlRecursiveOptions contains the optional parameters when calling the RemoveAccessControlRecursive operation. TODO: Design formatter +type RemoveAccessControlRecursiveOptions struct { + // ACL is the access control list for the path. 
+ ACL *string + // BatchSize is the number of paths to set access control recursively in a single call. + BatchSize *int32 + // MaxBatches is the maximum number of batches to perform the operation on. + MaxBatches *int32 + // ContinueOnFailure indicates whether to continue on failure when the operation encounters an error. + ContinueOnFailure *bool + // Marker is the continuation token to use when continuing the operation. + Marker *string +} -// RemoveAccessControlRecursiveOptions contains the optional parameters when calling the RemoveAccessControlRecursive operation. -type RemoveAccessControlRecursiveOptions = path.RemoveAccessControlRecursiveOptions +func (o *RemoveAccessControlRecursiveOptions) format() (*generated.PathClientSetAccessControlRecursiveOptions, error) { + // TODO: design formatter - similar to SetAccessControlRecursiveOptions + return nil, nil +} -// UpdateAccessControlRecursiveOptions contains the optional parameters when calling the UpdateAccessControlRecursive operation. -type UpdateAccessControlRecursiveOptions = path.UpdateAccessControlRecursiveOptions +// SetHTTPHeadersOptions contains the optional parameters for the Client.SetHTTPHeaders method. +type SetHTTPHeadersOptions struct { + AccessConditions *azdatalake.AccessConditions +} + +func (o *SetHTTPHeadersOptions) format() *blob.SetHTTPHeadersOptions { + if o == nil { + return nil + } + accessConditions := shared.FormatBlobAccessConditions(o.AccessConditions) + return &blob.SetHTTPHeadersOptions{ + AccessConditions: accessConditions, + } +} + +// HTTPHeaders contains the HTTP headers for path operations. +type HTTPHeaders struct { + // Optional. Sets the path's cache control. If specified, this property is stored with the path and returned with a read request. + CacheControl *string + // Optional. Sets the path's Content-Disposition header. + ContentDisposition *string + // Optional. Sets the path's content encoding. 
If specified, this property is stored with the blobpath and returned with a read + // request. + ContentEncoding *string + // Optional. Set the path's content language. If specified, this property is stored with the path and returned with a read + // request. + ContentLanguage *string + // Specify the transactional md5 for the body, to be validated by the service. + ContentMD5 []byte + // Optional. Sets the path's content type. If specified, this property is stored with the path and returned with a read request. + ContentType *string +} + +func (o *HTTPHeaders) formatBlobHTTPHeaders() (*blob.HTTPHeaders, error) { + if o == nil { + return nil, nil + } + opts := blob.HTTPHeaders{ + BlobCacheControl: o.CacheControl, + BlobContentDisposition: o.ContentDisposition, + BlobContentEncoding: o.ContentEncoding, + BlobContentLanguage: o.ContentLanguage, + BlobContentMD5: o.ContentMD5, + BlobContentType: o.ContentType, + } + return &opts, nil +} + +func (o *HTTPHeaders) formatPathHTTPHeaders() (*generated.PathHTTPHeaders, error) { + // TODO: will be used for file related ops, like append + if o == nil { + return nil, nil + } + opts := generated.PathHTTPHeaders{ + CacheControl: o.CacheControl, + ContentDisposition: o.ContentDisposition, + ContentEncoding: o.ContentEncoding, + ContentLanguage: o.ContentLanguage, + ContentMD5: o.ContentMD5, + ContentType: o.ContentType, + TransactionalContentHash: o.ContentMD5, + } + return &opts, nil +} + +// SetMetadataOptions provides set of configurations for Set Metadata on path operation +type SetMetadataOptions struct { + AccessConditions *azdatalake.AccessConditions + CPKInfo *CPKInfo + CPKScopeInfo *CPKScopeInfo +} + +func (o *SetMetadataOptions) format() *blob.SetMetadataOptions { + if o == nil { + return nil + } + accessConditions := shared.FormatBlobAccessConditions(o.AccessConditions) + return &blob.SetMetadataOptions{ + AccessConditions: accessConditions, + CPKInfo: &blob.CPKInfo{ + EncryptionKey: o.CPKInfo.EncryptionKey, + 
EncryptionAlgorithm: o.CPKInfo.EncryptionAlgorithm, + EncryptionKeySHA256: o.CPKInfo.EncryptionKeySHA256, + }, + CPKScopeInfo: &blob.CPKScopeInfo{ + EncryptionScope: o.CPKScopeInfo.EncryptionScope, + }, + } +} + +// CPKInfo contains a group of parameters for the PathClient.Download method. +type CPKInfo struct { + EncryptionAlgorithm *EncryptionAlgorithmType + EncryptionKey *string + EncryptionKeySHA256 *string +} + +// CPKScopeInfo contains a group of parameters for the PathClient.SetMetadata method. +type CPKScopeInfo struct { + EncryptionScope *string +} + +// SourceModifiedAccessConditions identifies the source path access conditions. +type SourceModifiedAccessConditions = generated.SourceModifiedAccessConditions // SharedKeyCredential contains an account's name and its primary or secondary key. type SharedKeyCredential = exported.SharedKeyCredential diff --git a/sdk/storage/azdatalake/directory/responses.go b/sdk/storage/azdatalake/directory/responses.go index 19a975c5a384..3327f41360ce 100644 --- a/sdk/storage/azdatalake/directory/responses.go +++ b/sdk/storage/azdatalake/directory/responses.go @@ -7,8 +7,8 @@ package directory import ( + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" ) // CreateResponse contains the response fields for the Create operation. type CreateResponse = generated.PathClientCreateResponse // DeleteResponse contains the response fields for the Delete operation. type DeleteResponse = generated.PathClientDeleteResponse +// RenameResponse contains the response fields for the Rename operation. +type RenameResponse = generated.PathClientCreateResponse + // SetAccessControlResponse contains the response fields for the SetAccessControl operation.
-type SetAccessControlResponse = path.SetAccessControlResponse +type SetAccessControlResponse = generated.PathClientSetAccessControlResponse // SetAccessControlRecursiveResponse contains the response fields for the SetAccessControlRecursive operation. -type SetAccessControlRecursiveResponse = path.SetAccessControlRecursiveResponse +type SetAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse // UpdateAccessControlRecursiveResponse contains the response fields for the UpdateAccessControlRecursive operation. -type UpdateAccessControlRecursiveResponse = path.SetAccessControlRecursiveResponse +type UpdateAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse // RemoveAccessControlRecursiveResponse contains the response fields for the RemoveAccessControlRecursive operation. -type RemoveAccessControlRecursiveResponse = path.SetAccessControlRecursiveResponse +type RemoveAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse + +// GetAccessControlResponse contains the response fields for the GetAccessControl operation. +type GetAccessControlResponse = generated.PathClientGetPropertiesResponse // GetPropertiesResponse contains the response fields for the GetProperties operation. -type GetPropertiesResponse = path.GetPropertiesResponse +type GetPropertiesResponse = generated.PathClientGetPropertiesResponse // SetMetadataResponse contains the response fields for the SetMetadata operation. -type SetMetadataResponse = path.SetMetadataResponse +type SetMetadataResponse = blob.SetMetadataResponse // SetHTTPHeadersResponse contains the response fields for the SetHTTPHeaders operation. -type SetHTTPHeadersResponse = path.SetHTTPHeadersResponse - -// RenameResponse contains the response fields for the Rename operation. -type RenameResponse = path.CreateResponse - -// GetAccessControlResponse contains the response fields for the GetAccessControl operation. 
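Several of the response names above alias the same generated type. Go type aliases (declared with `=`) create interchangeable names rather than distinct types, which is why the Set, Update, and Remove recursive variants can all share one `PathClientSetAccessControlRecursiveResponse`. A minimal, self-contained illustration (the `setACLRecursiveResponse` struct is a hypothetical stand-in, not the real generated type):

```go
package main

import "fmt"

// setACLRecursiveResponse stands in for the generated response type; its
// field is illustrative only.
type setACLRecursiveResponse struct {
	DirectoriesSuccessful int32
}

// Aliases (note the '='): all three names denote the identical type.
type SetAccessControlRecursiveResponse = setACLRecursiveResponse
type UpdateAccessControlRecursiveResponse = setACLRecursiveResponse
type RemoveAccessControlRecursiveResponse = setACLRecursiveResponse

// sameType compiles without any conversion because alias names are
// interchangeable; distinct defined types (no '=') would require one.
func sameType(u UpdateAccessControlRecursiveResponse) RemoveAccessControlRecursiveResponse {
	return u
}

func main() {
	s := SetAccessControlRecursiveResponse{DirectoriesSuccessful: 3}
	fmt.Println(sameType(s).DirectoriesSuccessful)
}
```

This is also why a stub that returns a `SetAccessControlRecursiveResponse{}` literal from a function declared to return the Update or Remove alias still compiles.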
-type GetAccessControlResponse = path.GetAccessControlResponse +type SetHTTPHeadersResponse = blob.SetHTTPHeadersResponse diff --git a/sdk/storage/azdatalake/file/client.go b/sdk/storage/azdatalake/file/client.go index 84e9b64ca4ae..f76101a3c5b3 100644 --- a/sdk/storage/azdatalake/file/client.go +++ b/sdk/storage/azdatalake/file/client.go @@ -9,44 +9,123 @@ package file import ( "context" "github.com/Azure/azure-sdk-for-go/sdk/azcore" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" + "strings" ) -// Client represents a URL to the Azure Datalake Storage service allowing you to manipulate files. -type Client struct { - path.Client -} +// ClientOptions contains the optional parameters when creating a Client. +type ClientOptions base.ClientOptions -// NewClient creates an instance of Client with the specified values. -// - serviceURL - the URL of the storage account e.g. https://.file.core.windows.net/ -// - cred - an Azure AD credential, typically obtained via the azidentity module -// - options - client options; pass nil to accept the default values -func NewClient(serviceURL string, cred azcore.TokenCredential, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil -} +// Client represents a URL to the Azure Datalake Storage service. 
+type Client base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client] + +//TODO: NewClient() // NewClientWithNoCredential creates an instance of Client with the specified values. // This is used to anonymously access a storage account or with a shared access signature (SAS) token. -// - serviceURL - the URL of the storage account e.g. https://.file.core.windows.net/? +// - fileURL - the URL of the file e.g. https://.dfs.core.windows.net/? // - options - client options; pass nil to accept the default values -func NewClientWithNoCredential(serviceURL string, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientWithNoCredential(fileURL string, options *ClientOptions) (*Client, error) { + blobURL := strings.Replace(fileURL, ".dfs.", ".blob.", 1) + fileURL = strings.Replace(fileURL, ".blob.", ".dfs.", 1) + + conOptions := shared.GetClientOptions(options) + plOpts := runtime.PipelineOptions{} + base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) + + azClient, err := azcore.NewClient(shared.FileClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions) + if err != nil { + return nil, err + } + + blobClientOpts := blob.ClientOptions{ + ClientOptions: conOptions.ClientOptions, + } + blobClient, err := blob.NewClientWithNoCredential(blobURL, &blobClientOpts) + if err != nil { + return nil, err + } + fileClient := base.NewPathClient(fileURL, blobURL, blobClient, azClient, nil, (*base.ClientOptions)(conOptions)) + + return (*Client)(fileClient), nil } // NewClientWithSharedKeyCredential creates an instance of Client with the specified values. -// - serviceURL - the URL of the storage account e.g. https://.file.core.windows.net/ +// - fileURL - the URL of the file e.g.
https://.dfs.core.windows.net/ // - cred - a SharedKeyCredential created with the matching storage account and access key // - options - client options; pass nil to accept the default values -func NewClientWithSharedKeyCredential(serviceURL string, cred *SharedKeyCredential, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientWithSharedKeyCredential(fileURL string, cred *SharedKeyCredential, options *ClientOptions) (*Client, error) { + blobURL := strings.Replace(fileURL, ".dfs.", ".blob.", 1) + fileURL = strings.Replace(fileURL, ".blob.", ".dfs.", 1) + + authPolicy := exported.NewSharedKeyCredPolicy(cred) + conOptions := shared.GetClientOptions(options) + plOpts := runtime.PipelineOptions{ + PerRetry: []policy.Policy{authPolicy}, + } + base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) + + azClient, err := azcore.NewClient(shared.FileClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions) + if err != nil { + return nil, err + } + + blobClientOpts := blob.ClientOptions{ + ClientOptions: conOptions.ClientOptions, + } + blobSharedKeyCredential, err := blob.NewSharedKeyCredential(cred.AccountName(), cred.AccountKey()) + if err != nil { + return nil, err + } + blobClient, err := blob.NewClientWithSharedKeyCredential(blobURL, blobSharedKeyCredential, &blobClientOpts) + if err != nil { + return nil, err + } + fileClient := base.NewPathClient(fileURL, blobURL, blobClient, azClient, cred, (*base.ClientOptions)(conOptions)) + + return (*Client)(fileClient), nil } // NewClientFromConnectionString creates an instance of Client with the specified values.
// - connectionString - a connection string for the desired storage account // - options - client options; pass nil to accept the default values -func NewClientFromConnectionString(connectionString string, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientFromConnectionString(connectionString string, options *ClientOptions) (*Client, error) { + parsed, err := shared.ParseConnectionString(connectionString) + if err != nil { + return nil, err + } + + if parsed.AccountKey != "" && parsed.AccountName != "" { + credential, err := exported.NewSharedKeyCredential(parsed.AccountName, parsed.AccountKey) + if err != nil { + return nil, err + } + return NewClientWithSharedKeyCredential(parsed.ServiceURL, credential, options) + } + + return NewClientWithNoCredential(parsed.ServiceURL, options) +} + +func (f *Client) generatedFSClientWithDFS() *generated.PathClient { + dirClientWithDFS, _, _ := base.InnerClients((*base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client])(f)) + return dirClientWithDFS +} + +func (f *Client) generatedFSClientWithBlob() *generated.PathClient { + _, dirClientWithBlob, _ := base.InnerClients((*base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client])(f)) + return dirClientWithBlob +} + +func (f *Client) blobClient() *blob.Client { + _, _, blobClient := base.InnerClients((*base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client])(f)) + return blobClient +} + +func (f *Client) sharedKey() *exported.SharedKeyCredential { + return base.SharedKeyComposite((*base.CompositeClient[generated.PathClient, generated.PathClient, blob.Client])(f)) +} + +// URL returns the URL endpoint used by the Client object. +func (f *Client) URL() string { + return f.generatedFSClientWithDFS().Endpoint() } // Create creates a new file (dfs1).
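The constructors above derive the paired blob endpoint by swapping the first `.dfs.` for `.blob.`, and normalize the dfs URL in the opposite direction. Both transformations are just `strings.Replace` with a count of 1, sketched here with an illustrative (not real) account name:

```go
package main

import (
	"fmt"
	"strings"
)

// toBlobURL maps a Datalake (dfs) endpoint to its blob twin; toDFSURL
// normalizes a blob endpoint back to dfs. Only the first occurrence is
// swapped, so path segments are never touched.
func toBlobURL(u string) string { return strings.Replace(u, ".dfs.", ".blob.", 1) }
func toDFSURL(u string) string  { return strings.Replace(u, ".blob.", ".dfs.", 1) }

func main() {
	fileURL := "https://myaccount.dfs.core.windows.net/myfs/dir/file.txt"
	fmt.Println(toBlobURL(fileURL)) // https://myaccount.blob.core.windows.net/myfs/dir/file.txt
	fmt.Println(toDFSURL(toBlobURL(fileURL)) == fileURL)
}
```

Running the swap in both directions means a caller may pass either endpoint form and still end up with a consistent dfs/blob pair.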
@@ -98,3 +177,44 @@ func (f *Client) Flush(ctx context.Context) { } func (f *Client) Download(ctx context.Context) { } + +// SetAccessControl sets the owner, owning group, and permissions for a file or directory (dfs1). +func (f *Client) SetAccessControl(ctx context.Context, options *SetAccessControlOptions) (SetAccessControlResponse, error) { + return SetAccessControlResponse{}, nil +} + +// SetAccessControlRecursive sets the owner, owning group, and permissions for a file or directory (dfs1). +func (f *Client) SetAccessControlRecursive(ctx context.Context, options *SetAccessControlRecursiveOptions) (SetAccessControlRecursiveResponse, error) { + // TODO explicitly pass SetAccessControlRecursiveMode + return SetAccessControlRecursiveResponse{}, nil +} + +// UpdateAccessControlRecursive updates the owner, owning group, and permissions for a file or directory (dfs1). +func (f *Client) UpdateAccessControlRecursive(ctx context.Context, options *UpdateAccessControlRecursiveOptions) (UpdateAccessControlRecursiveResponse, error) { + // TODO explicitly pass SetAccessControlRecursiveMode + return UpdateAccessControlRecursiveResponse{}, nil +} + +// GetAccessControl gets the owner, owning group, and permissions for a file or directory (dfs1). +func (f *Client) GetAccessControl(ctx context.Context, options *GetAccessControlOptions) (GetAccessControlResponse, error) { + return GetAccessControlResponse{}, nil +} + +// RemoveAccessControlRecursive removes the owner, owning group, and permissions for a file or directory (dfs1). +func (f *Client) RemoveAccessControlRecursive(ctx context.Context, options *RemoveAccessControlRecursiveOptions) (RemoveAccessControlRecursiveResponse, error) { + // TODO explicitly pass SetAccessControlRecursiveMode + return RemoveAccessControlRecursiveResponse{}, nil +} + +// SetMetadata sets the metadata for a file or directory (blob3).
+func (f *Client) SetMetadata(ctx context.Context, options *SetMetadataOptions) (SetMetadataResponse, error) { + // TODO: call directly into blob + return SetMetadataResponse{}, nil +} + +// SetHTTPHeaders sets the HTTP headers for a file or directory (blob3). +func (f *Client) SetHTTPHeaders(ctx context.Context, httpHeaders HTTPHeaders, options *SetHTTPHeadersOptions) (SetHTTPHeadersResponse, error) { + // TODO: call formatBlobHTTPHeaders() since we want to add the blob prefix to our options before calling into blob + // TODO: call into blob + return SetHTTPHeadersResponse{}, nil +} diff --git a/sdk/storage/azdatalake/file/constants.go b/sdk/storage/azdatalake/file/constants.go index 5ce02ebff960..60eabcfcce37 100644 --- a/sdk/storage/azdatalake/file/constants.go +++ b/sdk/storage/azdatalake/file/constants.go @@ -7,36 +7,37 @@ package file import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" ) -type ResourceType = path.ResourceType +type ResourceType = generated.PathResourceType // TODO: consider the possibility of not exposing this and just pass it under the hood const ( - ResourceTypeFile ResourceType = path.ResourceTypeFile - ResourceTypeDirectory ResourceType = path.ResourceTypeDirectory + ResourceTypeFile ResourceType = generated.PathResourceTypeFile + ResourceTypeDirectory ResourceType = generated.PathResourceTypeDirectory ) -type RenameMode = path.RenameMode +type RenameMode = generated.PathRenameMode // TODO: consider the possibility of not exposing this and just pass it under the hood const ( - RenameModeLegacy RenameMode = path.RenameModeLegacy - RenameModePosix RenameMode = path.RenameModePosix + RenameModeLegacy RenameMode = generated.PathRenameModeLegacy + RenameModePosix RenameMode = generated.PathRenameModePosix ) -type SetAccessControlRecursiveMode = 
path.SetAccessControlRecursiveMode +type SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveMode const ( - SetAccessControlRecursiveModeSet SetAccessControlRecursiveMode = path.SetAccessControlRecursiveModeSet - SetAccessControlRecursiveModeModify SetAccessControlRecursiveMode = path.SetAccessControlRecursiveModeModify - SetAccessControlRecursiveModeRemove SetAccessControlRecursiveMode = path.SetAccessControlRecursiveModeRemove + SetAccessControlRecursiveModeSet SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeSet + SetAccessControlRecursiveModeModify SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeModify + SetAccessControlRecursiveModeRemove SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeRemove ) -type EncryptionAlgorithmType = path.EncryptionAlgorithmType +type EncryptionAlgorithmType = blob.EncryptionAlgorithmType const ( - EncryptionAlgorithmTypeNone EncryptionAlgorithmType = path.EncryptionAlgorithmTypeNone - EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = path.EncryptionAlgorithmTypeAES256 + EncryptionAlgorithmTypeNone EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeNone + EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeAES256 ) diff --git a/sdk/storage/azdatalake/file/models.go b/sdk/storage/azdatalake/file/models.go index f45f6041b50b..18551b86a14d 100644 --- a/sdk/storage/azdatalake/file/models.go +++ b/sdk/storage/azdatalake/file/models.go @@ -11,7 +11,6 @@ import ( "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" "time" ) @@ -101,39 +100,221 @@ func (o *GetPropertiesOptions) format() 
*blob.GetPropertiesOptions { } // ===================================== PATH IMPORTS =========================================== +// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation on the dfs endpoint. +type SetAccessControlOptions struct { + // Owner is the owner of the path. + Owner *string + // Group is the owning group of the path. + Group *string + // ACL is the access control list for the path. + ACL *string + // Permissions is the octal representation of the permissions for user, group and mask. + Permissions *string + // AccessConditions contains parameters for accessing the path. + AccessConditions *azdatalake.AccessConditions +} + +func (o *SetAccessControlOptions) format() (*generated.PathClientSetAccessControlOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { + if o == nil { + return nil, nil, nil, nil + } + // call path formatter since we're hitting dfs in this operation + leaseAccessConditions, modifiedAccessConditions := shared.FormatPathAccessConditions(o.AccessConditions) + return &generated.PathClientSetAccessControlOptions{ + Owner: o.Owner, + Group: o.Group, + ACL: o.ACL, + Permissions: o.Permissions, + }, leaseAccessConditions, modifiedAccessConditions, nil +} + +// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. +type GetAccessControlOptions struct { + // UPN indicates whether the returned owner and group identity values should be user principal names rather than Azure AD object IDs. + UPN *bool + // AccessConditions contains parameters for accessing the path.
+ AccessConditions *azdatalake.AccessConditions +} + +func (o *GetAccessControlOptions) format() (*generated.PathClientGetPropertiesOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { + action := generated.PathGetPropertiesActionGetAccessControl + if o == nil { + return &generated.PathClientGetPropertiesOptions{ + Action: &action, + }, nil, nil, nil + } + // call path formatter since we're hitting dfs in this operation + leaseAccessConditions, modifiedAccessConditions := shared.FormatPathAccessConditions(o.AccessConditions) + return &generated.PathClientGetPropertiesOptions{ + Upn: o.UPN, + Action: &action, + }, leaseAccessConditions, modifiedAccessConditions, nil +} + +// SetAccessControlRecursiveOptions contains the optional parameters when calling the SetAccessControlRecursive operation. TODO: Design formatter +type SetAccessControlRecursiveOptions struct { + // ACL is the access control list for the path. + ACL *string + // BatchSize is the number of paths to set access control recursively in a single call. + BatchSize *int32 + // MaxBatches is the maximum number of batches to perform the operation on. + MaxBatches *int32 + // ContinueOnFailure indicates whether to continue on failure when the operation encounters an error. + ContinueOnFailure *bool + // Marker is the continuation token to use when continuing the operation. + Marker *string +} + +func (o *SetAccessControlRecursiveOptions) format() (*generated.PathClientSetAccessControlRecursiveOptions, error) { + // TODO: design formatter + return nil, nil +} + +// UpdateAccessControlRecursiveOptions contains the optional parameters when calling the UpdateAccessControlRecursive operation. TODO: Design formatter +type UpdateAccessControlRecursiveOptions struct { + // ACL is the access control list for the path. + ACL *string + // BatchSize is the number of paths to set access control recursively in a single call. 
+ BatchSize *int32 + // MaxBatches is the maximum number of batches to perform the operation on. + MaxBatches *int32 + // ContinueOnFailure indicates whether to continue on failure when the operation encounters an error. + ContinueOnFailure *bool + // Marker is the continuation token to use when continuing the operation. + Marker *string +} + +func (o *UpdateAccessControlRecursiveOptions) format() (*generated.PathClientSetAccessControlRecursiveOptions, error) { + // TODO: design formatter - similar to SetAccessControlRecursiveOptions + return nil, nil +} + +// RemoveAccessControlRecursiveOptions contains the optional parameters when calling the RemoveAccessControlRecursive operation. TODO: Design formatter +type RemoveAccessControlRecursiveOptions struct { + // ACL is the access control list for the path. + ACL *string + // BatchSize is the number of paths to set access control recursively in a single call. + BatchSize *int32 + // MaxBatches is the maximum number of batches to perform the operation on. + MaxBatches *int32 + // ContinueOnFailure indicates whether to continue on failure when the operation encounters an error. + ContinueOnFailure *bool + // Marker is the continuation token to use when continuing the operation. + Marker *string +} + +func (o *RemoveAccessControlRecursiveOptions) format() (*generated.PathClientSetAccessControlRecursiveOptions, error) { + // TODO: design formatter - similar to SetAccessControlRecursiveOptions + return nil, nil +} -// CPKInfo contains a group of parameters for client provided encryption key. -type CPKInfo = path.CPKInfo +// SetHTTPHeadersOptions contains the optional parameters for the Client.SetHTTPHeaders method. +type SetHTTPHeadersOptions struct { + AccessConditions *azdatalake.AccessConditions +} -// CPKScopeInfo contains a group of parameters for client provided encryption scope. 
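The three recursive-ACL option types share the same shape, so the pending formatter (the TODO above) will most likely be a single mapping reused with a different mode per operation. A hedged sketch of that mapping, using hypothetical stand-in types and wire-field names (`MaxRecords`, `ForceFlag`, `Continuation`, `Mode` are assumptions, not the real generated fields):

```go
package main

import "fmt"

// setACLRecursiveMode stands in for the generated mode enum.
type setACLRecursiveMode string

// setACLRecursiveOptions stands in for the generated wire options; the
// field names here are illustrative guesses.
type setACLRecursiveOptions struct {
	ACL          *string
	MaxRecords   *int32  // maps from BatchSize
	ForceFlag    *bool   // maps from ContinueOnFailure
	Continuation *string // maps from Marker
	Mode         setACLRecursiveMode
}

// recursiveOptions mirrors the public options shared by Set/Update/Remove.
type recursiveOptions struct {
	ACL               *string
	BatchSize         *int32
	MaxBatches        *int32
	ContinueOnFailure *bool
	Marker            *string
}

// format maps the public options onto the wire options. A nil receiver
// still yields the mandatory mode, so the service call stays valid.
func (o *recursiveOptions) format(mode setACLRecursiveMode) *setACLRecursiveOptions {
	out := &setACLRecursiveOptions{Mode: mode}
	if o == nil {
		return out
	}
	out.ACL = o.ACL
	out.MaxRecords = o.BatchSize
	out.ForceFlag = o.ContinueOnFailure
	out.Continuation = o.Marker
	return out
}

func main() {
	acl := "user::rwx,group::r-x,other::---"
	got := (&recursiveOptions{ACL: &acl}).format("set")
	fmt.Println(*got.ACL, got.Mode)
}
```

MaxBatches has no single wire field in this sketch; it would govern how many continuation round-trips the client performs.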
-type CPKScopeInfo = path.CPKScopeInfo +func (o *SetHTTPHeadersOptions) format() *blob.SetHTTPHeadersOptions { + if o == nil { + return nil + } + accessConditions := shared.FormatBlobAccessConditions(o.AccessConditions) + return &blob.SetHTTPHeadersOptions{ + AccessConditions: accessConditions, + } +} // HTTPHeaders contains the HTTP headers for path operations. -type HTTPHeaders = path.HTTPHeaders +type HTTPHeaders struct { + // Optional. Sets the path's cache control. If specified, this property is stored with the path and returned with a read request. + CacheControl *string + // Optional. Sets the path's Content-Disposition header. + ContentDisposition *string + // Optional. Sets the path's content encoding. If specified, this property is stored with the path and returned with a read + // request. + ContentEncoding *string + // Optional. Sets the path's content language. If specified, this property is stored with the path and returned with a read + // request. + ContentLanguage *string + // Specify the transactional MD5 for the body, to be validated by the service. + ContentMD5 []byte + // Optional. Sets the path's content type. If specified, this property is stored with the path and returned with a read request. + ContentType *string +} -// SourceModifiedAccessConditions identifies the source path access conditions. -type SourceModifiedAccessConditions = path.SourceModifiedAccessConditions +func (o *HTTPHeaders) formatBlobHTTPHeaders() (*blob.HTTPHeaders, error) { + if o == nil { + return nil, nil + } + opts := blob.HTTPHeaders{ + BlobCacheControl: o.CacheControl, + BlobContentDisposition: o.ContentDisposition, + BlobContentEncoding: o.ContentEncoding, + BlobContentLanguage: o.ContentLanguage, + BlobContentMD5: o.ContentMD5, + BlobContentType: o.ContentType, + } + return &opts, nil +} -// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation.
-type SetAccessControlOptions = path.SetAccessControlOptions +func (o *HTTPHeaders) formatPathHTTPHeaders() (*generated.PathHTTPHeaders, error) { + // TODO: will be used for file-related ops, like append + if o == nil { + return nil, nil + } + opts := generated.PathHTTPHeaders{ + CacheControl: o.CacheControl, + ContentDisposition: o.ContentDisposition, + ContentEncoding: o.ContentEncoding, + ContentLanguage: o.ContentLanguage, + ContentMD5: o.ContentMD5, + ContentType: o.ContentType, + TransactionalContentHash: o.ContentMD5, + } + return &opts, nil +} -// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. -type GetAccessControlOptions = path.GetAccessControlOptions +// SetMetadataOptions provides a set of configurations for the SetMetadata operation on a path. +type SetMetadataOptions struct { + AccessConditions *azdatalake.AccessConditions + CPKInfo *CPKInfo + CPKScopeInfo *CPKScopeInfo +} -// SetAccessControlRecursiveOptions contains the optional parameters when calling the SetAccessControlRecursive operation. -type SetAccessControlRecursiveOptions = path.SetAccessControlRecursiveOptions +func (o *SetMetadataOptions) format() *blob.SetMetadataOptions { + if o == nil { + return nil + } + accessConditions := shared.FormatBlobAccessConditions(o.AccessConditions) + opts := &blob.SetMetadataOptions{ + AccessConditions: accessConditions, + } + if o.CPKInfo != nil { + opts.CPKInfo = &blob.CPKInfo{ + EncryptionKey: o.CPKInfo.EncryptionKey, + EncryptionAlgorithm: o.CPKInfo.EncryptionAlgorithm, + EncryptionKeySHA256: o.CPKInfo.EncryptionKeySHA256, + } + } + if o.CPKScopeInfo != nil { + opts.CPKScopeInfo = &blob.CPKScopeInfo{ + EncryptionScope: o.CPKScopeInfo.EncryptionScope, + } + } + return opts +} -// SetMetadataOptions contains the optional parameters when calling the SetMetadata operation. -type SetMetadataOptions = path.SetMetadataOptions +// CPKInfo contains a group of parameters for the PathClient.Download method.
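When mapping optional sub-structs such as CPKInfo and CPKScopeInfo into the blob-level options, a nil guard keeps `format` safe for callers who set only some fields; dereferencing an unset pointer sub-struct would panic. A minimal sketch of the pattern, with hypothetical stand-in types rather than the real SDK ones:

```go
package main

import "fmt"

// Hypothetical stand-ins for the public and blob-level CPK types.
type cpkInfo struct{ EncryptionKey *string }

type blobCPKInfo struct{ EncryptionKey *string }

type opts struct{ CPKInfo *cpkInfo }

type blobOpts struct{ CPKInfo *blobCPKInfo }

// format copies the sub-struct only when it is present, so a caller who
// leaves CPKInfo nil never triggers a nil-pointer dereference.
func (o *opts) format() *blobOpts {
	if o == nil {
		return nil
	}
	out := &blobOpts{}
	if o.CPKInfo != nil {
		out.CPKInfo = &blobCPKInfo{EncryptionKey: o.CPKInfo.EncryptionKey}
	}
	return out
}

func main() {
	fmt.Println((&opts{}).format().CPKInfo == nil) // no panic on unset CPKInfo
	k := "base64key"
	fmt.Println(*(&opts{CPKInfo: &cpkInfo{EncryptionKey: &k}}).format().CPKInfo.EncryptionKey)
}
```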
+type CPKInfo struct { + EncryptionAlgorithm *EncryptionAlgorithmType + EncryptionKey *string + EncryptionKeySHA256 *string +} -// SetHTTPHeadersOptions contains the optional parameters when calling the SetHTTPHeaders operation. -type SetHTTPHeadersOptions = path.SetHTTPHeadersOptions +// CPKScopeInfo contains a group of parameters for the PathClient.SetMetadata method. +type CPKScopeInfo struct { + EncryptionScope *string +} -// RemoveAccessControlRecursiveOptions contains the optional parameters when calling the RemoveAccessControlRecursive operation. -type RemoveAccessControlRecursiveOptions = path.RemoveAccessControlRecursiveOptions +// SourceModifiedAccessConditions identifies the source path access conditions. +type SourceModifiedAccessConditions = generated.SourceModifiedAccessConditions -// UpdateAccessControlRecursiveOptions contains the optional parameters when calling the UpdateAccessControlRecursive operation. -type UpdateAccessControlRecursiveOptions = path.UpdateAccessControlRecursiveOptions +// SharedKeyCredential contains an account's name and its primary or secondary key. +type SharedKeyCredential = exported.SharedKeyCredential // ExpiryType defines values for ExpiryType. type ExpiryType = exported.ExpiryType @@ -152,6 +333,3 @@ type ExpiryTypeNever = exported.ExpiryTypeNever // SetExpiryOptions contains the optional parameters for the Client.SetExpiry method. type SetExpiryOptions = exported.SetExpiryOptions - -// SharedKeyCredential contains an account's name and its primary or secondary key. 
-type SharedKeyCredential = exported.SharedKeyCredential diff --git a/sdk/storage/azdatalake/file/responses.go b/sdk/storage/azdatalake/file/responses.go index 8543eaf3902a..e7ace65c52ee 100644 --- a/sdk/storage/azdatalake/file/responses.go +++ b/sdk/storage/azdatalake/file/responses.go @@ -7,8 +7,8 @@ package file import ( + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/path" ) // SetExpiryResponse contains the response fields for the SetExpiry operation. @@ -21,28 +21,28 @@ type CreateResponse = generated.PathClientCreateResponse type DeleteResponse = generated.PathClientDeleteResponse // SetAccessControlResponse contains the response fields for the SetAccessControl operation. -type SetAccessControlResponse = path.SetAccessControlResponse +type SetAccessControlResponse = generated.PathClientSetAccessControlResponse // SetAccessControlRecursiveResponse contains the response fields for the SetAccessControlRecursive operation. -type SetAccessControlRecursiveResponse = path.SetAccessControlRecursiveResponse +type SetAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse // UpdateAccessControlRecursiveResponse contains the response fields for the UpdateAccessControlRecursive operation. -type UpdateAccessControlRecursiveResponse = path.SetAccessControlRecursiveResponse +type UpdateAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse // RemoveAccessControlRecursiveResponse contains the response fields for the RemoveAccessControlRecursive operation. -type RemoveAccessControlRecursiveResponse = path.SetAccessControlRecursiveResponse +type RemoveAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse + +// GetAccessControlResponse contains the response fields for the GetAccessControl operation. 
+type GetAccessControlResponse = generated.PathClientGetPropertiesResponse // GetPropertiesResponse contains the response fields for the GetProperties operation. -type GetPropertiesResponse = path.GetPropertiesResponse +type GetPropertiesResponse = generated.PathClientGetPropertiesResponse // SetMetadataResponse contains the response fields for the SetMetadata operation. -type SetMetadataResponse = path.SetMetadataResponse +type SetMetadataResponse = blob.SetMetadataResponse // SetHTTPHeadersResponse contains the response fields for the SetHTTPHeaders operation. -type SetHTTPHeadersResponse = path.SetHTTPHeadersResponse +type SetHTTPHeadersResponse = blob.SetHTTPHeadersResponse -// RenameResponse contains the response fields for the Rename operation. -type RenameResponse = path.CreateResponse - -// GetAccessControlResponse contains the response fields for the GetAccessControl operation. -type GetAccessControlResponse = path.GetAccessControlResponse +// RenameResponse contains the response fields for the Rename operation.
+type RenameResponse = generated.PathClientCreateResponse diff --git a/sdk/storage/azdatalake/filesystem/client.go b/sdk/storage/azdatalake/filesystem/client.go index ae2cbdb536d2..06f76dd537c7 100644 --- a/sdk/storage/azdatalake/filesystem/client.go +++ b/sdk/storage/azdatalake/filesystem/client.go @@ -9,53 +9,119 @@ package filesystem import ( "context" "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" + "strings" ) -// Client represents a URL to the Azure Datalake Storage service allowing you to manipulate filesystems. -type Client base.Client[generated.FileSystemClient] +// ClientOptions contains the optional parameters when creating a Client. +type ClientOptions base.ClientOptions -// NewClient creates an instance of Client with the specified values. -// - serviceURL - the URL of the storage account e.g. https://.file.core.windows.net/ -// - cred - an Azure AD credential, typically obtained via the azidentity module -// - options - client options; pass nil to accept the default values -func NewClient(serviceURL string, cred azcore.TokenCredential, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil -} +// Client represents a URL to the Azure Datalake Storage service. 
+type Client base.CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client] + +//TODO: NewClient() // NewClientWithNoCredential creates an instance of Client with the specified values. // This is used to anonymously access a storage account or with a shared access signature (SAS) token. -// - serviceURL - the URL of the storage account e.g. https://.file.core.windows.net/? +// - filesystemURL - the URL of the filesystem e.g. https://.dfs.core.windows.net/? // - options - client options; pass nil to accept the default values -func NewClientWithNoCredential(serviceURL string, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientWithNoCredential(filesystemURL string, options *ClientOptions) (*Client, error) { + containerURL := strings.Replace(filesystemURL, ".dfs.", ".blob.", 1) + filesystemURL = strings.Replace(filesystemURL, ".blob.", ".dfs.", 1) + + conOptions := shared.GetClientOptions(options) + plOpts := runtime.PipelineOptions{} + base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) + + azClient, err := azcore.NewClient(shared.FilesystemClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions) + if err != nil { + return nil, err + } + + containerClientOpts := container.ClientOptions{ + ClientOptions: conOptions.ClientOptions, + } + blobContainerClient, _ := container.NewClientWithNoCredential(containerURL, &containerClientOpts) + fsClient := base.NewFilesystemClient(filesystemURL, containerURL, blobContainerClient, azClient, nil, (*base.ClientOptions)(conOptions)) + + return (*Client)(fsClient), nil } // NewClientWithSharedKeyCredential creates an instance of Client with the specified values. -// - serviceURL - the URL of the storage account e.g. 
https://.dfs.core.windows.net/ // - cred - a SharedKeyCredential created with the matching storage account and access key // - options - client options; pass nil to accept the default values -func NewClientWithSharedKeyCredential(serviceURL string, cred *SharedKeyCredential, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientWithSharedKeyCredential(filesystemURL string, cred *SharedKeyCredential, options *ClientOptions) (*Client, error) { + containerURL := strings.Replace(filesystemURL, ".dfs.", ".blob.", 1) + filesystemURL = strings.Replace(filesystemURL, ".blob.", ".dfs.", 1) + + authPolicy := exported.NewSharedKeyCredPolicy(cred) + conOptions := shared.GetClientOptions(options) + plOpts := runtime.PipelineOptions{ + PerRetry: []policy.Policy{authPolicy}, + } + base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) + + azClient, err := azcore.NewClient(shared.FilesystemClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions) + if err != nil { + return nil, err + } + + containerClientOpts := container.ClientOptions{ + ClientOptions: conOptions.ClientOptions, + } + blobSharedKeyCredential, _ := blob.NewSharedKeyCredential(cred.AccountName(), cred.AccountKey()) + blobContainerClient, _ := container.NewClientWithSharedKeyCredential(containerURL, blobSharedKeyCredential, &containerClientOpts) + fsClient := base.NewFilesystemClient(filesystemURL, containerURL, blobContainerClient, azClient, cred, (*base.ClientOptions)(conOptions)) + + return (*Client)(fsClient), nil } // NewClientFromConnectionString creates an instance of Client with the specified values. 
// - connectionString - a connection string for the desired storage account // - options - client options; pass nil to accept the default values -func NewClientFromConnectionString(connectionString string, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientFromConnectionString(connectionString string, options *ClientOptions) (*Client, error) { + parsed, err := shared.ParseConnectionString(connectionString) + if err != nil { + return nil, err + } + + if parsed.AccountKey != "" && parsed.AccountName != "" { + credential, err := exported.NewSharedKeyCredential(parsed.AccountName, parsed.AccountKey) + if err != nil { + return nil, err + } + return NewClientWithSharedKeyCredential(parsed.ServiceURL, credential, options) + } + + return NewClientWithNoCredential(parsed.ServiceURL, options) +} + +func (fs *Client) generatedFSClientWithDFS() *generated.FileSystemClient { + //base.SharedKeyComposite((*base.CompositeClient[generated.BlobClient, generated.BlockBlobClient])(bb)) + fsClientWithDFS, _, _ := base.InnerClients((*base.CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client])(fs)) + return fsClientWithDFS +} + +func (fs *Client) generatedFSClientWithBlob() *generated.FileSystemClient { + _, fsClientWithBlob, _ := base.InnerClients((*base.CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client])(fs)) + return fsClientWithBlob } -func (fs *Client) generated() *generated.FileSystemClient { - return base.InnerClient((*base.Client[generated.FileSystemClient])(fs)) +func (fs *Client) containerClient() *container.Client { + _, _, containerClient := base.InnerClients((*base.CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client])(fs)) + return containerClient } func (fs *Client) sharedKey() *exported.SharedKeyCredential { - return base.SharedKey((*base.Client[generated.FileSystemClient])(fs)) + return 
base.SharedKeyComposite((*base.CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client])(fs)) } // URL returns the URL endpoint used by the Client object. diff --git a/sdk/storage/azdatalake/internal/base/clients.go b/sdk/storage/azdatalake/internal/base/clients.go index 40c909a17596..e526f500a25e 100644 --- a/sdk/storage/azdatalake/internal/base/clients.go +++ b/sdk/storage/azdatalake/internal/base/clients.go @@ -7,12 +7,25 @@ package base import ( + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/container" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" ) +// ClientOptions contains the optional parameters when creating a Client. 
+type ClientOptions struct { + azcore.ClientOptions + pipelineOptions *runtime.PipelineOptions +} + type Client[T any] struct { inner *T sharedKey *exported.SharedKeyCredential + options *ClientOptions } func InnerClient[T any](client *Client[T]) *T { @@ -22,3 +35,64 @@ func InnerClient[T any](client *Client[T]) *T { func SharedKey[T any](client *Client[T]) *exported.SharedKeyCredential { return client.sharedKey } + +func GetClientOptions[T any](client *Client[T]) *ClientOptions { + return client.options +} + +func GetPipelineOptions(clOpts *ClientOptions) *runtime.PipelineOptions { + return clOpts.pipelineOptions +} + +func SetPipelineOptions(clOpts *ClientOptions, plOpts *runtime.PipelineOptions) { + clOpts.pipelineOptions = plOpts +} + +type CompositeClient[T, K, U any] struct { + // generated client with dfs + innerT *T + // generated client with blob + innerK *K + // blob client + innerU *U + sharedKey *exported.SharedKeyCredential + options *ClientOptions +} + +func InnerClients[T, K, U any](client *CompositeClient[T, K, U]) (*T, *K, *U) { + return client.innerT, client.innerK, client.innerU +} + +func SharedKeyComposite[T, K, U any](client *CompositeClient[T, K, U]) *exported.SharedKeyCredential { + return client.sharedKey +} + +func NewFilesystemClient(fsURL string, fsURLWithBlobEndpoint string, client *container.Client, azClient *azcore.Client, sharedKey *exported.SharedKeyCredential, options *ClientOptions) *CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client] { + return &CompositeClient[generated.FileSystemClient, generated.FileSystemClient, container.Client]{ + innerT: generated.NewFilesystemClient(fsURL, azClient), + innerK: generated.NewFilesystemClient(fsURLWithBlobEndpoint, azClient), + sharedKey: sharedKey, + innerU: client, + options: options, + } +} + +func NewServiceClient(serviceURL string, serviceURLWithBlobEndpoint string, client *service.Client, azClient *azcore.Client, sharedKey 
*exported.SharedKeyCredential, options *ClientOptions) *CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client] { + return &CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client]{ + innerT: generated.NewServiceClient(serviceURL, azClient), + innerK: generated.NewServiceClient(serviceURLWithBlobEndpoint, azClient), + sharedKey: sharedKey, + innerU: client, + options: options, + } +} + +func NewPathClient(dirURL string, dirURLWithBlobEndpoint string, client *blob.Client, azClient *azcore.Client, sharedKey *exported.SharedKeyCredential, options *ClientOptions) *CompositeClient[generated.PathClient, generated.PathClient, blob.Client] { + return &CompositeClient[generated.PathClient, generated.PathClient, blob.Client]{ + innerT: generated.NewPathClient(dirURL, azClient), + innerK: generated.NewPathClient(dirURLWithBlobEndpoint, azClient), + sharedKey: sharedKey, + innerU: client, + options: options, + } +} diff --git a/sdk/storage/azdatalake/internal/exported/shared_key_credential.go b/sdk/storage/azdatalake/internal/exported/shared_key_credential.go index 01934a0f14de..980b5a9a4f5a 100644 --- a/sdk/storage/azdatalake/internal/exported/shared_key_credential.go +++ b/sdk/storage/azdatalake/internal/exported/shared_key_credential.go @@ -28,7 +28,7 @@ import ( // NewSharedKeyCredential creates an immutable SharedKeyCredential containing the // storage account's name and either its primary or secondary key. func NewSharedKeyCredential(accountName string, accountKey string) (*SharedKeyCredential, error) { - c := SharedKeyCredential{accountName: accountName} + c := SharedKeyCredential{accountName: accountName, accountKeyString: accountKey} if err := c.SetAccountKey(accountKey); err != nil { return nil, err } @@ -38,8 +38,9 @@ func NewSharedKeyCredential(accountName string, accountKey string) (*SharedKeyCr // SharedKeyCredential contains an account's name and its primary or secondary key. 
type SharedKeyCredential struct { // Only the NewSharedKeyCredential method should set these; all other methods should treat them as read-only - accountName string - accountKey atomic.Value // []byte + accountName string + accountKey atomic.Value // []byte + accountKeyString string } // AccountName returns the Storage account's name. @@ -47,6 +48,11 @@ func (c *SharedKeyCredential) AccountName() string { return c.accountName } +// AccountKey returns the Storage account's key. +func (c *SharedKeyCredential) AccountKey() string { + return c.accountKeyString +} + // SetAccountKey replaces the existing account key with the specified account key. func (c *SharedKeyCredential) SetAccountKey(accountKey string) error { _bytes, err := base64.StdEncoding.DecodeString(accountKey) diff --git a/sdk/storage/azdatalake/internal/generated/filesystem_client.go b/sdk/storage/azdatalake/internal/generated/filesystem_client.go index 27c60f60cbcb..d35651c781b8 100644 --- a/sdk/storage/azdatalake/internal/generated/filesystem_client.go +++ b/sdk/storage/azdatalake/internal/generated/filesystem_client.go @@ -7,17 +7,28 @@ package generated import ( - "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" "time" ) +// used to convert times from UTC to GMT before sending across the wire +var gmt = time.FixedZone("GMT", 0) + func (client *FileSystemClient) Endpoint() string { return client.endpoint } -func (client *FileSystemClient) Pipeline() runtime.Pipeline { - return client.internal.Pipeline() +func (client *FileSystemClient) InternalClient() *azcore.Client { + return client.internal } -// used to convert times from UTC to GMT before sending across the wire -var gmt = time.FixedZone("GMT", 0) +// NewFilesystemClient creates a new instance of FileSystemClient with the specified values. +// - endpoint - The URL of the service account, share, directory or file that is the target of the desired operation. 
+// - azClient - azcore.Client is a basic HTTP client. It consists of a pipeline and tracing provider. +func NewFilesystemClient(endpoint string, azClient *azcore.Client) *FileSystemClient { + client := &FileSystemClient{ + internal: azClient, + endpoint: endpoint, + } + return client +} diff --git a/sdk/storage/azdatalake/internal/generated/path_client.go b/sdk/storage/azdatalake/internal/generated/path_client.go index 3de3766c9850..e12c424816c9 100644 --- a/sdk/storage/azdatalake/internal/generated/path_client.go +++ b/sdk/storage/azdatalake/internal/generated/path_client.go @@ -6,12 +6,25 @@ package generated -import "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" +import ( + "github.com/Azure/azure-sdk-for-go/sdk/azcore" +) func (client *PathClient) Endpoint() string { return client.endpoint } -func (client *PathClient) Pipeline() runtime.Pipeline { - return client.internal.Pipeline() +func (client *PathClient) InternalClient() *azcore.Client { + return client.internal +} + +// NewPathClient creates a new instance of PathClient with the specified values. +// - endpoint - The URL of the service account, share, directory or file that is the target of the desired operation. +// - azClient - azcore.Client is a basic HTTP client. It consists of a pipeline and tracing provider. 
+func NewPathClient(endpoint string, azClient *azcore.Client) *PathClient { + client := &PathClient{ + internal: azClient, + endpoint: endpoint, + } + return client } diff --git a/sdk/storage/azdatalake/internal/generated/service_client.go b/sdk/storage/azdatalake/internal/generated/service_client.go index 22f11c20a9fd..8c99594b7dbd 100644 --- a/sdk/storage/azdatalake/internal/generated/service_client.go +++ b/sdk/storage/azdatalake/internal/generated/service_client.go @@ -6,12 +6,25 @@ package generated -import "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" +import ( + "github.com/Azure/azure-sdk-for-go/sdk/azcore" +) func (client *ServiceClient) Endpoint() string { return client.endpoint } -func (client *ServiceClient) Pipeline() runtime.Pipeline { - return client.internal.Pipeline() +func (client *ServiceClient) InternalClient() *azcore.Client { + return client.internal +} + +// NewServiceClient creates a new instance of ServiceClient with the specified values. +// - endpoint - The URL of the service account, share, directory or file that is the target of the desired operation. +// - azClient - azcore.Client is a basic HTTP client. It consists of a pipeline and tracing provider. +func NewServiceClient(endpoint string, azClient *azcore.Client) *ServiceClient { + client := &ServiceClient{ + internal: azClient, + endpoint: endpoint, + } + return client } diff --git a/sdk/storage/azdatalake/internal/path/client.go b/sdk/storage/azdatalake/internal/path/client.go deleted file mode 100644 index bfeab81bcbcc..000000000000 --- a/sdk/storage/azdatalake/internal/path/client.go +++ /dev/null @@ -1,71 +0,0 @@ -//go:build go1.18 -// +build go1.18 - -// Copyright (c) Microsoft Corporation. All rights reserved. -// Licensed under the MIT License. See License.txt in the project root for license information. 
- -package path - -import ( - "context" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" -) - -// Client represents a URL to the Azure Datalake Storage service allowing you to manipulate paths. -type Client base.Client[generated.PathClient] - -func (p *Client) generated() *generated.PathClient { - return base.InnerClient((*base.Client[generated.PathClient])(p)) -} - -func (p *Client) sharedKey() *exported.SharedKeyCredential { - return base.SharedKey((*base.Client[generated.PathClient])(p)) -} - -// URL returns the URL endpoint used by the Client object. -func (p *Client) URL() string { - return "s.generated().Endpoint()" -} - -// SetAccessControl sets the owner, owning group, and permissions for a file or directory (dfs1). -func (p *Client) SetAccessControl(ctx context.Context, options *SetAccessControlOptions) (SetAccessControlResponse, error) { - return SetAccessControlResponse{}, nil -} - -// SetAccessControlRecursive sets the owner, owning group, and permissions for a file or directory (dfs1). -func (p *Client) SetAccessControlRecursive(ctx context.Context, options *SetAccessControlRecursiveOptions) (SetAccessControlRecursiveResponse, error) { - // TODO explicitly pass SetAccessControlRecursiveMode - return SetAccessControlRecursiveResponse{}, nil -} - -// UpdateAccessControlRecursive updates the owner, owning group, and permissions for a file or directory (dfs1). -func (p *Client) UpdateAccessControlRecursive(ctx context.Context, options *UpdateAccessControlRecursiveOptions) (UpdateAccessControlRecursiveResponse, error) { - // TODO explicitly pass SetAccessControlRecursiveMode - return SetAccessControlRecursiveResponse{}, nil -} - -// GetAccessControl gets the owner, owning group, and permissions for a file or directory (dfs1). 
-func (p *Client) GetAccessControl(ctx context.Context, options *GetAccessControlOptions) (GetAccessControlResponse, error) { - return GetAccessControlResponse{}, nil -} - -// RemoveAccessControlRecursive removes the owner, owning group, and permissions for a file or directory (dfs1). -func (p *Client) RemoveAccessControlRecursive(ctx context.Context, options *RemoveAccessControlRecursiveOptions) (RemoveAccessControlRecursiveResponse, error) { - // TODO explicitly pass SetAccessControlRecursiveMode - return SetAccessControlRecursiveResponse{}, nil -} - -// SetMetadata sets the metadata for a file or directory (blob3). -func (p *Client) SetMetadata(ctx context.Context, options *SetMetadataOptions) (SetMetadataResponse, error) { - // TODO: call directly into blob - return SetMetadataResponse{}, nil -} - -// SetHTTPHeaders sets the HTTP headers for a file or directory (blob3). -func (p *Client) SetHTTPHeaders(ctx context.Context, httpHeaders HTTPHeaders, options *SetHTTPHeadersOptions) (SetHTTPHeadersResponse, error) { - // TODO: call formatBlobHTTPHeaders() since we want to add the blob prefix to our options before calling into blob - // TODO: call into blob - return SetHTTPHeadersResponse{}, nil -} diff --git a/sdk/storage/azdatalake/internal/path/constants.go b/sdk/storage/azdatalake/internal/path/constants.go deleted file mode 100644 index d2d11defca1c..000000000000 --- a/sdk/storage/azdatalake/internal/path/constants.go +++ /dev/null @@ -1,43 +0,0 @@ -//go:build go1.18 -// +build go1.18 - -// Copyright (c) Microsoft Corporation. All rights reserved. -// Licensed under the MIT License. See License.txt in the project root for license information. 
- -package path - -import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" -) - -type ResourceType = generated.PathResourceType - -// TODO: consider the possibility of not exposing this and just pass it under the hood -const ( - ResourceTypeFile ResourceType = generated.PathResourceTypeFile - ResourceTypeDirectory ResourceType = generated.PathResourceTypeDirectory -) - -type RenameMode = generated.PathRenameMode - -// TODO: consider the possibility of not exposing this and just pass it under the hood -const ( - RenameModeLegacy RenameMode = generated.PathRenameModeLegacy - RenameModePosix RenameMode = generated.PathRenameModePosix -) - -type SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveMode - -const ( - SetAccessControlRecursiveModeSet SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeSet - SetAccessControlRecursiveModeModify SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeModify - SetAccessControlRecursiveModeRemove SetAccessControlRecursiveMode = generated.PathSetAccessControlRecursiveModeRemove -) - -type EncryptionAlgorithmType = blob.EncryptionAlgorithmType - -const ( - EncryptionAlgorithmTypeNone EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeNone - EncryptionAlgorithmTypeAES256 EncryptionAlgorithmType = blob.EncryptionAlgorithmTypeAES256 -) diff --git a/sdk/storage/azdatalake/internal/path/models.go b/sdk/storage/azdatalake/internal/path/models.go deleted file mode 100644 index 75cca62c85c8..000000000000 --- a/sdk/storage/azdatalake/internal/path/models.go +++ /dev/null @@ -1,227 +0,0 @@ -//go:build go1.18 -// +build go1.18 - -// Copyright (c) Microsoft Corporation. All rights reserved. -// Licensed under the MIT License. See License.txt in the project root for license information. 
- -package path - -import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" -) - -// SetAccessControlOptions contains the optional parameters when calling the SetAccessControl operation. dfs endpoint -type SetAccessControlOptions struct { - // Owner is the owner of the path. - Owner *string - // Group is the owning group of the path. - Group *string - // ACL is the access control list for the path. - ACL *string - // Permissions is the octal representation of the permissions for user, group and mask. - Permissions *string - // AccessConditions contains parameters for accessing the path. - AccessConditions *azdatalake.AccessConditions -} - -func (o *SetAccessControlOptions) format() (*generated.PathClientSetAccessControlOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { - if o == nil { - return nil, nil, nil, nil - } - // call path formatter since we're hitting dfs in this operation - leaseAccessConditions, modifiedAccessConditions := shared.FormatPathAccessConditions(o.AccessConditions) - return &generated.PathClientSetAccessControlOptions{ - Owner: o.Owner, - Group: o.Group, - ACL: o.ACL, - Permissions: o.Permissions, - }, leaseAccessConditions, modifiedAccessConditions, nil -} - -// GetAccessControlOptions contains the optional parameters when calling the GetAccessControl operation. -type GetAccessControlOptions struct { - // UPN is the user principal name. - UPN *bool - // AccessConditions contains parameters for accessing the path. 
- AccessConditions *azdatalake.AccessConditions -} - -func (o *GetAccessControlOptions) format() (*generated.PathClientGetPropertiesOptions, *generated.LeaseAccessConditions, *generated.ModifiedAccessConditions, error) { - action := generated.PathGetPropertiesActionGetAccessControl - if o == nil { - return &generated.PathClientGetPropertiesOptions{ - Action: &action, - }, nil, nil, nil - } - // call path formatter since we're hitting dfs in this operation - leaseAccessConditions, modifiedAccessConditions := shared.FormatPathAccessConditions(o.AccessConditions) - return &generated.PathClientGetPropertiesOptions{ - Upn: o.UPN, - Action: &action, - }, leaseAccessConditions, modifiedAccessConditions, nil -} - -// SetAccessControlRecursiveOptions contains the optional parameters when calling the SetAccessControlRecursive operation. TODO: Design formatter -type SetAccessControlRecursiveOptions struct { - // ACL is the access control list for the path. - ACL *string - // BatchSize is the number of paths to set access control recursively in a single call. - BatchSize *int32 - // MaxBatches is the maximum number of batches to perform the operation on. - MaxBatches *int32 - // ContinueOnFailure indicates whether to continue on failure when the operation encounters an error. - ContinueOnFailure *bool - // Marker is the continuation token to use when continuing the operation. - Marker *string -} - -func (o *SetAccessControlRecursiveOptions) format() (*generated.PathClientSetAccessControlRecursiveOptions, error) { - // TODO: design formatter - return nil, nil -} - -// UpdateAccessControlRecursiveOptions contains the optional parameters when calling the UpdateAccessControlRecursive operation. TODO: Design formatter -type UpdateAccessControlRecursiveOptions struct { - // ACL is the access control list for the path. - ACL *string - // BatchSize is the number of paths to set access control recursively in a single call. 
- BatchSize *int32 - // MaxBatches is the maximum number of batches to perform the operation on. - MaxBatches *int32 - // ContinueOnFailure indicates whether to continue on failure when the operation encounters an error. - ContinueOnFailure *bool - // Marker is the continuation token to use when continuing the operation. - Marker *string -} - -func (o *UpdateAccessControlRecursiveOptions) format() (*generated.PathClientSetAccessControlRecursiveOptions, error) { - // TODO: design formatter - similar to SetAccessControlRecursiveOptions - return nil, nil -} - -// RemoveAccessControlRecursiveOptions contains the optional parameters when calling the RemoveAccessControlRecursive operation. TODO: Design formatter -type RemoveAccessControlRecursiveOptions struct { - // ACL is the access control list for the path. - ACL *string - // BatchSize is the number of paths to set access control recursively in a single call. - BatchSize *int32 - // MaxBatches is the maximum number of batches to perform the operation on. - MaxBatches *int32 - // ContinueOnFailure indicates whether to continue on failure when the operation encounters an error. - ContinueOnFailure *bool - // Marker is the continuation token to use when continuing the operation. - Marker *string -} - -func (o *RemoveAccessControlRecursiveOptions) format() (*generated.PathClientSetAccessControlRecursiveOptions, error) { - // TODO: design formatter - similar to SetAccessControlRecursiveOptions - return nil, nil -} - -// SetHTTPHeadersOptions contains the optional parameters for the Client.SetHTTPHeaders method. -type SetHTTPHeadersOptions struct { - AccessConditions *azdatalake.AccessConditions -} - -func (o *SetHTTPHeadersOptions) format() *blob.SetHTTPHeadersOptions { - if o == nil { - return nil - } - accessConditions := shared.FormatBlobAccessConditions(o.AccessConditions) - return &blob.SetHTTPHeadersOptions{ - AccessConditions: accessConditions, - } -} - -// HTTPHeaders contains the HTTP headers for path operations. 
-type HTTPHeaders struct { - // Optional. Sets the path's cache control. If specified, this property is stored with the path and returned with a read request. - CacheControl *string - // Optional. Sets the path's Content-Disposition header. - ContentDisposition *string - // Optional. Sets the path's content encoding. If specified, this property is stored with the blobpath and returned with a read - // request. - ContentEncoding *string - // Optional. Set the path's content language. If specified, this property is stored with the path and returned with a read - // request. - ContentLanguage *string - // Specify the transactional md5 for the body, to be validated by the service. - ContentMD5 []byte - // Optional. Sets the path's content type. If specified, this property is stored with the path and returned with a read request. - ContentType *string -} - -func (o *HTTPHeaders) formatBlobHTTPHeaders() (*blob.HTTPHeaders, error) { - if o == nil { - return nil, nil - } - opts := blob.HTTPHeaders{ - BlobCacheControl: o.CacheControl, - BlobContentDisposition: o.ContentDisposition, - BlobContentEncoding: o.ContentEncoding, - BlobContentLanguage: o.ContentLanguage, - BlobContentMD5: o.ContentMD5, - BlobContentType: o.ContentType, - } - return &opts, nil -} - -func (o *HTTPHeaders) formatPathHTTPHeaders() (*generated.PathHTTPHeaders, error) { - // TODO: will be used for file related ops, like append - if o == nil { - return nil, nil - } - opts := generated.PathHTTPHeaders{ - CacheControl: o.CacheControl, - ContentDisposition: o.ContentDisposition, - ContentEncoding: o.ContentEncoding, - ContentLanguage: o.ContentLanguage, - ContentMD5: o.ContentMD5, - ContentType: o.ContentType, - TransactionalContentHash: o.ContentMD5, - } - return &opts, nil -} - -// SetMetadataOptions provides set of configurations for Set Metadata on path operation -type SetMetadataOptions struct { - AccessConditions *azdatalake.AccessConditions - CPKInfo *CPKInfo - CPKScopeInfo *CPKScopeInfo -} - -func 
(o *SetMetadataOptions) format() *blob.SetMetadataOptions { - if o == nil { - return nil - } - accessConditions := shared.FormatBlobAccessConditions(o.AccessConditions) - return &blob.SetMetadataOptions{ - AccessConditions: accessConditions, - CPKInfo: &blob.CPKInfo{ - EncryptionKey: o.CPKInfo.EncryptionKey, - EncryptionAlgorithm: o.CPKInfo.EncryptionAlgorithm, - EncryptionKeySHA256: o.CPKInfo.EncryptionKeySHA256, - }, - CPKScopeInfo: &blob.CPKScopeInfo{ - EncryptionScope: o.CPKScopeInfo.EncryptionScope, - }, - } -} - -// CPKInfo contains a group of parameters for the PathClient.Download method. -type CPKInfo struct { - EncryptionAlgorithm *EncryptionAlgorithmType - EncryptionKey *string - EncryptionKeySHA256 *string -} - -// CPKScopeInfo contains a group of parameters for the PathClient.SetMetadata method. -type CPKScopeInfo struct { - EncryptionScope *string -} - -// SourceModifiedAccessConditions identifies the source path access conditions. -type SourceModifiedAccessConditions = generated.SourceModifiedAccessConditions diff --git a/sdk/storage/azdatalake/internal/path/responses.go b/sdk/storage/azdatalake/internal/path/responses.go deleted file mode 100644 index e61078ead210..000000000000 --- a/sdk/storage/azdatalake/internal/path/responses.go +++ /dev/null @@ -1,42 +0,0 @@ -//go:build go1.18 -// +build go1.18 - -// Copyright (c) Microsoft Corporation. All rights reserved. -// Licensed under the MIT License. See License.txt in the project root for license information. - -package path - -import ( - "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" -) - -// CreateResponse contains the response fields for the Create operation. -type CreateResponse = generated.PathClientCreateResponse - -// DeleteResponse contains the response fields for the Delete operation. 
-type DeleteResponse = generated.PathClientDeleteResponse - -// SetAccessControlResponse contains the response fields for the SetAccessControl operation. -type SetAccessControlResponse = generated.PathClientSetAccessControlResponse - -// SetAccessControlRecursiveResponse contains the response fields for the SetAccessControlRecursive operation. -type SetAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse - -// UpdateAccessControlRecursiveResponse contains the response fields for the UpdateAccessControlRecursive operation. -type UpdateAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse - -// RemoveAccessControlRecursiveResponse contains the response fields for the RemoveAccessControlRecursive operation. -type RemoveAccessControlRecursiveResponse = generated.PathClientSetAccessControlRecursiveResponse - -// GetAccessControlResponse contains the response fields for the GetAccessControl operation. -type GetAccessControlResponse = generated.PathClientGetPropertiesResponse - -// GetPropertiesResponse contains the response fields for the GetProperties operation. -type GetPropertiesResponse = generated.PathClientGetPropertiesResponse - -// SetMetadataResponse contains the response fields for the SetMetadata operation. -type SetMetadataResponse = blob.SetMetadataResponse - -// SetHTTPHeadersResponse contains the response fields for the SetHTTPHeaders operation. 
-type SetHTTPHeadersResponse = blob.SetHTTPHeadersResponse diff --git a/sdk/storage/azdatalake/internal/shared/shared.go b/sdk/storage/azdatalake/internal/shared/shared.go index d94f84deb8a6..7fd977d8b059 100644 --- a/sdk/storage/azdatalake/internal/shared/shared.go +++ b/sdk/storage/azdatalake/internal/shared/shared.go @@ -24,6 +24,13 @@ const ( TokenScope = "https://storage.azure.com/.default" ) +const ( + ServiceClient = "azdatalake/service.Client" + FilesystemClient = "azdatalake/filesystem.Client" + DirectoryClient = "azdatalake/directory.Client" + FileClient = "azdatalake/file.Client" +) + const ( HeaderAuthorization = "Authorization" HeaderXmsDate = "x-ms-date" diff --git a/sdk/storage/azdatalake/service/client.go b/sdk/storage/azdatalake/service/client.go index 5d887ffddd9d..eeac42200d11 100644 --- a/sdk/storage/azdatalake/service/client.go +++ b/sdk/storage/azdatalake/service/client.go @@ -9,46 +9,97 @@ package service import ( "context" "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" - "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/filesystem" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/base" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/exported" "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/generated" + "github.com/Azure/azure-sdk-for-go/sdk/storage/azdatalake/internal/shared" + "strings" ) -// Client represents a URL to the Azure Datalake Storage service. -type Client base.Client[generated.ServiceClient] +// ClientOptions contains the optional parameters when creating a Client. +type ClientOptions base.ClientOptions -// NewClient creates an instance of Client with the specified values.
-// - serviceURL - the URL of the storage account e.g. https://.file.core.windows.net/ -// - cred - an Azure AD credential, typically obtained via the azidentity module -// - options - client options; pass nil to accept the default values -func NewClient(serviceURL string, cred azcore.TokenCredential, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil -} +// Client represents a URL to the Azure Datalake Storage service. +type Client base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client] // NewClientWithNoCredential creates an instance of Client with the specified values. -// This is used to anonymously access a storage account or with a shared access signature (SAS) token. -// - serviceURL - the URL of the storage account e.g. https://.file.core.windows.net/? -// - options - client options; pass nil to accept the default values -func NewClientWithNoCredential(serviceURL string, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +// - serviceURL - the URL of the storage account e.g. https://.dfs.core.windows.net/ +// - options - client options; pass nil to accept the default values. 
+func NewClientWithNoCredential(serviceURL string, options *ClientOptions) (*Client, error) { + blobServiceURL := strings.Replace(serviceURL, ".dfs.", ".blob.", 1) + datalakeServiceURL := strings.Replace(serviceURL, ".blob.", ".dfs.", 1) + + conOptions := shared.GetClientOptions(options) + plOpts := runtime.PipelineOptions{} + base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) + + azClient, err := azcore.NewClient(shared.ServiceClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions) + if err != nil { + return nil, err + } + + blobServiceClientOpts := service.ClientOptions{ + ClientOptions: conOptions.ClientOptions, + } + blobSvcClient, err := service.NewClientWithNoCredential(blobServiceURL, &blobServiceClientOpts) + if err != nil { + return nil, err + } + svcClient := base.NewServiceClient(datalakeServiceURL, blobServiceURL, blobSvcClient, azClient, nil, (*base.ClientOptions)(conOptions)) + + return (*Client)(svcClient), nil } // NewClientWithSharedKeyCredential creates an instance of Client with the specified values. -// - serviceURL - the URL of the storage account e.g. https://.file.core.windows.net/ +// - serviceURL - the URL of the storage account e.g.
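Because the composite client wraps both a Data Lake (`dfs`) and a Blob (`blob`) endpoint, the constructor normalizes whichever host form the caller passes. A minimal stdlib-only sketch of that endpoint-swap logic (helper names here are illustrative, not part of the SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// toBlobEndpoint swaps the Data Lake host segment for the Blob one,
// mirroring the strings.Replace call in NewClientWithNoCredential.
func toBlobEndpoint(serviceURL string) string {
	return strings.Replace(serviceURL, ".dfs.", ".blob.", 1)
}

// toDFSEndpoint performs the inverse substitution, so a caller may pass
// either form of the account URL and still reach both services.
func toDFSEndpoint(serviceURL string) string {
	return strings.Replace(serviceURL, ".blob.", ".dfs.", 1)
}

func main() {
	u := "https://myaccount.dfs.core.windows.net/"
	fmt.Println(toBlobEndpoint(u)) // https://myaccount.blob.core.windows.net/
	fmt.Println(toDFSEndpoint(toBlobEndpoint(u)))
}
```

Note that because each `strings.Replace` has a count of 1, a URL already in the target form passes through unchanged, which is why the constructor can apply both substitutions unconditionally.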
https://.dfs.core.windows.net/ // - cred - a SharedKeyCredential created with the matching storage account and access key // - options - client options; pass nil to accept the default values -func NewClientWithSharedKeyCredential(serviceURL string, cred *SharedKeyCredential, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientWithSharedKeyCredential(serviceURL string, cred *SharedKeyCredential, options *ClientOptions) (*Client, error) { + blobServiceURL := strings.Replace(serviceURL, ".dfs.", ".blob.", 1) + datalakeServiceURL := strings.Replace(serviceURL, ".blob.", ".dfs.", 1) + + authPolicy := exported.NewSharedKeyCredPolicy(cred) + conOptions := shared.GetClientOptions(options) + plOpts := runtime.PipelineOptions{ + PerRetry: []policy.Policy{authPolicy}, + } + base.SetPipelineOptions((*base.ClientOptions)(conOptions), &plOpts) + + azClient, err := azcore.NewClient(shared.ServiceClient, exported.ModuleVersion, plOpts, &conOptions.ClientOptions) + if err != nil { + return nil, err + } + + blobServiceClientOpts := service.ClientOptions{ + ClientOptions: conOptions.ClientOptions, + } + blobSharedKeyCredential, err := blob.NewSharedKeyCredential(cred.AccountName(), cred.AccountKey()) + if err != nil { + return nil, err + } + blobSvcClient, err := service.NewClientWithSharedKeyCredential(blobServiceURL, blobSharedKeyCredential, &blobServiceClientOpts) + if err != nil { + return nil, err + } + svcClient := base.NewServiceClient(datalakeServiceURL, blobServiceURL, blobSvcClient, azClient, cred, (*base.ClientOptions)(conOptions)) + + return (*Client)(svcClient), nil } // NewClientFromConnectionString creates an instance of Client with the specified values.
// - connectionString - a connection string for the desired storage account // - options - client options; pass nil to accept the default values -func NewClientFromConnectionString(connectionString string, options *azdatalake.ClientOptions) (*Client, error) { - return nil, nil +func NewClientFromConnectionString(connectionString string, options *ClientOptions) (*Client, error) { + parsed, err := shared.ParseConnectionString(connectionString) + if err != nil { + return nil, err + } + + if parsed.AccountKey != "" && parsed.AccountName != "" { + credential, err := exported.NewSharedKeyCredential(parsed.AccountName, parsed.AccountKey) + if err != nil { + return nil, err + } + return NewClientWithSharedKeyCredential(parsed.ServiceURL, credential, options) + } + + return NewClientWithNoCredential(parsed.ServiceURL, options) } // NewFilesystemClient creates a new filesystem.Client object by concatenating filesystemName to the end of this Client's URL. @@ -69,12 +120,23 @@ func (s *Client) NewFileClient(fileName string) *filesystem.Client { return nil } -func (s *Client) generated() *generated.ServiceClient { - return base.InnerClient((*base.Client[generated.ServiceClient])(s)) +func (s *Client) generatedFSClientWithDFS() *generated.ServiceClient { + svcClientWithDFS, _, _ := base.InnerClients((*base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client])(s)) + return svcClientWithDFS +} + +func (s *Client) generatedFSClientWithBlob() *generated.ServiceClient { + _, svcClientWithBlob, _ := base.InnerClients((*base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client])(s)) + return svcClientWithBlob +} + +func (s *Client) containerClient() *service.Client { + _, _, serviceClient := base.InnerClients((*base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client])(s)) + return serviceClient } func (s *Client) sharedKey() *exported.SharedKeyCredential { - return
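`NewClientFromConnectionString` above delegates parsing to the internal `shared.ParseConnectionString`, whose body is not shown in this diff. As an illustrative stand-in (the type and function names below are hypothetical, and the sketch assumes the standard `key=value;key=value` connection-string format with a `.dfs.core.windows.net` endpoint):

```go
package main

import (
	"fmt"
	"strings"
)

// parsedConnString holds the fields NewClientFromConnectionString inspects.
// It is a simplified stand-in for the internal parser's result type.
type parsedConnString struct {
	ServiceURL  string
	AccountName string
	AccountKey  string
}

// parseConnectionString extracts the account name and key from a
// semicolon-delimited connection string. Real parsing also honors
// DefaultEndpointsProtocol, EndpointSuffix, and explicit endpoints;
// this sketch hard-codes https and the core.windows.net suffix.
func parseConnectionString(cs string) (parsedConnString, error) {
	var p parsedConnString
	for _, part := range strings.Split(cs, ";") {
		k, v, ok := strings.Cut(part, "=")
		if !ok {
			continue
		}
		switch k {
		case "AccountName":
			p.AccountName = v
		case "AccountKey":
			p.AccountKey = v
		}
	}
	if p.AccountName == "" {
		return p, fmt.Errorf("connection string is missing AccountName")
	}
	p.ServiceURL = fmt.Sprintf("https://%s.dfs.core.windows.net/", p.AccountName)
	return p, nil
}

func main() {
	p, err := parseConnectionString("DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=c2VjcmV0;EndpointSuffix=core.windows.net")
	if err != nil {
		panic(err)
	}
	fmt.Println(p.ServiceURL)
}
```

The key behavior the constructor relies on is the branch afterwards: when both `AccountName` and `AccountKey` are present it builds a shared-key client, otherwise it falls back to the no-credential path (suitable for SAS URLs).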
base.SharedKey((*base.Client[generated.ServiceClient])(s)) + return base.SharedKeyComposite((*base.CompositeClient[generated.ServiceClient, generated.ServiceClient, service.Client])(s)) } // URL returns the URL endpoint used by the Client object.