Added codespell #2008

Merged 4 commits on Jan 5, 2023
24 changes: 24 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,24 @@
# GitHub Action to automate the identification of common misspellings in text files.
# https://github.com/codespell-project/actions-codespell
# https://github.com/codespell-project/codespell
name: codespell
on:
push:
branches:
- dev
- main
pull_request:
branches:
- dev
- main
jobs:
codespell:
name: Check for spelling errors
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: codespell-project/actions-codespell@master
with:
check_filenames: true
skip: ./sddl/sddlPortable_test.go,./sddl/sddlHelper_linux.go
ignore_words_list: "resue,pase,cancl,cacl,froms"
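
For contributors who want to reproduce the CI check before pushing, a minimal Go sketch is shown below. It simply shells out to the codespell CLI with the same skip and ignore lists as the workflow; it assumes codespell is installed locally (for example via pip) and is illustrative only, not part of this PR.

```go
// localspell.go: run the same codespell check locally that the workflow runs in CI.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("codespell",
		"--check-filenames",
		"--skip", "./sddl/sddlPortable_test.go,./sddl/sddlHelper_linux.go",
		"--ignore-words-list", "resue,pase,cancl,cacl,froms",
		".",
	)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		// codespell exits non-zero when it finds misspellings
		fmt.Fprintln(os.Stderr, "codespell reported issues:", err)
		os.Exit(1)
	}
}
```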
18 changes: 9 additions & 9 deletions ChangeLog.md
@@ -35,7 +35,7 @@
1. Fixed [issue 1506](https://github.com/Azure/azure-storage-azcopy/issues/1506): Added input watcher to resolve issue since job could not be resumed.
2. Fixed [issue 1794](https://github.com/Azure/azure-storage-azcopy/issues/1794): Moved log-level to root.go so log-level arguments do not get ignored.
3. Fixed [issue 1824](https://github.com/Azure/azure-storage-azcopy/issues/1824): Avoid creating .azcopy under HOME if plan/log location is specified elsewhere.
4. Fixed [isue 1830](https://github.com/Azure/azure-storage-azcopy/issues/1830), [issue 1412](https://github.com/Azure/azure-storage-azcopy/issues/1418), and [issue 873](https://github.com/Azure/azure-storage-azcopy/issues/873): Improved error message for when AzCopy cannot determine if source is directory.
4. Fixed [issue 1830](https://github.com/Azure/azure-storage-azcopy/issues/1830), [issue 1412](https://github.com/Azure/azure-storage-azcopy/issues/1418), and [issue 873](https://github.com/Azure/azure-storage-azcopy/issues/873): Improved error message for when AzCopy cannot determine if source is directory.
5. Fixed [issue 1777](https://github.com/Azure/azure-storage-azcopy/issues/1777): Fixed job list to handle respective output-type correctly.
6. Fixed win64 alignment issue.

@@ -191,7 +191,7 @@

### New features
1. Added option to [disable parallel blob listing](https://github.com/Azure/azure-storage-azcopy/pull/1263)
1. Added support for uploading [large files](https://github.com/Azure/azure-storage-azcopy/pull/1254/files) upto 4TiB. Please refer the [public documentation](https://docs.microsoft.com/en-us/rest/api/storageservices/create-file) for more information
1. Added support for uploading [large files](https://github.com/Azure/azure-storage-azcopy/pull/1254/files) up to 4TiB. Please refer the [public documentation](https://docs.microsoft.com/en-us/rest/api/storageservices/create-file) for more information
1. Added support for `include-before`flag. Refer [this](https://github.com/Azure/azure-storage-azcopy/issues/1075) for more information

### Bug fixes
@@ -469,7 +469,7 @@ disallowed because none (other than include-path) are respected.

1. The `*` character is no longer supported as a wildcard in URLs, except for the two exceptions
noted below. It remains supported in local file paths.
1. The first execption is that `/*` is still allowed at the very end of the "path" section of a
1. The first exception is that `/*` is still allowed at the very end of the "path" section of a
URL. This is illustrated by the difference between these two source URLs:
`https://account/container/virtual?SAS` and
`https://account/container/virtualDir/*?SAS`. The former copies the virtual directory
@@ -501,7 +501,7 @@ disallowed because none (other than include-path) are respected.
1. Percent complete is displayed as each job runs.
1. VHD files are auto-detected as page blobs.
1. A new benchmark mode allows quick and easy performance benchmarking of your network connection to
Blob Storage. Run AzCopy with the paramaters `bench --help` for details. This feature is in
Blob Storage. Run AzCopy with the parameters `bench --help` for details. This feature is in
Preview status.
1. The location for AzCopy's "plan" files can be specified with the environment variable
`AZCOPY_JOB_PLAN_LOCATION`. (If you move the plan files and also move the log files using the existing
1. Memory usage can be controlled by setting the new environment variable `AZCOPY_BUFFER_GB`.
Decimal values are supported. Actual usage will be the value specified, plus some overhead.
1. An extra integrity check has been added: the length of the
completed desination file is checked against that of the source.
completed destination file is checked against that of the source.
1. When downloading, AzCopy can automatically decompress blobs (or Azure Files) that have a
`Content-Encoding` of `gzip` or `deflate`. To enable this behaviour, supply the `--decompress`
parameter.
@@ -685,21 +685,21 @@ information, including those needed to set the new headers.

1. For creating MD5 hashes when uploading, version 10.x now has the OPPOSITE default to version
AzCopy 8.x. Specifically, as of version 10.0.9, MD5 hashes are NOT created by default. To create
Content-MD5 hashs when uploading, you must now specify `--put-md5` on the command line.
Content-MD5 hashes when uploading, you must now specify `--put-md5` on the command line.

### New features

1. Can migrate data directly from Amazon Web Services (AWS). In this high-performance data path
the data is read directly from AWS by the Azure Storage service. It does not need to pass through
the machine running AzCopy. The copy happens syncronously, so you can see its exact progress.
the machine running AzCopy. The copy happens synchronously, so you can see its exact progress.
1. Can migrate data directly from Azure Files or Azure Blobs (any blob type) to Azure Blobs (any
blob type). In this high-performance data path the data is read directly from the source by the
Azure Storage service. It does not need to pass through the machine running AzCopy. The copy
happens syncronously, so you can see its exact progress.
happens synchronously, so you can see its exact progress.
1. Sync command prompts with 4 options about deleting unneeded files from the target: Yes, No, All or
None. (Deletion only happens if the `--delete-destination` flag is specified).
1. Can download to /dev/null. This throws the data away - but is useful for testing raw network
performance unconstrained by disk; and also for validing MD5 hashes in bulk (when run in a cloud
performance unconstrained by disk; and also for validating MD5 hashes in bulk (when run in a cloud
VM in the same region as the Storage account)

### Bug fixes
2 changes: 1 addition & 1 deletion azbfs/parsing_urls.go
@@ -20,7 +20,7 @@ type BfsURLParts struct {
isIPEndpointStyle bool // Ex: "https://ip/accountname/filesystem"
}

// isIPEndpointStyle checkes if URL's host is IP, in this case the storage account endpoint will be composed as:
// isIPEndpointStyle checks if URL's host is IP, in this case the storage account endpoint will be composed as:
// http(s)://IP(:port)/storageaccount/share(||container||etc)/...
func isIPEndpointStyle(url url.URL) bool {
return net.ParseIP(url.Host) != nil
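
As an aside on the function whose comment is corrected here, a small self-contained sketch (not the project's code verbatim) shows how the IP-endpoint-style check behaves: net.ParseIP returns non-nil only when the host is a literal IP, which is how URLs like https://ip/accountname/filesystem are told apart from DNS-style endpoints.

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// isIPEndpointStyle reports whether the URL's host is a literal IP address.
// Note a host carrying an explicit port would not parse as an IP here.
func isIPEndpointStyle(u url.URL) bool {
	return net.ParseIP(u.Host) != nil
}

func main() {
	for _, raw := range []string{
		"https://10.0.0.4/myaccount/myfilesystem",             // IP endpoint style -> true
		"https://myaccount.dfs.core.windows.net/myfilesystem", // DNS style -> false
	} {
		u, _ := url.Parse(raw)
		fmt.Println(raw, isIPEndpointStyle(*u))
	}
}
```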
2 changes: 1 addition & 1 deletion azbfs/zc_credential_token.go
@@ -25,7 +25,7 @@ type TokenCredential interface {
// indicating how long the TokenCredential object should wait before calling your tokenRefresher function again.
func NewTokenCredential(initialToken string, tokenRefresher func(credential TokenCredential) time.Duration) TokenCredential {
tc := &tokenCredential{}
tc.SetToken(initialToken) // We dont' set it above to guarantee atomicity
tc.SetToken(initialToken) // We don't set it above to guarantee atomicity
if tokenRefresher == nil {
return tc // If no callback specified, return the simple tokenCredential
}
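
The comment fixed above sits on NewTokenCredential, whose contract is that the refresher callback returns how long to wait before it is invoked again. Below is a simplified, self-contained sketch of that callback pattern; the names and initial delay are illustrative, not the azbfs API.

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

type tokenCredential struct {
	token atomic.Value // holds the current token string
}

func (tc *tokenCredential) SetToken(t string) { tc.token.Store(t) }
func (tc *tokenCredential) Token() string     { return tc.token.Load().(string) }

func newTokenCredential(initial string, refresher func(tc *tokenCredential) time.Duration) *tokenCredential {
	tc := &tokenCredential{}
	tc.SetToken(initial) // set via the setter so reads and writes stay atomic
	if refresher == nil {
		return tc
	}
	go func() {
		wait := 50 * time.Millisecond // hypothetical initial delay for the sketch
		for {
			time.Sleep(wait)
			wait = refresher(tc) // refresher updates the token and schedules its next run
		}
	}()
	return tc
}

func main() {
	cred := newTokenCredential("initial-token", func(tc *tokenCredential) time.Duration {
		tc.SetToken("refreshed-" + time.Now().Format(time.RFC3339Nano))
		return 50 * time.Millisecond
	})
	time.Sleep(120 * time.Millisecond)
	fmt.Println(cred.Token())
}
```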
8 changes: 4 additions & 4 deletions cmd/credentialUtil.go
@@ -90,7 +90,7 @@ func GetOAuthTokenManagerInstance() (*common.UserOAuthTokenManager, error) {
glcm.Error("Invalid Auto-login type specified.")
return
}

if tenantID := glcm.GetEnvironmentVariable(common.EEnvironmentVariable.TenantID()); tenantID != "" {
lca.tenantID = tenantID
}
@@ -470,7 +470,7 @@ func checkAuthSafeForTarget(ct common.CredentialType, resource, extraSuffixesAAD
// something like https://someApi.execute-api.someRegion.amazonaws.com is AWS but is a customer-
// written code, not S3.
ok := false
host := "<unparseable url>"
host := "<unparsable url>"
u, err := url.Parse(resource)
if err == nil {
host = u.Host
@@ -483,14 +483,14 @@ func checkAuthSafeForTarget(ct common.CredentialType, resource, extraSuffixesAAD

if !ok {
return fmt.Errorf(
"s3 authentication to %s is not currently suported in AzCopy", host)
"s3 authentication to %s is not currently supported in AzCopy", host)
}
case common.ECredentialType.GoogleAppCredentials():
if resourceType != common.ELocation.GCP() {
return fmt.Errorf("Google Application Credentials to %s is not valid", resourceType.String())
}

host := "<unparseable url>"
host := "<unparsable url>"
u, err := url.Parse(resource)
if err == nil {
host = u.Host
2 changes: 1 addition & 1 deletion cmd/pathUtils.go
@@ -297,7 +297,7 @@ func splitQueryFromSaslessResource(resource string, loc common.Location) (mainUr
if u, err := url.Parse(resource); err == nil && u.Query().Get("sig") != "" {
panic("this routine can only be called after the SAS has been removed")
// because, for security reasons, we don't want SASs returned in queryAndFragment, since
// we wil persist that (but we don't want to persist SAS's)
// we will persist that (but we don't want to persist SAS's)
}

// Work directly with a string-based format, so that we get both snapshot identifiers AND any other unparsed params
4 changes: 2 additions & 2 deletions cmd/zt_copy_file_file_test.go
@@ -153,7 +153,7 @@ func (s *cmdIntegrationSuite) TestFileCopyS2SWithIncludeFlag(c *chk.C) {
raw.include = includeString
raw.recursive = true

// verify that only the files specified by the include flag are copyed
// verify that only the files specified by the include flag are copied
runCopyAndVerify(c, raw, func(err error) {
c.Assert(err, chk.IsNil)
validateS2STransfersAreScheduled(c, "/", "/", filesToInclude, mockedRPC)
@@ -232,7 +232,7 @@ func (s *cmdIntegrationSuite) TestFileCopyS2SWithIncludeAndExcludeFlag(c *chk.C)
raw.exclude = excludeString
raw.recursive = true

// verify that only the files specified by the include flag are copyed
// verify that only the files specified by the include flag are copied
runCopyAndVerify(c, raw, func(err error) {
c.Assert(err, chk.IsNil)
validateS2STransfersAreScheduled(c, "/", "/", filesToInclude, mockedRPC)
2 changes: 1 addition & 1 deletion cmd/zt_generic_filter_test.go
@@ -175,7 +175,7 @@ func (_ *genericFilterSuite) findAmbiguousTime() (string, time.Time, time.Time,
localString := u.Local().Format(localTimeFormat)
hourLaterLocalString := u.Add(time.Hour).Local().Format(localTimeFormat)
if localString == hourLaterLocalString {
// return the string, and the two UTC times that map to that local time (with their fractional seconds trucated away)
// return the string, and the two UTC times that map to that local time (with their fractional seconds truncated away)
return localString, u.Truncate(time.Second), u.Add(time.Hour).Truncate(time.Second), nil
}
}
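
The comment corrected above describes the idea behind findAmbiguousTime: during a DST fall-back transition, two UTC instants an hour apart format to the same local wall-clock string. A standalone sketch of that scan (purely illustrative, not the test helper itself) is below.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05" // local time without zone info
	u := time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC)
	for i := 0; i < 365*24; i++ {
		local := u.Local().Format(layout)
		hourLater := u.Add(time.Hour).Local().Format(layout)
		if local == hourLater {
			fmt.Println("ambiguous local time:", local,
				"maps to", u, "and", u.Add(time.Hour))
			return
		}
		u = u.Add(time.Hour)
	}
	fmt.Println("no ambiguous time found (local zone may not observe DST)")
}
```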
4 changes: 2 additions & 2 deletions cmd/zt_scenario_helpers_for_test.go
@@ -466,7 +466,7 @@ func (scenarioHelper) generateCommonRemoteScenarioForS3(c *chk.C, client *minio.
objectName5 := createNewObject(c, client, bucketName, prefix+specialNames[i])

// Note: common.AZCOPY_PATH_SEPARATOR_STRING is added before bucket or objectName, as in the change minimize JobPartPlan file size,
// transfer.Source & transfer.Destination(after trimed the SourceRoot and DestinationRoot) are with AZCOPY_PATH_SEPARATOR_STRING suffix,
// transfer.Source & transfer.Destination(after trimming the SourceRoot and DestinationRoot) are with AZCOPY_PATH_SEPARATOR_STRING suffix,
// when user provided source & destination are without / suffix, which is the case for scenarioHelper generated URL.

bucketPath := ""
@@ -496,7 +496,7 @@ func (scenarioHelper) generateCommonRemoteScenarioForGCP(c *chk.C, client *gcpUt
objectName5 := createNewGCPObject(c, client, bucketName, prefix+specialNames[i])

// Note: common.AZCOPY_PATH_SEPARATOR_STRING is added before bucket or objectName, as in the change minimize JobPartPlan file size,
// transfer.Source & transfer.Destination(after trimed the SourceRoot and DestinationRoot) are with AZCOPY_PATH_SEPARATOR_STRING suffix,
// transfer.Source & transfer.Destination(after trimming the SourceRoot and DestinationRoot) are with AZCOPY_PATH_SEPARATOR_STRING suffix,
// when user provided source & destination are without / suffix, which is the case for scenarioHelper generated URL.

bucketPath := ""
2 changes: 1 addition & 1 deletion common/azError.go
@@ -27,7 +27,7 @@ type AzError struct {
additonalInfo string
}

// NewAzError composes an AzError with given code and messgae
// NewAzError composes an AzError with given code and message
func NewAzError(base AzError, additionalInfo string) AzError {
base.additonalInfo = additionalInfo
return base
4 changes: 2 additions & 2 deletions common/chunkStatusLogger.go
@@ -276,7 +276,7 @@ func NewChunkStatusLogger(jobID JobID, cpuMon CPUMonitor, logFileFolder string,
}

func numWaitReasons() int32 {
return EWaitReason.Cancelled().index + 1 // assume that maitainers follow the comment above to always keep Cancelled as numerically the greatest one
return EWaitReason.Cancelled().index + 1 // assume that maintainers follow the comment above to always keep Cancelled as numerically the greatest one
}

type chunkStatusCount struct {
@@ -538,7 +538,7 @@ DateTime? ParseStart(string s)
}
}

// convert to real datetime (default unparseable ones to a fixed value, simply to avoid needing to deal with nulls below, and because all valid records should be parseable. Only exception would be something partially written a time of a crash)
// convert to real datetime (default unparsable ones to a fixed value, simply to avoid needing to deal with nulls below, and because all valid records should be parseable. Only exception would be something partially written a time of a crash)
var parsed = data.Select(d => new { d.Name, d.Offset, d.State, StateStartTime = ParseStart(d.StateStartTime) ?? DateTime.MaxValue}).ToList();

var grouped = parsed.GroupBy(c => new {c.Name, c.Offset});
2 changes: 1 addition & 1 deletion common/credCache_darwin.go
@@ -60,7 +60,7 @@ func NewCredCache(options CredCacheOptions) *CredCache {
}
}

// keychain is used for intenal integration as well.
// keychain is used for internal integration as well.
var NewCredCacheInternalIntegration = NewCredCache

// HasCachedToken returns if there is cached token for current executing user.
2 changes: 1 addition & 1 deletion common/credCache_linux.go
@@ -30,7 +30,7 @@ import (
)

// CredCache manages credential caches.
// Use keyring in Linux OS. Session keyring is choosed,
// Use keyring in Linux OS. Session keyring is chosen,
// the session hooks key should be created since user first login (i.e. by pam).
// So the session is inherited by processes created from login session.
// When user logout, the session keyring is recycled.
2 changes: 1 addition & 1 deletion common/credentialFactory.go
@@ -116,7 +116,7 @@ func CreateBlobCredential(ctx context.Context, credInfo CredentialInfo, options
}

// refreshPolicyHalfOfExpiryWithin is used for calculating next refresh time,
// it checkes how long it will be before the token get expired, and use half of the value as
// it checks how long it will be before the token get expired, and use half of the value as
// duration to wait.
func refreshPolicyHalfOfExpiryWithin(token *adal.Token, options CredentialOpOptions) time.Duration {
if token == nil {
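
The corrected comment describes a "refresh when half of the remaining validity has elapsed" policy. A rough sketch of that calculation is below; it is illustrative only, not the AzCopy implementation, and the one-second floor is an assumption added for the example.

```go
package main

import (
	"fmt"
	"time"
)

// refreshWaitHalfOfExpiry waits half of the time left until expiry, with a
// floor so a nearly expired token still triggers a prompt refresh.
func refreshWaitHalfOfExpiry(expiresOn, now time.Time) time.Duration {
	wait := expiresOn.Sub(now) / 2
	if wait < time.Second {
		wait = time.Second // hypothetical floor; the real policy may differ
	}
	return wait
}

func main() {
	now := time.Now()
	fmt.Println(refreshWaitHalfOfExpiry(now.Add(60*time.Minute), now)) // ~30m
	fmt.Println(refreshWaitHalfOfExpiry(now.Add(90*time.Second), now)) // ~45s
}
```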
2 changes: 1 addition & 1 deletion common/iff.go
@@ -20,7 +20,7 @@

package common

// GetBlocksRoundedUp returns the number of blocks given sie, rounded up
// GetBlocksRoundedUp returns the number of blocks given size, rounded up
func GetBlocksRoundedUp(size uint64, blockSize uint64) uint16 {
return uint16(size/blockSize) + Iffuint16((size%blockSize) == 0, 0, 1)
}
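
The fixed comment sits on GetBlocksRoundedUp, which is a ceiling division. A quick check of the arithmetic in plain form (a sketch, not the package code):

```go
package main

import "fmt"

func blocksRoundedUp(size, blockSize uint64) uint64 {
	blocks := size / blockSize
	if size%blockSize != 0 {
		blocks++ // a partial trailing block still needs its own slot
	}
	return blocks
}

func main() {
	fmt.Println(blocksRoundedUp(10, 4)) // 3: two full blocks plus one partial
	fmt.Println(blocksRoundedUp(8, 4))  // 2: exact multiple, no extra block
}
```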
2 changes: 1 addition & 1 deletion common/lifecyleMgr.go
@@ -619,7 +619,7 @@ func (_ *lifecycleMgr) awaitChannel(ch chan struct{}, timeout time.Duration) {
}
}

// E2EAwaitContinue is used in case where a developer want's to debug AzCopy by attaching to the running process,
// E2EAwaitContinue is used in case where a developer wants to debug AzCopy by attaching to the running process,
// before it starts doing any actual work.
func (lcm *lifecycleMgr) E2EAwaitContinue() {
lcm.e2eAllowAwaitContinue = true // not technically gorountine safe (since its shared state) but its consistent with EnableInputWatcher
11 changes: 6 additions & 5 deletions common/oauthTokenManager.go
@@ -108,9 +108,10 @@ func newAzcopyHTTPClient() *http.Client {
}

// GetTokenInfo gets token info, it follows rule:
// 1. If there is token passed from environment variable(note this is only for testing purpose),
// use token passed from environment variable.
// 2. Otherwise, try to get token from cache.
// 1. If there is token passed from environment variable(note this is only for testing purpose),
// use token passed from environment variable.
// 2. Otherwise, try to get token from cache.
//
// This method either successfully return token, or return error.
func (uotm *UserOAuthTokenManager) GetTokenInfo(ctx context.Context) (*OAuthTokenInfo, error) {
if uotm.stashedInfo != nil {
@@ -508,7 +509,7 @@ func (uotm *UserOAuthTokenManager) UserLogin(tenantID, activeDirectoryEndpoint s
// getCachedTokenInfo get a fresh token from local disk cache.
// If access token is expired, it will refresh the token.
// If refresh token is expired, the method will fail and return failure reason.
// Fresh token is persisted if acces token or refresh token is changed.
// Fresh token is persisted if access token or refresh token is changed.
func (uotm *UserOAuthTokenManager) getCachedTokenInfo(ctx context.Context) (*OAuthTokenInfo, error) {
hasToken, err := uotm.credCache.HasCachedToken()
if err != nil {
@@ -592,7 +593,7 @@ func (uotm *UserOAuthTokenManager) getTokenInfoFromEnvVar(ctx context.Context) (
}

// Remove the env var after successfully fetching once,
// in case of env var is further spreading into child processes unexpectly.
// in case of env var is further spreading into child processes unexpectedly.
lcm.ClearEnvironmentVariable(EEnvironmentVariable.OAuthTokenInfo())

tokenInfo, err := jsonToTokenInfo([]byte(rawToken))
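
The reworded GetTokenInfo comment documents a lookup order: prefer a token injected through an environment variable (a test-only path), clear that variable after the first read so it does not spread to child processes, and otherwise fall back to the cache. A simplified sketch of that flow is below; the variable name and helper are hypothetical, chosen only for the example.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

const oauthTokenEnvVar = "AZCOPY_OAUTH_TOKEN_INFO" // hypothetical name for this sketch

func getTokenInfo(loadFromCache func() (string, error)) (string, error) {
	if raw := os.Getenv(oauthTokenEnvVar); raw != "" {
		// Remove the variable after a successful read so it is not inherited
		// unexpectedly by processes spawned later.
		os.Unsetenv(oauthTokenEnvVar)
		return raw, nil
	}
	return loadFromCache()
}

func main() {
	os.Setenv(oauthTokenEnvVar, `{"access_token":"test"}`)
	tok, err := getTokenInfo(func() (string, error) {
		return "", errors.New("no cached token")
	})
	fmt.Println(tok, err)
}
```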
2 changes: 1 addition & 1 deletion common/randomDataGenerator.go
@@ -121,7 +121,7 @@ func (r *randomDataGenerator) freshenRandomData(count int) {

// ALSO flip random bits in every yth one (where y is much smaller than the x we used above)
// This is not as random as what we do above, but its faster. And without it, the data is too compressible
var skipSize = 2 // with skip-size = 3 its slightly faster, and still uncompressible with zip but it is
var skipSize = 2 // with skip-size = 3 its slightly faster, and still incompressible with zip but it is
// compressible (down to 30% of original size) with 7zip's compression
bitFlipMask := byte(r.randGen.Int31n(128)) + 128
for i := r.readIterationCount % skipSize; i < count; i += skipSize {
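
A toy version of the bit-flipping trick described in the hunk above (sketch only): XOR-ing a random mask with the high bit always set into every skipSize-th byte is far cheaper than regenerating the whole buffer, yet keeps the data hard to compress.

```go
package main

import (
	"fmt"
	"math/rand"
)

func freshen(buf []byte, skipSize int, rng *rand.Rand) {
	mask := byte(rng.Int31n(128)) + 128 // 128..255, so the top bit is always set
	for i := rng.Intn(skipSize); i < len(buf); i += skipSize {
		buf[i] ^= mask // flip the bits selected by the mask
	}
}

func main() {
	rng := rand.New(rand.NewSource(1))
	buf := make([]byte, 16)
	rng.Read(buf)
	fmt.Printf("before: % x\n", buf)
	freshen(buf, 2, rng)
	fmt.Printf("after:  % x\n", buf)
}
```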
2 changes: 1 addition & 1 deletion common/rpc-models.go
@@ -108,7 +108,7 @@ func ConsolidatePathSeparators(path string) string {
// //////////////////////////////////////////////////////////////////////////////////////////////////////////////////////

// Transfers describes each file/folder being transferred in a given JobPartOrder, and
// other auxilliary details of this order.
// other auxiliary details of this order.
type Transfers struct {
List []CopyTransfer
TotalSizeInBytes uint64
4 changes: 2 additions & 2 deletions common/s3URLParts.go
@@ -64,7 +64,7 @@ const s3EssentialHostPart = "amazonaws.com"

var s3HostRegex = regexp.MustCompile(s3HostPattern)

// IsS3URL verfies if a given URL points to S3 URL supported by AzCopy-v10
// IsS3URL verifies if a given URL points to S3 URL supported by AzCopy-v10
func IsS3URL(u url.URL) bool {
if _, isS3URL := findS3URLMatches(strings.ToLower(u.Host)); isS3URL {
return true
@@ -102,7 +102,7 @@ func NewS3URLParts(u url.URL) (S3URLParts, error) {
}

// Check what's the path style, and parse accordingly.
if matchSlices[1] != "" { // Go's implementatoin is a bit strange, even if the first subexp fail to be matched, "" will be returned for that sub exp
if matchSlices[1] != "" { // Go's implementation is a bit strange, even if the first subexp fail to be matched, "" will be returned for that sub exp
// In this case, it would be in virtual-hosted-style URL, and has host prefix like bucket.s3[-.]
up.BucketName = matchSlices[1][:len(matchSlices[1])-1] // Removing the trailing '.' at the end
up.ObjectKey = path
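
For context on the parser touched above, here is a much-simplified sketch of the two S3 URL styles it has to distinguish. This is not AzCopy's actual regex, only an illustration: virtual-hosted-style puts the bucket in the host, path-style puts it in the path.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

func bucketAndKey(raw string) (bucket, key string) {
	u, _ := url.Parse(raw)
	path := strings.TrimPrefix(u.Path, "/")
	if host := u.Host; strings.Contains(host, ".s3") && !strings.HasPrefix(host, "s3") {
		// virtual-hosted-style: bucket.s3.region.amazonaws.com/key
		return strings.SplitN(host, ".", 2)[0], path
	}
	// path-style: s3.region.amazonaws.com/bucket/key
	parts := strings.SplitN(path, "/", 2)
	bucket = parts[0]
	if len(parts) > 1 {
		key = parts[1]
	}
	return bucket, key
}

func main() {
	fmt.Println(bucketAndKey("https://mybucket.s3.us-east-1.amazonaws.com/dir/obj.txt"))
	fmt.Println(bucketAndKey("https://s3.us-east-1.amazonaws.com/mybucket/dir/obj.txt"))
}
```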