
Error to upload files to Azure blob storage #209

Open
italoveloso89 opened this issue Dec 13, 2021 · 10 comments


italoveloso89 commented Dec 13, 2021

Version report

Jenkins and plugins versions report:

Jenkins version: 2.303.3

Azure AD Plugin - 185.v3b416408dcb1
Azure Artifact Manager plugin - 97.v074e1332e88d
Azure CLI Plugin - 0.9
Azure Credentials - 198.vf9c2fdfde55c
Azure SDK API Plugin - 70.v63f6a95999a7
Azure Storage plugin - 365.vf41653c43b01
Azure VM Agents - 799.va4c741108611

  • What Operating System are you using (both controller, and any agents involved in the problem)?
    Windows Server 2019 (controller only)

Reproduction steps

  • Upload large files (1 GB+)

  • Build config:
    azureUpload blobProperties: [contentLanguage: 'en-US'], containerName: 'testproject', filesPath: '**/Deploy/*.zip, **/Deploy/*.txt', storageCredentialId: 'TestUploadArtifactsAzure', storageType: 'blobstorage', virtualPath: '${P4_CHANGELIST}'

Results

Expected result:

Upload files to Azure blob storage

Actual result:

13:59:59  ERROR: AzureStorage - Error occurred while uploading to Azure - lumetobuilds
13:59:59  com.microsoftopentechnologies.windowsazurestorage.exceptions.WAStorageException: Fail to upload individual files to blob
13:59:59  	at com.microsoftopentechnologies.windowsazurestorage.service.UploadToBlobService.uploadIndividuals(UploadToBlobService.java:138)
13:59:59  	at com.microsoftopentechnologies.windowsazurestorage.service.UploadService.execute(UploadService.java:545)
13:59:59  	at com.microsoftopentechnologies.windowsazurestorage.WAStoragePublisher.perform(WAStoragePublisher.java:472)
13:59:59  	at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:100)
13:59:59  	at org.jenkinsci.plugins.workflow.steps.CoreStep$Execution.run(CoreStep.java:70)
13:59:59  	at org.jenkinsci.plugins.workflow.steps.SynchronousNonBlockingStepExecution.lambda$start$0(SynchronousNonBlockingStepExecution.java:47)
13:59:59  	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
13:59:59  	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
13:59:59  	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
13:59:59  	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
13:59:59  	at java.base/java.lang.Thread.run(Thread.java:834)
13:59:59  Caused by: java.nio.file.AccessDeniedException: F:\P4\SampleAutomation\DeEscalationXR\Deploy\Android-Development-29967.zip
13:59:59  	at java.base/sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:89)
13:59:59  	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:103)
13:59:59  	at java.base/sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:108)
13:59:59  	at java.base/sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:235)
13:59:59  	at java.base/java.nio.file.Files.newByteChannel(Files.java:371)
13:59:59  	at java.base/java.nio.file.Files.newByteChannel(Files.java:422)
13:59:59  	at java.base/java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:420)
13:59:59  	at java.base/java.nio.file.Files.newInputStream(Files.java:156)
13:59:59  	at hudson.FilePath.newInputStreamDenyingSymlinkAsNeeded(FilePath.java:2127)
13:59:59  	at hudson.FilePath.read(FilePath.java:2112)
13:59:59  	at hudson.FilePath.read(FilePath.java:2104)
13:59:59  	at com.microsoftopentechnologies.windowsazurestorage.AzureBlobProperties.detectContentType(AzureBlobProperties.java:120)
13:59:59  	at com.microsoftopentechnologies.windowsazurestorage.AzureBlobProperties.configure(AzureBlobProperties.java:107)
13:59:59  	at com.microsoftopentechnologies.windowsazurestorage.service.UploadToBlobService.configureBlobProperties(UploadToBlobService.java:155)
13:59:59  	at com.microsoftopentechnologies.windowsazurestorage.service.UploadToBlobService.uploadIndividuals(UploadToBlobService.java:125)
@italoveloso89 (Author)

Hi guys, any idea why it is not uploading large zip files?
Small files like .txt and .exe upload easily, no problem.

@JonathanKayumbo

Facing the same issue. Does anyone have a solution or an alternative?

@italoveloso89 (Author)

Hello @JonathanKayumbo, I used another solution that calls AzCopy:
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10
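For reference, the AzCopy invocation is roughly the following (a sketch only; the file name, `<account>`, `<container>`, and `<sas-token>` are all placeholders, see the doc link above for authentication options):

```shell
# Upload a single large file to a blob container using a SAS token.
# All URL components in angle brackets are placeholders.
azcopy copy "Deploy/build.zip" \
  "https://<account>.blob.core.windows.net/<container>/build.zip?<sas-token>"
```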

@JonathanKayumbo

> Hello @JonathanKayumbo , I used another solution that calls Azcopy. https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10

Thanks. I could also use the az CLI to do that, but I was looking for a Jenkins plugin so that I do not have to install AzCopy or the az CLI on agents.


timja commented Mar 17, 2022

If you can provide a reproducible case from scratch, e.g. a pipeline, I can take a look.

IIRC, last time I checked this I couldn't reproduce it.


JonathanKayumbo commented Mar 18, 2022

> If you can provide a reproduce-able case from scratch e.g. a pipeline I can take a look.
>
> IIRC last I checked this I couldn't reproduce

Okay, great.
Below is an example pipeline. It seems that a file below a certain size works, but a larger file does not.

pipeline {
    agent {
        node {
            label 'u20'
            customWorkspace "${BRANCH_NAME}"
        }
    }
    options {
        skipDefaultCheckout()
        timeout(time: 25, unit: 'MINUTES')
    }
    stages { 
        stage ("Checkout & Stash") {
            parallel {
                stage ("ub20"){
                    agent {
                        node {
                            label 'u16'
                            customWorkspace "${BRANCH_NAME}"
                        }
                    }
                    steps {
                        script {
                            def containerName = "testcontainer"
                            sh "mkdir testdir"
                            def ArtifactoryServerURL = 'https://artifactoryserver:433/artifactory'
                            def download_list = [
                                "toolchains/arm/ARMCompiler/1.16/ARMCompiler01.tar.gz", // size = 186MB
                                "toolchains/arm/ARMCompiler/6.16/ARMCompiler02.tar.gz", // size = 286MB

                                ]
                            for (item in download_list){
                                withCredentials([string(credentialsId: 'artifactory-apikey', variable: 'ARTIFACTORY_API_KEY')]) {

                                sh """
                                 (curl -sSf -H 'X-jfrog-Art-Api:${ARTIFACTORY_API_KEY}' -O -k '${ArtifactoryServerURL}/${item}')
                                """  
                                }
                            }
                            sh "ls"
                           
                            // Should work (azure-artifact-manager)
                            stash name: "SmallSize_zipfile", includes:"ARMCompiler01.tar.gz"
                            azureUpload containerName: containerName, filesPath: 'ARMCompiler01.tar.gz', verbose: true, uploadZips: true

                            stash name: "LargeSize_zipfile", includes:"ARMCompiler02.tar.gz"
                            azureUpload containerName: containerName, filesPath: 'ARMCompiler02.tar.gz', verbose: true, uploadZips: true
                           
                        }
                    }

                }
            }
        }
    }
    post {
        always {
            script {
                print("\n\nStash timings \n\n")
            }
        }
    }
}

I also found this link while trying to understand the maximum size of an Azure blob block in the block upload API call; it may have something to do with this issue.

Thanks for your time


timja commented Mar 18, 2022

Can you provide a self-contained example that doesn't require your Artifactory, please?

Generating a file with dd, or something like random string utils?

(And verifying that it reproduces the issue for you?)
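For example, a throwaway file of a given size can be generated with dd (a minimal sketch; the file name is arbitrary, and the 512 MiB size is chosen to sit above the 256 MB threshold discussed below):

```shell
# Create a 512 MiB file of random bytes to use as an upload test fixture.
# bs=1M count=512 -> 512 MiB; adjust count to probe the size threshold.
dd if=/dev/urandom of=testfile.bin bs=1M count=512
ls -lh testfile.bin
```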

@JonathanKayumbo

I think I now understand why the upload of a large file does not work: the Put Blob API request, for the Azure Storage service version I'm using, only supports a single upload of up to 256 MB (any file below 256 MB works).

According to that documentation, newer service versions will support up to 5000 MiB.

Thank you for your help, though.


timja commented Mar 23, 2022

> Storage clients default to a 128 MiB maximum single blob upload, settable in the Azure Storage client library for .NET version 11 by using the SingleBlobUploadThresholdInBytes property of the BlobRequestOptions object. When a block blob upload is larger than the value in this property, storage clients break the file into blocks. You can set the number of threads used to upload the blocks in parallel on a per-request basis using the ParallelOperationThreadCount property of the BlobRequestOptions object.

It certainly works for bigger files; I've been able to upload files of up to 1 GB before.
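As an aside, if the block size does turn out to matter for a given service version, AzCopy (mentioned earlier in the thread) exposes it directly via its `--block-size-mb` flag. A hedged sketch, with placeholder file name, account, container, and SAS values:

```shell
# Upload with an explicit 8 MiB block size, so e.g. a 512 MiB file is
# split into ~64 blocks. All angle-bracket URL components are placeholders.
azcopy copy "bigfile.bin" \
  "https://<account>.blob.core.windows.net/<container>/bigfile.bin?<sas-token>" \
  --block-size-mb 8
```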


JonathanKayumbo commented Mar 23, 2022

Okay, so going back to our example above, here is a self-contained example. Does this work for you? Or how would you modify the azureUpload command to break the files into smaller blocks? Thanks.

pipeline {
    agent {
        node {
            label 'u20'
            customWorkspace "${BRANCH_NAME}"
        }
    }
    options {
        skipDefaultCheckout()
        timeout(time: 25, unit: 'MINUTES')
    }
    stages { 
        stage ("Checkout & Stash") {
            parallel {
                stage ("Ubuntu-agent"){
                    agent {
                        node {
                            label 'u16'
                            customWorkspace "${BRANCH_NAME}"
                        }
                    }
                    steps {
                        script {
                            def containerName = "testcontainer"
                            sh "curl -L https://github.com/rancher/k3os/releases/download/v0.21.5-k3s2r1/k3os-amd64.iso > k3os.iso"
                           
                            // k3os.iso file is approx 512 MB
                            stash name: "K3OS-IMAGE", includes: "k3os.iso"
                            azureUpload containerName: containerName, filesPath: 'k3os.iso', verbose: true, uploadZips: true
                        }
                    }

                }
            }
        }
    }
    post {
        always {
            script {
                print("\n\nStash timings \n\n")
            }
        }
    }
} 
