Zip archiver v2.10.1 needs much more heap than v2.9.1 #5
Comments
Due to the parallel compression algorithm, we use more memory now than earlier versions; this is expected, and I can update the docs about it. But do you have any indication that the memory usage is excessive?
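The maintainer's point can be illustrated with a minimal sketch. This is not the plexus-archiver code, just a hypothetical scatter-gather setup using only the JDK: each worker deflates its entry into its own in-memory buffer, so peak heap grows with the number of worker threads.

```java
import java.io.ByteArrayOutputStream;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.zip.Deflater;

// Sketch only: parallel "scatter" compression where every worker holds its
// compressed output in heap until the results are gathered at the end.
public class ParallelDeflateSketch {

    static byte[] deflate(byte[] input) {
        Deflater d = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
        d.setInput(input);
        d.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[8192];
        while (!d.finished()) {
            out.write(buf, 0, d.deflate(buf));
        }
        d.end();
        return out.toByteArray(); // compressed entry stays in heap
    }

    public static void main(String[] args) throws Exception {
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        byte[] entry = new byte[1 << 20]; // one 1 MB entry per worker
        // All workers buffer concurrently, so heap use scales with 'threads'.
        List<Future<byte[]>> parts = pool.invokeAll(
                Collections.nCopies(threads,
                        (Callable<byte[]>) () -> deflate(entry)));
        long total = 0;
        for (Future<byte[]> f : parts) {
            total += f.get().length;
        }
        pool.shutdown();
        System.out.println("segments=" + parts.size() + " bytes=" + total);
    }
}
```

With a single thread the same work would buffer only one entry at a time; with N threads, up to N entries are in flight at once, which is the trade-off behind the increased memory use.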
I'm seeing the same issue when using the Assembly Plugin to create a zip file. Assembly Plugin v2.5.5 uses archiver version 2.10.2; I tried overriding the archiver version to 2.10.3 but got the same problem. I'm attempting to zip a directory containing about 650 MB of other jars. With my Maven opts configured for -Xmx512m, the zip creation fails with the heap error. My naive interpretation is that, depending on how many CPUs you have and how fast they are, you may need to ensure that the heap size is larger than the maximum zip file you might create, which doesn't seem like a great design.

I created a simple test case that consistently reproduces the issue on my machine. Run it with the command

It generates about 600 MB of random binary files and then attempts to create a zip of them using the Assembly Plugin. On my MacBook Pro with 8 processors/cores and -Xmx512m, this always results in a heap error. You may have to tweak the values depending on the machine. Having to tweak the numbers is part of the problem: the failure isn't consistent across machines, including my CI server, which makes it hard to debug across a team.

Relevant output:
A quick look at the code shows that DeferredScatterOutputStream creates an OffloadingOutputStream with a 100 MB buffer. The commons ParallelScatterZipCreator creates a thread pool with one thread per available CPU. This means that on my 8-core machine, up to 800 MB of data could be buffered before the streams are offloaded to disk. The memory requirement grows by 100 MB per CPU core, so the build's memory requirements vary with CPU count, which seems odd. Maybe a maximum amount of memory (say, 50% of the heap) should be divided across the available CPUs so the threads offload to disk sooner.
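The arithmetic above can be made explicit. A small sketch, assuming the 100 MB per-stream threshold described in this thread (the constant is taken from the discussion, not from any published API):

```java
// Estimates worst-case heap held by deferred scatter buffers before any
// of them spill to disk, assuming one buffer per worker thread and a
// fixed per-buffer threshold (100 MB, per the discussion above).
public class ScatterMemoryEstimate {

    static long worstCaseBytes(int threads, long thresholdBytes) {
        // Each worker's OffloadingOutputStream can hold up to the
        // threshold in heap before offloading to a temp file.
        return threads * thresholdBytes;
    }

    public static void main(String[] args) {
        long threshold = 100L * 1024 * 1024; // 100 MB per stream
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.printf("cores=%d worstCaseMB=%d%n",
                cores, worstCaseBytes(cores, threshold) / (1024 * 1024));
        // An 8-core machine gives 800 MB, exceeding -Xmx512m.
        System.out.println(
                worstCaseBytes(8, threshold) / (1024 * 1024)); // prints 800
    }
}
```

This is why the same build can pass on a 2-core CI agent and fail on an 8-core laptop: the worst case scales linearly with core count, not with archive size.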
We ran into the same problem. Fixing it is simple; see the attached patch.
Thanks for that patch @eicki. The issue described here matches MSOURCES-94. In the context of that ticket I tested your patch and it seems to work well. Do you mind opening a PR?
In the Eclipse Tycho project we discovered that zip creation in plexus-archiver v2.10.1 needs much more heap than in version 2.9.1 (https://bugs.eclipse.org/bugs/show_bug.cgi?id=470074).
The exception we see:
Caused by: org.apache.maven.plugin.MojoExecutionException: Error packing p2 repository
at org.eclipse.tycho.plugins.p2.repository.ArchiveRepositoryMojo.execute(ArchiveRepositoryMojo.java:55)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
... 19 more
Caused by: org.codehaus.plexus.archiver.ArchiverException: Problem creating zip: Execution exception (and the archive is probably corrupt but I could not delete it)
at org.codehaus.plexus.archiver.AbstractArchiver.createArchive(AbstractArchiver.java:1007)
at org.eclipse.tycho.plugins.p2.repository.ArchiveRepositoryMojo.execute(ArchiveRepositoryMojo.java:53)
... 21 more
Caused by: java.io.IOException: Execution exception
at org.codehaus.plexus.archiver.zip.AbstractZipArchiver.close(AbstractZipArchiver.java:764)
at org.codehaus.plexus.archiver.AbstractArchiver.createArchive(AbstractArchiver.java:994)
... 22 more
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.codehaus.plexus.archiver.zip.ByteArrayOutputStream.needNewBuffer(ByteArrayOutputStream.java:121)
at org.codehaus.plexus.archiver.zip.ByteArrayOutputStream.write(ByteArrayOutputStream.java:152)
at org.apache.commons.io.output.ThresholdingOutputStream.write(ThresholdingOutputStream.java:129)
at org.codehaus.plexus.archiver.zip.DeferredScatterOutputStream.writeOut(DeferredScatterOutputStream.java:36)
at org.codehaus.plexus.archiver.commonscompress.archivers.zip.StreamCompressor$ScatterGatherBackingStoreCompressor.writeOut(StreamCompressor.java:274)
at org.codehaus.plexus.archiver.commonscompress.archivers.zip.StreamCompressor.writeCounted(StreamCompressor.java:257)
at org.codehaus.plexus.archiver.commonscompress.archivers.zip.StreamCompressor.deflate(StreamCompressor.java:248)
at org.codehaus.plexus.archiver.commonscompress.archivers.zip.StreamCompressor.deflateUntilInputIsNeeded(StreamCompressor.java:241)
at org.codehaus.plexus.archiver.commonscompress.archivers.zip.StreamCompressor.writeDeflated(StreamCompressor.java:222)
at org.codehaus.plexus.archiver.commonscompress.archivers.zip.StreamCompressor.write(StreamCompressor.java:190)
at org.codehaus.plexus.archiver.commonscompress.archivers.zip.StreamCompressor.deflate(StreamCompressor.java:169)
at org.codehaus.plexus.archiver.commonscompress.archivers.zip.ScatterZipOutputStream.addArchiveEntry(ScatterZipOutputStream.java:97)
at org.codehaus.plexus.archiver.commonscompress.archivers.zip.ParallelScatterZipCreator$2.call(ParallelScatterZipCreator.java:175)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)