Currently, ZipArchiveFakeEntry reads all the bytes of a zip entry into memory, unless the entry holds more than 2GB (Integer.MAX_VALUE bytes), in which case it throws an exception. The workaround is to load the data from a java.io.File instead of an InputStream, but this may not be obvious or straightforward. The aim is to have a configurable threshold at which ZipArchiveFakeEntry (or an equivalent class) creates a temp file instead.
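For illustration, a minimal sketch of the two read paths, using WorkbookFactory as one common entry point that accepts both a File and an InputStream (class name and file name here are placeholders):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ReadPathsSketch {
    public static void main(String[] args) throws Exception {
        // File-backed: entries can be read lazily from disk,
        // so the in-memory 2GB limit does not apply
        try (Workbook wb = WorkbookFactory.create(new File("large.xlsx"))) {
            System.out.println("sheets: " + wb.getNumberOfSheets());
        }

        // Stream-backed: ZipArchiveFakeEntry buffers each entry
        // fully in memory, which is where the limit is hit
        try (InputStream is = new FileInputStream("large.xlsx");
             Workbook wb = WorkbookFactory.create(is)) {
            System.out.println("sheets: " + wb.getNumberOfSheets());
        }
    }
}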
IOException("ZIP entry size is too large or invalid") is the existing exception
Added an implementation in r1893421. With this change, any zip entry with more than 50MB of decompressed data will be written to a temp file instead of being kept in memory:

ZipInputStreamZipEntrySource.setThresholdBytesForTempFiles(50000000);

This only affects users who read files via InputStreams instead of reading directly from a java.io.File.
ZipInputStreamZipEntrySource.setEncryptTempFiles(true); //also now supported
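Putting both settings together, a minimal usage sketch (class name and input file are placeholders; since the setters are static, they apply process-wide and should be called before any stream-based parsing):

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.poi.openxml4j.util.ZipInputStreamZipEntrySource;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class TempFileThresholdSketch {
    public static void main(String[] args) throws Exception {
        // Configure once, before opening any streams
        ZipInputStreamZipEntrySource.setThresholdBytesForTempFiles(50_000_000); // ~50MB
        ZipInputStreamZipEntrySource.setEncryptTempFiles(true);

        // Entries whose decompressed size exceeds the threshold are now
        // spilled to (encrypted) temp files instead of being held in memory
        try (InputStream is = new FileInputStream("large.xlsx");
             Workbook wb = WorkbookFactory.create(is)) {
            System.out.println("sheets: " + wb.getNumberOfSheets());
        }
    }
}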