
Free Support Forum - aspose.com

Exception "Out of Memory Error" while generating PPTX (Java)

Hi @mudassir.fayyaz

I was trying to build a PPTX file from 5 PPTX files with sizes of 298, 382, 170, 386, and 276 MB, but I'm getting an error:

```
Java heap space: java.lang.OutOfMemoryError
java.lang.OutOfMemoryError: Java heap space
    at com.aspose.slides.internal.e5.void.setCapacity(Unknown Source)
    at com.aspose.slides.internal.e5.void.b(Unknown Source)
    at com.aspose.slides.internal.e5.void.write(Unknown Source)
    at com.aspose.slides.internal.eu.case.write(Unknown Source)
    at com.aspose.slides.internal.eu.case.write(Unknown Source)
    at com.aspose.slides.internal.eu.boolean.write(Unknown Source)
    at com.aspose.slides.internal.eu.goto.write(Unknown Source)
    at com.aspose.slides.internal.eu.public.byte(Unknown Source)
    at com.aspose.slides.internal.eu.public.case(Unknown Source)
    at com.aspose.slides.internal.eu.public.int(Unknown Source)
    at com.aspose.slides.internal.eu.switch.public(Unknown Source)
    at com.aspose.slides.internal.eu.switch.if(Unknown Source)
    at com.aspose.slides.acy.do(Unknown Source)
    at com.aspose.slides.Presentation.do(Unknown Source)
    at com.aspose.slides.Presentation.do(Unknown Source)
    at com.aspose.slides.Presentation.do(Unknown Source)
    at com.aspose.slides.Presentation.save(Unknown Source)
    at com.testing.demo.LambdaFunctionHandler.handleRequest(LambdaFunctionHandler.java:77)
    at com.testing.demo.LambdaFunctionHandler.handleRequest(LambdaFunctionHandler.java:1)

END RequestId: 3737b71e-7be6-423d-8d55-d7890bce6bbf
REPORT RequestId: 3737b71e-7be6-423d-8d55-d7890bce6bbf Duration: 70644.17 ms Billed Duration: 70700 ms Memory Size: 3000 MB Max Memory Used: 2858 MB
Java heap space
java.lang.OutOfMemoryError
```

Could you help me see where I'm going wrong?
Sample Code:

```java
String[] keys = input.get("keys");

Presentation finalPresentation = new Presentation();
ISlideCollection finalPresentationSlides = finalPresentation.getSlides();
for (int i = 0; i < keys.length; i++) {
    String key_name = keys[i];
    System.out.println(key_name);
    S3ObjectInputStream s3is = null;
    S3Object o = null;
    try {
        o = s3Client.getObject(bucket_name, key_name);
        s3is = o.getObjectContent();
    } catch (AmazonServiceException e) {
        System.err.println(e.getErrorMessage());
        System.exit(1);
    }

    Presentation sourcePresentation = new Presentation(s3is);
    ISlide slide = sourcePresentation.getSlides().get_Item(0);

    finalPresentationSlides.addClone(slide);

    try {
        s3is.close();
        sourcePresentation.dispose();
        o.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

ByteArrayOutputStream os = new ByteArrayOutputStream();
finalPresentation.save(os, SaveFormat.Pptx); // this is line number 77
```
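A note on the save target (this observation is mine, not from the thread): `ByteArrayOutputStream` keeps everything written to it in a single in-heap buffer that is repeatedly reallocated as it grows, which matches the `setCapacity` frames in the stack trace above. Saving a multi-gigabyte presentation into one therefore needs roughly that much heap on top of the open presentations. A minimal plain-Java illustration:

```java
import java.io.ByteArrayOutputStream;

public class HeapDemo {
    public static void main(String[] args) {
        // ByteArrayOutputStream buffers every written byte in one
        // in-heap array that is reallocated as it grows.
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        byte[] chunk = new byte[1024 * 1024]; // 1 MB
        for (int i = 0; i < 8; i++) {
            os.write(chunk, 0, chunk.length);
        }
        // All 8 MB now live on the heap at once; for a 1.5 GB
        // presentation the same pattern needs 1.5+ GB of heap.
        System.out.println(os.size()); // 8388608
    }
}
```

Saving to a `FileOutputStream` or directly to an upload stream avoids holding the whole serialized presentation in memory at once.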

Thanks in advance!!

One more thing: each PPTX file contains only video content, and I'm building a new PPTX file from these files.

@AbhiG,

Can you please try to increase your heap space? If there is still an issue, then please share the source files so that we can investigate further and help you out.

My JVM is running in an AWS Lambda function, and there is no way to increase the heap size there.

Could you explain to me what is going wrong?

@AbhiG,

I would like to point out that you are merging 5 presentations into one. The total size of those 5 presentations is about 1.5 GB, which is extremely large; you would need at least 8 GB for the JVM process.
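As a quick sanity check (my own arithmetic, not from the thread), the sizes listed in the first post do add up to roughly 1.5 GB:

```java
public class SizeCheck {
    public static void main(String[] args) {
        // Sizes of the five source presentations, in MB, from the first post.
        int[] sizesMb = {298, 382, 170, 386, 276};
        int total = 0;
        for (int s : sizesMb) {
            total += s;
        }
        System.out.println(total); // 1512, i.e. about 1.5 GB
    }
}
```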

You can try the following:

1. Create a new Presentation (e.g. pres) and merge the first presentation into it.
2. Close both.
3. Open pres and merge the second presentation into it.
4. Close both.
5. Repeat steps 3 and 4 for the 3rd, 4th, and 5th presentations.

This will reduce memory consumption.

As you mentioned, I have already written code along those lines. Sample:

```java
MultiPartOutputStream os = null;

String[] keys = input.get("keys");

for (int i = 0; i < keys.length; i++) {

    Presentation finalPresentation = new Presentation();
    ISlideCollection finalPresentationSlides = finalPresentation.getSlides();

    String key_name = keys[i]; // key_name = stored PPTX file name

    Presentation sourcePresentation =
            new Presentation(s3Client.getObject(bucket_name, key_name).getObjectContent());

    finalPresentationSlides.addClone(sourcePresentation.getSlides().get_Item(0));

    try {
        os = manager.getMultiPartOutputStreams().get(0);
        finalPresentation.save(os, SaveFormat.Pptx);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        finalPresentation.dispose();
        sourcePresentation.dispose();
    }
}
os.close();
manager.complete();
```
But it merges only the first slide into the built PPTX. Could you give a code sample?

By the way, what do you mean by opening and closing pres? Could you give me sample code showing that?
That would be great.

Thanks in advance!!

@AbhiG,

I have made some changes to your sample code to give you an idea of the proposed approach, which may work on your end if you are unable to increase the heap size. If the issue still persists, then unfortunately there is no other option for loading such huge presentation files with less heap memory.

```java
public static void TestClonePres()
{
    String[] keys = {"pres1.pptx", "pres2.pptx", "pres3.pptx"};
    Presentation finalPresentation = new Presentation();
    Presentation sourcePresentation = null;
    String lastSaved = "";
    String path = "C:\\Aspose Data\\";
    for (int i = 0; i < keys.length; i++)
    {
        if (i != 0)
        {
            // Reopen the result of the previous iteration instead of
            // keeping all presentations in memory at the same time.
            finalPresentation = new Presentation(path + lastSaved);
        }

        ISlideCollection finalPresentationSlides = finalPresentation.getSlides();

        String key_name = keys[i]; // key_name = stored PPTX file name

        sourcePresentation = new Presentation(s3Client.getObject(bucket_name, key_name).getObjectContent());

        // Clone every slide of the current source presentation.
        for (int j = 0; j < sourcePresentation.getSlides().size(); j++)
        {
            finalPresentationSlides.addClone(sourcePresentation.getSlides().get_Item(j));
        }
        try {
            lastSaved = "temp" + i + ".pptx";
            finalPresentation.save(path + lastSaved, SaveFormat.Pptx);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            finalPresentation.dispose();
            sourcePresentation.dispose();
        }
    }
}
```

Can we pass an OutputStream object instead of path + lastSaved?

@AbhiG,

Yes, Aspose.Slides does allow saving a presentation to a stream.

Hi @mudassir.fayyaz, my requirement is to build a large (1-2 GB) PPTX file using only 2.5 GB of RAM. Here is an explanation:

  1. Create a temporary presentation object directly from an S3 bucket stream.
  2. Attach the first slide (which might contain around 1 GB of video data) of the temporary presentation to a new presentation object.
  3. Flush the temporary presentation from memory.
  4. Repeat steps 1 to 3 until the loop finishes.
  5. Now I have a new presentation object that holds all the slide info.
  6. Now I upload the new presentation to the S3 bucket.

This whole process runs in an AWS Lambda function. A Lambda function is limited to 3 GB of RAM and 512 MB of disk. So, without loading everything into RAM or onto disk, how can I write to the S3 bucket?

Here is my sample code:

```java
import java.util.Map;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.aspose.slides.ISlideCollection;
import com.aspose.slides.Presentation;
import com.aspose.slides.SaveFormat;
import alex.mojaki.s3upload.MultiPartOutputStream;
import alex.mojaki.s3upload.StreamTransferManager;

public class LambdaFunctionHandler implements RequestHandler<Map<String, String[]>, String> {

    @Override
    public String handleRequest(Map<String, String[]> input, Context context) {

        // AWS S3 bucket configuration
        String AccessKeyID = "dummy key";
        String SecretAccessKey = "dummy key";
        String clientRegion = "dummy region";
        String bucket_name = "dummy bucket name";
        BasicAWSCredentials creds = new BasicAWSCredentials(AccessKeyID, SecretAccessKey);
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard().withRegion(clientRegion)
                .withCredentials(new AWSStaticCredentialsProvider(creds)).build();
        // End of AWS S3 bucket configuration

        Presentation finalPresentation = new Presentation();
        ISlideCollection finalPresentationSlides = finalPresentation.getSlides();

        String dest = "finalpresentation.pptx";
        String[] keys = input.get("keys"); // names of the PPTX files stored in the S3 bucket
        for (int i = 0; i < keys.length; i++) {
            String key_name = keys[i];
            Presentation sourcePresentation = new Presentation(
                    s3Client.getObject(bucket_name, key_name).getObjectContent());
            try {
                finalPresentationSlides.addClone(sourcePresentation.getSlides().get_Item(0));
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                sourcePresentation.dispose();
            }
        }

        final StreamTransferManager manager = new StreamTransferManager(bucket_name, dest, s3Client);
        MultiPartOutputStream os = null;

        try {
            os = manager.getMultiPartOutputStreams().get(0);
            finalPresentation.save(os, SaveFormat.Pptx);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            if (os != null) {
                os.close();
            }
        }
        manager.complete();
        return bucket_name + "/" + dest;
    }
}
```

The code you shared still consumes disk space. Overall, my requirement is that as soon as one slide is attached to the new presentation object, it should be written to the S3 bucket, and so on for each slide. I have shared the concept of what I'm currently doing, but for a large file my approach fails due to the Lambda 3 GB RAM limitation.

IMG_20190424_165515.jpg (463.1 KB)
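For what it's worth, the general pattern for writing to a destination without materializing the whole file in memory is to pipe the producer into the consumer, which is what `StreamTransferManager` does on the S3 side. The sketch below is my own plain-`java.io` illustration of that pattern, not Aspose or AWS API; whether it helps here still depends on how much Aspose.Slides buffers internally during `save`:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.io.UncheckedIOException;

public class PipedStreamSketch {
    public static void main(String[] args) throws Exception {
        // The writer thread stands in for "save the presentation"; the main
        // thread stands in for "upload to S3". Only the 64 KB pipe buffer
        // lives in memory at any moment, not the whole file.
        PipedOutputStream out = new PipedOutputStream();
        InputStream in = new PipedInputStream(out, 64 * 1024);

        Thread writer = new Thread(() -> {
            try (OutputStream os = out) {
                for (int i = 0; i < 1000; i++) {
                    os.write(new byte[1024]); // stand-in for presentation bytes
                }
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        writer.start();

        long total = 0;
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n; // the "uploader" consumes chunks as they arrive
        }
        writer.join();
        System.out.println(total); // 1024000
    }
}
```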

@AbhiG,

I have reviewed your requirements and would like to share that, as far as Aspose.Slides is concerned, it is capable of saving the presentation as a file on disk or to a stream. So, pushing the file to an S3 bucket is something outside the scope of Aspose.Slides. As for the issues related to heap size, we proposed a possible approach that you may try on your end; it may work well, but you may still run into problems when merging huge presentation files together.