
OutOfMemory error with large XMLDataSource


2004 IR Help


By: Angel - angel82

OutOfMemory error with large XMLDataSource

2005-09-02 03:54

I have a problem with an XML document larger than 4 MB (more than 35,000 articles): the report fails with an OutOfMemory error.

 

The structure of the document is:

<REPORT NOMERPT="vprart">
  <PAGBLK PBK="1">
    <RIG1 PBK="1.1">
      <RART> 1</RART>
      <RDES>first article</RDES>
      ....
      <RIG2 PBK="1.1.1">
        <aaaa>des 1</aaaa>
        <RIG3 PBK="1.1.1.1">
          <bbbb>333des 1</bbbb>
        </RIG3>
        <RIG3 PBK="1.1.1.2">
          <bbbb>333des 2</bbbb>
        </RIG3>
      </RIG2>
      <RIG2 PBK="1.1.2">
        <aaaa>des 2</aaaa>
      </RIG2>
      ....
    </RIG1>
    <RIG1 PBK="1.2">
      <RART> r</RART>
      <RDES>article 2</RDES>
      <RUNMI>l</RUNMI>
      ....
    </RIG1>
    ...
  </PAGBLK>
  <PAGBLK PBK="2">
    <RIG1 PBK="2.1">
      <RART>tavolo</RART>
      <RDES>article 3</RDES>
      ...
      <RIG2 PBK="2.1.1">
        <aaaa>des 1</aaaa>
        <RIG3 PBK="2.1.1.1">
          <bbbb>333des 1</bbbb>
        </RIG3>
        <RIG3 PBK="2.1.1.2">
          <bbbb>333des 2</bbbb>
        </RIG3>
      </RIG2>
      ...
    </RIG1>
    ...
  </PAGBLK>
  ...
</REPORT>

 

I use a master report (RIG1) with a subreport (RIG2), which in turn has a sub-subreport (RIG3).

I use the PBK attribute in groups for page breaks, for example to show one page per article. In this case I don't use it because there are too many articles.

I have read various posts in the forum but could not find an efficient solution.

I tried reducing the XML size, reducing the font height and margins of my subreports, trying different XPath expressions, and using virtualization.

This improved the situation, but the same problem comes back if the number of articles increases further (> 50,000).

Can somebody help me? Thank you in advance, Angelo


By: Darren Davis - ddavis539

RE: OutOfMemory error with large XMLDataSource

2005-09-12 21:25

By default, JasperReports uses a DOM parser to process XML data sources, which is nice for some things (like XPath support), but the drawback is that it is very inefficient for large XML data sets and can lead to out-of-memory errors.

 

I had this same problem when I was trying to parse more than 1,000 records (~30 MB). I would recommend implementing a custom data source via the JRDataSource interface, which is fairly simple:

 

public interface JRDataSource {
    boolean next() throws JRException;

    Object getFieldValue(JRField jrField) throws JRException;
}
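As an illustration (not from the original post), a minimal article-oriented implementation might look like the sketch below. The Article bean and the RART/RDES field names follow the XML sample above; the JasperReports types are declared as local stand-ins (matching the interface just shown) so the sketch compiles on its own — in a real project you would import them from net.sf.jasperreports.engine instead.

```java
import java.util.Iterator;
import java.util.List;

// Stand-ins for the JasperReports types, so this sketch is self-contained.
// In a real project, import these from net.sf.jasperreports.engine.
class JRException extends Exception {
    JRException(String message) { super(message); }
}

interface JRField {
    String getName();
}

interface JRDataSource {
    boolean next() throws JRException;
    Object getFieldValue(JRField jrField) throws JRException;
}

// One bean per article, matching the RART/RDES elements in the XML above.
class Article {
    final String rart;
    final String rdes;
    Article(String rart, String rdes) { this.rart = rart; this.rdes = rdes; }
}

// Custom data source: iterates over pre-parsed Article beans instead of
// keeping a DOM tree of the whole document in memory.
class ArticleDataSource implements JRDataSource {
    private final Iterator<Article> iterator;
    private Article current;

    ArticleDataSource(List<Article> articles) {
        this.iterator = articles.iterator();
    }

    @Override
    public boolean next() {
        if (iterator.hasNext()) {
            current = iterator.next();
            return true;
        }
        return false;
    }

    @Override
    public Object getFieldValue(JRField field) throws JRException {
        switch (field.getName()) {
            case "RART": return current.rart;
            case "RDES": return current.rdes;
            default: throw new JRException("Unknown field: " + field.getName());
        }
    }
}
```

The report's field names in the JRXML would then simply be "RART" and "RDES", and the engine calls next()/getFieldValue() row by row.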

 

With a custom data source, you can use a SAX parser to read through your data and populate an efficient data structure made up of plain Java objects. The SAX parser is much faster and better suited to large data sets, because it streams the document instead of loading it all at once.

 

There are examples on the web of how to use a SAX parser. The sample code that comes with the JasperReports source code contains an example of a custom data source.
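For the SAX side, a hedged sketch using only the JDK's javax.xml.parsers might look like this: it streams the <RIG1> records of the document above into lightweight row objects instead of building a DOM. The element names (RIG1, RART, RDES) follow the posted XML; the Row class and method names are illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Streams the <RIG1> records out of the report XML without building a DOM.
class Rig1SaxReader {

    static class Row {
        String rart = "";
        String rdes = "";
    }

    static List<Row> parse(InputStream in) throws Exception {
        List<Row> rows = new ArrayList<>();
        SAXParserFactory.newInstance().newSAXParser().parse(in, new DefaultHandler() {
            private Row row;
            private StringBuilder text;

            @Override
            public void startElement(String uri, String local, String qName, Attributes atts) {
                if (qName.equals("RIG1")) row = new Row();
                // Only buffer character data for the elements we care about.
                if (qName.equals("RART") || qName.equals("RDES")) text = new StringBuilder();
            }

            @Override
            public void characters(char[] ch, int start, int len) {
                if (text != null) text.append(ch, start, len);
            }

            @Override
            public void endElement(String uri, String local, String qName) {
                if (row != null && qName.equals("RART")) row.rart = text.toString().trim();
                if (row != null && qName.equals("RDES")) row.rdes = text.toString().trim();
                if (qName.equals("RIG1")) {
                    rows.add(row);
                    row = null;
                }
                text = null;
            }
        });
        return rows;
    }

    static List<Row> parse(String xml) throws Exception {
        return parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
    }
}
```

The resulting Row list is exactly what a custom JRDataSource can iterate over; memory use is then proportional to the extracted rows, not to the full DOM tree.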
