Category: Bug report
Priority: Immediate
Status: New
Severity: Critical
Resolution: Open
Reproducibility: Always
Project:
Component:
Assigned to:
When we try to run our regression suite against Jasper server 4.7/5.0, it breaks after running for 3-4 hours. The same application, running on WebLogic 10.3 in a Unix environment, has no issues on Jasper 4.0 but breaks on Jasper 4.7/5.0.
The cause we see is that sockets in CLOSE_WAIT state keep piling up and never close. We monitored them with netstat | grep CLOSE_WAIT; eventually the server fails with a "Too many open files" error. This is a critical issue that is blocking development. Any help on this is appreciated.
Version: v4.7.0
Summary: CLOSE_WAIT issues in sockets
3 Comments:
When we try to run regression on Jasper server 4.7/5.0, it breaks after running for 3-4 hours. We are on WebSphere 7.0 on AIX, running Jasper 4.7.1. What we see is CLOSE_WAIT connections piling up and never closing (checked with netstat | grep CLOSE_WAIT). This is a critical issue that is happening in production. Any help on this is appreciated.
Hi, do we know if this issue ever got fixed? We are running into the same issue in production and are pretty much at a dead end. We would appreciate it if you could share the solution to the problem.
This issue is caused by using the WSClient class in conjunction with the iReport CommonsHTTPSender class for SOAP web service calls.
The iReport CommonsHTTPSender actually has a memory/heap leak caused by its IReportManager.getPreferences().addPreferenceChangeListener() call. That call stores a hard reference to the CommonsHTTPSender instance in a "global" object, so every CommonsHTTPSender instance accumulates in memory. Each CommonsHTTPSender holds references to a MultiThreadedHttpConnectionManager and its connection pool, so the pooled connection objects are never garbage collected; in this way the memory leak becomes an open-socket leak. Note that the objects are small, so the memory leak is not easily apparent, but it's there nonetheless.
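The leak pattern described above can be sketched as follows. This is a minimal illustration, not the actual iReport code: the class and field names (LeakySender, GLOBAL_LISTENERS, connectionPool) are hypothetical stand-ins for CommonsHTTPSender, the global preferences object, and the pooled connections.

```java
import java.util.ArrayList;
import java.util.List;

public class ListenerLeakSketch {

    // Stand-in for the global preferences object reached via
    // IReportManager.getPreferences(): it lives for the whole JVM.
    static final List<Object> GLOBAL_LISTENERS = new ArrayList<>();

    // Stand-in for the iReport CommonsHTTPSender.
    static class LeakySender {
        // Stand-in for the MultiThreadedHttpConnectionManager pool.
        final byte[] connectionPool = new byte[1024];

        LeakySender() {
            // The bug: registering 'this' as a listener stores a hard
            // reference in a global object, so no instance is ever
            // eligible for garbage collection.
            GLOBAL_LISTENERS.add(this);
        }
    }

    public static void main(String[] args) {
        // One sender is created per SOAP call; none can ever be collected,
        // so their pooled connections (and sockets) leak with them.
        for (int i = 0; i < 10_000; i++) {
            new LeakySender();
        }
        System.gc(); // has no effect: everything is still strongly reachable
        System.out.println("retained senders: " + GLOBAL_LISTENERS.size());
    }
}
```

Each instance is tiny, which matches the report that heap growth is not obvious; the damaging part is not the memory itself but the sockets kept open by the retained connection pools.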
The Axis CommonsHTTPSender does not have this problem: when it is used, pooled connection objects eventually get garbage collected, which closes their sockets. The connections close with a delay tied to garbage collection; since the objects are relatively small, the collector may not run very often, so connections can stay open for a while.
A workaround is to switch to the Axis CommonsHTTPSender class in client-config.wsdd: java:org.apache.axis.transport.http.CommonsHTTPSender.
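A sketch of the relevant client-config.wsdd fragment, assuming a standard Axis 1.x client deployment descriptor (the surrounding elements in your file may differ):

```xml
<deployment xmlns="http://xml.apache.org/axis/wsdd/"
            xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
  <!-- Route HTTP transport through the Axis sender instead of the
       iReport CommonsHTTPSender to avoid the listener-induced leak. -->
  <transport name="http"
             pivot="java:org.apache.axis.transport.http.CommonsHTTPSender"/>
</deployment>
```

The client-config.wsdd file must be on the client's classpath for Axis to pick it up; after the change, pooled connections become collectable and the CLOSE_WAIT sockets should drain over time.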