rmiller

Members
  • Posts

    64
  • Joined

  • Last visited

rmiller's Achievements

  1. Issue: When you export a report to PDF, the report displays in the browser in PDF format. However, when the report contains a Charts Pro chart (Fusion chart), the chart will not display in the browser with default browser plugin settings; a white area appears where the chart should be.

Resolution: Adobe Reader or Adobe Acrobat must be installed. During installation a plugin is installed in all browsers, and the browser uses this plugin to render the Fusion charts. PDF behavior differs between Windows and Macintosh browsers, so I will treat them separately.

Windows

Chrome:
- Open a new tab and type "about:plugins".
- Locate the "Chrome PDF Viewer" plugin and disable it.
- Locate the "Adobe Reader" plugin and enable it.

Firefox:
- Click the three horizontal bars on the toolbar and select "Options".
- Click the "Applications" tab.
- Scroll down and locate the "Portable Document Format (PDF)" content type. In the Action drop-down menu select "Use Adobe Acrobat (in Firefox)".

IE (using IE 10):
- Click the gear in the toolbar and choose "Manage add-ons".
- Click "Toolbars and Extensions" in the left pane and choose "Run without permission" from the Show drop-down menu.
- Enable the "Adobe PDF Reader" extension.

Macintosh (using OS X 10.9)

Fusion charts are not supported on the Mac in either Chrome or Firefox, but there is a workaround to export the PDF to disk.

Chrome:
- Open a new tab and type "about:plugins".
- Locate the "Chrome PDF Viewer" plugin and disable it.
- Locate the "Adobe Reader" plugin and disable it.
The PDF will now export to disk.

Firefox:
- Click the three horizontal bars on the toolbar and select "Preferences".
- Click the "Applications" tab.
- Scroll down and locate the "Portable Document Format (PDF)" content type. In the Action drop-down menu select "Use Adobe Reader (default)".
The PDF will now export to disk.

Safari:
- From the "Safari" menu choose "Preferences".
- Select the Security tab and click "Manage Website Settings...".
- Select "Adobe Reader" in the left pane.
- Choose "Allow Always" in the drop-down menu at the bottom right.
  2. Differences between the Monitoring reports and Audit reports

Monitoring Reports
Monitoring reports only contain information about report events. Monitoring is included with the standard JasperReports license. In order to have the Monitoring data available to the Monitoring reports you must enable both Audit and Monitoring in WEB-INF/js.config.properties by setting the following properties to true:

audit.records.enabled=true
monitoring.records.enabled=true

Audit Reports
Audit reports contain information about general and repository events, user events, and role events, as well as report events. Audit reports can be enabled only if you have purchased Audit and it has been added to your license.

See sections 10.3.1 and 10.4.1 in the TIBCO JasperReports Server Administrator Guide for a complete list of events for Audit and Monitoring, respectively.
  3. Introduction
The standard JVM arguments recommended by Jaspersoft (see the install guide) are adequate for most JasperReports Server (JRS) installations where concurrency is not high. However, for installations with high traffic requirements it is necessary to eke out as much performance as possible from the application server. This is done by fine-tuning the young generation of the JVM to make garbage collection more efficient. Tuning the JVM is not an exact science; it must be done by trial and error, and since every environment is different (size and quantity of reports, size of repository, machine specifications, etc.) the tuning parameters will be slightly different for each environment. Thus, the tuning parameters described in this article should be considered a guide, not a hard-and-fast specification.

Garbage Collection Overview
The Java Virtual Machine (JVM) consists of three areas where Java objects live out their lives: the Permanent Generation, the Old Generation, and the Young Generation, collectively known as the heap. When the application server is started up, most of the objects created are stored in the permanent generation, and virtually all of them stay there until the application server is shut down. Objects created while the application is used first live in the young generation, and since most objects have a very short lifespan they are removed from the young generation during minor garbage collections. Objects that are longer lived get promoted (tenured) to the old generation. These are removed during full garbage collections, called stop-the-world collections because all activity in the JVM is stopped while the garbage collection takes place. Since the old generation is larger and contains more objects, full garbage collections are more costly in terms of time than minor collections. So the objective of fine-tuning the JVM is to ensure more minor collections and fewer full collections.
The young generation consists of the eden space and two survivor spaces. Newly created objects are located in the eden space. When the eden space becomes full, a minor garbage collection takes place and the surviving objects are copied to one of the survivor spaces. When that survivor space fills up, the objects are copied to the other survivor space at the next minor collection, along with any surviving objects from eden. The JVM keeps track of how many times each object is copied from one survivor space to the other, and when an object reaches a certain count, called the tenuring threshold (set dynamically by the JVM), it is promoted to the old generation. So the main objective is to size the young generation spaces to ensure that more objects are collected in the young generation before being promoted to the old generation.

The sizing is done using the NewRatio and SurvivorRatio JVM options. NewRatio determines the overall size of the young generation. For example, NewRatio=2 means that the ratio between the young and old generations is 1:2, so the young generation will be half the size of the old generation, or one third of the total heap. SurvivorRatio determines the size of the eden space and the two survivor spaces relative to the total young generation size. SurvivorRatio=6 sets the ratio of eden to a survivor space at 6:1, meaning each survivor space will be one sixth the size of the eden space, or one eighth the size of the young generation (there are two survivor spaces).

Methodology
I determined the young generation sizes by generating load using JMeter. The JMeter scripts executed six sample reports and the Supermart Dashboard with 30 concurrent virtual users. The JRS was version 5.5 installed on an Ubuntu server with Tomcat 7, JDK 7, and a Postgres database. First I ran baseline tests using the recommended JVM options with an 8 GB maximum heap, 512 MB maxPermSize, and ConcurrentMarkSweep garbage collection.
I ran multiple tests of one hour duration, restarting the application server before each test run. The primary metric was the average response time for all requests, and I recorded the average of the average response times for each test. Then I ran tests using different values for -XX:NewRatio and -XX:SurvivorRatio, again recording the average of average response times for each test. The test results varied widely for the one-hour runs, so after determining the best tuning values for the young generation I ran a series of eight-hour tests: four baseline tests and four tuned-JVM tests. In the end I got a nearly 20% performance improvement using NewRatio=2 and SurvivorRatio=8.

Test Results

Test Run          | Average (ms) | Avg. of Avg. | Max | Min | % Diff.
Baseline 8 hour 1 | 663          | 705          | 773 | 663 | 19.37%
Baseline 8 hour 2 | 715          |              |     |     |
Baseline 8 hour 3 | 773          |              |     |     |
Baseline 8 hour 4 | 668          |              |     |     |
NR2 SR8 8 hour 1  | 625          | 568          | 625 | 564 |
NR2 SR8 8 hour 2  | 581          |              |     |     |
NR2 SR8 8 hour 3  | 564          |              |     |     |
NR2 SR8 8 hour 4  | 503          |              |     |     |
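As a back-of-the-envelope check, the generation sizes implied by these ratios can be computed from the heap size. This is a minimal sketch assuming the 8 GB maximum heap used in the baseline tests; the formulas follow the NewRatio/SurvivorRatio semantics described earlier.

```shell
# Sketch: generation sizes implied by -Xmx8192m -XX:NewRatio=2 -XX:SurvivorRatio=8.
# (Assumes the 8 GB heap from the tests above; adjust HEAP_MB for your server.)
HEAP_MB=8192
NEW_RATIO=2       # young:old = 1:2 -> young = heap / (NewRatio + 1)
SURVIVOR_RATIO=8  # eden:survivor = 8:1 -> survivor = young / (SurvivorRatio + 2)
YOUNG=$((HEAP_MB / (NEW_RATIO + 1)))
SURVIVOR=$((YOUNG / (SURVIVOR_RATIO + 2)))
EDEN=$((YOUNG - 2 * SURVIVOR))
echo "young=${YOUNG}MB eden=${EDEN}MB survivor=${SURVIVOR}MB (x2)"
```

With these values the young generation works out to roughly 2.7 GB, with each survivor space about 273 MB.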
  4. Issue: I have a report with a subreport, but when I run or compile the subreport it does not create a .jasper file to use in the main report.

Resolution: In Jaspersoft Studio select the menu item Project > Build Automatically. Now Studio will create the .jasper file in the same location as the .jrxml file.
  5. Issue: When creating an object, e.g., an organization, using rest_v2, ehcache does not get updated across the cluster.

Resolution: First, IP multicast must be enabled on each server in the cluster. The configuration differs between the two main Linux flavors.

Ubuntu:
- Add or uncomment the following lines in /etc/sysctl.conf:
  net.ipv4.icmp_echo_ignore_broadcasts=0
  net.ipv4.ip_forward=1
- Look up the network device used for the multicast traffic by typing ifconfig. You'll get a list of devices with their names on the left, typically eth[0-9], e.g., eth0. Remember the name of the device; you'll use it in the next step.
- Add this line to /etc/network/interfaces (substituting your device name):
  up route add -net 224.0.0.0 netmask 240.0.0.0 dev eth0
- Reboot the servers.

Fedora:
- Add or uncomment the following lines in /etc/sysctl.conf:
  net.ipv4.icmp_echo_ignore_broadcasts=0
  net.ipv4.ip_forward=1
- Look up the network device used for the multicast traffic by typing ifconfig. Remember the name of the device; you'll use it in the next step.
- Create or edit the file /etc/sysconfig/network-scripts/route-eth[0-9] on each server and add the following lines:
  GATEWAY0=0.0.0.0
  NETMASK0=240.0.0.0
  ADDRESS0=224.0.0.0
  If there is already a "0" appended to the configuration as above, then append "1" to the elements.
- Reboot the servers.

To test the network settings:
- Type "cat /proc/sys/net/ipv4/ip_forward". This must return 1.
- Type "cat /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts". This should return 0.
- Type "ping 224.0.0.1 | grep <IP address of other server in cluster>". You should see the IP address.
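The kernel-setting checks above can be bundled into a small script run on every node after the reboot. This is a sketch; the /proc paths are the standard Linux locations, and the helper name check_sysctl is my own invention for illustration.

```shell
# Sketch: verify the multicast-related kernel settings on a node.
# check_sysctl is a hypothetical helper; the /proc paths are standard Linux.
check_sysctl() {
  # $1 = /proc entry to read, $2 = expected value
  actual=$(cat "$1" 2>/dev/null)
  if [ "$actual" = "$2" ]; then
    echo "OK: $1 = $actual"
  else
    echo "FAIL: $1 = '$actual' (expected $2)"
  fi
}
check_sysctl /proc/sys/net/ipv4/ip_forward 1
check_sysctl /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts 0
```

Any FAIL line means the corresponding /etc/sysctl.conf change did not take effect on that node.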
Modify the ehcache configuration files: Add the "hostName" property to the cacheManagerPeerProviderFactory and cacheManagerPeerListenerFactory beans in the following files:

WEB-INF/ehcache.xml
WEB-INF/ehcache_hibernate.xml
WEB-INF/classes/ehcache_hibernate.xml

The configuration will look like this:

<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="hostName=172.17.10.124,peerDiscovery=automatic,multicastGroupAddress=228.0.0.1,multicastGroupPort=4446,timeToLive=1"/>

<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=172.17.10.124,port=40011,remoteObjectPort=40012,socketTimeoutMillis=120000"/>

Restart the servers.

To test the configuration, first create an organization using the JasperReports Server UI. In the following example the organization is testorg1. Using a REST client, send the following URL to any other node in the cluster:

http://172.17.10.125:8080/jasperserver-pro-561/rest_v2/organizations/testorg1/roles

You should receive a 204 No Content HTTP response (there are no roles in the organization).

Note: In versions 5.x there is a bug that prevents the automatic replication of ehcache across the cluster. This is fixed in version 6.0. There is a patch for 5.5, 5.6, and 5.6.1; open a case and request the patch from the support engineer.
  6. Issue: Using a Redshift datasource in a Virtual Datasource (VDS) in JasperReports Server v5.6.0 produces the following error when opening a Domain Designer using the VDS:

ERROR BaseJdbcMetaDataFactoryImpl,http-bio-7560-exec-7:359 - Cannot get database meta info : /public/Samples/Data_Sources/Virtual_DB_Tester
org.teiid.jdbc.TeiidSQLException: Error trying to obtain metadata information for the tables that match %: TEIID30489 Unable to load metadata for VDB name.

Resolution: Virtual data sources are based on the Teiid engine to handle multiple data sources and combine the results from them. In JasperReports Server v5.6 the Teiid library was upgraded to the latest version, but an unforeseen consequence of this is that Teiid is unable to get the foreign keys from the database metadata using the Postgres driver, thus producing the above error. Since JasperReports Server does not use this metadata, the workaround is to configure the VDS to not retrieve the metadata. Search for importPropertyMap in applicationContext-virtual-data-source.xml, uncomment it, and add the following map:

<property name="importPropertyMap">
    <map>
        <entry key="REDSHIFT_DS_NAME">
            <map>
                <entry key="importer.importKeys" value="false"/>
                <entry key="importer.importForeignKeys" value="false"/>
                <entry key="importer.importIndexes" value="false"/>
                <entry key="importer.importStatistics" value="false"/>
            </map>
        </entry>
    </map>
</property>

Replace REDSHIFT_DS_NAME with the name of your Redshift datasource.

Note: If you created an alias for the Redshift datasource in the VDS, you must use the alias instead of the datasource name in the entry key.
  7. Issue: A customer reported that while creating an Ad Hoc crosstab report using a table with only 150,000 records they encountered an OutOfMemory exception with a 4 GB heap. Checking in the database, they found that the table consumed only 6.5 MB of data, so what went wrong?

Resolution: When you use a high-cardinality field (a field with a large number of unique values) in a crosstab as a group, there is the possibility of using a lot of memory. The large amount of memory is not storing the actual data values -- it's used for data structures that take part in crosstab calculations. This data is only generated when you use the field for grouping. By default, numeric fields are added to the crosstab as measures, which do not incur extra memory, but you can change any measure to a field (menu option "use as field") so that it can be used as a group. However, if you have a high-cardinality field of string type, it can be added as a group directly, and you may see this problem.

Here are a number of things that can be done to reduce the memory usage:
- Don't include high-cardinality string fields in the domain if they're not really needed; working with huge value lists can be cumbersome, so you may want to examine the need for working with these particular fields.
- Use domain security at the column level so that only experienced users can get access to the fields.
- Find another way to get the results from fields with high cardinality, perhaps by defining a calculated field based on the high-cardinality field.
- Change the configuration of the baseCategorizer bean to reduce maxMembers. In applicationContext-catFactory.xml the baseCategorizer bean limits the size of a dimension, currently set to 100000. If the number of values exceeds this limit, the rest of the values are grouped in a node called "Other". By decreasing this value, the user gets some protection against enormous crosstabs.
<bean id="baseCategorizer" abstract="true" class="com.jaspersoft.commons.dimengine.Categorizer">
    <property name="maxMembers" value="100000"/>
</bean>
  8. Hmm... The attachment seems to have gone away. Here is a link to instance types with the same information: http://aws.amazon.com/ec2/instance-types/.
  9. I have attached a file with information about AWS instance sizes. I would say that the minimum you will need is an m1.large with 7.5 GB RAM. BTW, AWS is moving to the M3 instances, which you should consider for yourself. Here's what AWS says about it: "M3 instances provide better, more consistent performance than M1 instances for most use-cases. M3 instances also offer SSD-backed instance storage that delivers higher I/O performance. M3 instances are also less expensive than M1 instances. Due to these reasons, we recommend M3 for applications that require general purpose instances with a balance of compute, memory, and network resources." http://aws.amazon.com/ec2/previous-generation/
  10. You have only 4 GB RAM on an m1.medium instance, which is minimal for large data sets like yours. Most of the OOMs (OutOfMemoryError) before today are in the PermGen space, so you will have to increase that; you probably have 512M at this time, so increase it to 1024M. Starting today the OOMs are all in the heap space, which is what caused the server to freeze. You probably have only a 2 GB maximum heap, which is way too small to handle 11M-row result sets. It looks like you will have to move to a larger EC2 instance with enough RAM to handle your needs.
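For reference, both limits are raised via JVM options, typically in Tomcat's setenv.sh. This is a sketch with example values only: the figures below mirror the numbers discussed above, and -Xmx should ultimately be sized to whatever RAM your instance has.

```shell
# Sketch: raise PermGen to 1024m and the maximum heap to 4 GB (example values
# mirroring the discussion above; size -Xmx to your instance's RAM).
# Typically placed in <tomcat>/bin/setenv.sh.
export JAVA_OPTS="$JAVA_OPTS -Xms2048m -Xmx4096m -XX:PermSize=512m -XX:MaxPermSize=1024m"
```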
  11. That depends on which version and which license. For the community version, not out of the box, though it would not be difficult to implement if you have programming chops. For the pro version you have to have auditing enabled in your license; then you can use the audit reports to see when the last time a report was run.
  12. OK, your Tomcat is running but the jasperserver application likely did not launch correctly. 1. Check <tomcat-install-dir>/logs/catalina.out and look for any errors on startup. 2. Enable the Manager app or PSI Probe (my preference, see http://community.jaspersoft.com/wiki/psi-probe-replacement-tomcat-manager) and check to see if the jasperserver app is running. Let us know what you find.
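A quick way to do step 1 is to grep the log for the usual failure markers. A minimal sketch: the demo log written below is fabricated for illustration, and in practice you would point scan_startup_errors at <tomcat-install-dir>/logs/catalina.out.

```shell
# Sketch: scan a Tomcat log for startup problems. The demo file is fabricated;
# point the function at <tomcat-install-dir>/logs/catalina.out in practice.
scan_startup_errors() {
  # prints line-numbered matches for common failure markers
  grep -niE "severe|error|exception" "$1"
}
printf 'INFO: Server startup in 4232 ms\nSEVERE: Context [/jasperserver] startup failed due to previous errors\n' > /tmp/demo_catalina.out
scan_startup_errors /tmp/demo_catalina.out
# prints: 2:SEVERE: Context [/jasperserver] startup failed due to previous errors
```

A SEVERE line referencing the jasperserver context usually pinpoints why the application did not deploy.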
  13. PSI (Greek letter, pronounced 'sai') Probe is an open source fork of Lambda Probe, which has been inactive since 2006. PSI Probe does everything that Tomcat Manager does plus much, much more. Installation is as simple as adding the war file to the Tomcat webapps directory, adding roles to tomcat-users.xml, and then restarting the server. Installation details can be found here: https://code.google.com/p/psi-probe/wiki/InstallationApacheTomcat. Probe is also compatible with the JBoss and WSO2 application servers.

Once installed and running, Probe is accessed using a URL similar to http://localhost:8080/probe. You will arrive at the home page with a list of applications and their current status. Click on the jasperserver-pro application and you will see the Summary page with a number of tabs on the right side of the page. The Sessions page shows all active sessions; click on a session and you will see details about it, including the ability to kill the session. Click on the Logs tab and you can examine a log and even tail one of the log files in real time. The Threads tab lists all active threads and provides information about their state, plus how many times they have been in the Waiting and Blocked states. A very useful feature is the Memory utilization page under the System Information tab. Here you can monitor memory utilization in the various JVM memory spaces. The Quick check tab provides a sanity check on the application server's health with respect to running out of database connections, running out of memory, or losing access to resources on the file system.
Quick check will:
- Scan all available data sources and generate a maximum usage score for them.
- Allocate one megabyte of memory into a byte array as an attempt to push the memory usage over the high watermark.
- Create and then delete 10 files in a temporary directory.

Quick check will report failure when:
- At least one of the declared data sources is 100 percent used.
- A memory allocation test (1 MB) generates an OutOfMemory exception.
- A file-creation test encounters an IOException.

For a full list of features, FAQ, downloads and more go to https://code.google.com/p/psi-probe/w/list.
  14. Issue: Logging long-running queries means having to tediously parse the logs and make sense of what's there; see http://community.jaspersoft.com/wiki/logging-long-running-queries-postgres-and-mysql-databases. Beginning with PostgreSQL version 8.4, pg_stat_statements was added to track metrics for queries, such as the number of times a query was called, the total number of rows retrieved by a query, the total time spent in a statement, and more. So now, everything is in the database!

Resolution: Enabling pg_stat_statements
- Su to user postgres.
- Navigate to the postgres data directory (see the long-running query article above to find the location for various systems).
- Stop the database server:
  pg_ctl stop -D /var/lib/pgsql/data -m fast
- Open postgresql.conf and search for "shared_preload_libraries". Uncomment the line and add 'pg_stat_statements' inside the single quotation marks. The line will look like:
  shared_preload_libraries = 'pg_stat_statements' # (change requires restart)
- Save the file and start the database:
  pg_ctl start -D /var/lib/pgsql/data
- Connect to the repository database using a SQL client and run:
  CREATE EXTENSION pg_stat_statements;

This will create the pg_stat_statements view, and you can run a variety of queries on it, such as:

SELECT (total_time / 60) AS total_minutes,
       total_time,
       (total_time / calls) AS average_time,
       calls,
       query
FROM pg_stat_statements
ORDER BY 1 DESC
LIMIT 100;

SELECT count(*), query
FROM pg_stat_statements
GROUP BY 2
ORDER BY 1 DESC
LIMIT 10;

Note that prior to 9.2 total_time was measured in seconds; from 9.2 on it is measured in milliseconds.