mdahlman
Members · 1,332 posts

Everything posted by mdahlman

  1. You need to post this as a new question. Please don't post it as an answer.
  2. Maybe you'll need to add some sample data to make it clear. You have two datasets that don't match. It's not clear to me what the BI tool could do.
  3. Great! Note that the MongoDB plugin is already available in JRS 4.7 and iReport 4.7, so you don't need to install it separately.
  4. You need to add some more details for someone to be able to answer. Why can't you run a single query that is simply "query1 union query2"?
  5. You need to install and configure Hive before you can use Hive. You'll find documentation on the Apache or Cloudera sites (or whatever distribution you are using).
  6. In general, locales are not used like that. You set the locale, then the UI and the reports etc. are rendered using that locale. But I suppose you could achieve it with a little bit of JavaScript. You should update your question with a more precise example of a link you want to conditionally hide.
  7. I would try something like this:
      ( $F{Field1}.getTime() == $F{Field2}.getTime() ) ? "some string" : "another string"
      Note that your current example is closer to this:
      ( some comparison ) ? "some string" : some_date
      That's bound to cause problems. Decide whether you really want a String or a Date after the comparison is made.
  8. iReport is installed on a developer's machine, so it can always see all folders. But if you're thinking of limiting which folders someone can see in JasperReports Server, then you just need to set permissions appropriately. Users will only see what they are allowed to see, and that applies whether they are using the web UI or the JasperReports Server plugin from iReport.
  9. How did you deploy the .jar to JRS? Do you get any error message?
  10. You should look up Chart Themes and Chart Customizers. It's not clear that you'll be able to do what you're intending to do. You'll need to provide quite a bit more detail to get a solid answer.
  11. I cannot think of a simple solution. Input controls don't have the concept of a default value. JasperReports Server gives them a default value by taking the default value from the corresponding parameter in the .jrxml. That's good enough in many cases, but it doesn't offer much for your case. You should certainly log an issue on the tracker. Replacing the built-in logic for getting the default values with your own logic is surely possible, but I suspect it's a big customization.
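      For reference, the default that JasperReports Server picks up is the defaultValueExpression of the matching parameter in the .jrxml. A minimal sketch (the parameter name and value are just examples):

          <parameter name="Country" class="java.lang.String">
              <defaultValueExpression><![CDATA["USA"]]></defaultValueExpression>
          </parameter>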
  12. The error doesn't seem to say that it cannot find jasperreports_extension.properties. It cannot instantiate the query executer. I guess the problem is more along the lines of some other dependency that is missing. You don't need js-hive-datasource-1.1.jar. Remove that and let us know if that fixes it.
  13. "Can Jaspersoft Studio support HDFS as a data source?" No. "Who can tell me?" Me. But... as Massimo mentions, there is a built-in connector for Hadoop Hive. Hive converts a SQL query into a MapReduce job which uses HDFS files as its source. In many cases this is a better solution; it's far more flexible. In this sense we could say the answer is "Yes", but it imposes the constraint of using Hive. Also, I have seen custom data adapters for file data sources. These included files accessed via NTFS, Samba, HDFS, and others. So it's certainly possible to achieve this. Perhaps it will become a standard built-in data source for Jaspersoft Studio and JasperReports Server someday. Also, Jaspersoft ETL has connectors for HDFS. So you can use this to get files from HDFS and put them somewhere else (local file system, database, etc.) for use by Jaspersoft Studio. (It works in the reverse direction for loading data into HDFS as well.)
  14. If you aren't using Hadoop Hive, then consider removing js-hive-datasource-1.1.jar. Maybe that will solve your error.
  15. Please edit your question so that it includes a question.
  16. Don't use stretchType="RelativeToTallestObject"; it's very unlikely that you want that for your field. Yes, SansSerif is likely to render differently when JRS runs on different machines. SansSerif is a logical Java font, and it gets mapped to different physical fonts on different machines. This happens especially when you're using both Windows and Linux. Use Font Extensions; they exist to solve exactly this problem. If you're using SansSerif, then you probably aren't overly concerned about which font to use. DejaVu is a good choice since it already ships as a Font Extension. Use that. (But you could package any font into a Font Extension and get the result you need.)
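      For example, pointing a text element at the DejaVu Font Extension is one line in the .jrxml (the size and element context are just examples):

          <textElement>
              <font fontName="DejaVu Sans" size="10"/>
          </textElement>

      Because "DejaVu Sans" resolves through the Font Extension rather than through the JVM's logical-font mapping, it renders the same on Windows and Linux.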
  17. I have groups in my tables, and the sorting and filtering work fine. Can you add details about how your table is structured?
  18. I wrote an article about JasperReports Server and Hibernate. It should point you in the right direction. It uses that same samples/customDataSource as a starting point... but then it goes much further with JRS.
  19. I'm not sure what explains that "No connection". What versions of iReport and Cassandra are you using?
  20. Normally it's best to start with iReport. I'm not sure why that connection is not working. Once that's working, then you can get the connection set up in JasperReports Server. The steps needed are listed here:
      http://community.jaspersoft.com/wiki/cassandra
      http://community.jaspersoft.com/wiki/jaspersoft-cassandra-jrs
      A few links were broken and items mangled in the move to the new community website yesterday, but we're looking into them. Basically, you just unzip that zip file and copy everything into the JRS directory. Match folders with the same names so you get a new file in bundles and several .jars in the lib directory. (The one you mention is necessary but insufficient.) You also need to remove the Hive jar files because of a Thrift conflict. I swear that info was on the wiki page... but I don't see it now. To be precise, here's how I "remove" the files on a Linux system:
      mv WEB-INF/lib/hive-common-0.8.1.jar WEB-INF/lib/hive-common-0.8.1.ja_
      mv WEB-INF/lib/hive-exec-0.8.1.jar WEB-INF/lib/hive-exec-0.8.1.ja_
      mv WEB-INF/lib/hive-jdbc-0.8.1.jar WEB-INF/lib/hive-jdbc-0.8.1.ja_
      mv WEB-INF/lib/hive-metastore-0.8.1.jar WEB-INF/lib/hive-metastore-0.8.1.ja_
      mv WEB-INF/lib/hive-service-0.8.1.jar WEB-INF/lib/hive-service-0.8.1.ja_
  21. The MongoDB connector works well with authentication. That applies to iReport and to JasperReports Server. Can you post your version numbers?
  22. This article was written with MongoDB in mind. But it should apply equally well to your JSON data source. Does that solve it for you?
  23. roshni, you'll get a better response on the JasperServer forum (instead of the JasperAnalysis forum where this is posted). The front page of the Big Data Project links straight to it. a) Cassandra 1.0. b) You write the parameterized CQL in a report definition. As the CQL JDBC driver improves, it should be possible to link Jaspersoft's metadata layer (Data Domains) to this, but it's not possible yet. Refer to the Cassandra Connector documentation for more details. Bonus question: "how can one publish the reports..." Use JasperReports Server. It's downloadable from jasperforge.org or jaspersoft.com (Community Edition or Professional Edition respectively).
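      To illustrate (b), here's a sketch of what parameterized CQL looks like in a report's query string; the table and column names are made up:

          select customer_id, customer_name, balance
          from customers
          where customer_id = $P{CustomerID}

      $P{CustomerID} is a standard JasperReports parameter; the connector substitutes its value when the report runs.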
  24. Log4j
      Log4j is a very widely used Java library that enables logging functionality. It allows an administrator to easily configure logging details such as the logging format and the use of log files (a single file, a different file each day, a file until it reaches a certain size, etc.). By default, logs appear in jasperserver.log (<webapps>/jasperserver-pro/WEB-INF/logs).
      There are two ways to manipulate the log4j properties.
      Online/Temporary: makes temporary changes to the log4j properties. The settings take effect immediately and are reset when JasperReports Server is restarted. As a privileged user, browse to http://<hostname>:8080/jasperserver-pro/log_settings.html and make your desired changes. In JasperReports Server v4.0 and above, simply log in as superuser and click Manage -> Log Settings.
      Offline/Permanent: edit log4j.properties, located at <webapps>/jasperserver-pro/WEB-INF/log4j.properties. Make sure to restart or reload the application server (e.g. Tomcat) for any changes in the properties file to take effect.
      If you are troubleshooting an issue that prevents JasperReports Server from starting, you won't be able to use the JasperReports Server logger. Refer to the application server log files instead; for Tomcat 5.5 these are found in <tomcat>/logs.
      JasperReports Server log4j recommendations
      Normally you enable a particular logger by uncommenting the sample line included in log4j.properties. When debugging something difficult, the sledgehammer approach is to set log4j.rootLogger (the last line in log4j.properties) to 'debug' instead of 'warn'. This will probably give you what you need... but it will certainly give you many, many things that you don't need. Use sledgehammers with care.
      Selected log4j.properties notes
      The listing below is based on log4j.properties from JasperReports Server Enterprise. Most logger properties are commented out by default. It is not comprehensive; it provides comments (marked with "=>") on some existing options and documents some available loggers which do not appear by default (in either commented or uncommented form).
      log4j.rootLogger=WARN, stdout, fileout
          => set global defaults

      #log4j.logger.org.springframework.aop.framework.autoproxy=DEBUG, stdout, fileout
      #log4j.logger.org.springframework.aop.framework.autoproxy.metadata=DEBUG, stdout, fileout
      #log4j.logger.org.springframework.aop.framework.autoproxy.target=DEBUG, stdout, fileout
      #log4j.logger.org.springframework.transaction.interceptor=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.intercept=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.intercept.method=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.intercept.web=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.afterinvocation=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.acl=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.acl.basic=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.taglibs.authz=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.ui.basicauth=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.ui.rememberme=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.ui=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.ui.rmi=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.ui.httpinvoker=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.util=DEBUG, stdout, fileout
      #log4j.logger.org.acegisecurity.providers.dao=DEBUG, stdout, fileout
      #log4j.logger.org.springframework.webflow=DEBUG, stdout, fileout

      log4j.appender.stdout=org.apache.log4j.ConsoleAppender
      log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
      log4j.appender.stdout.layout.conversionPattern=%d{ABSOLUTE} %5p %c{1},%t:%L - %m%n
      log4j.appender.fileout=org.apache.log4j.RollingFileAppender
      log4j.appender.fileout.File=${jasperserver.root}/WEB-INF/logs/jasperserver.log
      log4j.appender.fileout.MaxFileSize=1024KB
      log4j.appender.fileout.MaxBackupIndex=1
      log4j.appender.fileout.layout=org.apache.log4j.PatternLayout
      log4j.appender.fileout.layout.conversionPattern=%d{ABSOLUTE} %5p %c{1},%t:%L - %m%n
      log4j.appender.jasperanalysis=org.apache.log4j.RollingFileAppender
      log4j.appender.jasperanalysis.File=${jasperserver.root}/WEB-INF/logs/jasperanalysis.log
      log4j.appender.jasperanalysis.MaxFileSize=1024KB
      log4j.appender.jasperanalysis.MaxBackupIndex=1
      log4j.appender.jasperanalysis.layout=org.apache.log4j.PatternLayout
      log4j.appender.jasperanalysis.layout.conversionPattern=%d{ABSOLUTE} %5p %c{1},%t:%L - %m%n

      #log4j.logger.mondrian.mdx=DEBUG,jasperanalysis
      #log4j.logger.mondrian.sql=DEBUG,jasperanalysis
      #log4j.logger.jasperanalysis.drillthroughSQL=DEBUG,jasperanalysis
      #log4j.logger.com.tonbeller.jpivot.xmla.XMLA_SOAP=debug
      #log4j.logger.com.jaspersoft.jasperserver.war.xmla.XmlaHandlerImpl=debug
      #log4j.logger.com.jaspersoft.jasperserver.war.xmla.XmlaServletImpl=debug
      #log4j.logger.mondrian.xmla.XmlaServlet=debug
      #log4j.logger.mondrian.xmla.impl.DefaultXmlaServlet=debug
      #log4j.logger.mondrian.xmla.XmlaHandler=debug
      #log4j.logger.com.tonbeller.jpivot.mondrian.MondrianDrillThrough=debug
      #log4j.logger.com.tonbeller.jpivot.mondrian.MondrianModel=debug
      #log4j.logger.com.jaspersoft.jasperserver.war.OlapPrint=debug
      #log4j.logger.com.jaspersoft.jasperserver.war.PrintServlet=debug
      #log4j.logger.com.jaspersoft.jasperserver.war.ChartComponent=debug
      #log4j.logger.com.jaspersoft.jasperserver.war.MondrianDrillThroughTableModel=debug
      #log4j.logger.com.tonbeller.jpivot.olap.query.ExpandAllExt=debug
      #log4j.logger.com.tonbeller.wcf.controller.RequestFilter=debug
      #log4j.logger.mondrian.i18n.LocalizingDynamicSchemaProcessor=debug
      #log4j.logger.mondrian.rolap.sql.SqlQuery=debug

      #log4j.logger.net.sf.jasperreports.engine.query.JRJdbcQueryExecuter=debug
          => This is among the most commonly used items. It's useful for debugging report queries.

      log4j.appender.fileout.File=${jasperserver-pro.root}/WEB-INF/logs/jasperserver.log
      log4j.appender.jasperanalysis.File=${jasperserver-pro.root}/WEB-INF/logs/jasperanalysis.log

      #log4j.logger.org.springframework.orm.hibernate3.HibernateCallback=debug
      #log4j.logger.org.springframework.orm.hibernate3.HibernateTemplate=debug
      #log4j.logger.org.springframework.orm.hibernate3.support.HibernateDaoSupport=debug
      #log4j.logger.com.jaspersoft.jasperserver.api.metadata.common.service.impl.HibernateDaoImpl=debug
      #log4j.logger.com.jaspersoft.jasperserver.api.metadata.common.service.impl.hibernate.HibernateRepositoryServiceImpl=debug
          => This useful logger is not included in log4j.properties in JasperServer 3.7 by default. You have to add it manually.

      #log4j.logger.com.jaspersoft.ji.util.profiling.service.ProfilingServiceImpl=debug
      #log4j.logger.org.quartz=debug
          => This useful logger is not included in log4j.properties in JasperServer 3.7 by default. You have to add it manually.

      #log4j.logger.com.jaspersoft.jasperserver.ws.axis2=debug
          => This useful logger is not included in log4j.properties in JasperServer 3.7 by default. You have to add it manually.

      #log4j.logger.com.jaspersoft.ji.util.profiling.service.ProfilingAspect=debug
      #log4j.logger.com.jaspersoft.ji.util.profiling.service.ProfilingRecorder=debug
      #log4j.logger.com.jaspersoft.ji.util.profiling.service.GlobalProfilingState=debug
      log4j.logger.mondrian.olap.MondrianProperties=error
      log4j.logger.net.sf.jasperreports.engine.xml=error
      #log4j.logger.com.jaspersoft.ji.ja.i18n.I18NAspect=debug
      #log4j.logger.com.jaspersoft.ji.adhoc=debug
      #log4j.logger.com.jaspersoft.commons.datarator=debug
      #log4j.logger.com.jaspersoft.commons.semantic.datasource.impl.SemanticLayerSecurityResolverImpl=debug
      #log4j.logger.com.jaspersoft.commons.semantic.dsimpl.JdbcTableDataSet=debug
      #log4j.logger.com.jaspersoft.commons.util.JSControlledJdbcQueryExecuter=debug
      #log4j.logger.com.jaspersoft.commons.semantic.dsimpl.JdbcBaseDataSet=debug
          => This useful logger is not included in log4j.properties in JasperServer 3.5 by default. You have to add it manually.

      #log4j.logger.com.jaspersoft.jasperserver.api.metadata.user.service.impl.ObjectPermissionServiceImpl=debug
          => This useful logger is not included in log4j.properties in JasperServer 3.7 by default. You have to add it manually.

      log4j.appender.profile=com.jaspersoft.ji.util.profiling.service.ProfilingAppender
      log4j.appender.profile.layout=org.apache.log4j.PatternLayout
      log4j.appender.profile.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
      log4j.rootLogger=warn, stdout, fileout, profile

      The above loggers represent most of the loggers available in JasperReports Server.
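      As a worked example of the "uncomment a sample line" approach described above, enabling the most commonly used logger (report-query debugging) means changing one line in log4j.properties from

          #log4j.logger.net.sf.jasperreports.engine.query.JRJdbcQueryExecuter=debug

      to

          log4j.logger.net.sf.jasperreports.engine.query.JRJdbcQueryExecuter=debug

      and then restarting or reloading the application server. The same change made on the Log Settings page takes effect immediately but lasts only until the next restart.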
  26. Language Reference
      The Jaspersoft HBase Query Language is a JSON-style declarative language for specifying what data to retrieve from HBase. The connector converts this query into the appropriate API calls and uses the HBase REST Server interface (Stargate) to query the HBase instance.

      {
        # The following parameters are mandatory
        "tableName" : "myTable",
        "deserializerClass" : "myDeserializer",

        # The following parameters are optional
        "filter" : { },
        "startRow/endRow" : { },
        "columnList" : { },
        "sortFields" : { },
        "rowsToProcess" : { },
        "batchSize" : { },
        "idField" : { },
        "alias" : { },
        "qualifiersGroup" : { }
      }

      tableName
      Specifies the table name. Exactly one table must be specified.
      # Hard-coded table name
      "tableName" : "myTable"
      # Table name specified as a String Parameter
      "tableName" : $P{myTableParam}

      deserializerClass
      Specifies how the data will be deserialized into Java objects that the report engine can process. HBase has no data type metadata; it stores all values simply as arrays of bytes. For a query and a report engine to make sense of these arrays of bytes, there must be some definition of how the bytes will be interpreted. If you are already using HBase, then you are already doing this (either explicitly or implicitly). The HBase connector ships with two sample deserializer classes. The DefaultDeserializer uses Java's built-in serialization; if you have inserted data into HBase using Java serialization, choose this deserializer class. The ShellDeserializer correctly interprets data that has been inserted using the HBase shell. It includes logic to determine whether a byte array represents a Long (Integer), Double, or String. This is practical for using the Jaspersoft HBase connector with HBase tutorials. More generally, HBase users have their own system for serializing data. To reuse your current Serialization/Deserialization (SerDe) logic, you must create a class which implements the Jaspersoft Deserializer interface (see the sketch below). The deserialization may be based solely on interpreting the array of bytes, or it may take into account the table name, column family, or qualifier.
      # Java's built-in serialization
      "deserializerClass" : "com.jaspersoft.hbase.deserialize.impl.DefaultDeserializer"
      # Retrieving data input using the HBase shell
      "deserializerClass" : "com.jaspersoft.hbase.deserialize.impl.ShellDeserializer"
      # Using a custom deserializer class
      "deserializerClass" : "com.MyCompany.MyHBaseDeserializer"
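      For illustration only, here is roughly what such a custom class might look like. This is a sketch, not the connector's actual contract: the interface name and method signature below are assumptions, so check the connector's Deserializer interface for the real one.

          // HYPOTHETICAL sketch of a custom deserializer for the Jaspersoft HBase connector.
          // The real interface name and method signature are defined by the connector and may differ.
          package com.MyCompany;

          import java.nio.charset.StandardCharsets;

          public class MyHBaseDeserializer /* implements the connector's Deserializer interface (assumed) */ {

              // Decide how to decode a raw HBase byte array, optionally using the
              // column family and qualifier names, as the reference text describes.
              public Object deserialize(String family, String qualifier, byte[] bytes) {
                  if (qualifier != null && qualifier.endsWith("_amount")) {
                      // Assumption for this sketch: amounts were stored as big-endian longs.
                      long value = 0L;
                      for (byte b : bytes) {
                          value = (value << 8) | (b & 0xFF);
                      }
                      return value;
                  }
                  // Default for this sketch: treat everything else as a UTF-8 string.
                  return new String(bytes, StandardCharsets.UTF_8);
              }
          }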
      filter
      The filter provides methods for limiting what data is returned from the specified table. In principle any valid HBase filter may be used; in practice not all filter types are relevant to Business Intelligence queries. Refer to the comprehensive HBase filter documentation here:
      http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/Filter.html
      Valid compareOp values: EQUAL, GREATER, GREATER_OR_EQUAL, LESS, LESS_OR_EQUAL, NO_OP, NOT_EQUAL
      Reference: http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/CompareFilter.CompareOp.html
      Common comparators: BinaryComparator, SubstringComparator, RegexStringComparator
      Complete list: http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/WritableByteArrayComparable.html

      Comparator syntax:
      # Use a binary value
      "BinaryComparator" : { "value" : "myBinaryValue" }
      # Use a regular expression
      "RegexStringComparator" : { "expr" : "myExpression" }
      # Search for a substring
      "SubstringComparator" : { "substr" : "myString" }

      Commonly used filters include the following:

      SingleColumnValueFilter
      Filters rows based on the value in a specified column. For example, return only Canadian customers:
      "filter" : {
        "SingleColumnValueFilter" : {
          "family" : "column_family_1",
          "qualifier" : "billing_address_country",
          "compareOp" : "EQUAL",
          "comparator" : { "SubstringComparator" : { "substr" : "Canada" } }
        }
      }
      More information on this filter type: http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/SingleColumnValueFilter.html

      RowFilter
      Filters based on the row key. For example, return only rows whose RowID starts with "2012" and ends with "X":
      "filter" : {
        "RowFilter" : {
          "compareOp" : "EQUAL",
          "comparator" : { "RegexStringComparator" : { "expr" : "2012.*X" } }
        }
      }
      More information: http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/RowFilter.html

      FilterList
      Multiple filters may be used together in a filter list. Filters are "ANDed" together or "ORed" together using MUST_PASS_ALL or MUST_PASS_ONE. Here we return only the rows matching the specified RowID format which are for Canadian customers:
      "filter" : {
        "FilterList" : {
          "operator" : "MUST_PASS_ALL",
          "rowFilters" : [
            { "RowFilter" : {
                "compareOp" : "EQUAL",
                "comparator" : { "RegexStringComparator" : { "expr" : "2012.*X" } } } },
            { "SingleColumnValueFilter" : {
                "family" : "schema",
                "qualifier" : "billing_address_country",
                "dropDependentColumn" : true,
                "compareOp" : "EQUAL",
                "comparator" : { "RegexStringComparator" : { "expr" : "Canada" } } } }
          ]
        }
      }
      More information: http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/filter/FilterList.html

      startRow/endRow
      These parameters may be used as a sort of specialized RowFilter. Performance is better when filtering with these options compared with the equivalent RowFilter. The startRow is the row at which the scanner starts (inclusive). The endRow is the row before which the scanner stops (exclusive).
      # Return only the rows between the specified rowIDs
      # Note: row 1309913959672 is returned; row 1309913959675 is not returned.
      "startRow" : "1309913959672",
      "endRow" : "1309913959675"
      # Return all rows after the specified start row (includes the start row)
      "startRow" : "1309913959672"
      # Return all rows before the specified end row (excludes the end row)
      "endRow" : "1309913959675"

      columnList
      This allows the query to select which fields will be part of the result set. If no columnList is specified, then all fields will be returned. To return only a subset of the available fields, specify a comma-separated list of fields in the form ColumnFamily:Qualifier.
      # Return only these four fields
      # Note: the query will return these four fields plus the row id in the field named "_id_"
      "columnList" : "schema:billing_address_country, schema:billing_address_city, schema:account_type, schema:assigned_user_id"

      sortFields
      Specifies the fields that will be used to sort. Specify a comma-separated list of fields in the form ColumnFamily:Qualifier.
      "sortFields" : "schema:billing_address_country, schema:billing_address_city"

      rowsToProcess
      Sets the number of rows that will be processed to determine the list of fields. This applies only at edit time in iReport; it has no effect on reports when they are executed. The connector uses a default value of 10 records if this option is not specified. If a value of "0" is specified, then the Fields Provider will iterate through all records in the result set.
      "rowsToProcess" : "50"

      batchSize
      This optional parameter determines the size of the batch that retrieves results from HBase per request.
      "batchSize" : 90

      alias
      An optional entry that allows the user to rename fields for better usability. It is expressed as a map where the keys are the aliases and the values are the original field names.
      { "ALIAS_NAME" : "FIELD_NAME" }
      For instance:
      { "street" : "schema|billing_address_street" }

      qualifiersGroup
      This provides an important pivot feature. It makes it possible to transform wide rows of data into multiple "shorter" rows. It groups a set of columns by a regular expression, and it outputs the column names as one field with their corresponding values as another field. The field type of the column names is String, and the type of the values is Object. The syntax is as follows:
      {
        "qualifiersExpression" : <regex expression>,
        "qualifierJrField" : <name of the pivot JR field for the column names>,
        "valueJrField" : <name of the pivot JR field for the column values>
      }
      Query snippet:
      "qualifiersGroup" : {
        "qualifiersExpression" : "street|schema|billing_.*",
        "qualifierJrField" : "billing",
        "valueJrField" : "billingValue"
      }
      Example of transforming data:
      # Original HBase data:
      rowID:row1, order-2012-01-01:$50, order-2012-01-03:$99
      rowID:row2, order-2012-01-01:$25, order-2012-01-02:$66, order-2012-01-07:$130
      # Pivoted result set:
      rowID  fieldName         fieldValue
      row1   order-2012-01-01  $50
      row1   order-2012-01-03  $99
      row2   order-2012-01-01  $25
      row2   order-2012-01-02  $66
      row2   order-2012-01-07  $130
      # This pivoted data can be used more easily for reporting and analysis.

      idField
      By default the rowID for each record is returned in the field $F{_id_}. You may override this name using the idField parameter.
"idField" : "newIDFieldName" Sample queriesBasic Retrieve absolutely everything from a table { "tableName" : "accounts", "deserializerClass" : "com.jaspersoft.hbase.deserialize.impl.DefaultDeserializer", } Filters SingleColumnValueFilter (only customers in Canada) { "tableName": "accounts", "deserializerClass": "com.jaspersoft.hbase.deserialize.impl.DefaultDeserializer", "sortFields": "schema|billing_address_country, schema|billing_address_city", "filter": { "SingleColumnValueFilter": { "family": "schema", "qualifier": "billing_address_country", "compareOp": "EQUAL", "comparator": { "SubstringComparator": { "substr": "Canada" } } } } } Multiple SingleColumnValueFilters (Only a specified customer, only in the last 60 minutes) { "tableName": "transfer", "deserializerClass": "com.jaspersoft.hbase.deserialize.impl.ShellDeserializer", "filter": { "FilterList": { "operator": "MUST_PASS_ALL", "rowFilters": [ { "SingleColumnValueFilter": { "family": "Info", "qualifier": "id", "compareOp": "EQUAL", "comparator": { "SubstringComparator": { "substr": "$P{CUSTOMER}" } } } }, { "SingleColumnValueFilter": { "family": "Info", "qualifier": "time", "compareOp": "GREATER", "comparator": { "BinaryComparator": { "value": "$P{ONE_HOUR_AGO}" } } } } ] } } } Filter and Pivot { "tableName": "accounts", "deserializerClass": "com.jaspersoft.hbase.deserialize.impl.DefaultDeserializer", "filter": { "FilterList": { "operator": "MUST_PASS_ALL", "rowFilters": [ { "QualifierFilter": { "compareOp": "EQUAL", "comparator": { "RegexStringComparator": { "expr": "billing_.*" } } } }, { "SingleColumnValueFilter": { "family": "schema", "qualifier": "billing_address_country", "dropDependentColumn": true, "compareOp": "EQUAL", "comparator": { "RegexStringComparator": { "expr": "Mexico" } } } } ] } }, "qualifiersGroup": { "qualifiersExpression": "street|schema|billing_.*", "qualifierJrField": "billing", "valueJrField": "billingValue" }, "alias": { "street": "schema|billing_address_street", "customerID": "$P{CUSTOMER_NUMBER}" } }