Deploying with Kubernetes

After configuring and building the Docker images, the next step is to configure Kubernetes and Helm charts to deploy the worker pods.

In the git project, jaspersoft-containers/K8s/scalableQueryEngine/helm/ contains the following files and folders:

File or Folder Description

Chart.yaml

Helm chart for the scalable query engine.

values.yaml

Configuration values for the Helm chart.

charts

Folder containing dependencies.

config/jndi.properties

Configuration file for JNDI data sources.

secret/keystore

Folder where you place a copy of the JasperReports Server keystore.

Creating a Secret

The configuration files in this section contain passwords for your server and databases. If you store passwords in files, you should manage your permissions carefully to prevent unwanted access. An alternative is to store passwords in a secret, a separate data structure managed by Kubernetes. For details about secrets, see the Kubernetes documentation.

Use the following command to create a secret containing your passwords. Because shell commands are often stored in a history file, it is best to run these commands from a script:

kubectl create secret generic jrs-credentials --from-literal=appCredentialsSecretName=password
        --from-literal=foodmart.password=password --from-literal=audit.password=password

You can then reference this secret inside the configuration files, for example:

appCredentialsSecretName=jrs-credentials
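In values.yaml, the same reference would look like the following minimal sketch (the property is described under Configuring the Helm Chart):

```yaml
# Reference the Kubernetes secret instead of storing passwords in this file
appCredentialsSecretName: jrs-credentials
```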

Configuring the Helm Chart

The tables in this section describe the properties you can set in the values.yaml file.

General settings:

Property Description

replicaCount

The number of workers to create; this has no effect if autoscaling is enabled (see the autoscaling properties below). The default is 1.

jrsVersion

The version of your JasperReports Server release, by default 8.0.0.

image.tag

Tag of the scalable query engine Docker image, by default 8.0.0.

image.name

Name of the scalable query engine Docker image, which should be scalable-query-engine.

image.pullPolicy

The Docker image pull policy, by default IfNotPresent.

image.PullSecrets

If you have customized your Docker image to pull from a private registry, specify the secret here, otherwise leave null.

image.nameOverride

Overrides the default image name; can be left as "".

image.fullnameOverride

Overrides the full image name; can be left as "".

The data source properties have default values for the foodmart and sugarcrm sample data sources that you can use with the sample dashboards. In production, you should redefine config/jndi.properties for your own data sources as shown in Specifying a JNDI Data Source, and then set the corresponding values here.

Property Description

foodmart.jdbcUrl

The default URL of the foodmart data source.

foodmart.username

The username to access the foodmart data source, by default postgres.

foodmart.password

The password for the foodmart data source user. The default is postgres, but it must be entered in base64 encoded format.

sugarcrm.jdbcUrl
sugarcrm.username
sugarcrm.password

URL, username, and password for the sugarcrm sample data source, in the same format as foodmart. The default username and password are also postgres.

audit.enabled

Whether audit monitoring is enabled on JasperReports Server, by default false. When set to true, workers will attempt to write audit events to the database specified below.

audit.jdbcUrl

The URL of the database for writing audit events, by default this is the same as the server's repository. If you have a split installation with a separate audit database, specify its URL instead.

audit.userName

The username to access the audit database.

audit.password

The password to access the audit database, in base64 encoded format.

appCredentialsSecretName

Instead of storing passwords in this file, you can manually create a secret to store the server password. See Creating a Secret.
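Putting the rows above together, a values.yaml fragment for the sample data sources might look like the following sketch (the hostnames are placeholders, and the exact nesting should be confirmed against the chart's values.yaml; cG9zdGdyZXM= is "postgres" base64-encoded):

```yaml
foodmart:
  jdbcUrl: jdbc:postgresql://repo-host:5432/foodmart   # placeholder host
  username: postgres
  password: cG9zdGdyZXM=        # "postgres", base64-encoded
sugarcrm:
  jdbcUrl: jdbc:postgresql://repo-host:5432/sugarcrm   # placeholder host
  username: postgres
  password: cG9zdGdyZXM=
audit:
  enabled: false
appCredentialsSecretName: null  # or the name of a secret, see Creating a Secret
```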

The following table contains environment properties for the workers. For more information, see the Kubernetes documentation links in the descriptions. There are additional properties in the file templates/app-configmap.yml, mainly for the Spring configuration on the workers.

Property Description

timeZone

The default timezone that the workers use when processing reports, for example "America/Los_Angeles". The time zone names are those supported by java.time.ZoneId, which are defined in the tz database.

securityContext.
capabilities.drop

Drops the Linux host capabilities; the default value is ALL. See the Kubernetes documentation about the security context.

securityContext.
runAsNonRoot

Runs the worker application as non-root user when true (the default).

securityContext.
runAsUser

Specifies the user ID used to run the worker application; the default is 11099.

securityContext.
allowPrivilegeEscalation

Whether to allow the container to have host privileges; the default is false.

Service.type

The service type for workers should be ClusterIP.

Service.port

The service port should be set to 8080.

serviceAccount.enabled

Enables the service account on the workers; true by default.

serviceAccount.annotations

Annotations for the service account; empty {} by default.

serviceAccount.name

Name of the service account, by default query-engine.

rbac.create

Whether to create a role to use role-based access control; true by default.

rbac.name

Name of the role; the default should be query-engine-role.

extraEnv.javaopts

String to add JAVA_OPTS to the worker's environment variables.

extraEnv.normal

Additional key=value pairs to add to environment variables; there are none by default (null value).

extraEnv.secrets

Specifies environment variables stored in secrets or configmaps; there are none by default (null value).

extraVolumeMounts

Adds volume mounts for storage volumes; empty {} by default.

extraVolumes

Adds storage volumes; empty {} by default.
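Assembled from the defaults described above, the worker environment portion of values.yaml might look like this sketch (the JAVA_OPTS string is an illustrative assumption, not a default; confirm the exact nesting against the chart's values.yaml):

```yaml
timeZone: "America/Los_Angeles"
securityContext:
  capabilities:
    drop:
      - ALL
  runAsNonRoot: true
  runAsUser: 11099
  allowPrivilegeEscalation: false
Service:
  type: ClusterIP
  port: 8080
serviceAccount:
  enabled: true
  annotations: {}
  name: query-engine
rbac:
  create: true
  name: query-engine-role
extraEnv:
  javaopts: "-Xms2g -Xmx3g"   # illustrative heap settings, not a default
  normal: null
  secrets: null
extraVolumeMounts: {}
extraVolumes: {}
```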

Health check properties:

Property Description

healthcheck.enabled

Enables the health check so Kubernetes can detect when workers are busy or down; the default is true. See the Kubernetes documentation on liveness and readiness probes.

healthcheck.
livenessProbe.*

The liveness probe checks whether the worker is blocked or has crashed. It has the following properties:

port - Port of the worker, by default 8080.
initialDelaySeconds - Time after startup when the probe begins checks, by default 120 (2 minutes).
failureThreshold - How many consecutive failed checks before the worker is restarted (or marked unready), by default 24; the total time allowed is therefore 24 times the period of the check.
periodSeconds - How often the check is performed, by default 10 seconds.
timeoutSeconds - How long the probe waits for a response before failing the check, by default 4 seconds.

healthcheck.
readinessProbe.*

The readiness probe checks when the worker is running but unable to process requests. It has the same properties and defaults as the liveness probe, except the value of initialDelaySeconds is 60 (one minute).
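The probe defaults above translate into the following values.yaml sketch (the nesting is an assumption based on the property names; verify against the chart):

```yaml
healthcheck:
  enabled: true
  livenessProbe:
    port: 8080
    initialDelaySeconds: 120
    failureThreshold: 24
    periodSeconds: 10
    timeoutSeconds: 4
  readinessProbe:
    port: 8080
    initialDelaySeconds: 60
    failureThreshold: 24
    periodSeconds: 10
    timeoutSeconds: 4
```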

CPU and memory resources:

Property Description

resources.enabled

Whether or not the resource requests and limits are applied, by default true.

resources.limits.cpu

The most CPUs that each worker is allowed to use, by default 3.

resources.limits.memory

The most memory that each worker is allowed to use, by default 4Gi (gibibytes).

resources.requests.cpu

The minimum number of CPUs guaranteed to be available to the worker, by default 2.

resources.requests.memory

The minimum amount of memory guaranteed to be available to the worker, by default 2Gi (gibibytes).

engineProperties.
sharedCacheExpiration

The length of time that cache contents are valid, by default 20m (minutes). Set this value depending on your dataset size, data update intervals, and repeated report viewing. Longer cache expiration speeds up report display times, but it takes up cache space and may not display instantaneous data.

There is also a known issue: saved report options are not applied if the report has already run with previous values and is still in the cache. If report options are not being applied, lower the cache expiration, for example to 5m. The new values take effect once the previous report expires from the cache and the report is generated again.
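For example, to keep the default resource limits but lower the cache expiration (a sketch; the nesting follows the property names in the table above):

```yaml
resources:
  enabled: true
  limits:
    cpu: 3
    memory: 4Gi
  requests:
    cpu: 2
    memory: 2Gi
engineProperties:
  sharedCacheExpiration: 5m   # lowered from the 20m default so updated report options apply sooner
```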

Ingress load balancer:

Property Description

ingress.enabled

Whether the ingress load balancer is enabled, allowing the cluster to have multiple pods and implement stickiness, by default true.

ingress.hosts.host

Adds the valid DNS hostname to access the scalable query engine, by default null (no value).

ingress.hosts.paths.
path

Application context path, by default /query-engine.

ingress.hosts.paths.
pathType

The path type, by default Prefix.

ingress.tls[0].secretName

Adds TLS secret name to allow secure traffic, by default null (no value).
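A values.yaml sketch for the ingress, with a placeholder hostname and TLS secret name (both are assumptions you must replace with your own values):

```yaml
ingress:
  enabled: true
  hosts:
    - host: query-engine.example.com   # placeholder DNS name
      paths:
        - path: /query-engine
          pathType: Prefix
  tls:
    - secretName: query-engine-tls     # placeholder TLS secret
```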

Redis properties:

Property Description

rediscluster.enabled

Enables the redis cluster for caching, by default true.

rediscluster.externalRedis
Clusteraddress

If you want to use an existing redis cluster, specify its address here, for example redis://redis-cluster:6379. By default, this is empty {} and the redis-cluster.* properties are used to create the redis pods.

rediscluster.externalRedis
Clusterpassword

If you specify an external redis cluster, specify its password here, otherwise empty {} by default.

redis-cluster.nameOverride

If no external redis cluster is given, Kubernetes will create a new one and give it this name for identification, by default query-engine-redis-cluster.

redis-cluster.
cluster.nodes

Number of nodes to create in the redis cluster, by default 6.

redis-cluster.
persistence.size

Size of each redis node, by default 8Gi (gibibytes).

global.redis.password

Create a password for the redis cluster.
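The two configurations are mutually exclusive: either point at an existing redis cluster, or let the chart create one. A sketch of the second case, using the defaults from the table above and a placeholder password:

```yaml
rediscluster:
  enabled: true
  externalRedisClusteraddress: {}   # leave empty so the chart creates a new cluster
  externalRedisClusterpassword: {}
redis-cluster:
  nameOverride: query-engine-redis-cluster
  cluster:
    nodes: 6
  persistence:
    size: 8Gi
global:
  redis:
    password: change-me   # placeholder; set your own password
```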

Autoscaling properties:

Property Description

autoscaling.enabled

Enables the HorizontalPodAutoscaler (HPA) for the workers; the default is true. The metrics must also be enabled (see metrics.enabled) so they are available for the autoscaler to work.

autoscaling.*

The following properties define the behavior of the autoscaler:

minReplicas - Minimum number of active workers, by default 2.
maxReplicas - Maximum number of active workers, by default 10.
targetCPUUtilizationPercentage - Minimum average CPU load of active workers to create a new worker (scale up), by default 50%.
targetMemoryUtilizationPercentage - Minimum average memory usage on active workers to create a new worker (scale up), by default not specified {}.
scaleDown.stabilizationWindowSeconds - Time to wait with no activity to remove a worker (scale down), by default 300 (5 minutes).

customMetricScaling.
enabled

If you want to implement the custom metrics-based autoscaling, set this to true; the default is false. This enables the Prometheus-based autoscaling that uses the number of queued Ad Hoc tasks for scaling up. It can also be further customized for other metrics, although that is beyond the scope of this document.

scalable-query-engine-scaling.*

Properties for the Prometheus-based autoscaler. The name and function of each property is the same as for autoscaling.*, except for the following:

averageQueuedExecutions - Minimum average queue length on active workers to create a new worker, by default 10.
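For example, standard CPU-based autoscaling with the defaults above (a sketch; confirm the exact placement of scaleDown against the chart's values.yaml):

```yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
  scaleDown:
    stabilizationWindowSeconds: 300
customMetricScaling:
  enabled: false   # set true for Prometheus-based autoscaling; also requires metrics.enabled
```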

The following properties configure the ingress controller that performs the load balancing. It is also possible to configure a different load balancer, such as AWS, by modifying templates/internal-ingress.yaml, but the details are beyond the scope of this document:

Property Description

ingressClass

By default intranet.

kubernetes-ingress.
nameOverride

By default query-engine-ingress.

kubernetes-ingress.
controller.replicaCount

Number of ingress controller replicas, by default 1.

kubernetes-ingress.
controller.service.type

By default LoadBalancer.

kubernetes-ingress.
controller.ingressClass

By default intranet.

kubernetes-ingress.
controller.config.
timeout-connect

By default 30s (seconds).

kubernetes-ingress.
controller.config.
timeout-check

By default 60s (seconds).

kubernetes-ingress.
controller.config.
timeout-client

By default 240s (seconds).

kubernetes-ingress.
controller.config.
timeout-server

By default 240s (seconds).

kubernetes-ingress.
defaultBackend.replicaCount

By default 1.

server.tomcat.
connectionTimeout

Timeout in milliseconds for a Tomcat request, by default 300000 (5 minutes).

Server properties for the workers to access the JasperReports Server instance through the REST API:

Property Description

jrs.server.scheme

Protocol to access JasperReports Server, used to create the server URL, by default http.

jrs.server.host

Hostname used by the workers within the cluster to reach your JasperReports Server instance (behind the ingress load balancer).

jrs.server.port

Port number of your JasperReports Server instance, by default 80.

jrs.server.path

Path in the URL to access the server through the REST API, by default jasperserver-pro/rest_v2.

jrs.server.username

Username to access the server's REST API. By default this is jasperadmin, but you might need to change it if you have multiple organizations.

jrs.proxy.enabled

Enables the proxy for the scalable query engine, by default true.

jrs.proxy.scheme

Protocol for access through the proxy, by default http.

jrs.proxy.host

Hostname used by the workers within the cluster to reach the proxy (behind the ingress load balancer).

jrs.proxy.port

Port number of the proxy, by default 80.

jrs.proxy.path

Path in the URL of the proxy, by default rest_v2.

jrs.proxy.username

Username to reply to the proxy. By default this is jasperadmin, but you might need to change it if you have multiple organizations.

jrs.proxy.timedOut

Timeout in milliseconds when replying to the proxy, by default 30000 (30 seconds).
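A sketch of the server and proxy section built from the defaults above (the hostnames are placeholders for the names used inside your cluster):

```yaml
jrs:
  server:
    scheme: http
    host: jasperserver-host        # placeholder; server hostname inside the cluster
    port: 80
    path: jasperserver-pro/rest_v2
    username: jasperadmin
  proxy:
    enabled: true
    scheme: http
    host: query-engine-proxy       # placeholder; proxy hostname inside the cluster
    port: 80
    path: rest_v2
    username: jasperadmin
    timedOut: 30000
```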

JDBC driver properties for copying them to the workers:

Property Description

drivers.enabled

Whether or not Kubernetes will copy the JDBC driver JARs to the workers, by default true.

drivers.image.tag

Tag of the drivers image to copy from, by default 8.0.0.

drivers.image.name

Name of drivers image, by default null (empty value).

drivers.image.
pullPolicy

Pull policy for the drivers image, by default IfNotPresent.

drivers.jdbcDriversPath

Destination path where JDBC drivers are copied in the workers, by default /usr/lib/drivers.
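A sketch of the drivers section (the image name is an assumption, since the default is null; set it to the drivers image you built during Docker Configuration):

```yaml
drivers:
  enabled: true
  image:
    name: my-jdbc-drivers-image   # placeholder; the drivers image you built
    tag: 8.0.0
    pullPolicy: IfNotPresent
  jdbcDriversPath: /usr/lib/drivers
```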

Logging-related properties; for more information, see Logging and Debugging:

Property Description

metrics.enabled

Enables the metrics for Prometheus-based autoscaling, by default false.

kube-prometheus-stack.
prometheus-node-exporter.
hostRootFsMount

Whether or not the Prometheus node exporter mounts the host root file system, by default false.

kube-prometheus-stack.
grafana.service.type

By default NodePort.

logging.enabled

Enables the Elasticsearch, Fluentd and Kibana (EFK) logging, by default false.

logging.level

Logging level in the workers, by default INFO.

logging.pretty

Logging format in the workers, by default false.

fluentd.imageName

By default fluent/fluentd-kubernetes-daemonset.

fluentd.imageTag

By default v1.12.3-debian-elasticsearch7-1.0.

fluentd.esClusterName

Elasticsearch cluster name, by default elasticsearch.

fluentd.esPort

Elasticsearch port number, by default 9200.

elasticsearch.replicas

Number of pods for Elasticsearch, by default 1.

elasticsearch.
volumeClaimTemplate.
resources.requests.storage

By default 10Gi (gibibytes).

kibana.service.type

By default NodePort.

Specifying a JNDI Data Source

In order to use a JNDI data source, you must specify its JDBC parameters in the file config/jndi.properties. Remove the sample databases, and add one or more JNDI data sources as follows:

Property Description

jndi.dataSources[i].*

Array of data source properties where i is zero-based.

.name

The name of the JDBC database to use with JNDI, in the format jdbc/<jdbc-name>.

.auth

The type of authentication, by default Container.

.factory

The Java factory class to use, by default com.jaspersoft.jasperserver.tomcat.jndi.JSCommonsBasicDataSourceFactory.

.driverClassName

JDBC driver class for this database. The JAR file for this driver must be copied into the Docker image.

.url

JDBC URL to access the database, for example jdbc:postgresql://<hostname>:<port>/<database>.

.username

Database username.

.password

Database user password, or configure the password in a secret and provide its name here.

.accessToUnderlying
ConnectionAllowed

By default true.

.validationQuery

Simple query to test the connection, usually SELECT 1.

.testOnBorrow

By default true.

.maxActive

The maximum number of connections to allocate in the connection pool, for example 100.

.maxIdle

The maximum number of connections to maintain in the pool when they are idle, for example 30.

.maxWait

When all connections are in use, the duration in milliseconds that the pool will let a request wait before returning a timeout. The default is 10000 (10 seconds).
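Putting the rows together, a single illustrative data source entry in config/jndi.properties might look like this (the driver class, URL, and credentials are placeholders for your own database):

```properties
jndi.dataSources[0].name=jdbc/mydatasource
jndi.dataSources[0].auth=Container
jndi.dataSources[0].factory=com.jaspersoft.jasperserver.tomcat.jndi.JSCommonsBasicDataSourceFactory
jndi.dataSources[0].driverClassName=org.postgresql.Driver
jndi.dataSources[0].url=jdbc:postgresql://db-host:5432/mydatabase
jndi.dataSources[0].username=dbuser
jndi.dataSources[0].password=dbpassword
jndi.dataSources[0].accessToUnderlyingConnectionAllowed=true
jndi.dataSources[0].validationQuery=SELECT 1
jndi.dataSources[0].testOnBorrow=true
jndi.dataSources[0].maxActive=100
jndi.dataSources[0].maxIdle=30
jndi.dataSources[0].maxWait=10000
```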

If you want to update the JNDI properties after having deployed the workers with Kubernetes, you will need to restart or redeploy the workers.

Deploying to Kubernetes

Use the following procedure to deploy the cluster of workers using Kubernetes:

1. Download the Docker images and Helm charts from github.com as described in Downloading the Software.
2. Configure and build the Docker images as described in Docker Configuration.
3. Add the helm dependencies with the following commands:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add elastic https://helm.elastic.co
4. Update the helm dependencies when needed with the following commands:
cd <js-install>/jaspersoft-containers/K8s
helm dependencies update scalableQueryEngine/helm
5. If you haven't done so already, configure the Helm chart in values.yaml for the deployment of Redis, Ingress, and the workers as described in Configuring the Helm Chart. You can also set individual values by adding --set <parameter_name>=<parameter_value> to the Helm commands below.
6. If you haven't done so already, configure your JNDI data sources as described in Specifying a JNDI Data Source.
7. Now you can deploy the workers on Kubernetes with the following command:
helm install engine scalableQueryEngine/helm
8. Get the ingress external IP address or nodeport and check the workers' status at:

<ingress-IP>/query-engine/actuator/health

9. Configure your JasperReports Server instance as described in Configuring JasperReports Server, then restart your server.

Once your server and query engine are running, you can test a dashboard that contains an Ad Hoc view. After it runs, you can check the logs as described in Logging and Debugging.