Docker container deployment of Pentaho Server
You can use Docker Compose to deploy Pentaho Server as a container and, optionally, to install Pentaho Server plugins during deployment in any of the following supported environments:
On‑premises
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Azure
Download Pentaho Server Docker files
To deploy Pentaho Server as a container using Docker Compose, you must first download both the .tar.gz file that contains the Docker image and the ZIP file that contains the configuration files for your environment.
Complete the following steps to download the files that you need for deploying Pentaho Server as a container:
On the Support Portal home page, sign in using the Pentaho Support username and password provided in your Pentaho Welcome Packet.
In the Pentaho card, click Download. The Downloads page opens.
In the 11.x list, click Pentaho 11.0 GA Release.
Scroll to the bottom of the Pentaho 11.0 GA Release page.
In the file component section, navigate to the Docker Image Configurator/Images directory.
Download the pentaho-server-11.0.0.0-<build number>.tar.gz file.
In the file component section, navigate to the Docker Image Configurator/Environment Config directory.
Download one of the following ZIP files that contain the configuration files for your environment:
aws-11.0.0.0-<build number>.zip
azure-11.0.0.0-<build number>.zip
gcp-11.0.0.0-<build number>.zip
on-prem-11.0.0.0-<build number>.zip
What to do next:
If you want to install Pentaho Server plugins (Dashboard Designer, Interactive Reporting, and Pentaho Analyzer) during deployment, download the necessary plugin files.
If you want to deploy the Pentaho Server as a container without installing plugins, go to the instructions for deploying Pentaho Server in your environment.
Download plugin files
To install Pentaho Server plugins, you must download the ZIP files that contain the plugin files.
Complete the following steps to download the ZIP files you need for installing plugins:
On the Support Portal home page, sign in using the Pentaho Support username and password provided in your Pentaho Welcome Packet.
In the Pentaho card, click Download. The Downloads page opens.
In the 11.x list, click Pentaho 11.0 GA Release.
Scroll to the bottom of the Pentaho 11.0 GA Release page.
In the file component section, navigate to Pentaho Server/Archive Build (Suggested Installation Method).
Download the files for one or more of the following plugins:
Dashboard Designer Plugin: pdd-plugin-ee-11.0.0.0-<build number>.zip
Interactive Reporting Plugin: pir-plugin-ee-11.0.0.0-<build number>.zip
Pentaho Analyzer Plugin: paz-plugin-ee-11.0.0.0-<build number>.zip
Extract the plugin ZIP files to a temporary directory. The extracted directories contain the following subdirectories, which are needed for installing the plugins:
pentaho-interactive-reporting
dashboards
analyzer
Note: If you do not install plugins during the first deployment of the Pentaho Server, you can install them later. For instructions, see Install Pentaho Server plugins after deployment.
Deploy Pentaho Server on premises
You can use Docker Compose to deploy Pentaho Server as a container on a local machine, AWS EC2 instance, Google Cloud Platform virtual machine (VM), or Microsoft Azure VM.
On-premises deployment is suitable for the following environments:
Evaluation environments.
Development and test environments.
Production environments with an external database, persistent volumes, backups, and security hardening.
Best practices for production environments
Use an external, managed database, such as Amazon Relational Database Service, Microsoft Azure Database, Google Cloud SQL, or a corporate-managed database, instead of the bundled database container.
Mount persistent storage for solutions, data, logs, and licenses.
Configure HTTPS.
Regularly back up your volumes and database.
Monitor your deployment with Docker logs, cloud metrics, and alerts.
Apply security patches to your operating system and Docker.
Before you begin
Before you can deploy to an on-premises host, you must complete the following tasks:
Create a Docker account.
Note: For help with Docker and Docker Hub, see the online documentation at https://docs.docker.com/.
On the host, install Docker Engine and Docker Compose.
Download both the .tar.gz file that contains the Docker image and the ZIP file that contains the configuration files for your environment. For instructions on downloading these files, see the previous section, Download Pentaho Server Docker files.
(Optional) Download plugin files for installing the plugins during deployment.
Procedure
Complete the following steps to deploy Pentaho Server as a container on premises:
On your host, extract the on-prem-11.0.0.0-<build number>.zip file to a temporary working directory.
In your working directory, open a command prompt.
(Optional) If you are not logged in, log in to Docker Hub using the following command: docker login
Load the Pentaho Server Docker image by running the following command, replacing <build number> with the build number in the downloaded file:
You can verify that the Pentaho Server image is loaded in Docker Desktop.
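The load step above can be sketched as follows; the archive name comes from the download section, and the exact image tag inside the archive is an assumption to confirm with docker image ls:

```
# Load the image archive downloaded from the Support Portal
docker load -i pentaho-server-11.0.0.0-<build number>.tar.gz

# Confirm the image is now available locally (image name is an assumption)
docker image ls | grep pentaho-server
```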
In your working directory, go to the on-prem-11.0.0.0-<build number>/dist/on-prem/pentaho-server subdirectory.
Open one of the following directories based on your database type:
pentaho-server-mysql
pentaho-server-oracle
pentaho-server-postgres
pentaho-server-sqlserver
In a text editor, open the .env file and configure variables for your environment.
Important: You must enter the URL for your Pentaho license.
The contents of the .env file vary slightly for each database and include comments to assist you with editing the file. The following code is an example of the .env file contents used for a PostgreSQL database.
Save and close the .env file.
(Optional) To install Pentaho Server plugins during deployment, place the directory for each plugin in the softwareOverride\2_repository\pentaho-solutions\system subdirectory for your database type. The following subdirectories are needed for installing the plugins:
pentaho-interactive-reporting
dashboards
analyzer
Note: For instructions on downloading plugin files, see Download plugin files.
(Optional) To change the default administrator password before deploying the Pentaho Server and database, go to the softwareOverride\2_repository\pentaho-solutions\system subdirectory and update the defaultUser.spring.properties and repository.spring.properties files. For instructions, see Change the default administrator password.
In your command prompt, change to the directory for your database type.
Deploy the Pentaho Server as a container and create the database volume by running one of the following commands based on your database type:
docker compose -f docker-compose-mysql.yaml up
docker compose -f docker-compose-postgres.yaml up
docker compose -f docker-compose-sqlserver.yaml up
docker compose -f docker-compose-oracle.yaml up
Verify that Pentaho Server is running by accessing it at http://localhost:<port>/pentaho or your proxy URL.
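The .env editing step above can be illustrated with the following sketch for PostgreSQL; every variable name and value here is a hypothetical assumption, so rely on the comments in the shipped .env file for the real keys:

```
# Hypothetical example only; the shipped .env documents the actual variables.
PENTAHO_LICENSE_URL=https://example.com/licenses/pentaho.lic
DB_HOST=postgres
DB_PORT=5432
DB_USER=pentaho
DB_PASSWORD=changeit
PENTAHO_PORT=8080
```

After saving the file, docker compose -f docker-compose-postgres.yaml up reads these values when it creates the containers.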
Troubleshooting on-premises deployment
The following table lists the symptoms, causes, and suggested fixes for common issues related to on-premises deployments of Pentaho Server as a container.
Symptom: Pentaho Server container restarts repeatedly
Cause: Insufficient memory
Fix: Increase the JVM -Xmx setting or the VM size.

Symptom: Cannot access the web UI
Cause: Port blocked by a firewall
Fix: Open the port for your local host in the security group, Network Security Group, or firewall.
Note: The default port is 8080.

Symptom: Database connection errors
Cause: Wrong .env values
Fix: Verify the values for DB_HOST, DB_USER, and DB_PASSWORD in the .env file.

Symptom: Slow performance
Cause: Under-powered VM
Fix: Use the recommended VM size or allocate more resources.
Deploy Pentaho Server on Kubernetes in the cloud
You can deploy the Pentaho Server as a container on Kubernetes in any of the following cloud environments:
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Microsoft Azure
Before you begin
Before you can deploy Pentaho Server using Kubernetes, you must create a Docker account.
Note: For help with Docker and Docker Hub, see the online documentation at https://docs.docker.com/.
Deploy on Amazon Web Services
To deploy on Amazon Web Services (AWS) using Elastic Kubernetes Service (EKS), complete the tasks in this section in order.
Before you begin, complete the following tasks:
Create a Docker account.
Note: For help with Docker and Docker Hub, see the online documentation at https://docs.docker.com/.
Verify you have access to a standard Amazon EKS cluster.
Verify that you have an approved S3 CSI or FUSE-based approach. (Used for configs, logs, and overrides and required if your package YAMLs reference S3 buckets for configuration or storage paths.)
If you plan to pull Pentaho images from Amazon Elastic Container Registry (ECR), or a mirrored ECR, confirm that the node instance role or IRSA-enabled service account has permission to pull images.
Install kubectl and the AWS CLI.
Set up your local kubeconfig so that kubectl can communicate with the cluster by running the following command, replacing <name> and <region>:
Download both the .tar.gz file that contains the Docker image and the ZIP file that contains the configuration files for your environment. For instructions on downloading these files, see the previous section, Download Pentaho Server Docker files.
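The kubeconfig step above typically uses the AWS CLI's EKS helper; this is a sketch with <name> and <region> as the placeholders from the step:

```
# Merge the EKS cluster's credentials into your local kubeconfig
aws eks update-kubeconfig --name <name> --region <region>

# Confirm kubectl can reach the cluster
kubectl get nodes
```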
Tag and push Pentaho Server Docker image to AWS
To tag the Pentaho Server Docker image and upload it to AWS, complete the following steps:
In your working directory, open a command prompt.
(Optional) If you are not logged in, log in to Docker Hub using the following command: docker login
Authenticate to ECR by running the following command, replacing <aws region> and <aws account id>:
Tag and push the Pentaho Server Docker image by running the following command, replacing <build number>, <aws account id>, and <region> with the values from the downloaded file and your AWS account:
The Pentaho Server Docker image is tagged and pushed to AWS.
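The authenticate, tag, and push steps above commonly look like the following sketch; the local image name and the ECR repository name pentaho-server are assumptions, so match the repository you actually created:

```
# Authenticate Docker to your ECR registry
aws ecr get-login-password --region <aws region> | docker login --username AWS --password-stdin <aws account id>.dkr.ecr.<aws region>.amazonaws.com

# Tag the loaded image for ECR and push it (image and repository names are illustrative)
docker tag pentaho-server:11.0.0.0-<build number> <aws account id>.dkr.ecr.<region>.amazonaws.com/pentaho-server:11.0.0.0-<build number>
docker push <aws account id>.dkr.ecr.<region>.amazonaws.com/pentaho-server:11.0.0.0-<build number>
```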
Create a database instance in AWS
To create a database instance in AWS, complete the following steps:
On your local workstation, extract the aws-11.0.0.0-<build number>.zip file to a temporary working directory.
Note: For instructions on downloading the ZIP file, see the previous section, Download Pentaho Server Docker files.
In your working directory, go to the aws-11.0.0.0-<build number>/dist/aws/pentaho-server subdirectory.
Open one of the following directories based on your database type:
pentaho-server-mysql
pentaho-server-oracle
pentaho-server-postgres
pentaho-server-sqlserver
Go to the db_init_<database type> subdirectory and open a command prompt from that subdirectory.
Send the .sql files in the db_init_<database type> subdirectory to your EC2 instance by running the following commands, replacing <your private key>, <database type>, <ssh username>, <ec2 instance public ip>, and <region>:
Connect to RDS from EC2. The following command is an example of connecting to RDS from EC2 for a PostgreSQL database.
Run the .sql files from EC2. The following commands are examples of running .sql files from EC2 for a PostgreSQL database.
The database instance is created in AWS.
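For a PostgreSQL repository, those steps might look like the following sketch; the RDS endpoint placeholder and the script name are illustrative assumptions, so use the scripts actually present in the db_init_postgres subdirectory:

```
# Copy the initialization scripts to the EC2 instance
scp -i <your private key> db_init_<database type>/*.sql <ssh username>@<ec2 instance public ip>:~/

# From the EC2 instance, run each script against the RDS endpoint
# (the endpoint and script name below are illustrative)
psql -h <rds endpoint> -U postgres -f create_repository.sql
```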
Configure storage in AWS S3 buckets
To configure storage in AWS S3 buckets, complete the following steps:
In your working directory, go to the aws-11.0.0.0-<build number>/dist/aws/pentaho-server subdirectory.
Note: You downloaded and extracted the aws-11.0.0.0-<build number>.zip file in the previous sections, Download Pentaho Server Docker files and Create a database instance in AWS.
Open one of the following directories based on your database type:
pentaho-server-mysql
pentaho-server-oracle
pentaho-server-postgres
pentaho-server-sqlserver
Open the .yaml file in a text editor and configure values for your environment. You must update the following values before deploying the Pentaho Server:
The contents of the .yaml file vary slightly for each database and include comments to assist you with editing the file. The following code is an example of the .yaml file contents used for a PostgreSQL database when deploying to AWS.
Save and close the .yaml file.
Go to the softwareOverride/2_repository/tomcat/webapps/pentaho/META-INF subdirectory.
Open the context.xml file in a text editor and update the database URL everywhere it appears. The following code is an example of the content in the context.xml file for a PostgreSQL database. In this example, the database instance URL is jdbc:postgresql://repository:5432/hibernate.
(Optional) To install Pentaho Server plugins during deployment, place the directory for each plugin in the softwareOverride\2_repository\pentaho-solutions\system subdirectory for your database type. The following subdirectories are needed for installing the plugins:
pentaho-interactive-reporting
dashboards
analyzer
Note: For instructions on downloading plugin files, see Download plugin files.
(Optional) To change the default administrator password before deploying the Pentaho Server, go to the softwareOverride\2_repository\pentaho-solutions\system subdirectory and update the defaultUser.spring.properties and repository.spring.properties files. For instructions, see Change the default administrator password.
In your AWS account, create an S3 bucket for each of the following subdirectories that appear in the aws-11.0.0.0-<build number>/dist/aws/pentaho-server/<database> directory:
config
logs
softwareOverride
Upload the contents of each subdirectory to the corresponding S3 bucket that you created for it.
Go back to the aws-11.0.0.0-<build number>/dist/aws/pentaho-server/<database> directory, which contains your database .yaml file, and open a command prompt from that directory.
In the command prompt, create the persistent volumes and persistent volume claims, then deploy the Pentaho Server to the Kubernetes cluster by running the following command. Replace <database> with the database name defined in the .yaml file:
The Pentaho Server is deployed with the storage you configured in the AWS S3 buckets.
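The upload and deploy steps above might be sketched as follows; the bucket names and the manifest file name pentaho-server-<database>.yaml are assumptions taken from the placeholders in the text:

```
# Upload each subdirectory to its S3 bucket (bucket names are illustrative)
aws s3 sync config s3://<config bucket>
aws s3 sync logs s3://<logs bucket>
aws s3 sync softwareOverride s3://<softwareOverride bucket>

# Create the persistent volumes/claims and deploy the server
kubectl apply -f pentaho-server-<database>.yaml
```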
Check logs in Kubernetes pods
To check logs in Kubernetes pods, complete the following steps:
In a command prompt, check the logs in the Kubernetes pods by running the following commands, in order:
Forward the port for the Pentaho Server by running the following command, replacing <pod name>, <port>, and <namespace>:
Note: The default port is 8080.
Verify that Pentaho Server is running by accessing it at http://<your external ip>:<port>/pentaho or your Application Load Balancer URL.
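As a sketch of the log-check and port-forward steps, assuming the Pentaho Server pod runs in <namespace>:

```
# List pods and inspect the Pentaho Server pod's logs
kubectl get pods -n <namespace>
kubectl logs <pod name> -n <namespace>

# Forward a local port to the pod (the server listens on 8080 by default)
kubectl port-forward <pod name> <port>:8080 -n <namespace>
```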
Deploy on Google Cloud Platform
To deploy on Google Cloud Platform (GCP) using Google Artifact Registry (GAR), complete the tasks in this section in order.
Before you begin, complete the following tasks:
Create a Docker account.
Note: For help with Docker and Docker Hub, see the online documentation at https://docs.docker.com/.
Verify that you have access to a standard mode GKE cluster (Autopilot may have memory constraints).
Install and authenticate to the gcloud CLI.
Verify that you have a GCS bucket for persistent storage (mounted via the GCS FUSE CSI driver) or an alternative persistent storage class.
Download both the .tar.gz file that contains the Docker image and the ZIP file that contains the configuration files for your environment. For instructions on downloading these files, see the previous section, Download Pentaho Server Docker files.
Tag and push Pentaho Server Docker image to GAR
To tag and push the Pentaho Server Docker image to GAR, complete the following steps:
In your working directory, open a command prompt.
(Optional) If you are not logged in, log in to Docker Hub using the following command: docker login
Authenticate to GAR with a specific Artifact Registry host by running the following commands, replacing <region> with the value for your region:
Tag and push the Pentaho Server Docker image by running the following command, replacing <build number>, <region>, <project id>, and <repository> with the values in the downloaded file and your GAR account:
The Pentaho Server Docker image is tagged and pushed to GAR.
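Those steps commonly look like the following sketch; the local image name is an assumption, so substitute the name shown by docker image ls:

```
# Configure Docker to authenticate to the regional Artifact Registry host
gcloud auth configure-docker <region>-docker.pkg.dev

# Tag the loaded image for GAR and push it (image name is illustrative)
docker tag pentaho-server:11.0.0.0-<build number> <region>-docker.pkg.dev/<project id>/<repository>/pentaho-server:11.0.0.0-<build number>
docker push <region>-docker.pkg.dev/<project id>/<repository>/pentaho-server:11.0.0.0-<build number>
```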
Create a Google Cloud supported database in GCP
Create one of the following databases to be managed by Google Cloud: MySQL, PostgreSQL, or SQL Server.
Note: To use an Oracle database, you must provision and configure a Compute Engine (GCE) VM. For instructions creating an Oracle database, see Create an Oracle database instance for GCP.
To create a database for GCP that is managed by Google Cloud, complete the following steps:
On your local workstation, extract the gcp-11.0.0.0-<build number>.zip file to a temporary working directory.
Note: For instructions on downloading the ZIP file, see the previous section, Download Pentaho Server Docker files.
In your working directory, go to the gcp-11.0.0.0-<build number>/dist/gcp/pentaho-server subdirectory.
Open one of the following directories based on your database type:
pentaho-server-mysql
pentaho-server-postgres
pentaho-server-sqlserver
Go to the db_init_<database type> subdirectory and open a command prompt from that subdirectory.
(Optional) If you are not logged in, log in to the Google Cloud CLI using the following command: gcloud auth login
Send the .sql files in the db_init_<database type> subdirectory to your GCS bucket by running the following commands, replacing <database> and <bucket>:
Run the .sql files from the GCS bucket. The following commands are examples of running .sql files from the GCS bucket for a PostgreSQL database.
The database instance is created and initialized in GCP.
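Those two steps might be sketched as follows; the import command assumes a Cloud SQL instance, and the instance, script, and target database names are illustrative placeholders:

```
# Upload the initialization scripts to the GCS bucket
gcloud storage cp db_init_<database>/*.sql gs://<bucket>/

# Import each script into the Cloud SQL instance (names are illustrative)
gcloud sql import sql <instance name> gs://<bucket>/<script>.sql --database=<target database>
```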
Create an Oracle database instance for GCP
To create an Oracle database for use on Google Cloud Platform (GCP), you must deploy an Oracle XE Docker container on a Compute Engine (GCE) VM and then run the required SQL files to create and initialize the Pentaho database schemas and tables.
Note: If you want to use a MySQL, PostgreSQL, or SQL Server database, see Create a Google Cloud supported database in GCP.
To create an Oracle database, complete the following steps:
Provision a GCE VM of machine type e2-medium.
On your local machine, open a command prompt.
Connect to the GCE VM by running the following command, replacing <instance name>, <project>, and <zone>:
Install and enable Docker on the VM by running the following commands:
Download the Oracle XE image and start the Oracle XE container in the VM by running the following commands, replacing <password>:
Verify that SQL*Plus can connect from inside the container to the Oracle XE database in the same container by running the following command, replacing <username>, <password>, and <service name>:
Create a firewall rule in the Google Cloud project's VPC network by running the following command:
Add a network tag to the VM so that firewall rules targeting that tag apply to it by running the following command, replacing <zone>:
You can now connect to the Oracle XE database from SQL Developer by using the VM's external IP address.
Upload the .sql files to the VM by running the following command, replacing <vm name>, <vm user>, and <zone>:
Open an SSH session to the VM by running the following command, replacing <vm name>, <project id>, and <zone>:
You are connected to the VM's shell.
Copy the uploaded .sql files into the Docker container by running the following commands, replacing <vm user>:
Create and initialize the required Pentaho database schemas and tables by running the following command, replacing <password>:
The Oracle database is deployed and the Pentaho database schemas and tables are initialized.
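The connect, run, and verify steps above might look like this sketch; the Oracle XE image tag, container name, and port mapping are assumptions to adapt to your environment:

```
# Connect to the VM
gcloud compute ssh <instance name> --project <project> --zone <zone>

# Start an Oracle XE container (image tag and container name are illustrative)
docker run -d --name oracle-xe -p 1521:1521 -e ORACLE_PWD=<password> container-registry.oracle.com/database/express:21.3.0-xe

# Verify SQL*Plus connectivity from inside the container
docker exec -it oracle-xe sqlplus <username>/<password>@<service name>
```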
Configure storage in GCS bucket
To configure storage in your Google Cloud Storage (GCS) bucket, complete the following steps:
In your working directory, go to the gcp-11.0.0.0-<build number>/dist/gcp/pentaho-server subdirectory.
Note: You downloaded and extracted the gcp-11.0.0.0-<build number>.zip file in the previous sections, Download Pentaho Server Docker files and Create a Google Cloud supported database in GCP.
Open one of the following directories based on your database type:
pentaho-server-mysql
pentaho-server-oracle
pentaho-server-postgres
pentaho-server-sqlserver
Open the .yaml file in a text editor and configure values for your environment. You must update the following values before deploying the Pentaho Server:
The contents of the .yaml file vary slightly for each database and include comments to assist you with editing the file. The following code is an example of the .yaml file contents used for a PostgreSQL database when deploying to GCP.
Save and close the .yaml file.
Go to the softwareOverride/2_repository/tomcat/webapps/pentaho/META-INF subdirectory.
Open the context.xml file in a text editor and update the database URL everywhere it appears. The following code is an example of the content in the context.xml file for a PostgreSQL database. In this example, the database instance URL is jdbc:postgresql://repository:5432/hibernate.
(Optional) To install Pentaho Server plugins during deployment, place the directory for each plugin in the softwareOverride\2_repository\pentaho-solutions\system subdirectory for your database type. The following subdirectories are needed for installing the plugins:
pentaho-interactive-reporting
dashboards
analyzer
Note: For instructions on downloading plugin files, see Download plugin files.
(Optional) To change the default administrator password before deploying the Pentaho Server and database, go to the softwareOverride\2_repository\pentaho-solutions\system subdirectory and update the defaultUser.spring.properties and repository.spring.properties files. For instructions, see Change the default administrator password.
Add a firewall rule in the Google Cloud Console that allows the GKE nodes to communicate with the Pentaho Server pod, enabling the pod to reach the external database.
In your Google Cloud project, create a GCS bucket for each of the following subdirectories that appear in the gcp-11.0.0.0-<build number>/dist/gcp/pentaho-server/<database> directory:
config
logs
softwareOverride
Upload the contents of each subdirectory to the corresponding GCS bucket that you created for it.
Go back to the gcp-11.0.0.0-<build number>/dist/gcp/pentaho-server/<database> directory, which contains your database .yaml file, and open a command prompt from that directory.
In the command prompt, create the persistent volumes and persistent volume claims, then deploy the Pentaho Server to the Kubernetes cluster by running the following command. Replace <database> with the database name defined in the .yaml file:
The Pentaho Server is deployed with the storage you configured in the GCS buckets.
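The upload and deploy steps above might be sketched as follows; the bucket names and the manifest file name pentaho-server-<database>.yaml are assumptions taken from the placeholders in the text:

```
# Upload each subdirectory to its GCS bucket (bucket names are illustrative)
gcloud storage cp -r config gs://<config bucket>/
gcloud storage cp -r logs gs://<logs bucket>/
gcloud storage cp -r softwareOverride gs://<softwareOverride bucket>/

# Create the persistent volumes/claims and deploy the server
kubectl apply -f pentaho-server-<database>.yaml
```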
Verify Pentaho Server deployment to GCP
To verify Pentaho Server deployment to GCP, complete the following steps:
In a command prompt, run the following commands, in order:
Forward the port for the Pentaho Server by running the following command, replacing <pod name>, <port>, and <namespace>:
Note: The default port is 8080.
Verify that Pentaho Server is running by accessing it at http://<your external ip>:<port>/pentaho or your load balancer URL.
Deploy on Microsoft Azure
To deploy on Microsoft Azure using Azure Kubernetes Service (AKS), complete the tasks in this section in order.
Best practices
Use Azure Managed Disks or Azure Files to provide persistent storage and mount these volumes into the container.
Deploy Pentaho Server behind an Azure Application Gateway to enable centralized TLS/SSL termination and traffic routing.
For production environments, use a managed database service such as Azure Database for PostgreSQL, MySQL, or SQL Server.
Before you begin
Before you can deploy to Microsoft Azure, complete the following tasks:
Create a Docker account.
Note: For help with Docker and Docker Hub, see the online documentation at https://docs.docker.com/.
Verify you have access to a standard Azure Kubernetes Service (AKS) cluster.
Verify that your Azure account has the required permissions to create resource groups, databases, container registries, storage accounts, namespaces, and Kubernetes services.
If you plan to pull Pentaho images from Azure Container Registry (ACR), or a mirrored ACR, verify that your Azure account is assigned the AcrPull role so it can pull images from ACR.
Install kubectl and the Azure CLI.
Set up your local kubeconfig so that kubectl can communicate with the cluster by running the following command:
Download both the .tar.gz file that contains the Docker image and the ZIP file that contains the configuration files for your environment. For instructions on downloading these files, see the previous section, Download Pentaho Server Docker files.
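The kubeconfig step above typically uses the Azure CLI's AKS helper; the resource group and cluster name placeholders are assumptions:

```
# Merge the AKS cluster's credentials into your local kubeconfig
az aks get-credentials --resource-group <resource group> --name <cluster name>

# Confirm kubectl can reach the cluster
kubectl get nodes
```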
Tag and push Pentaho Server Docker image to Microsoft ACR
To tag the Pentaho Server Docker image and upload it to ACR, complete the following steps:
In your working directory, open a command prompt.
(Optional) If you are not logged in, log in to Docker Hub using the following command: docker login
Authenticate to ACR by running the following commands, replacing <registryName> with the name of your registry:
Tag and push the Pentaho Server Docker image by running the following commands, replacing <image name>, <registry name>, and <image name on acr> with the values from the downloaded file and your Azure account:
Verify the Pentaho Server Docker image is uploaded to ACR by using the following command, replacing <registry name> and <repository name>:
The Pentaho Server Docker image is tagged and pushed to ACR.
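Those steps commonly look like the following sketch, using the placeholders from the text:

```
# Log in to the registry
az acr login --name <registryName>

# Tag and push the image
docker tag <image name> <registry name>.azurecr.io/<image name on acr>
docker push <registry name>.azurecr.io/<image name on acr>

# List the repository's tags to verify the upload
az acr repository show-tags --name <registry name> --repository <repository name>
```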
Edit database and storage configuration files for Azure
To edit database and storage configuration files for Azure, complete the following steps:
On your local workstation, extract the azure-11.0.0.0-<build number>.zip file to a temporary working directory.
Note: For instructions on downloading the ZIP file, see the previous section, Download Pentaho Server Docker files.
In your working directory, go to the azure-11.0.0.0-<build number>/dist/azure/pentaho-server subdirectory.
Open one of the following directories based on your database type:
pentaho-server-mysql
pentaho-server-oracle
pentaho-server-postgres
pentaho-server-sqlserver
Open the .yaml file in a text editor and configure values for your environment. You must update the following values before deploying the Pentaho Server:
The contents of the .yaml file vary slightly for each database and include comments to assist you with editing the file. The following code is an example of the .yaml file contents used for a PostgreSQL database when deploying to Azure.
Save and close the .yaml file.
Go to the /softwareOverride/2_repository/tomcat/webapps/pentaho/META-INF/ directory.
Open the context.xml file in a text editor and update the database URL everywhere it appears. The following code is an example of the content in the context.xml file for a PostgreSQL database. In this example, the database instance URL is jdbc:postgresql://repository:5432/hibernate.
Save and close the context.xml file.
(Optional) To install Pentaho Server plugins during deployment, place the directory for each plugin in the softwareOverride\2_repository\pentaho-solutions\system subdirectory for your database type. The following subdirectories are needed for installing the plugins:
pentaho-interactive-reporting
dashboards
analyzer
Note: For instructions on downloading plugin files, see Download plugin files.
(Optional) To change the default administrator password before deploying the Pentaho Server and database, go to the softwareOverride\2_repository\pentaho-solutions\system subdirectory and update the defaultUser.spring.properties and repository.spring.properties files. For instructions, see Change the default administrator password.
Upload database and storage configuration files to Azure
In Azure Storage Explorer, upload the following directories from your database directory to the file share in your Azure storage account:
config
db_init_<database>
logs
softwareOverride
Note: For information about working with Azure Storage Explorer, see the online documentation at https://learn.microsoft.com/azure.
Create a database instance in Azure
In the Microsoft Azure portal, create a database instance for your database type.
Run the .sql scripts in the db_init_<database> directory that you uploaded in the previous section.
Note: For information about creating databases in Microsoft Azure, see the online documentation at https://learn.microsoft.com/azure.
Deploy the Pentaho Server as a container in Azure
To deploy the Pentaho Server as a container in Azure, complete the following steps:
Create the Pentaho Server container and database volume by running the following command, replacing <database>:
Verify that Pentaho Server is running by accessing it at http://<vm ip>:<port>/pentaho or your Azure Application Gateway URL.
Note: The default port is 8080.
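The deploy command above is typically a kubectl apply of the database-specific manifest; this sketch assumes the file is named pentaho-server-<database>.yaml:

```
# Deploy the Pentaho Server and create its volumes on the AKS cluster
kubectl apply -f pentaho-server-<database>.yaml

# Watch the pod start up
kubectl get pods -w
```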
Change the default administrator password
Before the database container and volume are created, you can change the default administrator password by updating properties files that are referenced during deployment.
Notes:
After deployment, you can change the administrator password in the Pentaho User Console. For instructions, see Change your password.
If you have an existing Pentaho Server or Pentaho Data Integration installation, you can use the encr.bat command to generate an encrypted password. For details, see Encrypting a password.
To change the default administrator password, complete the following steps:
In your working directory, go to the pentaho-server subdirectory.
Open one of the following directories based on your database type:
pentaho-server-mysql
pentaho-server-oracle
pentaho-server-postgres
pentaho-server-sqlserver
Go to the system directory: softwareOverride/4_others/pentaho-solutions/system.
Open the defaultUser.spring.properties file in a text editor and edit the defaultAdminUserPassword property value.
Save and close the defaultUser.spring.properties file.
Open the repository.spring.properties file in a text editor and edit the systemTenantAdminPassword property value.
Save and close the repository.spring.properties file.
The defaultUser.spring.properties and repository.spring.properties files are now ready to upload during cloud deployment.
Install Pentaho Server plugins after deployment
After deploying the Pentaho Server as a container, the recommended way to install Pentaho Server plugins is to use the Plugin Manager. For instructions on using the Plugin Manager to install plugins, see Step 4: Install Pentaho plugins.
However, if you cannot use the Plugin Manager, you can install plugins by placing the plugin files in the appropriate directory for your deployment and restarting the Pentaho Server.
Before you begin, you must download the Pentaho Server plugin files. For instructions on downloading plugin files, see Download plugin files.
Note: Plugins installed outside of the Plugin Manager might not be listed in the Plugin Manager and must be maintained manually.
To install Pentaho Server plugins after the Pentaho Server has been deployed as a container, complete the following steps:
In your cloud deployment, go to the pentaho-server directory.
Open one of the following subdirectories based on your database type:
pentaho-server-mysql
pentaho-server-oracle
pentaho-server-postgres
pentaho-server-sqlserver
Go to the softwareOverride/4_others/pentaho-solutions/system subdirectory.
Copy one or more of the following plugin directories and their contents to the softwareOverride/2_repository/pentaho-solutions/system directory in your cloud deployment:
pentaho-interactive-reporting
dashboards
analyzer
Restart the Pentaho Server by using the commands for your environment:
On-premises
Amazon Web Services
Google Cloud Platform and Microsoft Azure
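The environment-specific restart commands above commonly take the following forms; the compose file name, deployment name, and namespace are assumptions to adapt:

```
# On-premises (Docker Compose)
docker compose -f docker-compose-<database>.yaml restart

# Kubernetes deployments (AWS, GCP, Azure)
kubectl rollout restart deployment <pentaho deployment name> -n <namespace>
```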
Software overrides and plugins
The ZIP file that you download to deploy Pentaho Server as a container includes a softwareOverride directory that you can use to inject or replace files at container startup.
The Pentaho Server softwareOverride directory, located at /path/dist/pentaho-server/<database>/softwareOverride, contains subdirectories whose contents are copied by the Docker entrypoint into the Pentaho installation directory, in numerical order.
For instance, the on-prem-11.0.0.0-<build number>/dist/on-prem/pentaho-server/pentaho-server-postgres/softwareOverride directory contains the following subdirectories:
1_drivers
2_repository
3_security
4_others
99_exchange
Any file placed in one of the subdirectories of the softwareOverride directory is layered into the container during startup. For example, if you place Pentaho Server plugin files in the /path/dist/pentaho-server/<database>/softwareOverride/2_repository/pentaho-solutions/system subdirectory, those plugins are installed when the Pentaho Server is restarted. You can also use the softwareOverride directory to change the default administrator password before deployment.
For instructions about software overrides in your specific environment, see the README.md file that is included in the softwareOverride directory.