Manage the Pentaho Server
Manage the Pentaho Server by changing User Console behavior, removing sample content, importing email addresses for scheduling, and applying advanced configuration changes.
Hide User Console Home perspective widgets
The User Console default Home perspective contains the Getting Started widget, which has easy instructions and tutorials for evaluators.
Perform the following steps to hide not only the Getting Started widget, but also other Home perspective widgets, as needed:
1. Shut down the Pentaho Server if it is currently running.
2. Choose one of the following options depending on your deployment status:
   - If you have not yet deployed, navigate to the `/pentaho-platform/user-console/source/org/pentaho/mantle/home/properties/config.properties` file.
   - If you have manually deployed and want to hide widgets later, navigate to the `/pentaho-server/tomcat/webapps/pentaho/mantle/home/properties/config.properties` file.
3. Find the line that starts with `disabled-widgets=` and type in the ID of the widget, `getting-started`. You can also hide the Recents and Favorites widgets using the same method, as shown in the following example:
   `disabled-widgets=getting-started,recents,favorites`
4. Save and close the file.
5. Locate the `/pentaho-server/tomcat/webapps/pentaho/mantle/home` directory and open the `index.jsp` file with any text editor.
6. Find the following line of code and comment it out (see the sketch after these steps), then save and close the file:
   `<script language='JavaScript' type='text/javascript' src='http://admin.brightcove.com/js/BrightcoveExperiences.js'></script>`
7. Start the Pentaho Server and log in to the User Console.
You now have a Home page without the specified widgets.
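For example, wrapping the tag in an HTML comment so the browser no longer loads the external script:

```html
<!-- <script language='JavaScript' type='text/javascript' src='http://admin.brightcove.com/js/BrightcoveExperiences.js'></script> -->
```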
Turn off autocompletion of User Console sign-in credentials
The User Console sign-in settings have autocomplete turned on by default.
Perform the following steps to manually turn off the autocompletion functionality:
1. Stop the Pentaho Server.
2. Navigate to the `/pentaho-server/tomcat/webapps/pentaho/jsp` directory and open the `PUCLogin.jsp` file with any text editor.
3. Find the two sections of code that contain an autocomplete entry and change each entry to `off` (see the sketch after these steps).
4. Save and close the `PUCLogin.jsp` file.
5. Restart the Pentaho Server.
Autocompletion of usernames and passwords is now turned off for the User Console sign-in screen.
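A minimal sketch of the edited fields, assuming the standard `j_username` and `j_password` sign-in fields (the surrounding markup in your version of `PUCLogin.jsp` may differ):

```html
<input id="j_username" name="j_username" type="text" autocomplete="off"/>
<input id="j_password" name="j_password" type="password" autocomplete="off"/>
```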
Remove sample data from the Pentaho Server
By default, you have access to a sample data source and a solution directory filled with example content. These samples are provided for testing with sample data. When you are ready to move from a testing scenario to development or production, you can remove the sample content.
Perform the following steps to completely remove the sample data and content:
1. Stop the Pentaho Server.
2. Delete the `samples.zip` file from the `/pentaho-server/pentaho-solutions/system/default-content` directory. If you performed a manual WAR build and deployment, then the file path is `/pentaho-server/pentaho-solutions/system`.
3. Edit the `/pentaho/WEB-INF/web.xml` file inside of the deployed `pentaho.war`. If you used an archive installation method, the path to the WAR file should be `/pentaho-server/tomcat/webapps/pentaho/WEB-INF/web.xml`. If you performed a manual WAR build and deployment, you must adjust the path to fit your specific configuration.
4. Remove the `hsqldb-databases` section from the `/pentaho/WEB-INF/web.xml` file (the sketch at the end of this procedure shows the typical shape of each removed section).
5. Remove the `hsqldb-starter` section from the `/pentaho/WEB-INF/web.xml` file.
6. Remove the `SystemStatusFilter`.
   Note: The `SystemStatusFilter` filter is not part of the samples. The filter shows error status messages that are only useful for development and testing purposes, and should be removed from a production system.
7. Remove the filter mapping for the `SystemStatusFilter`.
8. Save and close the `web.xml` file.
9. Delete the `/pentaho-server/data/` directory. This directory contains a sample database, control scripts for that database, the environment settings it needs to run, and SQL scripts to initialize a new repository.
10. Restart the Pentaho Server, then sign in to the Pentaho User Console with the administrator username and password, go to the Browse Files page, and perform the following steps:
In the Folders pane, expand the Public folder and click to highlight the folder containing the Steel Wheels sample data. Click Move to Trash in the Folder Actions pane and confirm the deletion.
Highlight the folder containing the Pentaho Operations Mart sample data. Click Move to Trash in the Folder Actions pane and confirm the deletion.
The Pentaho Server instance is now cleaned of sample data and content.
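The exact contents vary by version; as a hedged illustration, the removed web.xml sections typically look like the following (class names and values here are representative, not authoritative):

```xml
<!-- hsqldb-databases section -->
<context-param>
  <param-name>hsqldb-databases</param-name>
  <param-value>sampledata@../../data/hsqldb/sampledata</param-value>
</context-param>

<!-- hsqldb-starter section -->
<listener>
  <listener-class>org.pentaho.platform.web.http.context.HsqldbStartupListener</listener-class>
</listener>

<!-- SystemStatusFilter and its mapping -->
<filter>
  <filter-name>SystemStatusFilter</filter-name>
  <filter-class>com.pentaho.ui.servlet.SystemStatusFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>SystemStatusFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```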
Importing and updating email addresses used for scheduling from data sources
You can adjust and run sample transformations to import email addresses from LDAP or JDBC sources to be used for scheduling notifications from the Pentaho Server via the Pentaho User Console (PUC). Once you have initially imported the data, you can schedule the transformations to run periodically to update the email addresses based on the LDAP or JDBC sources.
You can find the following sample transformations in the `server/pentaho-server/pentaho-solutions/email-import-samples` directory:

- For LDAP sources: `LDAPEmailImportV3.ktr`
- For JDBC sources: `JDBCEmailImportV3.ktr`
You can also use an optional parameter defined for these transformations, together with a related column in the Pentaho email Hibernate database table, to manage multiple sources. Using the parameter and related column keeps emails from different sources from interfering with each other: the transformations only act upon rows in the Hibernate table whose column value matches the parameter, so any inserts, deletions, or updates apply only to those rows. For example, if you have multiple LDAP servers for different local or business units, such as LDAP-US, LDAP-EU, and LDAP-ASIA, you can adjust the transformation parameter for each of these sources to import and maintain email addresses from each server without affecting the others.
Import and update email addresses
Perform the following steps in the Pentaho Data Integration (PDI) client to adjust the sample JDBC transformation, then run the transformation to import the email addresses:
1. Open the sample `JDBCEmailImportV3.ktr` transformation in the PDI client. See the Open a transformation section in the Pentaho Data Integration document for details.
2. Select Properties from the menu that appears when you right-click in an empty area of the PDI client canvas. The Transformation properties dialog box opens.
3. Click the Parameters tab to access the `MANAGED_SOURCE` and `PENTAHO_BASE_URL` transformation parameters.
4. Specify a source of the email address data for the `MANAGED_SOURCE` parameter. As a best practice, use and maintain a separate transformation for each source. For example, if you have multiple JDBC databases for different local or business units (such as `JDBC-US` and `JDBC-ASIA`), save a version of the `JDBCEmailImportV3` transformation for each source, with `MANAGED_SOURCE` set to `JDBC-US` for one transformation and to `JDBC-ASIA` for the other.
5. Specify the URL of your Pentaho User Console (PUC) for the `PENTAHO_BASE_URL` parameter. By default, in a standard installation, the URL for PUC is `http://localhost:8080/pentaho`, but your Pentaho administrator may have configured it to be different from the default. Check with your Pentaho administrator if you are not sure of the URL used for your instance of PUC.
6. Click OK to close the Transformation properties dialog box. Along with the transformation parameters, you need to specify the related Pentaho email Hibernate database table columns before running the transformation.
7. Double-click the JDBC input step (a Table input step) in the transformation. The Table input step properties dialog box opens.
8. Specify the column names used for the email source, last names, and first names in your JDBC source with the SQL statements shown in the SQL text box (a hedged sketch follows this procedure). The column name specified for the email source should match the value you specified for the `MANAGED_SOURCE` parameter.
9. Click OK to save your specified values and close the Table input step properties dialog box.
10. Save the transformation as a filename specific to your JDBC source (for example, `JDBCEmailImportForJDBC-US.ktr` for the `JDBC-US` managed source), then run the transformation. See the Run your transformation section in the Pentaho Data Integration document for details.
The email addresses from your JDBC source should now appear on the Email Groups page under the Administration perspective in your PUC instance. You can now use this same transformation to update the email addresses periodically by setting it up to run on a schedule. See the Schedule a transformation or job section in the Pentaho Data Integration document for details.
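As a hedged illustration of step 8, a query for a hypothetical `employees` table might look like this (table and column names are placeholders; the actual query depends on your schema):

```sql
SELECT 'JDBC-US'  AS managed_source,  -- must match the MANAGED_SOURCE parameter
       first_name AS firstname,
       last_name  AS lastname,
       email      AS email
FROM   employees
WHERE  email IS NOT NULL;
```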
Perform the following steps in the Pentaho Data Integration (PDI) client to adjust the sample LDAP transformation, then run the transformation to import the email addresses:
1. Open the sample `LDAPEmailImportV3.ktr` transformation in the PDI client. See the Open a transformation section in the Pentaho Data Integration document for details.
2. Select Properties from the menu that appears when you right-click in an empty area of the PDI client canvas. The Transformation properties dialog box opens.
3. Click the Parameters tab to access the `MANAGED_SOURCE` and `PENTAHO_BASE_URL` transformation parameters.
4. Specify a source of the email address data for the `MANAGED_SOURCE` parameter. As a best practice, use and maintain a separate transformation for each source. For example, if you have multiple LDAP servers for different local or business units (such as `LDAP-US` and `LDAP-ASIA`), save a version of the `LDAPEmailImportV3` transformation for each source, with `MANAGED_SOURCE` set to `LDAP-US` for one transformation and to `LDAP-ASIA` for the other.
5. Specify the URL of your Pentaho User Console (PUC) for the `PENTAHO_BASE_URL` parameter. By default, in a standard installation, the URL for PUC is `http://localhost:8080/pentaho`, but your Pentaho administrator may have configured it to be different from the default. Check with your Pentaho administrator if you are not sure of the URL used for your instance of PUC.
6. Click OK to close the Transformation properties dialog box. Along with the transformation parameters, you need to specify the related Pentaho email Hibernate database table columns before running the transformation.
7. Double-click the LDAP Input step in the transformation. The LDAP Input step properties dialog box opens.
8. Click the Fields tab and specify the field names used for the email source, last names, and first names in your LDAP source (a hedged mapping sketch follows this procedure). The field name specified for the email source should match the value you specified for the `MANAGED_SOURCE` parameter.
9. Click OK to save your specified values and close the LDAP Input step properties dialog box.
10. Save the transformation as a filename specific to your LDAP source (for example, `LDAPEmailImportForLDAP-US.ktr` for the `LDAP-US` managed source), then run the transformation. See the Run your transformation section in the Pentaho Data Integration document for details.
The email addresses from your LDAP source should now appear on the Email Groups page under the Administration perspective in your PUC instance. You can now use this same transformation to update the email addresses periodically by setting it up to run on a schedule. See the Schedule a transformation or job section in the Pentaho Data Integration document for details.
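As a hedged illustration of the Fields tab in step 8, a mapping against standard inetOrgPerson attributes (the attribute names are assumptions about your directory schema):

```text
Field name    LDAP attribute
firstname     givenName
lastname      sn
email         mail
```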
Advanced topics
Set up a cluster
A Pentaho node consists of a Tomcat Web App server and the Pentaho Server. Multiple nodes that are joined make up a cluster. You can create a cluster using any version of Pentaho Suite 6.x or later.
Step 1: Address prerequisites for clustering
Before you begin the process of clustering your servers, a few tasks need to be performed and some specific requirements must be met to successfully implement a Pentaho deployment on a Tomcat cluster. The following table describes these tasks and requirements:
| Task or requirement | Description |
| --- | --- |
| Make sure that all of your application nodes are set up with identical configurations and Pentaho deployments. | Your application nodes all need the same configurations and Pentaho deployments already installed in order for clustering to work. |
| Establish a load balancer. | A load balancer spreads computing resources evenly among the nodes (see the sticky-session sketch after this table). |
| Each node and the load balancer must be time-synchronized via NTP. | All machines that make up the cluster must have the same system time. If they do not, execution times of objects will be affected. |
| You must run only one node per machine (or NIC). | It is possible to run multiple application servers on each node with a modified configuration, but this scenario does not offer any benefit for load balancing (performance) or hardware failover (redundancy), and therefore is not covered in this guide. Refer to your application server's clustering documentation for more information. |
| You must use the supported version of Tomcat (either default archive installation or manual installation). See the Components reference in the Try Pentaho Data Integration and Analytics document for more information. | You may be able to use this guide as a basic blueprint for configuring other application servers or versions of Tomcat for a clustered environment, but Pentaho support will not be able to assist you if you run into any problems with the Pentaho Server. |
| You must have permission to install software and modify service configurations. | If you do not have permission, you must have access to someone at your company who does have the correct permission levels, typically root access. |
| Only the Pentaho Server will be deployed to the cluster. | It is possible to modify the configuration to deploy other WARs or EARs. However, for ease of testing and support, Pentaho only supports deployment of the pentaho and pentaho-style WARs to the cluster. |
| You must use a single repository location. | Most people use a database-based solution repository. Keep in mind that you are not clustering the database server in this procedure, only the application server. |
| You must have sticky sessions enabled. | Sticky sessions tie each user session to a single node. |
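For illustration only, a sticky-session balancer in front of two nodes might be configured like this with Apache httpd and mod_proxy_balancer (hostnames and route names are placeholders; any load balancer with session affinity works, and each Tomcat's jvmRoute must match its route value):

```apacheconf
<Proxy "balancer://pentaho-cluster">
    BalancerMember "http://node1.example.com:8080" route=node1
    BalancerMember "http://node2.example.com:8080" route=node2
    ProxySet stickysession=JSESSIONID|jsessionid
</Proxy>
ProxyPass        "/pentaho" "balancer://pentaho-cluster/pentaho"
ProxyPassReverse "/pentaho" "balancer://pentaho-cluster/pentaho"
```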
Step 2: Initialize and configure repository
After you have determined that your systems meet all of the requirements listed in the checklist, you need to first initialize and then configure the repository for clustering. Finally, you need to verify your clustering setup before you move on to setting up the Jackrabbit journal:

1. Initialize your database using the applicable steps for your type of installation and database found in the Install Pentaho Data Integration and Analytics document.
2. After you have initialized your database, configure the data connections to the Pentaho Repository. The Define data connections section in the Install Pentaho Data Integration and Analytics document walks you through the steps for JDBC and JNDI connections for PostgreSQL, MySQL, and Oracle.
3. Configure your repository using the applicable steps for your type of installation and database found in the Install Pentaho Data Integration and Analytics document.
4. After you have initialized and configured your repository, clean up the leftover files by following these steps (they can also be scripted; see the sketch after this list):
   - Locate the `pentaho-server/tomcat` directory and remove all files and folders from the `temp` folder.
   - Locate the `pentaho-server/tomcat` directory and remove all files and folders from the `work` folder.
   - Locate the `pentaho-server/pentaho-solutions/system/jackrabbit/repository` directory and remove all files and folders from the inner `repository` folder.
   - Locate the `pentaho-server/pentaho-solutions/system/jackrabbit/repository` directory and remove all files and folders from the `workspaces` folder.
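A minimal shell sketch of the cleanup, assuming a Linux archive installation with `pentaho-server` as the working directory:

```sh
cd pentaho-server
rm -rf tomcat/temp/* tomcat/work/*
rm -rf pentaho-solutions/system/jackrabbit/repository/repository/*
rm -rf pentaho-solutions/system/jackrabbit/repository/workspaces/*
```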
You now have a configured repository and are ready to move to the next step for clustering.
Step 3: Configure Jackrabbit Journal
The following steps show how to set up the Jackrabbit journal for your cluster (make sure that each node has a unique ID):

1. Locate the `repository.xml` file in the `pentaho-server/pentaho-solutions/system/jackrabbit` directory and open it with any text editor.
2. Scroll to the bottom of the file and replace the section that begins with `<!-- Run with a cluster journal -->` with the correct code for your type of database repository: PostgreSQL, MySQL, Oracle, or MS SQL Server. A PostgreSQL sketch follows these steps.
3. Save and close the file.
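As a hedged example, the cluster journal section for a PostgreSQL repository typically has the following shape (the cluster id must be unique per node; the driver, URL, credentials, and schemaObjectPrefix are placeholders to adapt, and the other databases use the same structure with their own JDBC driver and URL):

```xml
<Cluster id="node1" syncDelay="2000">
  <Journal class="org.apache.jackrabbit.core.journal.DatabaseJournal">
    <param name="revision" value="${rep.home}/revision.log"/>
    <param name="driver" value="org.postgresql.Driver"/>
    <param name="url" value="jdbc:postgresql://localhost:5432/jackrabbit"/>
    <param name="user" value="jcr_user"/>
    <param name="password" value="password"/>
    <param name="schema" value="postgresql"/>
    <param name="schemaObjectPrefix" value="cl_j_"/>
  </Journal>
</Cluster>
```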
Jackrabbit journaling is now set up for your cluster. The Jackrabbit Wiki has additional information about journaling.
Next, you need to cluster the quartz tables to avoid duplicate scheduling on each node.
Step 4: Configure Quartz
You now need to make a few edits in the quartz.properties file to configure Quartz to work with your cluster.
1. Locate the `quartz.properties` file in the `pentaho-server/pentaho-solutions/system/scheduler-plugin/quartz` directory and open it with any text editor.
2. Find the `org.quartz.scheduler.instanceId = INSTANCE_ID` line and change `INSTANCE_ID` to `AUTO`.
3. Find the `#_replace_jobstore_properties` section and change the default value of `org.quartz.jobStore.isClustered` to `true`.
4. Add a cluster check-in interval line just after the `org.quartz.jobStore.isClustered = true` line, as shown in the sketch after these steps.
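A sketch of the resulting properties; the `clusterCheckinInterval` value shown is the common Quartz default for clustered job stores and is an assumption, not a Pentaho-documented value:

```properties
org.quartz.scheduler.instanceId = AUTO
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
```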
Quartz is now configured for your cluster. The Quartz Configuration Reference has additional information about clustering with Quartz.
Step 5: Start and test the cluster
Start the cluster and verify that it is working properly with the following steps:
Start the solution database.
Start the application server on each node.
Make sure that the load balancer is able to ping each node.
Repeat for each node that you have set up.
Test the cluster by accessing the Pentaho Server through the load balancer's IP address, hostname, or domain name.
Begin whatever test procedure you have designed for this scenario.
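For example, a quick smoke test from the command line (the hostname is a placeholder for your load balancer's address):

```sh
curl -I http://balancer.example.com/pentaho/
```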
Customize the Pentaho Server
Activate view-only mode for Interactive reports
In the Pentaho User Console, you can create a view-only user that does not have access to edit Interactive reports. This requires you to remove the edit icon in the report view for view-only users.
Perform the following steps to remove the edit icon:
1. Stop the Pentaho Server.
2. Locate the `server\pentaho-solutions\system` directory.
3. Open the `Pentaho.xml` file with any text editor.
4. Locate the `<edit-permission>` tag and uncomment the line containing it (see the sketch after these steps).
5. Save and close the `Pentaho.xml` file.
6. Start the Pentaho Server.
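The exact commented line varies by version; as a hedged illustration, the change amounts to uncommenting the element so that it takes effect (the value shown is an assumption):

```xml
<!-- Before: -->
<!-- <edit-permission>true</edit-permission> -->
<!-- After: -->
<edit-permission>true</edit-permission>
```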
Change Ports and URLs
The Pentaho Server has associated default port numbers. You can change these port numbers to adjust the Pentaho Server to your system. Since the port number of the Pentaho Server is a part of its URL, you will also have to change that address.
List of server ports used by PDI
The following port numbers must be available internally on the machine that runs the Pentaho Server:

| Service | Port |
| --- | --- |
| Pentaho Server | 8080 |
| H2 (SampleData) | 9092 |
| Embedded Pentaho Server (Jetty) | 10000 |
The SampleData database is an exception. It is only for evaluation and demonstration purposes and is not necessary for production systems.
Note: The Embedded Pentaho Server (Jetty) server port is hard-coded in Pentaho Data Integration and cannot be changed. If port 10000 is unavailable, the system will increment by 1 until an available port is found.
Change Pentaho Server (Tomcat) port numbers
Edit the `/pentaho/server/pentaho-server/tomcat/conf/server.xml` file and change the port numbers, as shown in the sketch after the following note:
Note: You may also have to change the SSL and SHUTDOWN ports in this file, depending on your configuration.
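A hedged sketch of the relevant server.xml entries; only the port attributes matter here, and your connector may carry additional attributes:

```xml
<!-- Shutdown port -->
<Server port="8005" shutdown="SHUTDOWN">

<!-- HTTP connector: change 8080 to your preferred port -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"/>
```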
Change the Pentaho Server URL
You can change the Pentaho Server hostname from localhost to a specific IP address, hostname, or domain name by following these instructions. This procedure is also a requirement if you are changing the Pentaho Server port number.
1. Stop the Pentaho Server.
2. Navigate to the `pentaho/server/pentaho-server/pentaho-solutions/system` directory and open the `server.properties` file with any text editor.
3. Modify the value of the `fully-qualified-server-url` element appropriately.
4. Save and close the file.
5. Start the Pentaho Server.
The Pentaho Server is now configured to reference itself at the specified URL.
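For example, pointing the server at a domain name instead of localhost (the hostname is a placeholder):

```properties
fully-qualified-server-url=http://reports.example.com:8080/pentaho/
```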
Note: If you recently upgraded to Pentaho 6.x or higher from a version earlier than 6.0, you may need to remove the <context-param> entry for the fully-qualified-server-url from the /tomcat/webapps/pentaho/WEB-INF/web.xml. If so, restart the Pentaho Server after removing it.
Change the Karaf startup timeout setting
Upon startup, the system waits for Karaf to install all of its features before timing out. If you modify Karaf so that it takes longer to install during startup, you may need to extend the default timeout setting to allow Karaf more time. The default timeout is 120000 milliseconds (about 2 minutes).
You can change this default timeout by editing the server.properties file.
1. Stop the Pentaho Server.
2. Navigate to the `/pentaho-server/pentaho-solutions/system` directory.
3. Open the `server.properties` file with any text editor, and search for the `karafWaitForBoot` parameter.
4. Uncomment the line containing the parameter and set it to your desired wait time in milliseconds.
5. Save and close the file.
6. Restart the Pentaho Server.
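For example, to give Karaf four minutes to boot:

```properties
# Value is in milliseconds
karafWaitForBoot=240000
```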
Change the PDI home directory location
The default location for the Pentaho Data Integration home directory is the .kettle directory in your system user's home directory.
- Windows: `C:\Documents and Settings\example_user\.kettle`
- Linux: `~/.kettle`
There will be a different .kettle directory, and therefore a different set of configuration files, for each system user that runs PDI.
The contents of this directory are listed in the following table:

| File | Purpose |
| --- | --- |
| `kettle.properties` | Main PDI properties file; contains global variables for low-level PDI settings |
| `shared.xml` | Shared objects file |
| `db.cache` | The database cache for metadata |
| `repositories.xml` | Connection details for PDI database or solution repositories |
| `.spoonrc` | User interface settings, including the last opened transformation/job |
| `.languageChoice` | Default language for the PDI client tool |
Standalone PDI client tool deployments
You can specify a single, universal .kettle directory for all users by declaring a KETTLE_HOME environment variable in your operating system. When declaring the variable, leave out the .kettle portion of it; this is automatically added by PDI.
export KETTLE_HOME=/home/pentaho/examplepath/pdi
Pentaho Server deployments that run PDI content
If you followed a manual deployment or archive package installation path, you can set a system environment variable as explained above, but it must be declared before the Pentaho Server service starts. Alternatively, you can change the CATALINA_OPTS system variable to include the -D flag for KETTLE_HOME, or you can edit the script that runs the Pentaho Server and set the flag inline, as in this example from the start-pentaho.sh script:
export CATALINA_OPTS="-Xms2048m -Xmx2048m -XX:MaxPermSize=256m -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -DKETTLE_HOME=/home/pentaho/examplepath/pdi"
Windows service modification
If you used the graphical utility to install the Pentaho Server, then you must modify the Java options flag that runs the Pentaho Server Tomcat service. Here is an example command that sets KETTLE_HOME to `C:\examplepath\pdi` (PDI appends the `.kettle` portion automatically):
tomcat8.exe //US//pentahobiserver ++JvmOptions -DKETTLE_HOME=C:\examplepath\pdi
Change the Quartz misfire threshold
With Quartz, scheduled jobs, transformations, or reports might sometimes try to run several times after being manually stopped and restarted, instead of running only once. This is typically caused by the misfireThreshold property in Quartz being set too high. The following steps show how to reset the misfireThreshold to a lower value.
1. Stop the Pentaho Server.
2. Locate the `/pentaho-server/pentaho-solutions/system/scheduler-plugin/quartz` directory.
3. Open the `quartz.properties` file with any text editor.
4. Find the misfireThreshold property (shown in the sketch after these steps) and change the default to a smaller number, such as `5000`. The value is in milliseconds.
5. Save and close the `quartz.properties` file.
6. Start the Pentaho Server.
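The property is the standard Quartz job-store misfire threshold:

```properties
org.quartz.jobStore.misfireThreshold = 5000
```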
Change the location of the server log file
Windows server log file location
To change the location of the pentaho.log file, edit the `log4j2.xml` file in `/pentaho-server/tomcat/webapps/pentaho/WEB-INF/classes/`. Modify the file location in the appender definition, using the appropriate Windows path for your installation; the sketch after the Linux section shows the shape of the entry.
Linux server log file location
If you are using Linux, the `log4j2.xml` file is in the same relative location, `/pentaho-server/tomcat/webapps/pentaho/WEB-INF/classes/`. Modify the file location in the appender definition, using the appropriate path for your installation, as in the following sketch.
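A hedged log4j2 sketch of the appender entry; the appender name and layout pattern here are illustrative, not copied from the shipped file:

```xml
<RollingFile name="PENTAHOFILE"
             fileName="/var/log/pentaho/pentaho.log"
             filePattern="/var/log/pentaho/pentaho.log.%d{yyyy-MM-dd}">
  <PatternLayout pattern="%d %-5p [%c] %m%n"/>
  <Policies>
    <TimeBasedTriggeringPolicy/>
  </Policies>
</RollingFile>
```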
Change the port numbers for the Pentaho Server
Follow the instructions below to change the port through which the Pentaho Server runs:
1. Stop the Pentaho Server.
2. Navigate to the `/pentaho-server/tomcat/conf/` directory.
3. Open the `server.xml` file with any text editor, and search for `Define a non-SSL HTTP/1.1 Connector`.
4. Change the port number in the connector port element from 8080 to your preferred port number (see the sketch at the end of this procedure).
5. Save and close the `server.xml` file.
6. Navigate to the `/pentaho-server/pentaho-solutions/system` directory and open the `server.properties` file with any text editor.
7. Change the `fully-qualified-server-url` entry to match the new port number you specified in `server.xml`.
8. Save and close the file.
9. Restart the Pentaho Server.
If you recently upgraded to Pentaho 6.0, you may need to remove the <context-param> entry for the fully-qualified-server-url from the /tomcat/webapps/pentaho/WEB-INF/web.xml. If so, restart the Pentaho Server after removing it.
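A sketch of the connector entry with the port changed to 9090 (other attributes on your connector stay as they are):

```xml
<!-- Define a non-SSL HTTP/1.1 Connector on port 9090 -->
<Connector port="9090" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"/>
```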
Change the staging database for CSV files
Hibernate is the default staging database for CSV files. Follow these instructions if you want to change the staging database.
1. Go to `/pentaho-solutions/system/data-access` and open the `settings.xml` file with any text editor.
2. Edit the `settings.xml` file as needed. The default value is Hibernate, as in the sketch after these steps. This value can be a JNDI name or the name of a Pentaho database connection. See the Install Pentaho Data Integration and Analytics document for more information on database connections.
3. Save and close the file.
4. Restart the User Console.
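A hedged sketch of the entry; the element name shown is an assumption, so look in your settings.xml for the staging datasource entry whose default value is `Hibernate`:

```xml
<data-access-staging-datasource>Hibernate</data-access-staging-datasource>
```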
Change the web application name or port
The Pentaho Server and web application default port number is 8080. The default web application name is pentaho, which is the name of the WAR file archive, the name of the directory that your application server creates, and also part of the URL structure for all content in the User Console.
If you need to change the User Console application name to something else, or if your Web application server is running on a port other than 8080, follow these instructions.
Change the web application name on Tomcat
These instructions only work on Tomcat servers that are configured to accept context.xml overrides built into deployed WAR files. Some Tomcat deployments may not have this feature turned on. You can change the Tomcat configuration on your own, or consult your Tomcat documentation to learn about other methods of changing a web application context. Use the XML snippet in these instructions in whichever configuration file you end up creating or modifying.
Follow these instructions to change the web application context for a pentaho.war file that you deployed to a Tomcat server. While the example below uses sample as the context name, you can use whatever context name you choose.
1. Stop the server.
2. Open the `pentaho/server/pentaho-server/tomcat/webapps/pentaho/META-INF/context.xml` file in a text editor, and change the `pentaho` references in the `context path` tag to your preferred context name. For example, to specify a context name of `sample`, modify the `path` attribute as shown in the sketch after these steps.
3. Save and close the file.
4. Navigate to the `pentaho/server/pentaho-server/tomcat/webapps` folder, and rename the `pentaho` folder to your preferred context name. In this example, rename the `pentaho` folder to `sample`.
5. Edit the `pentaho/server/pentaho-server/tomcat/webapps/ROOT/index.jsp` file to change the `pentaho` reference in the `URL` property to your preferred context name, in this example `sample`.
6. Edit the `pentaho/server/pentaho-server/pentaho-solutions/system/server.properties` file to change `pentaho` in the value of the `fully-qualified-server-url` setting to your preferred context name, as shown in the sketch after these steps.
7. Start the server.
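Hedged sketches of the two edited entries for the `sample` context (attributes other than those shown stay as they are in your files):

```xml
<!-- META-INF/context.xml -->
<Context path="/sample" docBase="webapps/sample/"/>
```

```properties
# system/server.properties
fully-qualified-server-url=http://localhost:8080/sample/
```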
Increase the CSV file upload limit
You may find that you need to increase the size of the upload limit for your CSV files. These steps guide you through this process.
1. Go to `/pentaho-server/pentaho-solutions/system` and open the `pentaho.xml` file.
2. Edit the XML as needed; sizes are measured in bytes (see the sketch after these steps).
3. Save your changes to the file.
4. In the User Console, go to Tools > Refresh System Settings to ensure that the change is implemented.
5. Restart the User Console.
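A hedged sketch of the upload-limit entries; verify the element names against your pentaho.xml:

```xml
<file-upload-defaults>
  <!-- Maximum size of a single uploaded file, in bytes -->
  <max-file-limit>10000000</max-file-limit>
  <!-- Maximum combined size of the upload folder, in bytes -->
  <max-folder-limit>500000000</max-folder-limit>
</file-upload-defaults>
```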
Manage config files for the Scheduler plugin
The following configuration files for using the Scheduler plugin with JDBC and LDAP should only be customized by someone with the necessary qualifications and experience.
settings.xml
This file is located in the pentaho-server/pentaho-solutions/system/scheduler-plugin directory and contains properties that you can use to control the plugin cache for development purposes. In most cases, it is best not to modify any of the properties for cache-messages, max-age, or cache elements.
It also contains, under the email-source element, a section for defining the data source used for email group administration purposes in order to import existing emails and groups.
Example:
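A minimal sketch, with the surrounding settings.xml structure simplified:

```xml
<settings>
  <email-source>jdbc</email-source>
</settings>
```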
Valid values for the email-source element:

| Value | Behavior |
| --- | --- |
| `jdbc` | Uses a JDBC data source to import emails and groups. |
| `ldap` | Uses an LDAP data source to import emails and groups. |
| `pentaho` | Does not import emails and groups. Instead, you can create emails and groups using the email group administration module in PUC. |
The configuration properties for the JDBC and LDAP data sources are located in the applicationContext-email-import.properties file.
applicationContext-email-import.properties
This file is located in the pentaho-server/pentaho-solutions/system/scheduler-plugin directory and contains the configuration properties for the JDBC and LDAP data sources. These data sources can be changed in the settings.xml file under the email-source element.
Note: Only someone with sufficient understanding of JDBC and LDAP should make changes to this file.
The password is encrypted using the Encr utility. The Encr.bat and Encr.sh files are located in the same directory as the startup script for Pentaho Server.
The emails-imported property defines whether emails have already been imported. If true, then no more emails are imported. After initial import, this property is automatically set to true. If more imports are done from a different data source, then this property needs to be set back to false.
Example based on a JDBC data source.
The actual query depends on the source RDBMS.
Prerequisites:

- The correct JDBC driver must be in the classpath for the JDBC configuration to work.
- In the example, the `<datasource>.emails-query` must return first name, last name, and email in that order. Field names in the output are dependent on the table columns that are imported.

JDBC example:
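A hedged sketch; the property keys follow the `<datasource>.` prefix convention described above but are assumptions, so verify them against the shipped file (the password shows the Encr-encrypted form):

```properties
emails-imported=false
jdbc.driver=org.postgresql.Driver
jdbc.url=jdbc:postgresql://localhost:5432/hr
jdbc.username=hr_reader
jdbc.password=Encrypted 2be98afc86aa7f2e4cb79ce10df90acde
jdbc.emails-query=SELECT first_name, last_name, email FROM employees
```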
Example based on an LDAP data source.
Prerequisites:

- Attributes required for LDAP must be in the following order: `<firstName>,<lastName>,<email>`.
LDAP example:
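A hedged sketch against a hypothetical directory; the keys again follow the `<datasource>.` convention and are assumptions, and the attribute names assume an inetOrgPerson schema:

```properties
emails-imported=false
ldap.url=ldap://ldap.example.com:389
ldap.username=cn=admin,dc=example,dc=com
ldap.password=Encrypted 2be98afc86aa7f2e4cb79ce10df90acde
ldap.search-base=ou=people,dc=example,dc=com
ldap.attributes=givenName,sn,mail
```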
For more information about JDBC, see https://www.oracle.com/java/technologies/javase/javase-tech-database.html
For more information about LDAP, see https://ldap.com/
quartz.properties
This file is located in the pentaho-server/pentaho-solutions/system/scheduler-plugin/quartz directory and is the properties file for the open source job scheduling framework, Quartz. Refer to the official Quartz documentation at http://www.quartz-scheduler.org/documentation/.
Set default SELECT DISTINCT for Interactive Reports
By default, Interactive Reports queries data with the SQL SELECT DISTINCT statement to return only distinct (different) data values, which may require an extensive sorting operation in the database. If you want to reduce the cost of a system-wide sorting operation, you can set new reports to open with the Select Distinct option cleared.
Perform the following steps to change the default setting of the Select Distinct option in the Query Setting dialog box of new Interactive Reports:
1. Stop the Pentaho Server.
2. Locate the `server/pentaho-server/pentaho-solutions/system/pentaho-interactive-reporting` directory.
3. Open the `settings.xml` file with any text editor.
4. Find the `<default-select-distinct>` tag and change it to the desired setting. If you want the Select Distinct option cleared as the default, set the tag to `false`, as shown in the example after these steps.
5. Save and close the `settings.xml` file.
6. Start the Pentaho Server.
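The edited entry:

```xml
<default-select-distinct>false</default-select-distinct>
```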
Set system max row limit for Interactive Reports
You can prevent overly large queries from overloading your database server by setting a system-wide maximum row limit for Pentaho Interactive Reports. Your users can still define their own design-time row limits in Interactive Reports, but while designing their reports they can never exceed the maximum number of rows that you specify.
1. Shut down the Pentaho Server.
2. Locate the `/pentaho-server/pentaho-solutions/system/pentaho-interactive-reporting` directory.
3. Open the `settings.xml` file with any text editor.
4. Find the `<query-limit>` tag and change the default number of 100000 within the tags to the maximum number of rows desired (see the example after these steps).
5. Save and close the file.
6. Start the Pentaho Server.
If you are migrating content from a previous version, you will need to add the <query-limit> tag to your settings.xml for Interactive Reports.
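For example, capping system-wide results at 50000 rows:

```xml
<query-limit>50000</query-limit>
```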
Roll back system max row limit
These instructions show you how to return the system maximum row limit to the Pentaho 5.3 settings.
1. Shut down the Pentaho Server.
2. Locate the `/pentaho-server/pentaho-solutions/system/pentaho-interactive-reporting` directory.
3. Open the `settings.xml` file with any text editor.
4. To change the maximum number of rows that will be rendered in Pentaho Interactive Reports in edit or view mode, find the `<design-query-limit>` tag and change the default number of 500 back to 25, as shown in the sketch after these steps.
5. To turn the `design-query-limit` off by default, find the `<design-query-limit-enabled>` tag and change the value to `false`.
6. Save and close the `settings.xml` file.
7. Restart the server.
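The before-and-after entries:

```xml
<!-- FROM: -->
<design-query-limit>500</design-query-limit>
<!-- TO: -->
<design-query-limit>25</design-query-limit>
<design-query-limit-enabled>false</design-query-limit-enabled>
```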