Managing transformations and jobs
Managing transformations and jobs includes working with transformation and job files in Pipeline Designer, managing their database connections, and performing tasks common to both transformations and jobs.
Manage transformations and jobs
Manage transformations and jobs on the Pipeline Designer main page. Available actions include marking transformations and jobs as favorites, downloading them as files, creating duplicates, moving them to different folders or to the trash, renaming them, and viewing their details.
To manage a transformation or job, complete the following steps:
Log into the Pentaho User Console.
Open Pipeline Designer:
If you are using the Modern Design, in the menu on the left side of the page, click Pipeline Designer.
If you are using the Classic Design, click Switch to the Modern Design, and then in the menu on the left side of the page, click Pipeline Designer.
Pipeline Designer opens with the Quick Access section expanded.
In the table at the bottom of the screen, select either the Recently opened or Favorites tab.
Search for or browse to the transformation or job you want to manage, then click the More Actions icon.
In the More Actions menu, select one of the following options:
Not favorite (click to add): Adds the transformation or job to the Favorites tab. It changes the heart icon in the Favorite column from empty to red.
Favorite (click to remove): Removes the transformation or job from the Favorites tab. It changes the heart icon in the Favorite column from red to empty.
Download: Downloads the transformation or job as a file to your default download folder.
Transformations download as KTR files.
Jobs download as KJB files.
Move: Opens the Move to window. Navigate to a folder (or create one), then click Move here.
Duplicate: Opens the Duplicate to window. Navigate to a folder (or create one), then click Paste here.
Send to trash: Opens the Send to trash? dialog box. Click Yes to move the transformation or job to the trash.
Info: Opens the Transformation Name or Job Name window. You can view details and copy the file path by clicking Copy source.
Rename: Makes the transformation or job name editable in the table.
After you rename a transformation or job, dashboards, schedules, and favorites linked to it become inactive.
Manage connections for transformations and jobs
While creating or editing a transformation or job in Pipeline Designer, you can define connections to multiple databases provided by multiple database vendors such as MySQL and Oracle. Pipeline Designer ships with the most suitable JDBC drivers for PostgreSQL, our default database.
Pentaho recommends avoiding ODBC connections. The ODBC to JDBC bridge driver does not always provide an exact match and adds another level of complexity, which affects performance. The only time you may have to use ODBC is if no JDBC driver is available. For details, see the Pentaho Community article on why you should avoid ODBC.
When you define a database connection in Pipeline Designer, the connection information (such as the user name, password, and port number) is stored in the Pentaho Repository and is available to other users when they connect to the repository. If you are not using the Pentaho Repository, the database connection information is stored in the XML file associated with your transformation or job. See the Pentaho Data Integration documentation for details on the Pentaho Repository.
You must have information about your database (such as your database type, port number, user name, and password) before you define a JDBC connection. In PDI, you can also set connection properties as variables. Through such variables, your transformations and jobs can access data from multiple database types.
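For example (the variable names here are hypothetical), a connection could be defined entirely with variables and resolved at run time:
Host Name: ${DB_HOST}
Database Name: ${DB_NAME}
Port Number: ${DB_PORT}
Username: ${DB_USER}
Password: ${DB_PASSWORD}
You could then supply the actual values in kettle.properties or as parameters when you run the transformation or job.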
Make sure to use clean ANSI SQL that works on all the database types used.
You must have a transformation or job open to manage connections from within the Pipeline Designer. To see steps for opening a transformation or job, see Create a transformation, Create a job, or Edit a transformation or job.
Tasks
If you need to run standard SQL commands against a connection, see Use the SQL Editor.
Define a new database connection
While working on a transformation or job, you can define a new database connection to use.
Before you can create a connection, the appropriate driver must be installed for your particular data connection. Your IT administrator should be able to install the appropriate driver for you. For details, see Specify data connections for the Pentaho Server in the Install Pentaho Data Integration and Analytics guide.
To define a new database connection, complete the following steps:
With a transformation or job open, on the left side of the Pipeline Designer interface, click the View icon. The View pane opens with the Transformations folder expanded, containing the Database Connections list.
Find Database Connections, click the More Actions icon, and then select New. The Database Connection window opens.
Enter the connection information for your new database connection. The information required depends on your access protocol. Refer to the examples in the following sections of this topic for the Native (JDBC) and OCI protocols:
Native (JDBC) protocol information
Create a Native (JDBC) connection in the Database Connection dialog box by completing the following steps:
In the Connection Name field, enter a name that uniquely describes this connection.
The name can have spaces, but it cannot have special characters (such as #, $, and %).
In the Connection Type list, select the database you want to use (for example, MySQL or Oracle).
In the Access Type list, select Native (JDBC). The access protocols that appear depend on the database type you select.
In the Settings section, enter the following information:
Host Name
The name of the server that hosts the database to which you are connecting. Alternatively, you can specify the host by IP address.
Database Name
The name of the database to which you are connecting. If you are using an ODBC connection, enter the Data Source Name (DSN) in this field.
Port Number
The TCP/IP port number, if it is different from the default.
Username
The optional user name used to connect to the database.
Password
The optional password used to connect to the database.
Click Test Connection. A success message appears if the connection is established.
Click OK to close the connection test dialog box.
To save the connection, click Save. The database connection is saved and appears in the Database Connections list.
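For example, a Native (JDBC) connection to a hypothetical PostgreSQL database might use the following settings (all names and values below are placeholders):
Connection Name: Sales_DW
Connection Type: PostgreSQL
Access Type: Native (JDBC)
Host Name: dbserver.example.com
Database Name: sales_dw
Port Number: 5432
Username: etl_user
Password: ********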
OCI protocol information
Perform the following steps to create an OCI connection in the PDI Database Connection dialog box:
In the Connection Name field, enter a name that uniquely describes this connection.
The name can have spaces, but it cannot have special characters (such as #, $, and %).
In the Connection Type list, select Oracle.
In the Access list, select OCI. The access protocols that appear depend on the database type you select.
In the Settings section, enter the following information as directed by the Oracle OCI documentation.
SID
The Oracle system ID that uniquely identifies the database on the system.
Tablespace for Data
The name of the tablespace where the data is stored.
Tablespace for Indices
The name of the tablespace where the indices are stored.
User Name
The user name used to connect to the database.
Password
The password used to connect to the database.
Click Test Connection.
A success message appears if the connection is established.
Click OK to close the connection test dialog box.
To save the connection, click OK to close the Database Connection dialog box.
If you want to use Advanced, Options, or Pooling for your OCI connection, refer to the Oracle OCI documentation to understand how to specify these settings.
Connect to Snowflake using strong authentication
If you are defining a data connection to Pentaho Data Integration and Analytics from a Snowflake data warehouse in the cloud, you can improve connection security by applying strong authentication.
You can apply strong authentication to your defined Pentaho data connection from Snowflake through a key pair.
Configure key pair strong authentication for your Snowflake data connection by completing the following steps:
After entering the information for your Snowflake data connection in the General tab of the Database Connection dialog box, select the Options tab.
Set the key pair parameters as indicated in the following table:
authenticator
snowflake_jwt
private_key_file
Specify the name of the private key file you use in your environment. For example, /rsa_key.p8
private_key_file_pwd
Specify the password for accessing the private key file you use in your environment. For example, PentahoSnowFlake123
See https://docs.snowflake.com/en/developer-guide/jdbc/jdbc-configure#private-key-file-name-and-password-as-connection-properties for details on the private key file and its password.
Click Test Connection to verify your connection. A success message appears if the connection is established.
Click OK to close the connection test dialog box.
To save the connection, click OK to close the Database Connection dialog box.
You have applied key pair authentication to your defined data connection between Pentaho and Snowflake.
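For reference, the encrypted private key file used above is typically generated with OpenSSL before you configure the connection. A minimal sketch, following the commands shown in the Snowflake key-pair authentication documentation (the file names are examples only):
openssl genrsa 2048 | openssl pkcs8 -topk8 -v2 des3 -inform PEM -out rsa_key.p8
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub
The public key (rsa_key.pub) is assigned to your Snowflake user, and the private key file (rsa_key.p8) and its password are the values you enter for private_key_file and private_key_file_pwd.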
Connect to an Azure SQL database
You can use an Azure SQL database as a data source with the Pipeline Designer. This connection is required if you want to use the Bulk load into Azure SQL DB job entry to load data into your Azure SQL database from Azure Data Lake Storage. Pentaho supports the Always Encrypted option, dynamic masking, and multiple authentication methods for connecting to an Azure SQL database.
Because one physical server may host databases for multiple customers, keep in mind that Azure SQL is different from MSSQL. For more information about the differences between Azure SQL and MSSQL, see https://docs.microsoft.com/en-us/azure/azure-sql/database/features-comparison
Before you begin
You must have an Azure account with an active subscription and an instance of an Azure SQL database. You also need to install the Azure SQL database drivers. For help installing your drivers, see your Microsoft documentation.
Additionally, you need to obtain the following information from your system administrator:
Host name
Database name
Port number
Authentication method
Username
Password
If you use the Always Encryption Enabled option, you also need to obtain the Client id and Client Secret Key.
Authentication method
Pentaho supports four authentication methods for connecting to the Azure SQL DB instance:
SQL Authentication
Connect using the Azure SQL Server username and password.
Azure Active Directory
Connect using Multi-Factor Authentication (MFA). The MFA password must be entered on the displayed webpage.
Azure Active Directory with password
Connect using an Azure AD username and password.
Azure Active Directory with integrated authentication
Connect using the federated on-premises Active Directory Federation Services (ADFS) with Azure Active Directory in the cloud.
Connect to an Azure database
To connect to an Azure SQL database, complete the following steps:
In the Connection Name field, enter a name that uniquely describes this connection. The name can have spaces, but it cannot have special characters (such as #, $, and %).
In the Connection Type list, select Azure SQL DB.
In the Access list, select Native (JDBC).
Enter your database connection information.
Host Name
The name of the Azure SQL server instance.
Database Name
The name of the Azure SQL database to which you are connecting.
Port Number
The TCP/IP port number. The Azure SQL Database service is only available through TCP port 1433. You must set your firewall to allow outgoing TCP communication on port 1433.
Authentication method
The authentication method used to connect to the Azure SQL DB instance. The default is SQL Authentication.
Username
The username used to connect to the database.
Password
The password used to connect to the database.
Always Encryption Enabled
Select to use encryption. See Use the Always Encryption Enabled option for instructions on using this option.
Client id
The unique client identifier, used to identify and set up a durable connection path to the server.
Client Secret Key
The unique name of the key value in the Azure Key Vault.
Click Test Connection to verify your connection.
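For example, a hypothetical Azure SQL connection that uses SQL Authentication might look like the following (the server and database names are placeholders):
Host Name: yourserver.database.windows.net
Database Name: salesdb
Port Number: 1433
Authentication method: SQL Authentication
Username: etl_admin
Password: ********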
Use the Always Encryption Enabled option
Before you can use the Always Encryption Enabled option, you must perform the following steps. Consult the Microsoft Azure SQL documentation for assistance with your Azure SQL tools.
Generate a column master key in the Azure Key Vault.
Encrypt the column using the column master key.
Register the app under Azure Active Directory and obtain both the Client id and Client Secret Key.
Grant permissions to the Client id for accessing the Azure Key Vault.
Select Always Encryption Enabled and provide the Client id and Client Secret Key.
The Azure Always Encrypted feature is now active.
Clear cached database metadata
When working with complex transformations or jobs, Pipeline Designer might accumulate outdated or incorrect metadata due to changes in the underlying database. You can use the Clear Complete DB Cache option to clear out the outdated or incorrect metadata the next time you access the transformation or job.
Cached metadata might include information about:
Table structures
Column types
Indexes
Primary and foreign keys
Other schema-related metadata
Note: Clearing cached database metadata does not delete any data from your database, affect transformation or job files, or clear runtime data caches that are used during execution.
To clear cached database metadata, complete the following steps:
With a transformation or job open, on the left side of the Pipeline Designer interface, click the View icon. The View pane opens with the Transformations folder expanded, containing the Database Connections list.
Find Database Connections, click the More Actions icon, and then select Clear Complete DB Cache. The cache is cleared, and a Success message is displayed. Fresh metadata is retrieved from the database the next time you access it.
Edit a database connection
You can edit an existing database connection to refine and change aspects of the connection.
To edit a database connection, complete the following steps:
With a transformation or job open, on the left side of the Pipeline Designer interface, click the View icon. The View pane opens with the Transformations folder expanded, containing the Database Connections.
Expand Database Connections, find the database connection you want to edit, and click the More Actions icon.
Select Edit. The Database Connection window opens.
Configure the options in each tab of the Database Connections window:
(Optional) To view features of the database connection, click Feature List.
(Optional) To explore configured database connections, click Explore. For details, see Explore configured database connections.
Click Test Connection. If the connection is established, a success message is displayed.
Click OK to close the success message.
Click Save. The connection is saved and the Database Connections window closes.
General
In the General tab, the available options depend on the type of database connection you are editing. Connection information depends on your access protocol. For details about general connection settings, refer to the examples in Define a new database connection.
Advanced
The Advanced tab contains options for configuring properties associated with how SQL is generated. With these properties, you can set a standard across all your SQL tools, ETL tools, and design tools.
Supports the Boolean data type
Instructs Pipeline Designer to use native Boolean data types supported by the database.
Supports the timestamp data type
Instructs Pipeline Designer to use the timestamp data type supported by the database.
Quote all in database
Enables case-sensitive table names. For example, MySQL is case-sensitive on Linux, but not case-sensitive on Microsoft Windows. If you quote the identifiers, the database uses a case-sensitive table name.
Force all to lower-case
Enables the system to change the case of all identifiers to lower-case.
Force all to upper-case
Enables the system to change the case of all identifiers to upper-case.
Preserve case of reserved words
Instructs Pipeline Designer to use a list of reserved words supported by the database.
The Preferred Schema name where no schema is used
For Pipeline Designer, enter the preferred schema name (for example, MYSCHEMA).
SQL Code Editor
Enter the SQL statements to execute right after connecting.
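For example, assuming a PostgreSQL connection and a schema named myschema (both hypothetical), you might enter statements such as the following to run right after connecting:
SET search_path TO myschema;
SET statement_timeout = 60000;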
Options
Use the Options tab to add or delete parameters. Parameters enable you to control database-specific behavior.
To add more parameters to the list, click Add Row.
To delete rows, click the Delete icon next to the row.
Pooling
Configure options in the Pooling tab to set up a connection pool and define options like the initial pool size, maximum pool size, and connection pool parameters. By default, a connection remains open for each individual report or set of reports in PUC and for each individual step in a transformation in PDI. For example, you might start by specifying a pool of ten or fifteen connections, and as you run reports in PUC or transformations in PDI, the unused connections drop off. Pooling helps control database access, especially if you have dashboards that contain many reports and require a large number of connections. Pooling can also be implemented when your database licensing restricts the number of active concurrent connections.
You can take the following action in the parameters section:
To add a new parameter, click Add Row and then enter the Parameter name and Value.
To delete a parameter, click the Delete icon.
To change how many parameters are shown at one time, select a new Items per page value.
If there are multiple pages of parameters, scroll through the pages using the left and right arrows that appear under the list of parameters.
The following table shows an example of Pooling options that might be available in a typical JDBC driver. Check your driver documentation for driver-specific pooling details.
Enable Connection Pooling
Enables connection pooling.
Pool Size
Initial
Set the initial size of the connection pool.
Maximum
Set the maximum number of connections in the connection pool.
Parameters
You can define additional custom pool parameters. Click any parameter to view a short description of that parameter. Click Restore Defaults to restore the default values for selected parameters. The most commonly used parameter is validationQuery. The parameter differs slightly depending on your RDBMS connection. The basic set of Pentaho databases uses the following values for validationQuery:
For Oracle, use Select 1 from dual.
For PostgreSQL, MySQL, and MS SQL Server, use Select 1.
Description
Enter a description for your parameters.
Clustering
Use the Clustering options to cluster the database connection and create connections to data partitions in Pipeline Designer. To create a new connection to a data partition, enter a Partition ID, the Host Name, the Port, the Database Name, User Name, and Password for the connection.
If you have the Pentaho Server configured in a cluster of servers, and use the Data Source Wizard (DSW) in PUC to add a new data source, the new data source will only be seen on the cluster node where the user has a session. For the new data source to be seen by all the cluster nodes, you must disable DSW data source caching. This may cause the loading of the data source list to be slower since the list is not cached.
To disable the cache, navigate to the server/pentaho-server/pentaho-solutions/system folder and set the enableDomainIdCache value in the system.properties file to false.
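A minimal sketch of the relevant line in the system.properties file (assuming the property is already present in the file):
enableDomainIdCache=false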
Delete a database connection
Delete a database connection you no longer need.
Deleting a connection affects all reports, charts, dashboards, and other content that use the connection.
To delete a database connection, complete the following steps:
With a transformation or job open, on the left side of the Pipeline Designer interface, click the View icon. The View pane opens with the Transformations folder expanded, containing the Database Connections.
Expand Database Connections, find the database connection you want to delete, and click the More Actions icon.
Select Delete. The Confirm deletion dialog box opens.
Click Yes to confirm deletion. The database connection is deleted.
Explore configured database connections
The Database Explorer allows you to explore configured database connections. You can browse tables, views, and synonyms, along with the catalog, schema, or both to which each object belongs.
To explore a database connection, complete the following steps:
With a transformation or job open, on the left side of the Pipeline Designer interface, click the View icon. The View pane opens with the Transformations folder expanded, containing the Database Connections list.
Expand Database Connections, find the database connection you want to explore, and click the More Actions icon.
Select Explore. The Database Explorer window opens.
(Optional) Click the refresh icon to refresh the list.
Expand the folders and find the item you want to review.
Click Actions, and then select one of the following features:
Preview first 100
Returns the first 100 rows from the selected table.
Preview x Rows
Prompts you for the number of rows to return from the selected table.
Row Count
Displays the total number of rows in the selected table.
Show Layout
Displays a list of column names, data types, and so on from the selected table.
DDL
Generates the DDL to create the selected table, based on the connection type selected in the drop-down list.
View SQL
Launches the Simple SQL Editor for the selected table.
Truncate Table
Generates a TRUNCATE TABLE statement for the current table. Note: The statement is commented out by default to prevent users from accidentally deleting the table data.
Data Profile
Provides basic information about the data.
When you finish exploring the database connection, click OK. The Database Explorer window closes.
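For reference, the DDL and Truncate Table actions produce SQL similar to the following for a hypothetical table named customers (the exact output depends on the connection type and your table definition):
CREATE TABLE customers ( customer_id INTEGER, customer_name VARCHAR(100), created_date DATE );
-- TRUNCATE TABLE customers;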
Show dependencies
Expand the connection to display a list of dependencies across the platform, including transformations and jobs.
To show the dependencies for a database connection, complete the following steps:
With a transformation or job open, on the left side of the Pipeline Designer interface, click the View icon. The View pane opens with the Transformations folder expanded, containing the Database Connections list.
Expand Database Connections, find the database connection you want to explore, and click the More Actions icon.
Select Show dependencies. The database connection is expanded to show the transformations and jobs that depend on that connection.
Edit a transformation or job
Edit an existing transformation or job to rename it, add new steps, edit steps, configure hops, edit properties, reset it, or save it in a different location.
To edit a transformation or job, complete the following steps:
Log into the Pentaho User Console.
Open Pipeline Designer:
If you are using the Modern Design, in the menu on the left side of the page, click Pipeline Designer.
If you are using the Classic Design, click Switch to the Modern Design, and then in the menu on the left side of the page, click Pipeline Designer.
Pipeline Designer opens with the Quick Access section expanded.
In the table at the bottom of the screen, select either the Recently opened or Favorites tab.
Open a transformation or job:
Search for or browse to the transformation or job, and then click Open.
Click Open files, and then in the Select File or Directory dialog box, search for or browse to the transformation or job and click Open.
Edit the transformation or job by taking one or more of the following actions:
Rename a transformation or job
In the transformation or job tab, click the name to make it editable.
Type a new name and press Enter.
Note: You can also change the name of a transformation or job in the properties window. For details, see Configure transformation properties or Configure job properties.
Add a new step
In the Design pane, search for or browse to a step you want to use in the transformation. You may need to expand sections in the Design pane to find steps.
Drag the step you want to add onto the canvas.
Edit steps
Hover over a step to open the step menu, and then select one of the following options for either a transformation or job:
Delete
Deletes the step from the canvas.
Edit
Opens the Step Name window where you can configure the properties of the step. Step properties may appear in multiple sections, tabs, or both.
Note: To learn more about the step you're configuring, in the lower-left corner of the Step Name window, click Help.
Duplicate
Adds a copy of the step to the canvas.
For transformations only, you can select one of the following additional options:
More Actions > Change Number of Copies
Opens the Number of copies dialog box, where you can enter a number or a variable to specify how many copies of the step are processed in parallel when the transformation or job is run. To find a variable, in the Number of copies (1 or higher) box, click the Select variable to insert icon.
More Actions > Data Movement
Opens a list of data movement options for you to select from to specify how data rows are distributed to the next steps of the transformation or job. Round-Robin is the default setting.
Round-Robin: Distributes rows evenly across all parallel step copies using round-robin logic. This setting optimizes load balancing when the transformation includes multiple instances of the next step.
Load Balance: Routes rows to the step copy with the lightest processing load. This setting can improve performance when processing times vary across parallel step instances.
Copy Data to Next Steps: Sends each row to all parallel step copies. Use this setting when every downstream branch must process the complete dataset independently.
Configure hops
To add hops between steps, hover over a step’s handle until a plus sign (+) appears, then drag the connection to the handle of another step.
Disable a hop by selecting it and clicking the Disable icon.
Delete the hop by selecting it and clicking the Delete icon.
Edit transformation or job properties
In the Canvas Action toolbar, click the Settings icon. The Transformation Properties or Job Properties window opens.
Change properties in one or more tabs. For details, see Transformation properties or Job properties.
Reset the transformation or job
Note: Resetting a transformation or job clears all steps and connections from the transformation or job and cannot be undone.
In the Canvas Action toolbar, click the Reset icon. The Confirm Reset window opens.
Click Reset. All steps and connections are removed from the transformation or job.
Save transformation or job in a different location
Click Save as. The Select File or Directory dialog box opens.
Search for or browse to the folder in the repository where you want to save the transformation or job.
(Optional) To create a new folder in the repository, click the New Folder icon, and then in the New folder dialog box, enter a New folder name and click Save.
(Optional) To delete a folder from the repository, select the folder and click the Delete icon.
In the Select File or Directory dialog box, click Save. The Save Change dialog box opens.
Click Yes to confirm that you want to save the transformation or job.
Use the SQL Editor
The SQL Editor is a good tool to use when you must execute standard SQL commands for tasks such as creating tables, dropping indexes, and modifying fields. The SQL Editor is used to preview and execute DDL (Data Definition Language) generated by Pipeline Designer, such as create and alter table, create index, and create sequence SQL commands. For example, if you add a Table Output step to a transformation and click the SQL button at the bottom of the Table Output dialog box, Pipeline Designer automatically generates the necessary DDL for the output step to function properly and presents it to the end user through the SQL Editor.
Below are some points to consider:
Multiple SQL statements must be separated by semicolons.
Before SQL statements are sent to the database to be executed, Pipeline Designer removes returns, line feeds, and separating semicolons.
Pentaho Data Integration clears the database cache for the database connection on which you launch DDL statements.
The SQL Editor does not recognize the dialects of all supported databases. That means that creating stored procedures, triggers, and other database-specific objects may pose problems. Consider using the tools that came with the database in these instances.
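For example, a hypothetical DDL script entered in the SQL Editor could contain several statements separated by semicolons:
CREATE TABLE sales_summary (region VARCHAR(50), total_amount DECIMAL(18,2));
CREATE INDEX idx_sales_region ON sales_summary (region);
ALTER TABLE sales_summary ADD order_count INTEGER;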
Use notes on the canvas
While creating or editing a transformation or job, you can add notes to the canvas to help you and others understand the structure, design decisions, business rules, dependencies, and other aspects of your transformations and jobs.
For instructions on creating and editing transformations and jobs, see Create a job, Create a transformation, and Edit a transformation or job.
With a transformation or job open, take one or more of the following actions.
Add a note
In the canvas toolbar, click the Add Note icon. The Notes dialog box opens.
In the Note box, enter the content for the note.
To format the note, click Style, and then enter the font, color, and shadow options you want to use for the note.
Click Save. The note appears on the canvas.
Edit a note
Hover over the note on the canvas. The Delete and Edit icons are displayed above the note.
Click the Edit icon. The Notes dialog box opens.
In the Note box, edit the content of the note.
To edit the format of the note, click Style, and then edit the font, color, and shadow options for the note.
Click Save. The edited note is saved.
Reposition a note
To reposition a note, click and drag the note on the canvas to the location where you want it to appear.
Delete a note
Hover over the note on the canvas. The Delete and Edit icons are displayed above the note.
Click the Delete icon. The note is deleted.
Run modifiers
Run modifiers include parameters, variables, and arguments, which you can use to modify how you run transformations and jobs.
Parameters
Parameters are local variables that apply only to the specific transformation in which you define them. When defining a parameter, you can assign it a default value to use in the event that one is not fetched for it. This feature makes parameters unique among dynamic input types in Pipeline Designer.
Note: If there is a name collision between a parameter and a variable, the parameter will take precedence.
Variables
A variable in Pipeline Designer is a piece of user-supplied information that you can use dynamically and programmatically in a variety of different scopes. A variable can be local to a single step, or be available to the entire JVM (Java Virtual Machine) that Pipeline Designer is running in.
Pipeline Designer variables can be used in both transformation steps and job entries. For details, see Basic concepts of ETL in Pipeline Designer. You can define variables by using the Set Variables and Set Session Variables steps in a transformation or by editing the kettle.properties file.
The Get Variable step can explicitly retrieve a value from a variable, or you can use it in any Pipeline Designer text field which has the dollar sign icon next to it by using a metadata string in either the Unix or Windows formats:
Unix format: ${VARIABLE}
Windows format: %%VARIABLE%%
Both formats can be used and even mixed. In fact, you can create variable recursion by alternating between the Unix and Windows syntax. For example, if you wanted to resolve a variable that depends on another variable, then you could use this example: ${%%inner_var%%}.
Note: If there is a name collision with a parameter or argument, variables will defer.
You can also use ASCII or hexadecimal character codes in place of variables, using the same format: $[hex value]. This makes it possible to escape the variable syntax in instances where you need to put variable-like text into a variable. For instance if you wanted to use ${foobar} in your data stream, then you can escape it like this: $[24]{foobar}. Pipeline Designer will replace $[24] with a $ without resolving it as a variable.
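For example, assuming a variable named REGION has been set earlier (for instance with the Set Variables step), a file path field could contain /data/${REGION}/input.csv, or /data/%%REGION%%/input.csv in the Windows-style syntax, and Pipeline Designer resolves the variable when the transformation runs.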
Environment variables
This is the traditional variable type in Pipeline Designer. You define an environment variable through the Set Environment Variables dialog box in the Edit menu, or by hand by passing it as an option to the Java Virtual Machine (JVM) with the -D flag.
Environment variables are an easy way to specify the location of temporary files in a platform-independent way; for example, the ${java.io.tmpdir} variable points to the /tmp/ directory on Unix/Linux/OS X and to the C:\Documents and Settings\<username>\Local Settings\Temp\ directory on Windows.
The only problem with using environment variables is that they cannot be used dynamically. For example, if you run two or more transformations or jobs at the same time on the same application server, you may get conflicts. Changes to the environment variables are visible to all software running on the virtual machine.
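A minimal sketch, assuming you start the JVM from a startup script and want an environment variable with the hypothetical name ETL_BASE_DIR: add -DETL_BASE_DIR=/opt/etl/work to the Java options used to launch the application (for example, in the PENTAHO_DI_JAVA_OPTIONS environment variable if your installation uses the standard startup scripts). The value is then available as ${ETL_BASE_DIR} in transformations and jobs.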
Kettle Variables
Kettle variables provide a way to store small pieces of information dynamically in a narrower scope than environment variables. A Kettle variable is local to Kettle, and can be scoped down to the job or transformation in which it is set, or up to a related job. The Set Session Variables step in a transformation allows you to specify the related job that you want to limit the scope to (for example, the parent job, grandparent job, or the root job).
Kettle variables configure various Pipeline Designer-specific options such as the location of the shared object file for transformations and jobs or the log size limit.
To edit Kettle variables manually, complete these steps.
Open the kettle.properties file in a text editor. By default, the kettle.properties file is stored in your home directory or the .pentaho directory.
Edit the file.
When you are finished, save and close the file.
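For example, a kettle.properties file might contain entries such as the following (the second entry is a hypothetical custom variable):
KETTLE_LOG_SIZE_LIMIT=500
ETL_STAGING_DIR=/data/staging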
Set the LAZY_REPOSITORY variable
The LAZY_REPOSITORY variable restores the directory-loading behavior of the repository to be as it was before Pentaho 6.1.
Note: Changing this variable to false will make repository loading more expensive.
To set the LAZY_REPOSITORY variable, complete these steps.
Open the kettle.properties file in a text editor. By default, the kettle.properties file is stored in your home directory or the .pentaho directory.
Look for KETTLE_LAZY_REPOSITORY and, if it is set to false, change the value to true.
When you are finished, save and close the file.
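The resulting line in kettle.properties reads:
KETTLE_LAZY_REPOSITORY=true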
Arguments
An argument is a named, user-supplied, single-value input given as a command line argument (running a transformation or job manually from Pan or Kitchen, or as part of a script). Each transformation or job can have a maximum of 10 arguments. Arguments are declared as space-separated values given after the rest of the Pan or Kitchen command line:
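A minimal sketch of such a command, assuming a transformation stored at the hypothetical path /path/to/transform.ktr:
pan.sh -file=/path/to/transform.ktr argOne argTwo argThree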
In the above example, the values argOne, argTwo, and argThree are passed into the transformation, where they will be handled according to the way the transformation is designed. If it was not designed to handle arguments, nothing will happen. Typically, these values would be numbers, words (strings), or variables (system or script variables, not Pipeline Designer variables).
In Pipeline Designer, you can test argument handling by defining a set of arguments when you run a transformation or job. For details, see Run a transformation or Run a job.
Export a transformation or job
Export transformations and jobs to save backup copies, migrate them to other systems, share them with other users, or use them in other tools. You can export a transformation as a .ktr file or a job as a .kjb file.
To export a transformation or job, complete the following steps:
Log into the Pentaho User Console.
Open Pipeline Designer:
If you are using the Modern Design, in the menu on the left side of the page, click Pipeline Designer.
If you are using the Classic Design, click Switch to the Modern Design, and then in the menu on the left side of the page, click Pipeline Designer.
Pipeline Designer opens with the Quick Access section expanded.
In the table at the bottom of the screen, select either the Recently opened or Favorites tab.
Open a transformation or job:
Search for or browse to the transformation or job, and then click Open.
Click Open files, and then in the Select File or Directory dialog box, search for or browse to the transformation or job and click Open.
In the Canvas Action toolbar, click the Export icon. The transformation or job downloads to your default download folder as either a .ktr file or a .kjb file, respectively.
Stop transformations and jobs
There are two different methods you can use to stop a transformation or job running in Pipeline Designer. The method you use depends on the processing requirements of your ETL task. Most transformations and jobs can be stopped immediately without concern. However, some transformations and jobs ingest records from messaging or streaming data sources, and such incoming data might need to be stopped safely to avoid potential data loss.
In the Canvas Action toolbar, take one of the following actions:
To stop processing all data immediately, click Stop.
To make the transformation or job finish any records that were initiated or retrieved before it stops processing data, click the arrow next to Stop, and then select Stop input processing.