Select an Engine
You can select the Pentaho engine to run transformations in the default Pentaho (Kettle) environment.
You can also use the Spark Submit job entry to run big data transformations on your Hadoop cluster, processing large amounts of data across multiple nodes. See Spark Submit for details.
Pentaho Engine
The Pentaho engine does not execute sub-transformations or sub-jobs when you select the Pentaho server or Slave server option. If you want to run a sub-transformation on the same server where your parent job runs, select Local for the Run Configuration type.
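For reference, the Local option is equivalent to executing the transformation in the current JVM through the Kettle Java API. The following is a minimal sketch, assuming the PDI libraries are on the classpath; the file path `/tmp/example.ktr` is a placeholder for illustration.

```java
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransMeta;

public class RunLocalTransformation {
    public static void main(String[] args) throws Exception {
        // Initialize the Kettle environment (loads plugins and core settings).
        KettleEnvironment.init();

        // Load the transformation definition from a .ktr file.
        // The path below is a placeholder, not a real sample shipped with PDI.
        TransMeta transMeta = new TransMeta("/tmp/example.ktr");

        // Execute the transformation in this JVM, which is what the
        // Local run configuration option does.
        Trans trans = new Trans(transMeta);
        trans.execute(null);        // null = no command-line arguments
        trans.waitUntilFinished();  // block until all steps complete

        if (trans.getErrors() > 0) {
            throw new RuntimeException("Transformation finished with errors.");
        }
    }
}
```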

The Settings section of the Run configuration dialog box contains the following options when Pentaho is selected as the Engine for running a transformation:
| Option | Description |
| --- | --- |
| Local | Select this option to use the Pentaho engine to run a transformation on your local machine. |
| Pentaho server | Select this option to run your transformation on the Pentaho Server. This option appears only if you are connected to a Pentaho Repository. |
| Slave server | Select this option to send your transformation to a slave (remote) server or Carte cluster (see the sketch after this table). |
| Location | If you select Slave server, specify its location. If you have set up a Carte cluster, you can specify Clustered. See Use Carte Clusters for more details. |
| Send resources to the server | If you specified a remote server for your Location, select this option to send your transformation, along with any related resources such as other referenced files, to that server before it runs. The transformation then runs locally on the specified server. |
| Log remote execution locally | If you specified Clustered for your Location, select this option to show the logs from the cluster nodes. |
| Show transformations | If you specified Clustered for your Location, select this option to show the other transformations that are generated when you run on a cluster. |
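As a rough illustration of what the Slave server and Send resources to the server options do, the sketch below posts a transformation to a Carte slave server through the Kettle Java API. The server name, host, port, and credentials are placeholders, and the `sendToSlaveServer` signature shown is the four-argument form from recent PDI releases; Clustered execution requires additional cluster schema configuration not shown here.

```java
import org.pentaho.di.cluster.SlaveServer;
import org.pentaho.di.core.KettleEnvironment;
import org.pentaho.di.trans.Trans;
import org.pentaho.di.trans.TransExecutionConfiguration;
import org.pentaho.di.trans.TransMeta;

public class RunRemoteTransformation {
    public static void main(String[] args) throws Exception {
        KettleEnvironment.init();

        // Placeholder path to a transformation file.
        TransMeta transMeta = new TransMeta("/tmp/example.ktr");

        // Describe the slave (remote) Carte server; all values are placeholders.
        SlaveServer slave = new SlaveServer(
                "remote-carte", "carte.example.com", "8080", "cluster", "cluster");

        TransExecutionConfiguration config = new TransExecutionConfiguration();
        config.setExecutingLocally(false);
        config.setExecutingRemotely(true);
        config.setRemoteServer(slave);
        // Equivalent of the "Send resources to the server" option: export the
        // transformation and its referenced files to the server before running.
        config.setPassingExport(true);

        // Post the transformation to the Carte server and start it there.
        // null arguments = no repository and no metastore in this sketch.
        String carteObjectId =
                Trans.sendToSlaveServer(transMeta, config, null, null);
        System.out.println("Started on Carte with object id: " + carteObjectId);
    }
}
```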