Pentaho Operations Mart

This page is archived. See Performance monitoring.

Pentaho Operations Mart helps you monitor Pentaho Server performance.

It includes two marts:

  • Business Analytics (BA) Operations Mart. Aggregates Pentaho Server audit data.

  • Data Integration (DI) Operations Mart. Stores and summarizes PDI logging data.

Business Analytics Operations Mart

Pentaho can automatically set up Operations Mart content for Analyzer, Interactive Reports, Report Designer, and dashboards.

BA Operations Mart aggregates data from Pentaho Server log files into pre-built audit reports.

If the pre-built reports do not meet your needs, use the Data Integration Operations Mart to change them.

Download and install Operations Mart files

Operations Mart files are delivered in a pre-packaged ZIP file.

To install the files, stop the Pentaho Server, download and unpack the ZIP file, then restart the Pentaho Server to import the files.

  1. Stop the Pentaho Server.

    See the Install Pentaho Data Integration and Analytics document for start/stop steps.

  2. Download the pentaho-operations-mart-11.0.0.0-<build number>.zip file from the Support Portal.

    1. On the Support Portal home page, sign in using the Pentaho support credentials from your welcome packet.

    2. In the Pentaho card, select Download.

    3. In the 11.x list, select Pentaho 11.0 GA Release.

    4. Scroll to the bottom of the Pentaho 11.0 GA Release page.

    5. In the file component section, select the Operations Mart folder.

    6. Download the pentaho-operations-mart-11.0.0.0-<build number>.zip file.

  3. Unpack the ZIP file to a temporary directory.

  4. Move the unpacked files to pentaho/server/pentaho-server/pentaho-solutions/system/default-content.

  5. In default-content, delete any files you do not need for your repository database type.

    The following table lists the files to keep for each type of Pentaho Repository database.

    Repository database type
    Files to keep

    PostgreSQL

    • pentaho-operations-mart-clean-11.0.0.0-<build number>.zip *

    • pentaho-operations-mart-etl-11.0.0.0-<build number>.zip

    • pentaho-operations-mart-operations-bi-11.0.0.0-<build number>.zip

    MySQL

    • pentaho-operations-mart-clean-mysql5-11.0.0.0-<build number>.zip *

    • pentaho-operations-mart-etl-mysql5-11.0.0.0-<build number>.zip

    • pentaho-operations-mart-operations-bi-11.0.0.0-<build number>.zip

    Oracle

    • pentaho-operations-mart-clean-oracle10g-11.0.0.0-<build number>.zip *

    • pentaho-operations-mart-etl-oracle10g-11.0.0.0-<build number>.zip

    • pentaho-operations-mart-operations-bi-11.0.0.0-<build number>.zip

    MS SQL Server

    • pentaho-operations-mart-clean-mssql-11.0.0.0-<build number>.zip *

    • pentaho-operations-mart-etl-mssql-11.0.0.0-<build number>.zip

    • pentaho-operations-mart-operations-bi-11.0.0.0-<build number>.zip

    * Keep the pentaho-operations-mart-clean-<database>-11.0.0.0-<build number>.zip file only if you plan for Pentaho to automatically delete entries from the Operations Mart on a regular schedule. For details, see Clean up the BA Operations Mart.

  6. Restart the Pentaho Server.

The Operations Mart is ready to use.

Increase the maximum character length in audit table fields

You can increase the maximum number of characters permitted in an audit table field from 200 to 1024 characters.

The scripts in this procedure adjust the column width and re-index the Operations Mart tables.
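As a rough, purely illustrative sketch of the kind of statements these scripts contain (the table name audit_table and column name message_text here are hypothetical — the actual DDL ships in the Pentaho-provided scripts for your database):

```sql
-- Hypothetical illustration (PostgreSQL syntax): widen an audit text column
-- from 200 to 1024 characters, then rebuild the table's indexes.
ALTER TABLE audit_table ALTER COLUMN message_text TYPE varchar(1024);
REINDEX TABLE audit_table;
```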

  1. Stop the Pentaho Server.

    See the Install Pentaho Data Integration and Analytics document for start/stop steps.

  2. Download the pentaho-server-ee-11.0.0.0-<build number>.zip file from the Support Portal.

    1. On the Support Portal home page, sign in using the Pentaho support credentials from your welcome packet.

    2. In the Pentaho card, select Download.

    3. In the 11.x list, select Pentaho 11.0 GA Release.

    4. Scroll to the bottom of the release page.

    5. In the file component section, navigate to Pentaho Server/Archive Build (Suggested Installation Method).

    6. Download the pentaho-server-ee-11.0.0.0-<build number>.zip file.

  3. Unpack the ZIP file to a temporary directory.

  4. In the temporary directory, navigate to pentaho\server\pentaho-server\data.

  5. In data, open the directory for your repository database type:

    • mysql

    • oracle

    • postgresql

    • sqlserver

  6. Locate these files:

    • alter_script_<repository database type>_BISERVER-13674.sql

    • pentaho_mart_upgrade_audit_<repository database type>.sql

  7. Use the SQL client tool for your repository database type to run these scripts, in order:

    1. alter_script_<repository database type>_BISERVER-13674.sql

    2. pentaho_mart_upgrade_audit_<repository database type>.sql

  8. Delete the temporary directory.

  9. Start the Pentaho Server.

The maximum field length for audit tables is now 1024 characters.

Choose a pre-built Operations Mart report

Pentaho provides pre-built Interactive Reports and a data mart called Pentaho Operations Mart.

The Operations Mart contains all the data from system log files.

Choose the report that fits your needs, then follow the steps in View and edit Operations Mart reports.

Information shown
Report name

Amount of time it takes a report to run

Content Duration

File names of all content failures within a defined length of time

Content Failures

Compare login metrics for days within a specified month

Content Request Day of Month

Compare login metrics for days of the week within a specified year

Day of Week Request and Login Metrics

List of content sorted by type, used within a defined length of time

Content Type Usage

List of content usage within a defined length of time

Content Usage

Compare login metrics by hour for a specified day

Hours in Day Request and Login Metrics

Length of time for logins and the number of logins per user for a specified length of time

Session Duration Counts

View and edit Operations Mart reports

  1. Identify the report you want to open in Choose a pre-built Operations Mart report.

  2. Double-click Browse Files.

  3. Select public > Pentaho Operations Mart > BA Audit Reports.

  4. Select the file and click Open.

  5. Optional: Edit the report in the tool you used to open it.

See the Pentaho Business Analytics document for editing in Analyzer or Interactive Reports.

See the Pentaho Report Designer document for editing in Report Designer.

Create Operations Mart reports

If the pre-built reports do not meet your needs, create your own reports.

  1. In the User Console, select Create New.

  2. Select the report type.

  3. Select the data source you need.

    Information shown
    Data source

    Detailed information related to the execution of the .xactions that run the reports on the Pentaho Server

    pentaho_operations_mart: BA Operations Mart - Component

    Information related to the execution of content, such as which tool or which user ran the content on the Pentaho Server

    pentaho_operations_mart: BA Operations Mart - Content

    Information for the Pentaho Server related to a user, such as number of sessions, how long, and what time

    pentaho_operations_mart: BA Operations Mart - User Session

    Information about individual job entry executions on the Pentaho Server

    pentaho_operations_mart: PDI Operations Mart - Job Entry

    Detailed performance information for the Pentaho Server

    pentaho_operations_mart: PDI Operations Mart - Performance

    Detailed information about individual step executions on the Pentaho Server

    pentaho_operations_mart: PDI Operations Mart - Step

    Information related to transformations and jobs run on the Pentaho Server

    pentaho_operations_mart: PDI_Operations_Mart

Update the Operations Mart

You may need to update the report date and time, or the report data.

  1. In the User Console, select Browse Files > public > Pentaho Operations Mart > Update Audit Mart.

  2. Double-click Update Operations Mart Date & Time or Update BA Operations Mart Data.

  3. View a report to confirm the updates.

Clean up the BA Operations Mart

By default, the Operations Mart tables automatically remove operational data entries older than 365 days.

To change the maximum age, add the ba.cleanup.max.age variable to kettle.properties in {user.home}/.kettle.

  1. Stop the Pentaho Server.

  2. Open {user.home}/.kettle/kettle.properties.

  3. Add the ba.cleanup.max.age property, set to the maximum age in days.

    Example. Delete entries older than 30 days:

    ba.cleanup.max.age=30

  4. Save the file.

  5. Restart the Pentaho Server.

Data Integration Operations Mart

The Data Integration (DI) Operations Mart stores and summarizes PDI log data.

Use it to monitor jobs and transformations, and build reports and dashboards.

Set up the Data Integration Operations Mart

Setup depends on how you installed Pentaho. Use the instructions that match your installation method.

Follow these instructions if you installed Pentaho using the archive method. See the Install Pentaho Data Integration and Analytics document for archive installation.

Before you begin

Installation depends on these prerequisites.

Archive installation of the Pentaho Server

These steps assume a Pentaho Server archive installation. See the Install Pentaho Data Integration and Analytics document if needed.

Required database

Ensure your Pentaho Server and Pentaho Repository use one of these databases:

  • PostgreSQL

  • MySQL or MariaDB

  • Oracle

  • Microsoft SQL Server

Data Integration Operations Mart scripts

You need these scripts:

  • pentaho_logging_<databasename>.sql

  • pentaho_mart_<databasename>.sql

<databasename> matches your repository database type:

  • postgresql

  • mysql5 (MariaDB only)

  • oracle10g or oracle12c

  • sqlserver

Process overview

Step 1: Get the Data Integration Operations Mart files

Verify the required content ZIP files are available for Step 6.

Verify the following information before proceeding:

  • If you performed an archive installation with a PostgreSQL repository, skip to Step 2: Run the setup scripts.

  • If you performed an archive installation with a MySQL, Microsoft SQL Server, or Oracle repository, and you do not have pentaho-operations-mart-11.0.0.0-<build number>.zip, download it from the Support Portal.

Perform the following steps:

  1. Download the pentaho-operations-mart-11.0.0.0-<build number>.zip file from the Support Portal.

    1. On the Support Portal home page, sign in using the support credentials from your welcome packet.

    2. In the Pentaho card, select Download.

    3. In the 11.x list, select Pentaho 11.0 GA Release.

    4. Scroll to the bottom of the release page.

    5. In the file component section, select the Operations Mart folder.

    6. Download the pentaho-operations-mart-11.0.0.0-<build number>.zip file.

  2. Unpack the ZIP file to a temporary directory.

  3. Verify these files exist in the temporary directory:

    • pentaho-operations-mart-operations-di-11.0.0.0-<build number>.zip

    • pentaho-operations-mart-operations-bi-11.0.0.0-<build number>.zip

  4. Verify you also have the two files required for your repository database type:

    Repository database type
    Required files

    Oracle

    pentaho-operations-mart-etl-oracle10g-11.0.0.0-<build number>.zip
    pentaho-operations-mart-clean-oracle10g-11.0.0.0-<build number>.zip

    MySQL5

    pentaho-operations-mart-etl-mysql5-11.0.0.0-<build number>.zip
    pentaho-operations-mart-clean-mysql5-11.0.0.0-<build number>.zip

    Microsoft SQL Server

    pentaho-operations-mart-etl-mssql-11.0.0.0-<build number>.zip
    pentaho-operations-mart-clean-mssql-11.0.0.0-<build number>.zip

Step 2: Run the setup scripts

The required setup scripts are in:

<install-directory>/pentaho-server/data/<databasename>

<databasename> is one of:

  • postgresql

  • mysql5 (MariaDB only)

  • oracle10g or oracle12c

  • sqlserver

Run these scripts in order:

  1. pentaho_logging_<databasename>.sql

  2. pentaho_mart_<databasename>.sql

Step 3: Set the global Kettle logging variables

Perform this step on the machine where you installed the PDI client and Pentaho Server.

When you run PDI for the first time, PDI creates kettle.properties in:

$USER_HOME/.kettle/kettle.properties

  1. In Spoon, select Edit > Edit the kettle.properties file.

  2. Add or edit the variables for the log tables.

    If you customized these values in the SQL scripts, use your customized values.

    For Oracle and Microsoft SQL Server, leave the Value field blank for variables whose names contain _SCHEMA.

    Variable
    Value

    KETTLE_CHANNEL_LOG_DB

    live_logging_info

    KETTLE_CHANNEL_LOG_TABLE

    channel_logs

    KETTLE_CHANNEL_LOG_SCHEMA

    pentaho_dilogs

    KETTLE_JOBENTRY_LOG_DB

    live_logging_info

    KETTLE_JOBENTRY_LOG_TABLE

    jobentry_logs

    KETTLE_JOBENTRY_LOG_SCHEMA

    pentaho_dilogs

    KETTLE_JOB_LOG_DB

    live_logging_info

    KETTLE_JOB_LOG_TABLE

    job_logs

    KETTLE_JOB_LOG_SCHEMA

    pentaho_dilogs

    KETTLE_METRICS_LOG_DB

    live_logging_info

    KETTLE_METRICS_LOG_TABLE

    metrics_logs

    KETTLE_METRICS_LOG_SCHEMA

    pentaho_dilogs

    KETTLE_STEP_LOG_DB

    live_logging_info

    KETTLE_STEP_LOG_TABLE

    step_logs

    KETTLE_STEP_LOG_SCHEMA

    pentaho_dilogs

    KETTLE_TRANS_LOG_DB

    live_logging_info

    KETTLE_TRANS_LOG_TABLE

    trans_logs

    KETTLE_TRANS_LOG_SCHEMA

    pentaho_dilogs

    KETTLE_TRANS_PERFORMANCE_LOG_DB

    live_logging_info

    KETTLE_TRANS_PERFORMANCE_LOG_TABLE

    transperf_logs

    KETTLE_TRANS_PERFORMANCE_LOG_SCHEMA

    pentaho_dilogs
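If you prefer to edit kettle.properties directly instead of using the Spoon dialog, the same settings take the standard key=value form (values exactly as listed above; for Oracle and Microsoft SQL Server, leave the _SCHEMA values blank):

```properties
KETTLE_CHANNEL_LOG_DB=live_logging_info
KETTLE_CHANNEL_LOG_TABLE=channel_logs
KETTLE_CHANNEL_LOG_SCHEMA=pentaho_dilogs
KETTLE_JOBENTRY_LOG_DB=live_logging_info
KETTLE_JOBENTRY_LOG_TABLE=jobentry_logs
KETTLE_JOBENTRY_LOG_SCHEMA=pentaho_dilogs
KETTLE_JOB_LOG_DB=live_logging_info
KETTLE_JOB_LOG_TABLE=job_logs
KETTLE_JOB_LOG_SCHEMA=pentaho_dilogs
KETTLE_METRICS_LOG_DB=live_logging_info
KETTLE_METRICS_LOG_TABLE=metrics_logs
KETTLE_METRICS_LOG_SCHEMA=pentaho_dilogs
KETTLE_STEP_LOG_DB=live_logging_info
KETTLE_STEP_LOG_TABLE=step_logs
KETTLE_STEP_LOG_SCHEMA=pentaho_dilogs
KETTLE_TRANS_LOG_DB=live_logging_info
KETTLE_TRANS_LOG_TABLE=trans_logs
KETTLE_TRANS_LOG_SCHEMA=pentaho_dilogs
KETTLE_TRANS_PERFORMANCE_LOG_DB=live_logging_info
KETTLE_TRANS_PERFORMANCE_LOG_TABLE=transperf_logs
KETTLE_TRANS_PERFORMANCE_LOG_SCHEMA=pentaho_dilogs
```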

Step 4: Add the JNDI connections for logging

This section explains how to add the logging (live_logging_info) and Operations Mart (PDI_Operations_Mart) connections for a PDI client.

  1. Go to <pdi-install>/data-integration/simple-jndi.

  2. Open jdbc.properties.

  3. Update the properties for your database type.

PostgreSQL:

MySQL:

MariaDB:

Oracle:

Microsoft SQL Server:
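The exact properties depend on your database. As a hedged sketch for PostgreSQL (host, port, database name, and credentials are assumptions — substitute your own values; the JNDI names live_logging_info and PDI_Operations_Mart are as used elsewhere on this page):

```properties
live_logging_info/type=javax.sql.DataSource
live_logging_info/driver=org.postgresql.Driver
live_logging_info/url=jdbc:postgresql://localhost:5432/hibernate?searchpath=pentaho_dilogs
live_logging_info/user=hibuser
live_logging_info/password=password
PDI_Operations_Mart/type=javax.sql.DataSource
PDI_Operations_Mart/driver=org.postgresql.Driver
PDI_Operations_Mart/url=jdbc:postgresql://localhost:5432/hibernate?searchpath=pentaho_operations_mart
PDI_Operations_Mart/user=hibuser
PDI_Operations_Mart/password=password
```

The other databases follow the same pattern with their own JDBC driver class and URL format.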

Step 5: Add a JDBC connection for the Pentaho Server

Perform this task on the machine where you installed the Pentaho Server.

  1. Go to <pentaho-server>/tomcat/webapps/pentaho/META-INF/.

  2. Open context.xml.

  3. Update the resources for your database type.

  4. Optional: To use encrypted passwords, replace any factory setting with:

    factory="org.pentaho.di.core.database.util.DecryptingDataSourceFactory"

PostgreSQL:

MySQL:

MariaDB:

Oracle:

Microsoft SQL Server:
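The exact resource definitions depend on your database. As a hedged sketch for PostgreSQL (pool sizes, host, port, database name, and credentials are assumptions; the factory shown is the common Tomcat DBCP factory, which may differ in your Tomcat version):

```xml
<Resource name="jdbc/live_logging_info" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.commons.dbcp.BasicDataSourceFactory"
          maxActive="20" maxIdle="5" maxWait="10000"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/hibernate?searchpath=pentaho_dilogs"
          username="hibuser" password="password"/>
<Resource name="jdbc/PDI_Operations_Mart" auth="Container" type="javax.sql.DataSource"
          factory="org.apache.commons.dbcp.BasicDataSourceFactory"
          maxActive="20" maxIdle="5" maxWait="10000"
          driverClassName="org.postgresql.Driver"
          url="jdbc:postgresql://localhost:5432/hibernate?searchpath=pentaho_operations_mart"
          username="hibuser" password="password"/>
```

The other databases follow the same pattern with their own driver class and URL format.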

Step 6: Add the DI Operations Mart ETL solutions to the Pentaho Repository default content folder

If you use PostgreSQL for the repository database, skip to Step 7: Initialize the DI Operations Mart.

  1. Stop the Pentaho Server.

  2. Locate the ZIP files containing the ETL solution and sample reports:

    • pentaho-operations-mart-operations-di-11.0.0.0-<build number>.zip

    • pentaho-operations-mart-operations-bi-11.0.0.0-<build number>.zip

  3. Locate the two ZIP files that are specific to your repository type:

    Directory
    File names

    /oracle (10g or 12c)

    pentaho-operations-mart-etl-oracle10g-11.0.0.0-<build number>.zip
    pentaho-operations-mart-clean-oracle10g-11.0.0.0-<build number>.zip

    /mysql5

    pentaho-operations-mart-etl-mysql5-11.0.0.0-<build number>.zip
    pentaho-operations-mart-clean-mysql5-11.0.0.0-<build number>.zip

    /sqlserver

    pentaho-operations-mart-etl-mssql-11.0.0.0-<build number>.zip
    pentaho-operations-mart-clean-mssql-11.0.0.0-<build number>.zip

  4. Copy all four ZIP files (DI, BI, mart-etl, mart-clean) into:

    $PENTAHO_HOME/pentaho-server/pentaho-solutions/system/default-content

  5. Start the Pentaho Server.

    After the server processes these files, it renames them with a timestamp.

    Keep the renamed files in this directory.

Step 7: Initialize the DI Operations Mart

  1. Launch Spoon.

  2. Connect to the Pentaho Repository through the Pentaho Server.

  3. Select File > Open.

  4. Select Browse Files > Public > Pentaho Operations Mart > DI Ops Mart ETL.

  5. Open each transformation and job.

  6. In each transformation or job, open Properties and select the Logging tab.

    Logging requires values for each Log connection field. A common approach is to use the global variables.

    For job logging, set values for Job log table, Job entry log table, and Logging channel log table.

    For transformation logging, set values for Transformation, Step, Performance, Logging channels, and Metrics.

  7. Select File > Open.

  8. Run Fill_in_DIM_DATE_and_DIM_TIME.

  9. Run a few sample KTRs to generate log activity.

  10. Select File > Open.

  11. Run Update_Dimensions_then_Logging_Data.

Step 8: Verify the DI Operations Mart is working

  1. In the Pentaho User Console, select Browse Files > Public > Pentaho Operations Mart > DI Audit Reports > Last_Run.

  2. Verify the jobs and transformations ran.

Charts, reports, and dashboards

Once you create and populate your DI Operations Mart with log data, use the Pentaho User Console to examine this data and create reports, charts, and dashboards.

Pentaho provides pre-built reports, charts, and dashboards you can modify.

To understand the log fields, see Logging dimensions and metrics.

Logging tables status

The DI Operations Mart includes these tables.

Transformation log tables

The status values and their descriptions:

Status display
Description

start

Indicates the transformation was started. When no logging interval is set, the transformation keeps this status until it ends.

end

Transformation ended successfully.

stop

Indicates the user stopped the transformation.

error

Indicates an error occurred when attempting to run the transformation.

running

A transformation is in this status only directly after starting; the status does not appear unless a logging interval is set.

paused

Indicates the user paused the transformation; this status does not appear unless a logging interval is set.

Job log tables

The status values and their descriptions:

Status display
Description

start

Indicates the job was started. When no logging interval is set, the job keeps this status until it ends.

end

Job ended successfully.

stop

Indicates the user stopped the job.

error

Indicates an error occurred when attempting to run the job.

running

A job is in this status only directly after starting; the status does not appear unless a logging interval is set.

paused

Indicates the user paused the job; this status does not appear unless a logging interval is set.

Logging dimensions and metrics

These tables identify the dimensions and metrics used to create ETL log charts and reports.

Fact table

(fact_execution)

Field name
Description

execution_date_tk

A technical key (TK) linking the fact to the date when the transformation/job was executed.

execution_time_tk

A technical key (TK) linking the fact to the time-of-day when the transformation/job was executed.

batch_tk

A technical key (TK) linking the fact to batch information for the transformation/job.

execution_tk

A technical key (TK) linking the fact to execution information about the transformation/job.

executor_tk

A technical key (TK) linking the fact to information about the executor (transformation or job).

parent_executor_tk

A technical key (TK) linking the fact to information about the parent transformation/job.

root_executor_tk

A technical key (TK) linking the fact to information about the root transformation/job.

execution_timestamp

The date and time when the transformation/job was executed.

duration

The length of time (in seconds) between when the transformation was logged (LOGDATE) and the maximum dependency date (DEPDATE).

rows_input

The number of lines read from disk or the network by the specified step. Can be input from files, databases, etc.

rows_output

The number of rows output during the execution of the transformation/job.

rows_read

The number of rows read in from the input stream of the specified step.

rows_written

The number of rows written during the execution of the transformation/job.

rows_rejected

The number of rows rejected during the execution of the transformation/job.

errors

The number of errors that occurred during the execution of the transformation/job.

Batch dimension

(dim_batch)

Field name
Description

batch_tk

A technical key (TK) for linking facts to batch information.

batch_id

The ID number for the batch.

logchannel_id

A string representing the identifier for the logging channel used by the batch.

parent_logchannel_id

A string representing the identifier for the parent logging channel used by the batch.

Date dimension

(dim_date)

Field name
Description

date_tk

A technical key (TK) for linking facts to date information.

date_field

A Date object representing a particular day (year, month, day).

ymd

A string representing the date value in year-month-day format.

ym

A string representing the date value in year-month format.

year

An integer representing the year value.

quarter

An integer representing the number of the quarter (1-4) to which this date belongs.

quarter_code

A 2-character string representing the quarter (Q1-Q4) to which this date belongs.

month

An integer representing the number of the month (1-12) to which this date belongs.

month_desc

A string representing the month (for example, January) to which this date belongs.

month_code

A string representing the shortened month code (for example, JAN) to which this date belongs.

day

An integer representing the day of the month (1-31) to which this date belongs.

day_of_year

An integer representing the day of the year (1-366) to which this date belongs.

day_of_week

An integer representing the day of the week (1-7) to which this date belongs.

day_of_week_desc

A string representing the day of the week (for example, Sunday) to which this date belongs.

day_of_week_code

A string representing the shortened day-of-week code (for example, SUN) to which this date belongs.

week

An integer representing the week of the year (1-53) to which this date belongs.

Execution dimension

(dim_execution)

Field name
Description

execution_tk

A technical key (TK) for linking facts to execution information.

execution_id

A unique string identifier for the execution.

server_name

The name of the server associated with the execution.

server_host

The name of the server associated with the execution.

executing_user

The name of the user who initiated the execution.

execution_status

The status of the execution (start, stop, end, error).

Executor dimension

The dim_executor table includes information about a job or transformation.

Field name
Description

executor_tk

A technical key (TK) for linking facts to executor information.

version

An integer corresponding to the version of the executor.

date_from

A date representing the minimum date for which the executor is valid.

date_to

A date representing the maximum date for which the executor is valid.

executor_id

A string identifier for the executor.

executor_source

The source location (either file- or repository-relative) for the executor.

executor_environment *

The file server or repository name related to the executor_source.

executor_type

The executor type (for example, job or transformation).

executor_name

The name of the executor (for example, transformation name).

executor_desc

A string description of the executor (for example, job description).

executor_revision

A string representing the revision of the executor (for example, 1.3).

executor_version_label

A string representing a description of the revision (change comments).

exec_enabled_table_logging

Whether table logging is enabled for this executor. Values are Y if enabled; N otherwise.

exec_enabled_detailed_logging

Whether detailed (step or job entry) logging is enabled for this executor. Values are Y if enabled; N otherwise.

exec_enabled_perf_logging

Whether performance logging is enabled for this executor. Values are Y if enabled; N otherwise.

exec_enabled_history_logging

Whether historical logging is enabled for this executor. Values are Y if enabled; N otherwise.

last_updated_date

The date the executor was last updated.

last_updated_user

The name of the user who last updated the executor.

* Reserved for future use.

Log table

The dim_log_table table contains data used by Pentaho Operations Mart.

Field name
Description

log_table_tk

A technical key (TK) for linking.

object_type

The type of PDI object being logged (for example, job, transformation, or step).

table_connection_name

The name of the database connection corresponding to the location of the transformation/job logging table.

table_name

The name of the table containing the transformation/job logging information.

schema_name

The name of the database schema corresponding to the location of the transformation/job logging table.

step_entry_table_conn_name

The name of the database connection corresponding to the location of the step/entry logging table.

step_entry_table_name

The name of the table containing the step/entry logging information.

step_entry_schema_name

The name of the database schema corresponding to the location of the step/entry logging table.

perf_table_conn_name

The name of the database connection corresponding to the location of the performance logging table.

perf_table_name

The name of the table containing the performance logging information.

perf_schema_name

The name of the database schema corresponding to the location of the performance logging table.

Time-of-day dimension

The dim_time table contains entries for every second of a day.

Field name
Description

time_tk

A technical key (TK) for linking facts to time-of-day information.

hms

A string representing the time of day as hours-minutes-seconds (for example, 00:01:35).

hm

A string representing the time of day as hours-minutes (for example, 23:59).

ampm

Indicates whether the time is AM or PM. Values: am or pm.

hour

The hour of the day (0-23).

hour12

The hour of the day with respect to AM/PM (1-12).

minute

The minute of the hour (0-59).

second

The second of the minute (0-59).

Step fact table

The fact_step_execution table contains facts about individual step executions.

Field name
Description

execution_date_tk

A technical key (TK) linking the fact to the date when the step was executed.

execution_time_tk

A technical key (TK) linking the fact to the time-of-day when the step was executed.

batch_tk

A technical key (TK) linking the fact to batch information for the step.

executor_tk

A technical key (TK) linking the fact to information about the executor (transformation).

parent_executor_tk

A technical key (TK) linking the fact to information about the parent transformation.

root_executor_tk

A technical key (TK) linking the fact to information about the root transformation/job.

execution_timestamp

The date and time when the step was executed.

step_tk

A technical key (TK) linking the fact to information about the step.

step_copy

The step copy number. This is zero if there is only one copy of the step, or (0 to N-1) if N copies are executed.

rows_input

The number of lines read from disk or the network by the step.

rows_output

The number of lines written to disk or the network by the step.

rows_read

The number of rows read from previous steps during the interval.

rows_written

The number of rows written to following steps during the interval.

rows_rejected

The number of rows rejected by the step’s error handling during the interval.

errors

The number of errors that occurred during the execution of the step.

Step dimension

The dim_step table contains information about individual steps and job entries.

Field name
Description

step_tk

A technical key (TK) for linking facts to step/entry information.

step_id

The string name of the step/entry.

original_step_name *

The name of the step/entry template used to create this step/entry (for example, Table Input).

* Reserved for future use.

Job entry fact table

The fact_jobentry_execution table contains facts about individual job entry executions.

Field name
Description

execution_date_tk

A technical key (TK) linking the fact to the date when the job entry was executed.

execution_time_tk

A technical key (TK) linking the fact to the time-of-day when the job entry was executed.

batch_tk

A technical key (TK) linking the fact to batch information for the job entry.

executor_tk

A technical key (TK) linking the fact to information about the executor (transformation or job).

parent_executor_tk

A technical key (TK) linking the fact to information about the parent transformation/job.

root_executor_tk

A technical key (TK) linking the fact to information about the root transformation/job.

step_tk

A technical key (TK) linking the fact to information about the job entry.

execution_timestamp

The date and time when the job entry was executed.

rows_input

The number of lines read from disk or the network by the job entry.

rows_output

The number of lines written to disk or the network by the job entry.

rows_read

The number of rows read in from the input stream of the job entry.

rows_written

The number of rows written to the output stream of the job entry.

rows_rejected

The number of rows rejected during the execution of the job entry.

errors

The number of errors that occurred during the execution of the job entry.

result

Whether the job entry finished successfully. Values: Y (successful) or N (otherwise).

nr_result_rows

The number of result rows after execution.

nr_result_files

The number of result files after execution.

Execution performance fact table

The fact_perf_execution table contains facts about performance snapshots for transformation executions.

Field name
Description

execution_date_tk

A technical key (TK) linking the fact to the date when the transformation was executed.

execution_time_tk

A technical key (TK) linking the fact to the time-of-day when the transformation was executed.

batch_tk

A technical key (TK) linking the fact to batch information for the transformation.

executor_tk

A technical key (TK) linking the fact to information about the executor (transformation).

parent_executor_tk

A technical key (TK) linking the fact to information about the parent transformation/job.

root_executor_tk

A technical key (TK) linking the fact to information about the root transformation/job.

step_tk

A technical key (TK) linking the fact to information about the transformation/job.

seq_nr

The sequence number differentiating snapshots for a single execution.

step_copy

The step copy number. This is zero if there is only one copy, or (0 to N-1) if N copies are executed.

execution_timestamp

The date and time when the transformation was executed.

rows_input

The number of rows read from the input file, database, or network during the interval.

rows_output

The number of rows written to output file, database, or network during the interval.

rows_read

The number of rows read from previous steps during the interval.

rows_written

The number of rows written to following steps during the interval.

rows_rejected

The number of rows rejected by the step's error handling during the interval.

errors

The number of errors that occurred during the execution of the transformation/job.

input_buffer_rows

The size of the step’s input buffer in rows at the time of the snapshot.

output_buffer_rows

The size of the output buffer in rows at the time of the snapshot.
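As an example of how these fact and dimension tables combine in a report query, the following sketch computes daily error counts per transformation or job (PostgreSQL syntax; the schema name pentaho_operations_mart is an assumption — adjust for your repository database):

```sql
-- Daily error totals for the last 30 days, by executor name.
-- Field names are taken from the tables documented above.
SELECT d.ymd,
       x.executor_name,
       SUM(f.errors) AS total_errors
FROM pentaho_operations_mart.fact_execution f
JOIN pentaho_operations_mart.dim_date d     ON f.execution_date_tk = d.date_tk
JOIN pentaho_operations_mart.dim_executor x ON f.executor_tk = x.executor_tk
WHERE d.date_field >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY d.ymd, x.executor_name
ORDER BY d.ymd;
```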

Clean up the DI Operations Mart tables

Cleaning the DI Operations Mart consists of running either a job or transformation that deletes data older than a specified maximum age.

The cleanup job and transformation are in the etl folder.

  1. In Spoon, open one of the following:

    • Clean_up_PDI_Operations_Mart.kjb (job)

    • Clean_up_PDI_Operations_Mart_fact_table.ktr (transformation)

  2. Set these parameters:

    • max.age.days (required): Maximum data age, in days.

    • schema.prefix (optional): For PostgreSQL, set the schema name followed by a trailing period (<schema>.). For other databases, leave it blank.

  3. Run the job or transformation.

Running the job or transformation deletes data older than the maximum age.

See the Pentaho Data Integration document for details on scheduling regular cleanup.
