Bulk load into Amazon Redshift
The Bulk load into Amazon Redshift job entry leverages Amazon Redshift’s COPY command to automate loading your data into an Amazon Redshift cluster. Using COPY enables parallel loading from cloud storage for high performance, and reduces the need for repetitive SQL scripting.
Before you begin
Before using this job entry in PDI, download the Redshift JDBC driver, copy it into your Pentaho distribution, and verify your cluster configuration:
1. Download the latest JDBC 4.0 driver from Download the Amazon Redshift JDBC Driver in the AWS documentation.
2. Copy the downloaded JAR file into server/pentaho-server/tomcat/lib to update the Pentaho Server.
3. Copy the downloaded JAR file into design-tools/data-integration/lib to update PDI.
4. Verify that your cluster is configured to support the COPY command. See Credentials and Access Permissions for details.
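If you prefer to script the driver installation, the following minimal Python sketch copies the downloaded JAR into both locations. The JAR file name and the Pentaho installation root shown here are assumptions; substitute the actual driver version and your own install path.

    import shutil
    from pathlib import Path

    # Assumed names and locations; adjust for your driver version and install root.
    driver_jar = Path("redshift-jdbc-driver.jar")  # hypothetical file name
    pentaho_root = Path("/opt/pentaho")            # hypothetical installation root

    targets = [
        pentaho_root / "server/pentaho-server/tomcat/lib",   # updates the Pentaho Server
        pentaho_root / "design-tools/data-integration/lib",  # updates PDI
    ]

    for lib_dir in targets:
        shutil.copy2(driver_jar, lib_dir / driver_jar.name)
        print(f"Copied {driver_jar.name} to {lib_dir}")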
General
The following field is available on the General tab:
Entry Name: Specify the unique name of the job entry on the canvas. You can customize the name or leave it as the default.
Options
The Bulk load into Amazon Redshift job entry includes several tabs to define the data input source, the output database and location, and other COPY parameters.
Input tab

Use the options in this tab to define your input source for the Redshift COPY command:
Source
Specify the file, folder, prefix, or variable of the S3 bucket to use as the input for the Redshift COPY command. See Syntax of the COPY from Amazon S3 for details.
What file type is your source
Select the file type of the input source. You can select one of the following types:
Delimited text
The input source is character-delimited UTF-8 text.
Avro
The input source is a file in the Avro data serialization format.
JSON
The input source is a JavaScript Object Notation (JSON) data file containing a set of either objects or arrays.
ORC
The input source is an Optimized Row Columnar (ORC) file containing Hive data.
Parquet
The input source is a Parquet file of nested data structures in a flat columnar format.
Fixed width
The input source is a file where each column width is a fixed length, rather than columns separated by a delimiter.
Compression
Select the type of compression applied to your input source:
None
No compression is applied.
BZIP2
The Burrows–Wheeler algorithm is used to compress the input source.
GZIP
The DEFLATE algorithm is used as a basis for compressing the input source.
LZOP
The Lempel–Ziv–Oberhumer lossless algorithm is used as a basis for compressing the input source.
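For reference, the file type and compression you select correspond to format and compression keywords in the COPY statement that performs the load. The following Python sketch illustrates one plausible mapping; it is not the exact SQL the job entry generates, and the table name, S3 path, IAM role, and fixed-width column spec are placeholders.

    # Illustrative mapping of the Input tab selections to Redshift COPY keywords.
    # Columnar formats (ORC, Parquet) do not take a compression keyword; their
    # compression is read from the files themselves.
    FORMAT_CLAUSES = {
        "Delimited text": "DELIMITER '|'",             # delimiter is configurable
        "Avro": "FORMAT AS AVRO 'auto'",
        "JSON": "FORMAT AS JSON 'auto'",
        "ORC": "FORMAT AS ORC",
        "Parquet": "FORMAT AS PARQUET",
        "Fixed width": "FIXEDWIDTH 'id:8,region:12'",  # placeholder column spec
    }
    COMPRESSION_CLAUSES = {"None": "", "BZIP2": "BZIP2", "GZIP": "GZIP", "LZOP": "LZOP"}

    def build_copy_sql(table, s3_source, iam_role, file_type, compression="None"):
        """Assemble an illustrative COPY statement from the selected options."""
        parts = [
            f"COPY {table}",
            f"FROM '{s3_source}'",
            f"IAM_ROLE '{iam_role}'",
            FORMAT_CLAUSES[file_type],
            COMPRESSION_CLAUSES[compression],
        ]
        return " ".join(p for p in parts if p) + ";"

    print(build_copy_sql("public.sales", "s3://my-bucket/sales/",
                         "arn:aws:iam::123456789012:role/RedshiftCopyRole", "Parquet"))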
Depending on the file type selected in What file type is your source, additional file settings appear at the bottom of the tab. The settings for each file type are described below.
Delimited text
Specify the following settings for a delimited text file:
Leading rows to skip
Specify the number of rows to use as an offset from the beginning of the file.
Delimiter
Specify the character used to separate a data field.
Quote character
Specify the character used to enclose a data field.
Remove quotes
Select one of the following values to indicate whether quotation characters should be removed from a data field during the bulk load:
Yes: removes the quotation characters from the data.
No: keeps the quotation characters.
Ignore blank lines
Select one of the following values to indicate whether blank lines should be removed from the data during the bulk load:
Yes: removes blank lines from the data.
No: keeps blank lines.
Empty as null
Select one of the following values to indicate whether empty data values should be set to null during the bulk load:
Yes: sets empty data values to null.
No: leaves data values as empty.
Avro
Select one of the following values for the Empty as null option to indicate whether empty data values should be set to null during the bulk load:
Yes: sets empty data values to null.
No: leaves data values as empty.
JSON
Select one of the following values for the Empty as null option to indicate whether empty data values should be set to null during the bulk load:
Yes: sets empty data values to null.
No: leaves data values as empty.
ORC
Additional file settings are handled by the ORC file format.
Parquet
Additional file settings are handled by the Parquet file format.
Fixed width
Specify the following settings for a fixed width file:
Columns:Width
Specify the columns and their fixed widths.
Leading rows to skip
Specify the number of rows to use as an offset from the beginning of the file.
Ignore blank lines
Select one of the following values to indicate whether blank lines should be removed from the data during the bulk load:
Yes: removes blank lines from the data.
No: keeps blank lines.
Empty as null
Select one of the following values to indicate whether empty data values should be set to null during the bulk load:
Yes: sets empty data values to null.
No: leaves data values as empty.
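As a point of reference, the delimited-text and fixed-width settings above correspond to Redshift COPY options such as IGNOREHEADER, DELIMITER, REMOVEQUOTES, IGNOREBLANKLINES, EMPTYASNULL, and FIXEDWIDTH. The Python sketch below shows one plausible combination for each file type; the exact clause the job entry generates may differ, and the table, bucket, and IAM role are placeholders.

    # Delimited text: skip one header row, pipe delimiter, strip quotes,
    # ignore blank lines, load empty fields as NULL, and read GZIP-compressed input.
    delimited_copy = (
        "COPY public.sales FROM 's3://my-bucket/sales/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
        "DELIMITER '|' IGNOREHEADER 1 REMOVEQUOTES "
        "IGNOREBLANKLINES EMPTYASNULL GZIP;"
    )

    # Fixed width: the Columns:Width setting becomes a FIXEDWIDTH column spec.
    fixed_width_copy = (
        "COPY public.sales FROM 's3://my-bucket/sales-fixed/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
        "FIXEDWIDTH 'id:8,region:12,amount:10' "
        "IGNOREHEADER 1 IGNOREBLANKLINES EMPTYASNULL;"
    )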
Output tab

Use the options in this tab to define how the data is loaded using the Redshift COPY command:
Database connection
Select your database connection from a list of existing Redshift connections. If you do not have an existing connection, click New. If you need to modify an existing connection, click Edit.
An exclamation mark appears in the title of the Output tab if a connection cannot be established to the selected database. In this case, click Edit to modify your connection settings.
You must specify either IAM role-based access or key-based credentials in your Redshift database connection. In the Database Connection dialog box, select either Role or Credentials. For role-based access to S3, specify the IAM role in Role Name. For credential-based access to S3, specify the Access Key ID and Secret Access Key. A sketch of how each choice appears in the generated COPY statement follows this list.
Schema
Select the schema to use for the bulk load.
Table name
Select the name of the table to bulk load. You can load only one table at a time; to load multiple tables, use multiple Bulk load into Amazon Redshift entries or loop over a single entry.
Columns
Preview the columns of the table selected in Table name.
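The Role or Credentials choice in the database connection determines how the generated COPY statement authorizes access to S3. The sketch below uses the psycopg2 client purely for illustration (PDI itself connects through the Redshift JDBC driver); the connection details, schema, table, role ARN, and access keys are placeholders.

    import psycopg2  # illustration only; PDI uses the Redshift JDBC driver

    # Placeholder cluster connection details.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="dev", user="loader", password="<password>",
    )

    # Role: the Role Name from the connection maps to an IAM_ROLE clause.
    role_sql = (
        "COPY analytics.sales FROM 's3://my-bucket/sales/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' "
        "FORMAT AS PARQUET;"
    )

    # Credentials: Access Key ID and Secret Access Key map to key-based options.
    key_sql = (
        "COPY analytics.sales FROM 's3://my-bucket/sales/' "
        "ACCESS_KEY_ID '<access-key-id>' SECRET_ACCESS_KEY '<secret-access-key>' "
        "DELIMITER '|';"
    )

    with conn, conn.cursor() as cur:
        cur.execute(role_sql)  # or key_sql, depending on the connection's auth choice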
Options tab

Use the option in this tab to define how tabular data is loaded using the Redshift COPY command:
Truncate table
Select to remove all the data in the table before bulk loading the current data. When Truncate table is cleared, the data is appended during a new data load. The default is cleared.
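In SQL terms, selecting Truncate table is comparable to issuing a TRUNCATE TABLE immediately before the COPY, while leaving it cleared runs only the COPY and appends the new rows. A minimal sketch with placeholder names:

    # Truncate table selected: existing rows are removed, then the load runs.
    truncate_then_load = [
        "TRUNCATE TABLE public.sales;",
        "COPY public.sales FROM 's3://my-bucket/sales/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' DELIMITER '|';",
    ]

    # Truncate table cleared (the default): only the COPY runs, appending new rows.
    append_only_load = truncate_then_load[1:]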
Parameters tab

Use this tab to configure additional parameters for the Redshift COPY command. See Redshift’s COPY Parameter Reference for details.
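Parameters entered on this tab are appended to the generated COPY statement as additional options. The Python sketch below shows a few commonly used COPY parameters (MAXERROR, REGION, DATEFORMAT, TRUNCATECOLUMNS) appended to an illustrative statement; the parameter values are placeholders and the exact rendering by the job entry may differ.

    # Hypothetical extra parameters as they might be entered on the Parameters tab.
    extra_parameters = {
        "MAXERROR": "10",              # tolerate up to 10 rejected rows
        "REGION": "'us-east-1'",       # bucket region differs from the cluster region
        "DATEFORMAT": "'YYYY-MM-DD'",  # parse DATE columns with this pattern
        "TRUNCATECOLUMNS": "",         # flag-style option with no value
    }

    base_copy = (
        "COPY public.sales FROM 's3://my-bucket/sales/' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' DELIMITER '|'"
    )

    copy_sql = base_copy + " " + " ".join(
        f"{name} {value}".strip() for name, value in extra_parameters.items()
    ) + ";"
    print(copy_sql)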