# Input tab

![Input tab](https://3411831820-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FAYwCj9fPr1B2pjC11IOQ%2Fuploads%2Fgit-blob-85eb37b41040bd32caa47e345620c40b46d20bbb%2FssPDIBulkLoadIntoAmazonReshiftInputTab.png?alt=media)

Use the options in this tab to define your input source for the Redshift `COPY` command:

| Option                            | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| --------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Source**                        | Specify the file, folder, prefix, or variable of the S3 bucket to use as the input for the Redshift `COPY` command. See [Syntax of the COPY from Amazon S3](https://docs.aws.amazon.com/redshift/latest/dg/copy-parameters-data-source-s3.html#copy-parameters-data-source-s3-syntax) for more details on how to specify this option.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| **What file type is your source** | <p>Select the file type of the input source:</p><ul><li><strong>Delimited text</strong>: The input source is character-delimited UTF-8 text.</li><li><strong>Avro</strong>: The input source is a file in the Avro data serialization format.</li><li><strong>JSON</strong>: The input source is a JavaScript Object Notation (JSON) data file containing a set of either objects or arrays.</li><li><strong>ORC</strong>: The input source is an Optimized Row Columnar (ORC) file containing Hive data. See the <strong>Administer Pentaho Data Integration and Analytics</strong> document for further configuration information when using Hive with Spark on AEL.</li><li><strong>Parquet</strong>: The input source is a Parquet file of nested data structures in a flat columnar format.</li><li><strong>Fixed width</strong>: The input source is a file where each column has a fixed width, rather than columns separated by a delimiter.</li></ul> |
| **Compression**                   | <p>Select the type of compression applied to your input source:</p><ul><li><strong>None</strong>: No compression is applied.</li><li><strong>BZIP2</strong>: The Burrows–Wheeler algorithm is used to compress the input source.</li><li><strong>GZIP</strong>: The DEFLATE algorithm is used to compress the input source.</li><li><strong>LZOP</strong>: The Lempel–Ziv–Oberhumer lossless algorithm is used to compress the input source.</li></ul> |
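The selections in this tab correspond to clauses of the Redshift `COPY` statement. The following sketch illustrates how the source, file type, and compression selections might combine into a statement; it is an assumption for illustration only, not the step's actual implementation, and the table name, bucket path, and IAM role are placeholders:

```python
# Illustrative sketch only: one possible mapping from the Input tab selections
# to Redshift COPY clauses. Table, bucket path, and IAM role are placeholders.

# Clauses implied by the "What file type is your source" selection.
FORMAT_CLAUSES = {
    "Delimited text": "DELIMITER ','",     # character-delimited UTF-8 text
    "Avro": "FORMAT AS AVRO 'auto'",
    "JSON": "FORMAT AS JSON 'auto'",
    "ORC": "FORMAT AS ORC",
    "Parquet": "FORMAT AS PARQUET",
}

# Clauses implied by the "Compression" selection (not applicable to ORC/Parquet).
COMPRESSION_CLAUSES = {"None": "", "BZIP2": "BZIP2", "GZIP": "GZIP", "LZOP": "LZOP"}

def build_copy(table, s3_source, iam_role, file_type, compression="None"):
    """Assemble a COPY statement from the Input tab selections."""
    parts = [
        f"COPY {table}",
        f"FROM '{s3_source}'",
        f"IAM_ROLE '{iam_role}'",
        FORMAT_CLAUSES[file_type],
        COMPRESSION_CLAUSES[compression],
    ]
    # Drop the empty compression clause when "None" is selected.
    return " ".join(p for p in parts if p) + ";"

print(build_copy("sales", "s3://my-bucket/input/",
                 "arn:aws:iam::123456789012:role/RedshiftCopy",
                 "Delimited text", "GZIP"))
```

For example, selecting a **Delimited text** source with **GZIP** compression yields a statement of the form `COPY sales FROM 's3://my-bucket/input/' IAM_ROLE '…' DELIMITER ',' GZIP;`.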

Depending on the file type you selected for **What file type is your source**, the following file settings appear at the bottom of this tab:

<table data-header-hidden><thead><tr><th></th><th></th></tr></thead><tbody><tr><td>File Type</td><td>File Settings</td></tr><tr><td><strong>Delimited text</strong></td><td><p>Specify the following settings for a delimited text file:</p><ul><li><strong>Leading rows to skip</strong>: Specify the number of rows to use as an offset from the beginning of the file.</li><li><strong>Delimiter</strong>: Specify the character used to separate a data field.</li><li><strong>Quote character</strong>: Specify the character used to enclose a data field.</li><li><strong>Remove quotes</strong>: Select <strong>Yes</strong> to remove the quotation characters from data fields during the bulk load, or <strong>No</strong> to retain them.</li><li><strong>Ignore blank lines</strong>: Select <strong>Yes</strong> to remove blank lines from the data during the bulk load, or <strong>No</strong> to retain them.</li><li><strong>Empty as null</strong>: Select <strong>Yes</strong> to set empty data values to null during the bulk load, or <strong>No</strong> to leave them empty.</li></ul></td></tr><tr><td><strong>Avro</strong></td><td><p>For the <strong>Empty as null</strong> option, select <strong>Yes</strong> to set empty data values to null during the bulk load, or <strong>No</strong> to leave them empty.</p></td></tr><tr><td><strong>JSON</strong></td><td><p>For the <strong>Empty as null</strong> option, select <strong>Yes</strong> to set empty data values to null during the bulk load, or <strong>No</strong> to leave them empty.</p></td></tr><tr><td><strong>ORC</strong></td><td>Additional file settings are handled by the ORC file.</td></tr><tr><td><strong>Parquet</strong></td><td>Additional file settings are handled by the Parquet file.</td></tr><tr><td><strong>Fixed width</strong></td><td><p>Specify the following settings for a fixed width file:</p><ul><li><strong>Columns:Width</strong>: Specify the columns and their fixed widths.</li><li><strong>Leading rows to skip</strong>: Specify the number of rows to use as an offset from the beginning of the file.</li><li><strong>Ignore blank lines</strong>: Select <strong>Yes</strong> to remove blank lines from the data during the bulk load, or <strong>No</strong> to retain them.</li><li><strong>Empty as null</strong>: Select <strong>Yes</strong> to set empty data values to null during the bulk load, or <strong>No</strong> to leave them empty.</li></ul></td></tr></tbody></table>
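The delimited-text file settings above also have direct counterparts among the `COPY` command's data conversion parameters (`IGNOREHEADER`, `REMOVEQUOTES`, `IGNOREBLANKLINES`, `EMPTYASNULL`). The sketch below assumes one plausible mapping for illustration; the step's actual generated SQL may differ, and the values shown are placeholders:

```python
# Illustrative sketch only: an assumed mapping from the Delimited text file
# settings to Redshift COPY data conversion parameters.

def delimited_clauses(skip_rows=0, delimiter=",", remove_quotes=False,
                      ignore_blank_lines=False, empty_as_null=False):
    """Return the COPY parameters implied by the Delimited text settings."""
    clauses = [f"DELIMITER '{delimiter}'"]
    if skip_rows:
        clauses.append(f"IGNOREHEADER {skip_rows}")  # Leading rows to skip
    if remove_quotes:
        clauses.append("REMOVEQUOTES")               # Remove quotes: Yes
    if ignore_blank_lines:
        clauses.append("IGNOREBLANKLINES")           # Ignore blank lines: Yes
    if empty_as_null:
        clauses.append("EMPTYASNULL")                # Empty as null: Yes
    return " ".join(clauses)

# A pipe-delimited file with one header row, quoted fields, and empty
# fields loaded as null:
print(delimited_clauses(skip_rows=1, delimiter="|",
                        remove_quotes=True, empty_as_null=True))
```

Settings left at their defaults simply contribute no clause, so only the options you change affect the generated statement.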
