# Bulk load into Azure SQL DB

Use the **Bulk load into Azure SQL DB** job entry to load CSV data from Azure Data Lake Storage into an Azure SQL Database.

For more information about Azure SQL, see the [Azure SQL documentation](https://docs.microsoft.com/en-us/azure/azure-sql/).

### Before you begin

To use this job entry, you need:

* An Azure account.
* A connection to the Azure SQL database where you want to load data.
* A VFS connection to Azure Data Lake Storage.
* A table and schema in the target database. The table must include all required columns.
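The target table and schema can be created ahead of time with T-SQL. The following sketch uses illustrative names (`staging`, `sales_import`, and all column names are hypothetical); match the columns to the layout of the CSV file you plan to load:

```sql
-- Hypothetical target schema and table; adjust the names and
-- columns to match the fields in your CSV file.
CREATE SCHEMA staging;
GO
CREATE TABLE staging.sales_import (
    order_id    INT            NOT NULL,
    order_date  DATE           NOT NULL,
    amount      DECIMAL(18,2)  NOT NULL,
    region      NVARCHAR(50)   NULL
);
```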

{% hint style="warning" %}
You need the **INSERT** and **ADMINISTER DATABASE BULK OPERATIONS** permissions in the Azure SQL database.
{% endhint %}
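As a sketch, these permissions could be granted to the database user that your PDI connection uses with T-SQL like the following, run in the target database (`pdi_loader` is a hypothetical user name):

```sql
-- Hypothetical user name; replace with the account that
-- the PDI database connection authenticates as.
GRANT INSERT TO pdi_loader;  -- or scope the grant to a specific schema or table
GRANT ADMINISTER DATABASE BULK OPERATIONS TO pdi_loader;
```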

### General

The following field is available on the **General** tab:

* **Entry Name**: Specify the unique name of the job entry on the canvas. You can customize the name or leave it as the default.

### Options

This job entry includes four tabs. Each tab is described in the following sections.

#### Input tab

![Bulk load into Azure SQL DB Input tab](/files/byWkVrlRmTwHhk3zfjs3)

Use the options on the **Input** tab to define your input source:

| Option                             | Description                                                                                                                                                                                                                                                                                                                                                                                                |
| ---------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Source**                         | Specify the VFS URL of the data to import. The URL can point to a file or a folder in Azure Data Lake Storage. The URL must begin with `pvfs://` and must specify the VFS connection name and the path to the file or folder you want to load. See [Pentaho address to a VFS connection](/pdia-data-integration/archived-merged-pages/connecting-to-virtual-file-systems-archive/pentaho-address-to-a-vfs-connection-open-a-transformation.md). |
| **What file type is your source?** | Select the file type of the input source. The source must be a CSV file.                                                                                                                                                                                                                                                                                                                                   |
| **Delimited text settings**        | <p>Specify the following settings for a delimited text file:</p><ul><li><strong>Delimiter</strong>: Specify the character used to separate data fields. The default is a semicolon (<code>;</code>).</li><li><strong>First line is column names</strong>: Select <strong>Yes</strong> if the first line in the file contains the column names.</li></ul>                                                     |
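For example, a **Source** URL for a CSV file reached through a VFS connection might look like the following (the connection name, container, and file path are all illustrative):

```
pvfs://my-adls-connection/mycontainer/input/sales.csv
```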

#### Output tab

![Bulk load into Azure SQL DB Output tab](/files/XGT6Iw3zbKJ2K5FVmHka)

Use the options on the **Output** tab to specify the destination:

| Option                  | Description                                                                                                                                                                                                                                      |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| **Database connection** | <p>Select your database connection from a list of existing Azure SQL DB connections.</p><p>If you do not have an existing connection, click <strong>New</strong>. If you need to modify an existing connection, click <strong>Edit</strong>.</p> |
| **Schema**              | Select the schema to use for the bulk load. The job entry reads the schemas that exist in the database to populate this list.                                                                                                                    |
| **Table name**          | Select the name of the table to bulk load. The job entry reads the table names from the selected schema to populate this list.                                                                                                                   |
| **Columns**             | Preview the column names and data types in the selected table.                                                                                                                                                                                   |

{% hint style="info" %}
You can use variables in the **Schema** and **Table name** fields.
{% endhint %}
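For example, a job could parameterize the destination using Pentaho variable syntax (the variable names here are illustrative and would be defined at the job or environment level):

```
Schema:     ${TARGET_SCHEMA}
Table name: ${TARGET_TABLE}
```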

#### Options tab

![Bulk load into Azure SQL DB Options tab](/files/FbZmGEKeoh7onwgccYaM)

Use the options on the **Options** tab to define how data is loaded into the destination table:

| Option             | Description                                                                                                                    |
| ------------------ | ------------------------------------------------------------------------------------------------------------------------------ |
| **Truncate table** | Select to remove all data in the table before loading the current data. Clear to append data during a new data load (default). |

#### Advanced options tab

![Bulk load into Azure SQL DB Advanced options tab](/files/nJeVSegwXpDyThZDsKpl)

Use this tab to configure additional parameters for the bulk load.

Any **Name/Value** pair that you add is passed to the Azure SQL database as a parameter. You are responsible for validating parameter names and values.

The `Force` parameter is provided as an example.

| Option    | Description                                                                                                                                                                                                                                                                                                                                                |
| --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Force** | <p>Specify whether to force loading files into the database:</p><ul><li><strong>True</strong>: Loads data into the table even if the data was already loaded from that file before. This option can create duplicate data in the table.</li><li><strong>False</strong>: Ignores staged data files that were already loaded into the table (default).</li></ul>                                              |


