Amazon Hive Job Executor
The Amazon Hive Job Executor job entry runs Hive jobs in Amazon Elastic MapReduce (EMR). You can use this entry to access job flows in your Amazon Web Services (AWS) account.
Before you begin
You must have an AWS account configured for EMR.
You must have a Hive script created to control the remote job.
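As an illustration only, a minimal Hive script for this entry might look like the following sketch. The table name and S3 paths are placeholders, not values the entry requires:

```sql
-- Hypothetical example: define an external table over data in S3
-- and write an aggregate result back to S3. All names and paths
-- below are placeholders.
CREATE EXTERNAL TABLE IF NOT EXISTS weblogs (
  ip STRING,
  request_time STRING,
  url STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION 's3://example-bucket/input/weblogs/';

INSERT OVERWRITE DIRECTORY 's3://example-bucket/output/url_counts/'
SELECT url, COUNT(*)
FROM weblogs
GROUP BY url;
```

Store the script in Amazon S3 or on the local file system so you can reference it from the Hive script option on the Job settings tab.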
Entry name
Entry name specifies the unique name of the job entry on the canvas. You can change this name to describe the entry's purpose.
Configure the entry (tabs)
Hive settings tab

Use this tab to connect to your AWS account and select or create the EMR cluster.
AWS connection
Access key
Unique identifier for your AWS account. The access key and secret key are used to sign requests, identify the sender, and help prevent request tampering.
Secret key
Secret key associated with the access key. The access key and secret key are used to sign requests, identify the sender, and help prevent request tampering.
Region
Amazon EC2 region where the job flow runs. Available regions depend on your AWS account. See the AWS documentation for regions and availability zones.
Select Connect to establish the connection.
Cluster
Select New to create a new job flow (cluster), or Existing if you already have a job flow ID.
If you select New, configure these options:
EC2 role
Amazon EC2 instance profile role for the cluster. Processes running on cluster instances use this role when calling other AWS services. Available roles depend on your AWS account.
EMR role
Role that permits Amazon EMR to call other AWS services (for example, Amazon EC2) on your behalf. See the AWS documentation for EMR IAM roles. Available roles depend on your AWS account.
Master instance type
Amazon EC2 instance type used as the Hadoop master (handles task distribution).
Slave instance type
Amazon EC2 instance type used for the Hadoop worker (slave) nodes. Applies only when Number of instances is greater than 1.
EMR release
EMR release version (defines service components and versions).
Number of instances
Number of EC2 instances for the job flow.
Bootstrap actions
References to scripts that run before the cluster nodes begin processing data. See the AWS documentation for bootstrap actions.
If you select Existing, specify the existing ID in Existing JobFlow ID.
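The New-cluster options above correspond to parameters of the EMR RunJobFlow request that PDI issues on your behalf. The following sketch shows how those fields might map onto a request body; every value here is a placeholder, and the role names are only the common EMR defaults, not values the entry requires:

```python
# Sketch of the cluster configuration assembled by the "New" option.
# Keys follow the shape of an EMR RunJobFlow request; all values are
# placeholders for illustration.
def build_cluster_request(name, release, master_type, slave_type, count):
    """Return a RunJobFlow-style request body for a new job flow."""
    return {
        "Name": name,
        "ReleaseLabel": release,                 # EMR release, e.g. "emr-5.36.0"
        "JobFlowRole": "EMR_EC2_DefaultRole",    # EC2 instance profile role
        "ServiceRole": "EMR_DefaultRole",        # EMR service role
        "Instances": {
            "MasterInstanceType": master_type,
            "SlaveInstanceType": slave_type,     # ignored when count == 1
            "InstanceCount": count,
            "KeepJobFlowAliveWhenNoSteps": False,
        },
    }

request = build_cluster_request(
    "pdi-hive-flow", "emr-5.36.0", "m5.xlarge", "m5.xlarge", 3
)
```

The entry collects the same information through the dialog fields, so you do not construct this request yourself; the sketch is only meant to show which AWS setting each field controls.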
Job settings tab

Hive job flow name
Name of the Hive job flow to execute.
S3 staging directory
Amazon S3 location (bucket/path) where job flow logs are stored. Artifacts required for execution (for example, the Hive script) are also stored here before execution.
Hive script
Location of the Hive script to execute (Amazon S3 or local file system).
Command line arguments
Command-line arguments passed to the Hive script. Separate multiple arguments with spaces.
Keep job flow alive
Keeps the job flow active after the entry finishes. If not selected, the job flow terminates when the entry finishes.
Enable blocking
Waits for the EMR Hive job to complete before continuing to the next entry. Blocking is required for PDI to track job status and to support error handling and routing. If cleared, the job is submitted and PDI continues immediately.
Logging interval
When Enable blocking is selected, number of seconds between status log messages.
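With Enable blocking selected, the entry's behavior amounts to a polling loop like the sketch below. The poll_status callable and the state names stand in for the EMR step-status API; the actual entry handles this internally:

```python
import time

def wait_for_step(poll_status, logging_interval=60):
    """Poll an EMR-style step status until it reaches a terminal state.

    poll_status: callable returning one of "PENDING", "RUNNING",
    "COMPLETED", "FAILED", or "CANCELLED" (stand-ins for EMR states).
    Returns True on success, False otherwise, so the caller can route
    the job's success and failure hops accordingly.
    """
    terminal = {"COMPLETED", "FAILED", "CANCELLED"}
    while True:
        state = poll_status()
        print(f"Hive step state: {state}")   # one log line per interval
        if state in terminal:
            return state == "COMPLETED"
        time.sleep(logging_interval)
```

This is why blocking is required for error handling: without the loop, the entry returns immediately after submission and never learns whether the Hive step succeeded or failed.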