Pentaho Documentation: Pentaho Data Integration 11.0

Steps and entries supporting VFS connections

A transformation or job may contain a step or entry that accesses a file on a Virtual File System (VFS).

The following steps and entries support VFS connections:

  • Avro Input
  • Avro Output
  • Bulk load from MySQL into file
  • Bulk load into MSSQL
  • Bulk load into MySQL
  • Copybook Input
  • CSV File Input
  • De-serialize from file
  • Fixed file input
  • Get data from XML
  • Get File Names
  • Get Files Rows Count
  • Get SubFolder names
  • Google Analytics
  • GZIP CSV Input
  • Job (job entry)
  • JSON Input
  • JSON output
  • ORC Input
  • ORC Output
  • Parquet Input
  • Parquet Output
  • Query HCP
  • Read metadata from Copybook
  • Read metadata from HCP
  • Text File Output
  • Transformation (job entry)
  • Write metadata to HCP
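
In any of these steps or entries, you reference a file through a VFS connection by entering a Pentaho VFS address in the step's file name field. A minimal sketch of the address form (the connection name `my-s3-conn` and the file path below are hypothetical examples, not values from this documentation):

```
pvfs://my-s3-conn/sales/2024/orders.csv
```

Here `my-s3-conn` is the name of a VFS connection you created earlier, and the remainder of the address is the path to the file within that connection's file system.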


Last updated 5 months ago

© 2025 Hitachi Vantara LLC