Navigation
- Introduction
- Getting Started
- Step-by-Step Guide
- NiFi Features
- Cookbook Recipes
- Community Recipes
- Run Data Hub-4 Input Flow
- Run Data Hub-4 Harmonize Flow
- Run Data Hub-4 Flows (Input and Harmonize)
- Read Files from Directory, Write to MarkLogic
- Extract values from JSON data
- Extract values from XML data
- Convert Data To UTF-8
- Handling Multiple Types of Content
- Ingest Line-Delimited JSON
- Split XML Files Into Multiple Documents
- Loading Documents From Compressed Files
- Generate Documents from CSV Files
- Load PDF as Binary and Extracted Metadata as JSON
- Call a Web Service
- Augment XML content with data from a Web Service
- Modify NiFi Attributes with Custom Scripting
- Get Files by FTP
- Extract Text from PDFs and Office Documents
- Get Data from a Relational Database
- Create View, use GenerateTableFetch
- Count Rows, Construct Paged SQL SELECTs
- Execute a SQL Query for a Nested Array
- Use ExecuteSQLToColumnMaps
- Column Maps with Child Query
- Loading Content and Metadata from an mlcp Archive File
- Load Triples
- Transform JSON
- Load Data from SharePoint
- Invoke HTML Tidy on HTML Content
- Decrypt Input CSVs
- Read Parquet Files
- Export MarkLogic Database Content to the File System
- Read MarkLogic XML, Write to CSV
- Kafka Integration
- Error Handling in NiFi Flows
- Error Handling in PutMarkLogic
- Error Resolutions
- Performance Considerations
- FAQs
MarkLogic NiFi
With Apache NiFi, you can use out-of-the-box processors to create data flows from relational databases such as MySQL or Postgres, from Apache Kafka data streams and other sources in the Hadoop ecosystem, and from many other data sources. If a processor doesn’t exist, you can build your own. You can also create templates for common data flow patterns, then publish and share them so others can benefit and collaborate.
The MarkLogic NiFi bundle simplifies integrating MarkLogic into your data management and orchestration strategy.
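NiFi loads processor bundles packaged as NAR files from its `lib` directory at startup, so a minimal install of the MarkLogic bundle looks roughly like the sketch below. The `NIFI_HOME` path and the NAR filename (including its version) are assumptions; substitute the values for your environment and the release you downloaded.

```shell
# Sketch: install the MarkLogic NiFi NAR into an existing NiFi instance.
# NIFI_HOME and the NAR filename/version below are hypothetical examples.
NIFI_HOME=${NIFI_HOME:-/opt/nifi}

# Copy the downloaded bundle into NiFi's lib directory, where NARs are
# discovered at startup.
cp marklogic-nifi-processors-*.nar "$NIFI_HOME/lib/"

# Restart NiFi so the new processors (e.g., PutMarkLogic) become available
# in the flow designer.
"$NIFI_HOME/bin/nifi.sh" restart
```

After the restart, the MarkLogic processors should appear when you add a processor to the canvas and filter by "MarkLogic".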
Found a bug? Have a comment?
File an issue on GitHub and we will take a look.