Check the Apache NiFi site for downloads of any NiFi version or for the current version docs. For other NiFi versions, please reference our default processors post.

AttributeRollingWindow
Tracks a rolling window by evaluating an Expression Language expression on each FlowFile and adding that value to the processor's state. Each FlowFile is emitted with the count of FlowFiles and the total aggregate of values processed in the current time window.

AttributesToJSON
Generates a JSON representation of the input FlowFile attributes. The resulting JSON can be written either to a new attribute, 'JSONAttributes', or to the FlowFile as content.

Base64EncodeContent
Encodes or decodes content to and from Base64.

CaptureChangeMySQL
Retrieves Change Data Capture (CDC) events from a MySQL database. CDC events include INSERT, UPDATE, and DELETE operations. Events are output as individual FlowFiles, ordered by the time at which each operation occurred.

CompareFuzzyHash
Compares an attribute containing a fuzzy hash against a file containing a list of fuzzy hashes, appending an attribute to the FlowFile in case of a successful match.

CompressContent
Compresses or decompresses the contents of FlowFiles using a user-specified compression algorithm and updates the mime.type attribute as appropriate.

ConnectWebSocket
Acts as a WebSocket client endpoint to interact with a remote WebSocket server. As the WebSocket client configured with this processor receives messages from the remote server, FlowFiles are transferred to downstream relationships according to the received message types.

ConsumeAMQP
Consumes an AMQP message, transforming its content to a FlowFile and transitioning it to the 'success' relationship.

ConsumeAzureEventHub
Receives messages from a Microsoft Azure Event Hub, writing the contents of each Azure message to the content of a FlowFile.

ConsumeEWS
Consumes messages from Microsoft Exchange using Exchange Web Services. The raw bytes of each received email message are written as the contents of the FlowFile.

ConsumeIMAP
Consumes messages from an email server using the IMAP protocol. The raw bytes of each received email message are written as the contents of the FlowFile.

ConsumeJMS
Consumes a JMS message of type BytesMessage or TextMessage, transforming its content to a FlowFile and transitioning it to the 'success' relationship. JMS attributes such as headers and properties are copied as FlowFile attributes.

ConsumeKafka
Consumes messages from Apache Kafka, specifically built against the Kafka 0.9.x Consumer API. Please note there are cases where the publisher can get into an indefinitely stuck state; in the meantime it is possible to enter states where the only resolution is to restart the JVM NiFi runs on. We are closely monitoring how this evolves in the Kafka community and will take advantage of those fixes as soon as we can. The complementary NiFi processor for sending messages is PublishKafka.

ConsumeKafka_0_10
Consumes messages from Apache Kafka, specifically built against the Kafka 0.10.x Consumer API. The complementary NiFi processor for sending messages is PublishKafka_0_10.

ConsumeKafka_0_11
Consumes messages from Apache Kafka, specifically built against the Kafka 0.11.x Consumer API. The complementary NiFi processor for sending messages is PublishKafka_0_11.

ConsumeKafka_1_0
Consumes messages from Apache Kafka, specifically built against the Kafka 1.0 Consumer API. The complementary NiFi processor for sending messages is PublishKafka_1_0.

ConsumeKafkaRecord_0_10
Consumes messages from Apache Kafka, specifically built against the Kafka 0.10.x Consumer API. The complementary NiFi processor for sending messages is PublishKafkaRecord_0_10. Please note that, at this time, the processor assumes that all records retrieved from a given partition have the same schema.
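The rolling-window behavior described for AttributeRollingWindow can be sketched in plain Python. This is only an illustration of the windowing idea under assumed semantics (evict values older than the window, then report count and sum), not NiFi's actual implementation; the class and method names are hypothetical.

```python
from collections import deque

class RollingWindow:
    """Sketch of the state AttributeRollingWindow maintains: values observed
    within the last `window_seconds`, reported as a (count, sum) pair.
    Illustration only, not NiFi's implementation."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.values = deque()  # (timestamp, value) pairs, oldest first

    def add(self, timestamp, value):
        # Record the value extracted from a FlowFile at the given time.
        self.values.append((timestamp, value))
        self._evict(timestamp)

    def _evict(self, now):
        # Drop entries that have aged out of the window.
        while self.values and now - self.values[0][0] > self.window:
            self.values.popleft()

    def stats(self, now):
        # Count and aggregate sum of values still inside the window,
        # mirroring what each emitted FlowFile would report.
        self._evict(now)
        return len(self.values), sum(v for _, v in self.values)

window = RollingWindow(window_seconds=10)
window.add(0, 1.5)
window.add(5, 2.5)
print(window.stats(now=6))  # both values are still inside the window
```

Timestamps are passed explicitly here so the eviction logic is deterministic; a real processor would use the clock.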
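The serialization AttributesToJSON performs can be approximated with the standard library. A minimal sketch, assuming a plain dict stands in for the FlowFile attribute map; the attribute values shown are hypothetical.

```python
import json

def attributes_to_json(attributes):
    # Serialize a FlowFile-style attribute map to a JSON string, mirroring
    # what AttributesToJSON writes to the 'JSONAttributes' attribute or to
    # the FlowFile content.
    return json.dumps(attributes, sort_keys=True)

# Hypothetical attribute values, for illustration only.
attrs = {"filename": "data.csv", "mime.type": "text/csv"}
print(attributes_to_json(attrs))  # -> {"filename": "data.csv", "mime.type": "text/csv"}
```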
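The transformation Base64EncodeContent applies to FlowFile content is ordinary Base64. A small sketch with Python's standard library, treating content as raw bytes as the processor does:

```python
import base64

def encode_content(content: bytes) -> bytes:
    # Base64-encode the content, as Base64EncodeContent does in encode mode.
    return base64.b64encode(content)

def decode_content(content: bytes) -> bytes:
    # Reverse of the above, matching the processor's decode mode.
    return base64.b64decode(content)

print(encode_content(b"NiFi"))  # -> b'TmlGaQ=='
```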
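CompressContent's round trip can likewise be sketched with one of the algorithms it supports, gzip, using the standard library; this is only a conceptual example, and the mime.type bookkeeping is noted in a comment rather than implemented.

```python
import gzip

def compress_content(content: bytes) -> bytes:
    # Gzip-compress FlowFile content; the processor would also update the
    # mime.type attribute (e.g. to application/gzip) alongside this.
    return gzip.compress(content)

def decompress_content(content: bytes) -> bytes:
    # Decompress mode restores the original bytes.
    return gzip.decompress(content)

payload = b"repetitive payload " * 50
assert decompress_content(compress_content(payload)) == payload
```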