
Handling crash in Kafka producer application

We are sending messages to Kafka asynchronously using Confluent.Kafka 1.4.0 for .NET.

Messages are sent in batches (via the BatchNumMessages and LingerMs settings). So when the Produce API is called, the message is added to the current batch but not immediately sent to Kafka; the batch is transmitted a little later, according to the configuration.
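For reference, a producer configured this way might look like the following minimal sketch; the broker address and the batch/linger values are placeholders, not our actual settings:

```csharp
using Confluent.Kafka;

var config = new ProducerConfig
{
    BootstrapServers = "localhost:9092", // placeholder broker address
    BatchNumMessages = 10000,            // max messages accumulated per batch
    LingerMs = 100                       // wait up to 100 ms for a batch to fill
};

// Produce() calls on this producer enqueue messages into the current batch;
// the batch is sent once it is full or LingerMs has elapsed.
using var producer = new ProducerBuilder<string, string>(config).Build();
```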

We delete the relevant data in the source as soon as the Produce API is called for a message. If the delivery callback later reports a failure, we take appropriate action: re-gather the data and resend the message, or mark it as a permanent failure.
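The flow described above (delete on Produce, compensate in the callback) could be sketched as below. `DeleteFromSource`, `RestoreAndResend`, `MarkPermanentFailure`, and `IsRetriable` are hypothetical helpers standing in for our source-side logic:

```csharp
producer.Produce(
    "my-topic", // placeholder topic name
    new Message<string, string> { Key = id, Value = payload },
    deliveryReport =>
    {
        if (deliveryReport.Error.Code != ErrorCode.NoError)
        {
            // Delivery failed: recover the data and retry, or give up.
            if (IsRetriable(deliveryReport.Error))   // hypothetical helper
                RestoreAndResend(deliveryReport);    // hypothetical helper
            else
                MarkPermanentFailure(deliveryReport); // hypothetical helper
        }
    });

// The source data is deleted as soon as Produce returns --
// before delivery has actually been confirmed.
DeleteFromSource(id); // hypothetical helper
```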

So if the application crashes, we cannot tell whether some messages had been added to a batch but never actually sent to Kafka. And since the callbacks will never fire, we also cannot tell whether delivery failed for the messages that were sent.

It seems we need to rely on the callback to consider a message 'done' and only then delete the data in the source. But we do not want to keep the data in the source that long.

Is there another way to handle this?

