Docs/kafka producer default stricky partitioner #2
Merged: edsonwade merged 10 commits into `main` from `docs/Kafka-Producer-Default-Stricky-Partitioner` on Feb 20, 2025.
Conversation
- Added instructions for setting up and running the Kafka consumer
- Included prerequisites and configuration steps
- Provided an overview of the code and key classes
- Added commands for verifying message delivery and managing consumer groups
- Included references and additional Kafka commands
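The commands mentioned above are presumably the standard CLI checks (for example `kafka-consumer-groups.sh`). As an alternative illustration in Java, here is a minimal sketch that inspects a group's committed offsets with the `AdminClient`; the group id `demo-group` and broker address are assumptions, not taken from the repository.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

import java.util.Map;
import java.util.Properties;

public class ConsumerGroupCheckSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        try (AdminClient admin = AdminClient.create(props)) {
            // Committed offsets per partition show how far the group has read,
            // which is one way to verify that messages were delivered and consumed.
            Map<TopicPartition, OffsetAndMetadata> offsets =
                    admin.listConsumerGroupOffsets("demo-group") // hypothetical group id
                         .partitionsToOffsetAndMetadata()
                         .get();
            offsets.forEach((tp, om) ->
                    System.out.printf("%s -> committed offset %d%n", tp, om.offset()));
        }
    }
}
```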
- Refactor `PropertiesUtils.java` to improve property configuration and add detailed comments.
- Update `ConsumerDemo.java` to use the refactored properties configuration.
- Implement proper shutdown handling in `ConsumerDemoWithShutdown.java`:
  - Add shutdown hook to handle `WakeupException` gracefully.
  - Ensure consumer is closed properly during shutdown.
- Add detailed comments for better understanding of the Kafka consumer setup and shutdown process.
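As a rough illustration of the shutdown handling described above, here is a minimal, self-contained sketch of the usual wakeup pattern; the topic `demo_topic`, group id `demo-group`, and broker address are placeholders, and the repository's `ConsumerDemoWithShutdown` may differ in detail.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class ConsumerShutdownSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        final Thread mainThread = Thread.currentThread();

        // Shutdown hook: wakeup() makes the blocked poll() throw WakeupException,
        // letting the main loop exit and close the consumer cleanly.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            consumer.wakeup();
            try {
                mainThread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }));

        try {
            consumer.subscribe(List.of("demo_topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("key=%s value=%s partition=%d%n",
                            record.key(), record.value(), record.partition());
                }
            }
        } catch (WakeupException e) {
            // Expected during shutdown; safe to ignore here.
        } finally {
            consumer.close(); // commits offsets and leaves the group cleanly
        }
    }
}
```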
- Refactor `PropertiesUtils.java` to enhance property configuration.
- Add detailed comments for better understanding of each property.
- Configure `CooperativeStickyAssignor` in `PropertiesUtils.java` for better partition assignment.
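A sketch of what such a properties helper could look like, with `CooperativeStickyAssignor` configured explicitly; the broker address, group id, and string deserializers are assumed values, not taken from `PropertiesUtils.java`.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.CooperativeStickyAssignor;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.util.Properties;

public class ConsumerPropertiesSketch {
    public static Properties consumerProperties() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // CooperativeStickyAssignor enables incremental (cooperative) rebalancing:
        // only the partitions that actually move are revoked, instead of
        // stopping the whole group during a rebalance.
        props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
                CooperativeStickyAssignor.class.getName());
        return props;
    }
}
```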
- Explain the difference between synchronous and asynchronous offset commits.
- Provide examples of synchronous and asynchronous commit configurations.
- Highlight the benefits and drawbacks of each method.
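The following sketch contrasts the two commit styles; it assumes `enable.auto.commit` has been set to `false` so the application controls when offsets are committed.

```java
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;

public class OffsetCommitSketch {
    // Synchronous: blocks until the broker acknowledges the commit and retries
    // transient failures, at the cost of throughput.
    static void commitSynchronously(KafkaConsumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        System.out.println("processed " + records.count() + " records");
        consumer.commitSync();
    }

    // Asynchronous: returns immediately and reports the result via a callback;
    // faster, but a failed commit is not retried automatically.
    static void commitAsynchronously(KafkaConsumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        System.out.println("processed " + records.count() + " records");
        consumer.commitAsync((offsets, exception) -> {
            if (exception != null) {
                System.err.println("Async commit failed: " + exception.getMessage());
            }
        });
    }
}
```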
- Explain the purpose and usage of `ConsumerDemoThreads`.
- Detail the configuration and implementation of Kafka consumers in threads.
- Highlight the advantages and disadvantages of using multiple threads for Kafka consumers.
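A minimal consumer-per-thread sketch along those lines; the thread count, topic, and connection settings are illustrative, and the repository's `ConsumerDemoThreads` may be structured differently. `KafkaConsumer` is not thread-safe, hence one instance per worker.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConsumerThreadsSketch {
    public static void main(String[] args) {
        int workerCount = 3; // at most one active worker per partition is useful
        ExecutorService pool = Executors.newFixedThreadPool(workerCount);
        for (int i = 0; i < workerCount; i++) {
            pool.submit(() -> {
                // Each worker owns its own consumer; all of them join the same group,
                // so Kafka spreads the topic's partitions across the threads.
                try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(baseProperties())) {
                    consumer.subscribe(List.of("demo_topic"));
                    while (!Thread.currentThread().isInterrupted()) {
                        for (ConsumerRecord<String, String> record :
                                consumer.poll(Duration.ofMillis(100))) {
                            System.out.printf("[%s] %s%n",
                                    Thread.currentThread().getName(), record.value());
                        }
                    }
                }
            });
        }
    }

    private static Properties baseProperties() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}
```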
- Add method to start Kafka producer and connect to Wikimedia RecentChange API
- Implement event handler to process and send events to Kafka topic
- Update Javadoc comments to accurately describe the functionality
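The repository likely uses a dedicated SSE client library for its event handler; as a rough, library-free sketch of the same idea, the Wikimedia stream can be read with `java.net.http` and each `data:` line forwarded to a Kafka topic. The topic name `wikimedia.recentchange` and the serializer settings are assumptions.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Properties;

public class WikimediaProducerSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);

        // Open the server-sent-events stream of recent changes and forward
        // each "data:" payload to Kafka as the record value.
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://stream.wikimedia.org/v2/stream/recentchange")).build();
        HttpResponse<InputStream> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofInputStream());
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(response.body()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.startsWith("data:")) {
                    String json = line.substring("data:".length()).trim();
                    producer.send(new ProducerRecord<>("wikimedia.recentchange", json));
                }
            }
        } finally {
            producer.close();
        }
    }
}
```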
- Set the number of retries to 3 in case of transient errors.
- Configure a backoff time of 100 ms between retry attempts.
- Enable idempotence to avoid duplicate messages.
- Set the delivery timeout to 120000 ms (2 minutes) to ensure messages are not lost.
- Explain the default configurations in Kafka 3.x.
- Provide examples of properties that need explicit configuration in versions prior to 3.x.
- Include code snippets for enabling idempotence, setting retries, retry backoff, and delivery timeout.
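Since Kafka clients 3.0, idempotence (and with it `acks=all`) is enabled by default, so these settings mainly need to be spelled out on older clients. A sketch of the explicit configuration, using the values from the commit message above; the broker address and serializers are assumed.

```java
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerReliabilitySketch {
    public static Properties producerProperties() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Idempotence prevents duplicates caused by retried sends (implies acks=all).
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        // Retry transient errors up to 3 times, waiting 100 ms between attempts.
        props.put(ProducerConfig.RETRIES_CONFIG, "3");
        props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, "100");
        // Give up on a record only after 2 minutes, covering retries and batching delay.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "120000");
        return props;
    }
}
```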
- Add explanation and configuration for `linger.ms` and `batch.size`.
- Provide code example for setting `linger.ms` and `batch.size` in Kafka producer.
- Discuss advantages and disadvantages of adjusting these properties.
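A small sketch of those two batching knobs; the 20 ms / 32 KB values are illustrative, not taken from the repository.

```java
import org.apache.kafka.clients.producer.ProducerConfig;

import java.util.Properties;

public class ProducerBatchingSketch {
    public static void addBatchingSettings(Properties props) {
        // Wait up to 20 ms for more records before sending a batch...
        props.put(ProducerConfig.LINGER_MS_CONFIG, "20");
        // ...and allow batches of up to 32 KB per partition.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, Integer.toString(32 * 1024));
        // Larger batches compress better and reduce request count, at the cost
        // of up to linger.ms of extra latency per record.
    }
}
```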
… context of Kafka. Here is a brief explanation of how Kafka handles messages:

1. **Producing Messages**: Producers send messages to Kafka topics. Each message consists of a key, value, and optional metadata.
2. **Partitioning**: Messages are distributed across partitions within a topic. The key determines the partition, ensuring messages with the same key go to the same partition.
3. **Consuming Messages**: Consumers read messages from Kafka topics. They can be configured to read from specific partitions or the entire topic.
4. **Acknowledgments**: Producers can configure acknowledgment settings (`acks`) to ensure messages are properly received by Kafka brokers.
5. **Compression**: Messages can be compressed to save bandwidth and storage. The consumer must decompress the messages.

If you need more specific information or code examples, please provide additional details.
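To tie points 1, 2, 4, and 5 together, here is a small producer sketch; the topic, key, and compression codec are illustrative. Records sharing a key land on the same partition, `acks=all` waits for the in-sync replicas, and compressed batches are decompressed by consumers transparently.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class MessageFlowSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "all");                // wait for all in-sync replicas
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "snappy"); // consumers decompress transparently

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key ("user-42") hash to the same partition,
            // preserving per-key ordering.
            producer.send(new ProducerRecord<>("demo_topic", "user-42", "first event"));
            producer.send(new ProducerRecord<>("demo_topic", "user-42", "second event"));
        } // close() flushes any buffered records before returning
    }
}
```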