Thanks for sharing your feedback in great detail, and for clarifying why this feature impacts your usage of the solution.
We have assessed the possibility of delivering this requirement, and the current tentative release status of this request is:
Committed for Q2, as part of that quarter's 8.0.x release.
Since the Q1 release scope has already been closed, this request's status for Q1 is tentative.
We'll keep you updated through this channel; please feel free to get in touch if needed.
Ideas for implementing this:
### Kafka Producer ###
The existing implementation for publishing Kafka messages (in org.kie.server.services.jbpm.kafka.KafkaServerProducer) takes a value Object as input.
As I see no way of adding additional input parameters here, the extra Kafka message key and headers to be used by the producer would presumably need to be part of the input value object itself.
I haven't found a suitable kie/jbpm "Event"/"EventContainer"/"EventWrapper" object. Therefore, either a new such class needs to be created or a Map with predefined keys would need to be used.
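To make that concrete, here is a minimal sketch of what such a new wrapper class could look like. The name KafkaEventWrapper and its fields are my assumptions, not existing kie/jbpm API:

```java
import java.io.Serializable;
import java.util.Collections;
import java.util.Map;

// Hypothetical class; name and fields are assumptions, not existing kie/jbpm API.
// It carries the event payload plus the Kafka-specific metadata for the producer.
public class KafkaEventWrapper implements Serializable {

    private final Object value;                // the actual event payload
    private final String key;                  // Kafka message key, may be null
    private final Map<String, byte[]> headers; // Kafka header name -> raw value

    public KafkaEventWrapper(Object value, String key, Map<String, byte[]> headers) {
        this.value = value;
        this.key = key;
        this.headers = headers == null ? Collections.emptyMap() : headers;
    }

    public Object getValue() { return value; }
    public String getKey() { return key; }
    public Map<String, byte[]> getHeaders() { return headers; }
}
```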
The new class would probably be better placed in a more generic module than the specific kie-server-services-kafka module.
The Map could have keys with a common prefix like "org.jbpm.process.core.event." (customizable via a system property) and the suffixes "data", "kafka.key", and "kafka.headers". Other kinds of metadata entries could also be supported, which the CloudEventWriter could then map to CloudEvent (extension) attributes.
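As a sketch of that convention (the system property name, default prefix, and sample values below are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class EventMapExample {

    // Illustrative only: the system property name and default prefix are assumptions.
    static Map<String, Object> buildEvent(Object payload) {
        String prefix = System.getProperty(
                "org.jbpm.process.core.event.prefix",
                "org.jbpm.process.core.event.");
        Map<String, Object> event = new HashMap<>();
        event.put(prefix + "data", payload);            // the actual message value
        event.put(prefix + "kafka.key", "order-4711");  // Kafka message key
        event.put(prefix + "kafka.headers",             // Kafka headers (string values
                Map.of("traceparent",                   // for readability here)
                        "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01"));
        return event;
    }
}
```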
When the KafkaServerProducer.processEvent() method receives an object of that class (or such a Map) as its value input parameter, it would extract the value, key, and headers from it and use them when publishing the Kafka message.
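The extraction could then look roughly like this, using the plain Kafka client API; the method below is illustrative and not the actual KafkaServerProducer.processEvent() signature:

```java
import java.util.Map;

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.internals.RecordHeader;

public class ProducerSketch {

    // Illustrative only: unwraps a KafkaEventWrapper (see earlier sketch),
    // then publishes key and headers along with the serialized value.
    static void publish(Producer<String, byte[]> producer, String topic, Object input) {
        Object payload = input;
        String key = null;
        Map<String, byte[]> headers = Map.of();

        if (input instanceof KafkaEventWrapper) {
            KafkaEventWrapper wrapper = (KafkaEventWrapper) input;
            payload = wrapper.getValue();
            key = wrapper.getKey();
            headers = wrapper.getHeaders();
        }

        ProducerRecord<String, byte[]> record =
                new ProducerRecord<>(topic, key, serialize(payload));
        headers.forEach((name, bytes) ->
                record.headers().add(new RecordHeader(name, bytes)));
        producer.send(record);
    }

    // Stand-in for whatever the (Cloud)EventWriter does today.
    private static byte[] serialize(Object payload) {
        return payload.toString().getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}
```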
### Kafka Consumer ###
In the existing Kafka consumer implementation (in org.kie.server.services.jbpm.kafka.KafkaServerConsumer), the idea would be that, based on some input information, the consumer could be made to pass on not only the received Kafka message value but also its key and headers.
The corresponding "processEvent" method takes the received Kafka message and some info about the signalling node (name and output type class) as input.
Maybe there would be a way to access the metadata of the signalling node and define some kind of "passOnKeyAndHeaders"/"passOnValueAndMetadata" flag there.
I haven't seen a straightforward way to implement this, though.
An alternative would be to let the KafkaServerConsumer handle a special "Event"/"EventContainer"/"EventWrapper" output type class in such a way that it initializes this object with the received message value plus key and headers.
It could be the same class also used by the producer.
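A sketch of this alternative, reusing the hypothetical KafkaEventWrapper from the producer section; the method and its signature are illustrative, not the actual KafkaServerConsumer API:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;

public class ConsumerSketch {

    // Illustrative only: builds the signal payload for the process. When the
    // declared output type is KafkaEventWrapper, key and headers are passed on too.
    static Object toSignalPayload(ConsumerRecord<String, byte[]> record,
                                  Class<?> outputType) {
        Object value = deserialize(record.value());
        if (!KafkaEventWrapper.class.equals(outputType)) {
            return value; // existing behavior: pass on the value only
        }
        Map<String, byte[]> headers = new HashMap<>();
        for (Header header : record.headers()) {
            headers.put(header.key(), header.value());
        }
        return new KafkaEventWrapper(value, record.key(), headers);
    }

    // Stand-in for whatever the (Cloud)EventReader does today.
    private static Object deserialize(byte[] bytes) {
        return new String(bytes, java.nio.charset.StandardCharsets.UTF_8);
    }
}
```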
---
In general, concerning CloudEvent support:
The current producer/consumer implementation does not expose the CloudEvent object to the process but only uses it internally in the (Cloud)EventWriter/Reader.
I think this approach can be kept in place. The Kafka message key and headers don't necessarily fit into the CloudEvent attributes anyway. (The Kafka message key could be mapped to the CloudEvent "partitionkey" extension attribute, but that is not a necessity.)
Therefore, the new "Event"/"EventContainer"/"EventWrapper" object could be kept more generic, instead of being a CloudEventContainer. It could contain the Kafka message key/headers and could later be extended with specific CloudEvent properties.
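If such a mapping were added later, it could look roughly like this, shown with the CloudEvents Java SDK builder purely for illustration (the module's own reader/writer classes may differ; the id, source, and type values are assumptions):

```java
import java.net.URI;
import java.util.UUID;

import io.cloudevents.CloudEvent;
import io.cloudevents.core.builder.CloudEventBuilder;

public class CloudEventMappingSketch {

    // Illustrative only: maps the wrapper's Kafka key to the "partitionkey"
    // extension attribute defined by the CloudEvents Kafka protocol binding.
    static CloudEvent toCloudEvent(KafkaEventWrapper wrapper) {
        CloudEventBuilder builder = CloudEventBuilder.v1()
                .withId(UUID.randomUUID().toString())
                .withSource(URI.create("/process/example"))   // assumed source
                .withType("org.example.process.event");       // assumed type
        if (wrapper.getKey() != null) {
            builder.withExtension("partitionkey", wrapper.getKey());
        }
        return builder.build();
    }
}
```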