Fixed Various Issues with Copy Node Function #
- There was an issue during outbound processing: when copying nodes with the same mapping level to generate a JSON payload, the levels were incorrectly generated as parent and child records instead of sibling records. This has now been resolved.
- When copying a node that contains an array of records, the value field was generated incorrectly and received a null value. This has now been resolved.
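For context, here is a minimal Python sketch of the corrected payload generation (illustrative only, not the product's actual Apex; `build_payload` and the mapping structure are hypothetical names):

```python
import json

def build_payload(mappings):
    """Group field mappings that share the same node level into sibling
    records (a JSON array), rather than nesting them as parent/child."""
    payload = {}
    for m in mappings:
        node = m["node"]                     # e.g. "Items"
        record = {m["field"]: m["value"]}
        # Same-level nodes accumulate as siblings in one array,
        # instead of the buggy behavior that nested each new record
        # inside the previous one as a child.
        payload.setdefault(node, []).append(record)
    return payload

mappings = [
    {"node": "Items", "field": "Sku", "value": "A-1"},
    {"node": "Items", "field": "Sku", "value": "B-2"},
]
print(json.dumps(build_payload(mappings)))
# {"Items": [{"Sku": "A-1"}, {"Sku": "B-2"}]}
```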
Fixed Issue Where the Field Map Did Not Display on the Mapping Tool #
We encountered an issue when creating message types with the same name and sObject type but at different levels: the mapping display did not show the child-level fields for one of them. This issue has now been fixed.
Fixed Various Issues with Ilog2 #
- If Ilog2 had more than 50,000 records, the alert error message ‘Too many query rows: 50001’ was displayed. This has now been fixed by limiting the display to 50,000 records.
- Added pagination to the Ilog2 record display, with ten records per page.
- Implemented batch Apex to delete Ilog2 records when they exceed 3,000 records.
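The paging and cleanup behavior above can be sketched in Python (illustrative only; the real implementation is batch Apex and a paginated record list, and the batch size of 200 is an assumption based on the default batch Apex scope):

```python
PAGE_SIZE = 10           # records shown per page
DELETE_BATCH_SIZE = 200  # assumed records deleted per batch execution
DELETE_THRESHOLD = 3000  # cleanup runs only above this record count

def page(records, page_number):
    """Return one page of records (page numbers start at 1)."""
    start = (page_number - 1) * PAGE_SIZE
    return records[start:start + PAGE_SIZE]

def delete_batches(records):
    """Yield deletion batches only when the record count exceeds
    the threshold, mirroring the batch-Apex cleanup."""
    if len(records) <= DELETE_THRESHOLD:
        return
    for i in range(0, len(records), DELETE_BATCH_SIZE):
        yield records[i:i + DELETE_BATCH_SIZE]
```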
Resolving DML Row Limit Errors in MessageDeletionBatch Apex #
Users set the DoMaintenance scheduler to delete messages. The MessageDeletionBatch batch Apex raised the error ‘Too many DML rows: 10001’ when deleting parent and child messages in batches of 200: if all 200 records in a batch were root messages, it queried their child messages again, potentially exceeding the DML row limit of 10,000 records. This issue has been fixed by preventing the unnecessary query for child messages.
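A rough Python sketch of the arithmetic behind the fix (the batch scope and child counts are illustrative, not taken from the product):

```python
DML_ROW_LIMIT = 10_000

def dml_rows(scope, children_per_root, requery_children):
    """Count DML rows for one batch execution.

    Pre-fix behavior: every root message in the scope triggered an
    extra query for its children, and those children were deleted in
    the same transaction. 200 roots with ~60 children each is well
    past the 10,000-row limit. Post-fix, only the scope is deleted.
    """
    rows = len(scope)
    if requery_children:  # the buggy pre-fix behavior
        rows += sum(children_per_root.get(m, 0) for m in scope)
    return rows

scope = [f"root{i}" for i in range(200)]
children = {m: 60 for m in scope}
print(dml_rows(scope, children, requery_children=True))   # 12200, over the limit
print(dml_rows(scope, children, requery_children=False))  # 200, safely within it
```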
Resolving Count and Order By Conflicts in Outbound Processing #
Outbound processing runs a COUNT() query before execution. If the WHERE clause you added in the sObject's filter field included an ORDER BY, this produced the error message: ‘Count() and Order by may not be used together.’ In the previous version, the workaround was to put your WHERE condition in the query filter field on the Interface sObject instead. This version fixes the issue by applying the ORDER BY only after the count query has executed.
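The fix can be sketched in Python (a hypothetical query builder, not the product's Apex): the ORDER BY portion of the user's filter is split off so the COUNT() query runs without it, and it is re-applied only to the data query.

```python
import re

def split_filter(filter_clause):
    """Split a user-supplied filter into its WHERE part and an
    optional ORDER BY part, so COUNT() runs without the ORDER BY."""
    parts = re.split(r"\bORDER\s+BY\b", filter_clause,
                     maxsplit=1, flags=re.IGNORECASE)
    where = parts[0].strip()
    order_by = parts[1].strip() if len(parts) > 1 else ""
    return where, order_by

def build_queries(sobject, filter_clause):
    """Build a COUNT() query (no ORDER BY allowed) and a data query."""
    where, order_by = split_filter(filter_clause)
    count_query = f"SELECT COUNT() FROM {sobject}"
    data_query = f"SELECT Id FROM {sobject}"
    if where:
        count_query += f" WHERE {where}"
        data_query += f" WHERE {where}"
    if order_by:
        data_query += f" ORDER BY {order_by}"  # applied only after the count
    return count_query, data_query

cq, dq = build_queries("Account", "Name != null ORDER BY CreatedDate DESC")
print(cq)  # SELECT COUNT() FROM Account WHERE Name != null
print(dq)  # SELECT Id FROM Account WHERE Name != null ORDER BY CreatedDate DESC
```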