We created a process that must run asynchronously (for governor-limit and architecture reasons), and it was working fine.
Some time later we started working with big objects. Because writing to a big object behaves like a callout, the write must happen before any DML operation or in an asynchronous process.
That is where we hit a problem: System.LimitException: Too many queueable jobs added to the queue: 2
Step by step description:
- Some process calls a future method
- The future method updates an Opportunity
- Inside the Opportunity trigger, Queueable1 is enqueued (old process)
- Back in the future method, Queueable2 is enqueued (save to the big object)
- System.LimitException: Too many queueable jobs added to the queue: 2 is thrown, because an asynchronous transaction (such as a future method) may add only one queueable job
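The chain above can be sketched in Apex (class, queueable, and field names are illustrative, not the actual implementation). The DML consumes the one queueable slot via the trigger, so the second enqueue in the same asynchronous transaction throws the exception:

```apex
public class OpportunityService {
    @future(callout=true)
    public static void processAsync(Id oppId) {
        // This DML fires the Opportunity trigger, which calls
        // System.enqueueJob(new Queueable1()) (the old process) and
        // consumes the single queueable slot available to this
        // asynchronous transaction.
        update new Opportunity(Id = oppId, StageName = 'Working');

        // Second enqueue in the same async transaction:
        // System.LimitException: Too many queueable jobs added to the queue: 2
        System.enqueueJob(new Queueable2());
    }
}
```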
P.S.: Both the old and the new process must complete successfully.
Is there a best practice, pattern, or workaround to avoid System.LimitException: Too many queueable jobs added to the queue: 2?
We were thinking about creating a single Queueable class (e.g. OpportunityTriggerQueueable) where all the async work could be done, but then it is not clear how to perform the checks in the trigger. Duplicating code is also not a good idea.
I wouldn't try to build Future/Queueable chains; it can get messy.
What I would do is break the chain.
Your requirement to save to the big object can be met by simply firing a platform event. Inside the platform event trigger you can save the record to the big object.
We use platform events and big objects for logging, so I am confident in this approach. What a platform event gives you is the flexibility to retry.
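A minimal sketch of that approach, assuming a custom platform event `Big_Object_Log__e` with a `Payload__c` text field and a big object `Log_Entry__b` (all names here are hypothetical). In the future method, instead of enqueueing Queueable2, publish the event:

```apex
EventBus.publish(new Big_Object_Log__e(Payload__c = JSON.serialize(data)));
```

The subscriber trigger runs in its own transaction, so the queueable limit of the original future call no longer applies, and the big-object write happens there:

```apex
trigger BigObjectLogTrigger on Big_Object_Log__e (after insert) {
    List<Log_Entry__b> entries = new List<Log_Entry__b>();
    for (Big_Object_Log__e evt : Trigger.new) {
        entries.add(new Log_Entry__b(Payload__c = evt.Payload__c));
    }
    // Big-object writes use insertImmediate rather than standard DML.
    Database.insertImmediate(entries);
    // For the retry flexibility mentioned above: throwing
    // EventBus.RetryableException causes the platform event trigger
    // to be re-run with the same events.
}
```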