Catching Limit Exception: “Attempted to schedule too many concurrent batch jobs”

I’ve got a Visualforce page that allows the user to fire a batch job manually, using this action method:

public void FireBatch()
{
    try
    {
        Database.executeBatch(new AutobotBatch());
    }
    catch (System.LimitException e)
    {
        ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.INFO, 'There are too many jobs queued to run.'));
    }
    catch (Exception e)
    {
        ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.ERROR, 'Oh snap: ' + e.getMessage()));
    }
}

Yet despite the try/catch blocks, if I play the part of an over-keen user and keep hitting the commandButton over and over, I get the usual error:

“Attempted to schedule too many concurrent batch jobs”

I know what this means, and I know I could probably count invocations myself; but what I want to know is why I can’t seem to catch this exception. Perhaps the exception is fired in the context of the asynchronous job, but in that case it shouldn’t be redirecting the user’s browser to the error page.

Anybody know how to catch this and simply display a message without resorting to tracking the jobs manually?

Answer

You can’t catch LimitException. It is a special class of fatal error that terminates your request the moment it is thrown; no catch block, not even `catch (Exception e)`, will intercept it.

So your only strategy is going to be avoidance. Counting invocations of this code is one way, but that only works if this is the only place batch jobs can be scheduled. Keeping a system-wide counter (e.g. batch jobs scheduled in the last N minutes) in a custom setting is another, but that approach is race-prone too.
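One avoidance tactic is to query AsyncApexJob for in-flight batch jobs before scheduling. Note this narrows the race window rather than closing it: two users can still pass the check simultaneously. A minimal sketch (reusing AutobotBatch from the question; the hard-coded limit of 5 matches the error being discussed):

```apex
public void FireBatch()
{
    // Count batch jobs currently holding, queued, preparing or processing.
    // ('Holding' applies when the Apex flex queue is enabled.)
    Integer running = [SELECT COUNT() FROM AsyncApexJob
                       WHERE JobType = 'BatchApex'
                       AND Status IN ('Holding', 'Queued', 'Preparing', 'Processing')];

    if (running >= 5)
    {
        ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.INFO,
            'There are too many jobs queued to run. Please try again shortly.'));
        return;
    }

    Database.executeBatch(new AutobotBatch());
}
```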

This is one of those silly limits that seems hard to justify – why limit the number of queued batch jobs, especially to an outrageously small number like 5? If SFDC just managed the queue properly, the cost of queuing a job should be negligible. But I digress.

If you think this is going to be a real problem for normal usage, the only solution I can offer is one that we came up with to avoid a slightly different limit – “Total number of classes that can be scheduled concurrently”. But the solution is the same.

  • Create a custom object, Queued_Job__c, that holds a class name, an executed flag, and maybe a log field and/or some optional parameters. Instead of calling executeBatch directly, you save a new instance of this object.
  • Create a background worker as a scheduled Apex job that wakes up, checks the table, and, if anything exists in an unexecuted state, runs the earliest-requested record as an executeBatch call.
  • At the end of the worker job, it reschedules itself to execute again in a few minutes.
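The worker described above might be sketched like this. The field names (Class_Name__c, Executed__c) and the five-minute interval are assumptions; adjust them to your own schema:

```apex
global class QueuedJobWorker implements Schedulable
{
    global void execute(SchedulableContext sc)
    {
        // Pick the earliest job that has not yet been run.
        List<Queued_Job__c> pending =
            [SELECT Id, Class_Name__c
             FROM Queued_Job__c
             WHERE Executed__c = false
             ORDER BY CreatedDate ASC
             LIMIT 1];

        if (!pending.isEmpty())
        {
            Queued_Job__c job = pending[0];
            // Instantiate the batch class by name, so one worker can
            // service any job type stored in the queue.
            Database.Batchable<SObject> batch =
                (Database.Batchable<SObject>) Type.forName(job.Class_Name__c).newInstance();
            Database.executeBatch(batch);

            job.Executed__c = true;
            update job;
        }

        // Reschedule ourselves to run again in five minutes under a fresh
        // name, then remove this run's own trigger.
        Datetime next = Datetime.now().addMinutes(5);
        String cron = next.format('s m H d M \'?\' yyyy');
        System.schedule('QueuedJobWorker-' + next.getTime(), cron, new QueuedJobWorker());
        System.abortJob(sc.getTriggerId());
    }
}
```

Because only the worker ever calls executeBatch, at most one batch job enters the queue per wake-up, which is what keeps you under the concurrency limit no matter how many Queued_Job__c records pile up.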

This still leaves a gap, of course: any code in the org that bypasses the queue and calls executeBatch directly can run into the limit. But the pattern itself will scale to a near-unlimited number of concurrently queued jobs.

Attribution
Source : Link , Question Author : Matt Lacey , Answer Author : jkraybill
