I realize there are plenty of ways to try to avoid hitting limits in the first place, but for features governed by 24-hour rolling limits, how can you know if you’re getting close to them? And since managed packages share these limits with non-managed code, how can you determine whether it’s “ok” to fire off more emails or launch additional @future calls?
In a clean dev org, in an anonymous Apex window, I executed the following:
System.debug('Total Future Calls Allowed: ' + Limits.getLimitFutureCalls());
System.debug('Total Emails Allowed: ' + Limits.getLimitEmailInvocations());
14:06:16.023 (23004000)|USER_DEBUG||DEBUG|Total Future Calls Allowed: 10
14:06:16.023 (23130000)|USER_DEBUG||DEBUG|Total Emails Allowed: 10
So the allowed totals for emails and future calls relate to the current execution context, not the org. Also, batch Apex permits 250,000 batch executions in a 24-hour period but doesn’t even have a System.Limits method (probably because the limit of five concurrent batch jobs is self-limiting org-wide).
I’ve looked around at the Organization object and elsewhere but can’t find anything. Is there a way to determine the total allowed and currently available @future calls, emails and batch Apex executions?
The Limits class, as you have started to discover, covers only the current request and does not provide any org-wide limit information. Salesforce have partially addressed this for emails, but as far as I can see there is nothing for Batch Apex or @future. I have included some thoughts below on both.
Emails. There are a couple of methods on the Messaging class called reserveSingleEmailCapacity and reserveBulkEmailCapacity. They don’t tell you how much is left, but they will stop your app if you’re about to exceed the limit. The downside is that they throw uncatchable exceptions; if that would be an issue for you, check out this answer.
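As a minimal sketch of the reserve approach (the helper class and method names here are my own invention, not a standard API):

```apex
// Hypothetical helper: reserve capacity up front so the request fails fast,
// before any emails are sent, rather than partway through.
public class EmailSender {
    public static void sendNotifications(List<Messaging.SingleEmailMessage> emails) {
        // Throws an uncatchable limit exception if the org's 24-hour
        // single email allowance would be exceeded by this many sends.
        Messaging.reserveSingleEmailCapacity(emails.size());
        Messaging.sendEmail(emails);
    }
}
```

Note the reservation covers the whole list, so you either get capacity for all of the messages or the request stops before sending any of them.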
Batch Apex / @future. There is no equivalent to the reserve methods here, nor any way to query the current value. So here are some general thoughts to consider, both in terms of answering your question and avoiding the limit…
- Schedulable. If you are issuing a lot of Batch Apex or @future calls from triggers or buttons, consider implementing a scheduled job to aggregate the work into a single job run in time slots throughout the day. You can use ‘processed’ indicators on your records to implement your own queueing approach. This does have an impact on user experience, though under heavy load the platform queues jobs anyway, so users don’t always get a rapid response from the job completing.
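A sketch of the aggregation idea above, assuming a hypothetical Processed__c checkbox field on Account (in a real org these would be two separate class files, and any object/field would do):

```apex
// Scheduled entry point: one batch job per time slot instead of one
// @future call per trigger invocation.
global class WorkAggregatorScheduler implements Schedulable {
    global void execute(SchedulableContext ctx) {
        Database.executeBatch(new WorkItemBatch());
    }
}

// The batch sweeps up everything the triggers flagged since the last run.
global class WorkItemBatch implements Database.Batchable<SObject> {
    global Database.QueryLocator start(Database.BatchableContext ctx) {
        return Database.getQueryLocator(
            'SELECT Id FROM Account WHERE Processed__c = false');
    }
    global void execute(Database.BatchableContext ctx, List<SObject> scope) {
        for (SObject s : scope) {
            // ... do the real work here, then clear the flag ...
            s.put('Processed__c', true);
        }
        update scope;
    }
    global void finish(Database.BatchableContext ctx) {}
}
```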
- Custom Throttler. Similar to the above, but layers your own queuing around Batch Apex. Create a Custom Object to register your jobs and allow a scheduled throttler job (which runs every 15 minutes, say) to read a certain number of entries from the queue, start the jobs itself, then go back to sleep. So instead of calling Database.executeBatch directly from your triggers or buttons, you insert a record into this object with the required details. You could probably make this quite generic via the Type.forName and Type.newInstance methods. The downside is that this only throttles your own use of jobs, not anyone else’s, so it depends on what else is in the org.
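A sketch of such a throttler, assuming a hypothetical Job_Request__c custom object with a Class_Name__c text field and a Status__c picklist:

```apex
// Scheduled every 15 minutes; drains a few queued jobs per slot.
global class BatchThrottler implements Schedulable {
    private static final Integer MAX_JOBS_PER_SLOT = 3; // assumed budget

    global void execute(SchedulableContext ctx) {
        List<Job_Request__c> queued = [
            SELECT Id, Class_Name__c FROM Job_Request__c
            WHERE Status__c = 'Queued'
            ORDER BY CreatedDate
            LIMIT :MAX_JOBS_PER_SLOT];
        for (Job_Request__c req : queued) {
            // Generic dispatch: the named class must implement
            // Database.Batchable<SObject> and have a no-arg constructor.
            Type jobType = Type.forName(req.Class_Name__c);
            Database.executeBatch((Database.Batchable<SObject>) jobType.newInstance());
            req.Status__c = 'Started';
        }
        update queued;
    }
}
```

Triggers and buttons then insert a Job_Request__c record rather than calling Database.executeBatch themselves.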
- Iterator. If you have lots of Batch Apex jobs for different objects, consider whether it makes sense to aggregate some of them using an Iterable as the source of records to process. This allows you to handle data of mixed types in a single job.
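For example, an Iterable-based start method can feed records of more than one object type into the same job (Needs_Sync__c is an assumed field here; this pattern only suits modest volumes, since the whole list is built in memory):

```apex
global class MixedObjectBatch implements Database.Batchable<SObject> {
    // Returning Iterable<SObject> instead of a QueryLocator lets us
    // combine results from several queries into one work list.
    global Iterable<SObject> start(Database.BatchableContext ctx) {
        List<SObject> work = new List<SObject>();
        work.addAll([SELECT Id FROM Account WHERE Needs_Sync__c = true]);
        work.addAll([SELECT Id FROM Contact WHERE Needs_Sync__c = true]);
        return work;
    }
    global void execute(Database.BatchableContext ctx, List<SObject> scope) {
        for (SObject s : scope) {
            // Branch on the concrete type inside the one job.
            if (s instanceof Account) { /* account handling */ }
            else if (s instanceof Contact) { /* contact handling */ }
        }
    }
    global void finish(Database.BatchableContext ctx) {}
}
```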
- AsyncApexJob object. You may have some joy in querying the AsyncApexJob object to determine activity over the last 24-hour period. I’ve not done this before, but I’ve seen it suggested a few times. Both Batch Apex and @future jobs appear in this object. It also contains ALL jobs, not just yours, e.g. from other applications installed. I would say this is semi-reliable and needs testing, as I am not sure when the system truncates it.
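Something like the following, run as anonymous Apex, would give a rough count (again, untested as an estimator, and it counts every app in the org, including managed packages):

```apex
// Rough 24-hour activity check against AsyncApexJob.
Datetime cutoff = Datetime.now().addHours(-24);
Integer batchCount = [
    SELECT COUNT() FROM AsyncApexJob
    WHERE JobType = 'BatchApex' AND CreatedDate >= :cutoff];
Integer futureCount = [
    SELECT COUNT() FROM AsyncApexJob
    WHERE JobType = 'Future' AND CreatedDate >= :cutoff];
System.debug('Batch jobs in last 24h: ' + batchCount);
System.debug('@future calls in last 24h: ' + futureCount);
```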
- Custom Settings. This has also been suggested a few times, but as with the Custom Throttler, it really only helps if you’re sure you’re the only one contributing to the limit. There is obviously some more date and time logic to implement here as well.
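One way to sketch the date/time logic, assuming a hypothetical hierarchy custom setting Async_Usage__c with Count__c (number) and Window_Start__c (date/time) fields:

```apex
// Hypothetical counter; only sees jobs routed through this helper,
// so other code in the org remains invisible to it.
public class AsyncUsageTracker {
    public static Boolean tryConsume(Integer amount, Integer dailyBudget) {
        Async_Usage__c usage = Async_Usage__c.getOrgDefaults();
        Datetime rightNow = Datetime.now();
        // Reset the window once 24 hours have elapsed (or on first use).
        if (usage.Window_Start__c == null
                || usage.Window_Start__c.addHours(24) < rightNow) {
            usage.Window_Start__c = rightNow;
            usage.Count__c = 0;
        }
        if (usage.Count__c + amount > dailyBudget) {
            return false; // over budget: caller should defer the work
        }
        usage.Count__c += amount;
        upsert usage;
        return true;
    }
}
```

Note this simple version is not safe against concurrent requests racing on the same counter, which is another reason it is only a rough guard.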
Summary. My personal view is to consider which jobs need to be ‘system / app level’ and which are handling ‘end user requests’. The former can be scheduled to throttle job usage through aggregation of work into one job. The latter is somewhat harder, but end users can often be persuaded into a more schedulable approach (say every 15 minutes), particularly as they are already used to not getting an immediate response anyway. What remains after these two considerations are the jobs that really do need to be invoked more aggressively on user demand.
Ok, I feel like this has turned into a bit of a best-practice answer rather than a direct one, which unfortunately is a no. Anyway, I hope this helps in some way!