Batch Apex class failure – CPU time limit exceeded

I am facing a sudden failure in a custom scheduled batch class. The scheduled job has been running smoothly for over a year, but a recent influx of records (~2,000) that the batch class iterates over seems to have pushed the process past the CPU time limit.

Every time the job runs, the ending “Status” (Setup > Monitor > Jobs > Apex Jobs) is the same as it always has been: “Completed”. However, the “Status Detail” column now shows a “First error: Apex CPU time limit” message after every run.
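For reference, the “Status Detail” text is also exposed programmatically through the standard AsyncApexJob object (its ExtendedStatus field carries that text); a minimal anonymous-Apex sketch:

    // Fetch the most recent batch job and print its status and detail text.
    AsyncApexJob job = [
        SELECT Id, Status, ExtendedStatus, NumberOfErrors,
               JobItemsProcessed, TotalJobItems
        FROM AsyncApexJob
        WHERE JobType = 'BatchApex'
        ORDER BY CreatedDate DESC
        LIMIT 1
    ];
    System.debug(job.Status + ' / ' + job.ExtendedStatus);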

After researching potential solutions, I have tried modifying the batch class to avoid this timeout by:

  1. reducing the batch size (down to as low as 10; see the sketch after this list)
  2. modifying the initial query (Start method) – removed sub-fields, etc.
  3. breaking apart/moving around the data processing operations within the Execute method
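
On item 1: the batch size is the optional scope argument to Database.executeBatch. A minimal sketch, with MyScheduledBatch standing in for my (renamed) batch class:

    // Run the batch with at most 10 records per execute() invocation;
    // the default scope size is 200.
    Database.executeBatch(new MyScheduledBatch(), 10);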

Nothing has seemed to help at all. Similar posts on the topic suggest that the problem lies in the initial query. I have used the Query Plan tool (Developer Console) to try to optimize and debug the initial query, and it reports that the two filter fields are “Not considering filter for optimization because unindexed”. Both fields are custom picklist fields.

Am I going down the right path with this? If Salesforce support added custom indexes to both fields, would that likely solve the problem and speed things up (i.e. avoid the timeout)? Are there other things or places I should be looking at?

Any help would be greatly appreciated…

Here is the query from the Start method:

public Database.QueryLocator start(Database.BatchableContext bc) {
    return Database.getQueryLocator(
            'SELECT Id, Lookup_1__c, Lookup_2__c, Lookup_1__r.Name, Lookup_2__r.Name, External_Id_Field__c, ' +
            'Field_A__c, Name, Lookup_3__c, CreatedDate ' +
            'FROM Custom_Object__c ' +
            'WHERE Custom_Picklist_Field_1__c = \'Value A\' ' +
            'AND Custom_Picklist_Field_2__c = \'Value B\'');
}

Answer

I think there are a few posts about this on SFSE, but in summary, check out the following:

Make sure that your query is selective. I recommend reading the Salesforce documentation on query selectivity, but mainly you need to filter on indexed fields. A field is indexed if:

If the filter is on a standard field, it will have an index if it is a primary key (Id, Name, OwnerId), a foreign key (CreatedById, LastModifiedById, lookup, master-detail relationship), or an audit field (CreatedDate, SystemModstamp).

Custom fields will have an index if they have been marked as Unique or External Id. (Salesforce support can also create a custom index on other fields, such as your picklists, on request.)
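
For example, an equality filter on an indexed field is a candidate for optimization, while negative operators (!=, NOT IN) and leading-wildcard LIKE filters bypass indexes regardless. A sketch reusing the question’s field names, assuming Custom_Picklist_Field_1__c has been custom-indexed by support:

    // Equality filter on an indexed field: the optimizer can consider it.
    String q = 'SELECT Id FROM Custom_Object__c ' +
               'WHERE Custom_Picklist_Field_1__c = \'Value A\'';
    List<Custom_Object__c> rows = Database.query(q);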

Purge your Recycle Bin. If you regularly delete records of that particular object, Salesforce takes those soft-deleted rows into consideration when evaluating selectivity. You can do this in Apex if required, using Database.emptyRecycleBin().
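
A minimal sketch of that cleanup, assuming Custom_Object__c from the question is the object whose deleted records accumulate:

    // Query soft-deleted rows (ALL ROWS includes the Recycle Bin),
    // then hard-delete them so they stop counting against selectivity.
    List<Custom_Object__c> binned = [
        SELECT Id FROM Custom_Object__c WHERE IsDeleted = true ALL ROWS
    ];
    if (!binned.isEmpty()) {
        Database.emptyRecycleBin(binned);
    }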

If your query is indexed, make sure it doesn’t exceed the selectivity threshold: for a standard index it’s 30% of the first million records, and for a custom index it’s 10% of the first million. For example, on an object with 500,000 rows, a filter on a custom-indexed field must return fewer than 50,000 rows for the index to be used.

Make sure you don’t schedule too many complex batches on the same object at the same time.

If your batch fails randomly, it might also be because Salesforce is having downtime on their servers just at the time your batch runs; likewise, if their servers reset for any reason at that time, your Apex job will stop.

Attribution
Source: Link, Question Author: zwizard, Answer Author: brovasi
