Is there a maximum result size for @AuraEnabled Apex methods for Lightning Aura components?

I have a Lightning Aura Component with an Apex Controller selecting a (kind of) huge amount of data. It is querying a custom object elfBCProject__c with a sub select on a child object mueGPMOrderCard__c.

The total JSON payload is about 140 kB. The component is used in a Community, but I'm pretty sure my issue would happen exactly the same without a Community, just in Lightning Experience.

In Apex there is a method like this:

    @AuraEnabled
    public static Map<String,Object> LoadProjects(Map<String,Object> params) {
        // ...
    }

It queries the records and returns them, together with some additional data, in a Map.

The (simplified!) SOQL query looks like this:

            +' select * '
            +' ( '
                +' select * '
                + 'from mueGPMOrderCards__r '
            +' )'
            +' from elfBCProject__c '
            +' where Id!=null '
                +' and  ' // ... some filter come here

(please imagine the * as ALL fields)
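For reference, the same parent-to-child shape written as a static SOQL query might look roughly like this (a sketch only; apart from Id, OwnerId, IsDeleted and mueGPMProject__c, which appear in the JSON below, any further field names would be guesswork):

```apex
// Sketch: the simplified query as static SOQL, with explicit field
// lists standing in for the imaginary "*".
List<elfBCProject__c> projects = [
    SELECT Id, OwnerId, IsDeleted,
        (SELECT Id, IsDeleted, mueGPMProject__c
         FROM mueGPMOrderCards__r)
    FROM elfBCProject__c
    WHERE Id != null
    // ... some filter comes here
];
```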

The SOQL query returns this in Apex (serialized and sent by email).
It is 1 elfBCProject__c having 79 mueGPMOrderCard__c records inside the list mueGPMOrderCards__r:

  "attributes" : {
     "type" : "elfBCProject__c",
     "url" : "/services/data/v47.0/sobjects/elfBCProject__c/a081r00001WYrkOAAT"
  "Id" : "a081r00001WYrkOAAT",
  "OwnerId" : "0051r000008kTRoAAM",
  "IsDeleted" : false,
  // ... lots of fields here .... reduced! 
  "mueGPMOrderCards__r" : {
     "totalSize" : 79,
     "done" : true,
     "records" : [ {
        "attributes" : {
           "type" : "mueGPMOrderCard__c",
           "url" : "/services/data/v47.0/sobjects/mueGPMOrderCard__c/a0R1r00000TApKkEAL"
        "mueGPMProject__c" : "a081r00001WYrkOAAT",
        "Id" : "a0R1r00000TApKkEAL",
        "IsDeleted" : false,
        // ... more fields come here ... 
        // ... more records come here ... 

So far so good.

Now, returning it all to my Lightning Aura component:

        return new Map<String,Object>{
            'success'                           =>  true,
            'projects'                          =>  projects,
            'projectCount'                      =>  projects.size(),
            // ... more stuff to return comes here ...
        };

The important part is that I know there are exactly 79 OrderCards for the project in the database. This is a fact: Apex gets 79, reports show 79, list views show 79.

But in JavaScript, only 61 are present in the result (see the console in the screenshot below).

The console.log() is executed on the FULL result as the first command in the JS callback method. What could cause that loss of records?

(Screenshot: browser console showing only 61 records in the result.)
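As a quick client-side sanity check, one can count the child records right at the top of the callback, before anything else touches the result (a sketch; the shape of the map follows the Apex code above, and the helper name is made up):

```javascript
// Sketch: count child records in the Aura action callback result.
// Handles both shapes a child relationship may arrive in:
// a plain array, or a {totalSize, records} wrapper.
function countOrderCards(result) {
    var projects = result.projects || [];
    var total = 0;
    projects.forEach(function (project) {
        var rel = project.mueGPMOrderCards__r;
        var records = Array.isArray(rel) ? rel : (rel && rel.records) || [];
        total += records.length;
    });
    return total;
}
```

Comparing this count against the expected 79 (or against the totalSize in the raw JSON) pinpoints whether the records were lost before or after the Aura transport.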

Update 1

The loss seems to happen at the end: the LAST records in the list simply get cut off. I can't identify anything special about the lost records, and I can rule out permission, access, or security issues. The user I am testing with can see all 79 records everywhere.

Update 2

Now it gets even more interesting when I additionally add a serialized version of my database result to my return statement:

    return new Map<String,Object>{
        'projects'                          => projects,
        'jsonProjects'                      => system.JSON.serializePretty(o),
        // ... more stuff to return comes here ...
    };

This blows the overall payload up to over 300 kB (no error, all good), and now it gets interesting:
looking into the JSON, again we have

… "mueGPMOrderCards__r" : { "totalSize" : 79, …

So the deserialization (done by Salesforce, likely the Aura framework) seems to lose the records.

Update 3

After reading sfdcfox's hint to check whether "totalSize" : 79 really matches the body of the JSON, I inspected the content of the JSON structures a bit more. It is very hard to avoid mistakes, and the best way to inspect it was to copy the huge JSON strings into the browser's JavaScript console and assign them to a temporary variable. This way, I was able to see that:

  • in Apex, the result indicates totalSize: 79 but actually contains only 61 records
  • the received result in the JavaScript callback accordingly has only 61 records
  • my own JSON string jsonProjects, added in Update 2, has only 61 records in Apex and 61 records in JavaScript.

So my temporary conclusion is that the sub-query limit mentioned by @sfdcfox is the reason.


I tested three basic boundaries: total payload size, size of an element, and number of items.

Total Payload Size, Element Size

First, I created a map with 9 keys, each key being a 3,000,000 character string. This weighs in at just over 27 MB of data. Aura rendered the page in a couple of seconds, all elements present and accounted for.
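A rough reconstruction of that test (a sketch; the key names are assumptions, and note that a payload this size presses against the Apex heap size limit, so the exact setup may have differed):

```apex
// Sketch: build a large payload of 9 keys, each a
// 3,000,000-character string, to probe the Aura size limit.
@AuraEnabled
public static Map<String, Object> getBigPayload() {
    Map<String, Object> payload = new Map<String, Object>();
    for (Integer i = 0; i < 9; i++) {
        payload.put('key' + i, 'x'.repeat(3000000));
    }
    return payload;
}
```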


If you use @AuraEnabled(cacheable=true), there’s a 10 MB limit. In most cases, caching is desirable, but if you need a large amount of data, disabling caching can allow you to exceed this limit.
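For illustration, the only difference between the two variants is the annotation parameter (a sketch; method names and bodies are placeholders):

```apex
// Cached: subject to the 10 MB response limit.
@AuraEnabled(cacheable=true)
public static Map<String, Object> getCachedPayload() {
    return new Map<String, Object>();
}

// Not cached: no such limit, but every call hits the server.
@AuraEnabled
public static Map<String, Object> getUncachedPayload() {
    return new Map<String, Object>();
}
```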

Large Number of Items

I then created a map consisting of 100,000 elements, but just smaller strings, weighing in at about 1.2 MB of JSON-ified data. This took much longer to render (aura:iteration doesn’t like 100,000 items), but all the elements were present.

No, there doesn’t appear to be a maximum size aside from whatever your bandwidth and memory can handle.

Edit: I just realized that this may be an issue with the volume of data you're querying, in Apex. If you query too many fields, the sub-query can be broken up into chunks with a QueryLocator. The threshold for this behavior depends on the number of fields you query.

Try calling .size() on the child relationship before returning the value, and see if you get an exception. If so, you'll need to write a wrapper class, then use a for-each loop to extract the children:

sObject[] children = new sObject[0];
for(sObject record: queryResult.childRelationship__r) {
    children.add(record);
}

The for-each loop iterates the child relationship to completion, so children ends up holding all records regardless of chunking.
Source: Link, Question Author: Uwe Heim, Answer Author: sfdcfox
