How can I reduce test class execution time when deploying?

I have optimized almost all my test classes, but the “Run All Tests” execution time still hasn't gone down.

I ran the tests in my full-copy sandbox and it took 28 minutes. There are about 464 test classes, and some of them have 50 to 60 test methods because the class requires it. If I remove test methods, coverage drops; if I write more test methods, deployment takes too long. I'm just trying to strike a balance between my coverage percentage and the number of methods.

Currently, deploying to production takes about 1.5 hours. After optimizing my code, I did a Run All Tests in my sandbox and it took about 28 minutes, but if I run it at some other time it takes only 16 minutes.

I'm not sure how long it will take when I deploy to production. How do I estimate and reduce the time taken to deploy?

For example, I have attached the result of one of the test classes from Run All Tests below.

[Image: Run All Tests results for the test class]

If you look at this image, there are 19 methods in total. The first highlighted method takes 6 minutes 43 seconds, but the second highlighted method takes only 21 seconds, even though the code for both methods is almost the same.

Code for highlighted method 1:

public static testmethod void TestAddFooterDisclaimerAndHelp()
{
    //Create data and cookie
    createDataAndCookie();

    Test.startTest();
    //Invoke CloneSiteContent for Footer Copyright
    AddSiteContent('Footer Copyright');

    //Add Content - approveAndPublish
    siteConDetail.ContentRichText = 'Test Add Footer Copyright Approve and Publish';
    CompanyListPopulate(siteConDetail.BTP);
    siteConDetail.approveAndPublish();

    //Invoke CloneSiteContent for Site Disclaimer
    AddSiteContent('Site Disclaimer');

    //Invoke CloneSiteContent for Help
    AddSiteContent('Help');
    Test.stopTest();
}

Code for highlighted method 2:

public static testmethod void TestEditFooterDisclaimerAndHelp()
{
    //Create data and cookie
    createDataAndCookie();

    Test.startTest();
    //Invoke EditSiteContent for Footer Copyright
    EditSiteContent('SiteContent', 'Footer Copyright');

    //Edit Content - approveAndPublish
    siteConDetail.ContentRichText = 'Test Edit Footer Copyright Approve and Publish';
    siteConDetail.ChangeLogNotes = 'Test';
    CompanyListPopulate(siteConDetail.BTP);
    siteConDetail.approveAndPublish();

    //Invoke EditSiteContent for Site Disclaimer
    EditSiteContent('SiteContent', 'Site Disclaimer');

    //Invoke EditSiteContent for Help
    EditSiteContent('SiteContent', 'Help');
    Test.stopTest();
}

I'm not sure why this makes a difference. My guess is that it's waiting for some other method in Run All Tests to complete. If that's the case, how can we determine the actual runtime of a test method, and how can we optimize it?

I have also tested each of the test classes separately, and each executes in just a few seconds, even the ones with about 60 test methods. But when I run them all together, the same class alone sometimes takes 17 minutes. Can someone guide me on what I need to do to reduce the time to deploy to production?

* Edited Part *

I still couldn't figure out how to reduce the time.
FYI, I am using change sets to deploy to production.

Answer

When you run in the sandbox are you running the tests in parallel? When you deploy to production the tests are not run in parallel, so it will likely take longer to run all of them. If you are running in parallel in the sandbox, you can switch to not run in parallel to get a better idea of how long the actual deployment will take.
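As a concrete illustration (the class name here is hypothetical, and the annotation attribute is only available in newer API versions), a test class can opt in to parallel execution with the isParallel attribute; classes without it follow the org-wide “Disable Parallel Apex Testing” option under Setup → Apex Test Execution → Options, which is the switch to flip for a serial run that better approximates production:

```apex
// Sketch only: opting a single class into parallel execution,
// regardless of the org-wide parallel-testing setting.
@isTest(isParallel=true)
private class SiteContentParallelTest {
    @isTest
    static void runsIndependentlyOfOtherClasses() {
        // A class marked isParallel=true must not contend for
        // shared resources (no DML on setup objects, etc.).
        System.assertEquals(2, 1 + 1);
    }
}
```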

If you are running in parallel in the sandbox it could be that there is some contention occurring for a shared resource. Also, Apex tests are placed in the Apex Job Queue for execution according to the documentation, so there could be some delay there, although 6 minutes+ seems excessive.
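To answer the “how can we determine the actual runtime of a test method” part: one option (my suggestion, not something from the question) is to query the ApexTestResult object after a run, since it records a per-method RunTime in milliseconds, which excludes time spent waiting in the queue:

```apex
// Sketch: list the slowest test methods from today's runs.
List<ApexTestResult> results = [
    SELECT ApexClass.Name, MethodName, Outcome, RunTime
    FROM ApexTestResult
    WHERE TestTimestamp = TODAY
    ORDER BY RunTime DESC
    LIMIT 20
];
for (ApexTestResult r : results) {
    System.debug(r.ApexClass.Name + '.' + r.MethodName
        + ': ' + r.RunTime + ' ms (' + r.Outcome + ')');
}
```

If RunTime for the slow method is small but the wall-clock time in the UI is large, the difference is queueing or contention rather than the method itself.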

There could be different causes for the fluctuations in overall time between entire test runs. The documentation on Salesforce Asynchronous Processing has more info on how asynchronous processing is handled. It could be that a different number of other orgs are running at the same time (fewer when it takes less time, more when it takes longer). It could also be that some tests depend on a shared resource: during certain runs they happen to execute at the same time and contend, while in other runs they execute at separate times and don't have an issue.

I would rather have longer deploys and more unit tests that cover as many use cases as possible. I would not recommend removing valid tests (e.g., negative tests, edge cases, etc.) just to reduce running time, even if doing so doesn't reduce coverage. The 30 minutes you save on your scheduled deployments will easily be eaten up by time lost to regressions that the removed tests would have caught.

For what it's worth, I've deployed to an org with roughly 1,000 test methods and a relatively complex data model, and production deployments take close to 45 minutes.

One approach that can save some time is to run a validate-only deploy the night before the deploy (assuming you deploy in the early morning; adjust this example as necessary), then on the morning of the deploy run the actual deploy without a separate validation step. Salesforce still validates as part of the deployment and it will fail if validation fails, so you don't have anything to worry about. If you are currently doing a 1.5-hour validate deploy followed by a 1.5-hour actual deploy, this saves you 1.5 hours on the day of your actual deployment.
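The same night-before pattern can be sketched with the Salesforce CLI (flag names may vary by CLI version, and `./mdapi-output`, `prod-org`, and the `0Af...` id are placeholders; with change sets the equivalent is the Validate button followed by Quick Deploy in Setup):

```shell
# Night before: validate only. --checkonly runs all the tests
# but deploys nothing; note the deployment request id it returns.
sfdx force:mdapi:deploy --checkonly --testlevel RunLocalTests \
    --deploydir ./mdapi-output --targetusername prod-org --wait -1

# Morning of the deploy: quick-deploy the already-validated
# request without re-running the test suite.
sfdx force:mdapi:deploy --validateddeployrequestid 0Af... \
    --targetusername prod-org --wait -1
```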

Lastly, if you are completely stuck you could start violating some testing best practices:

  1. Unbulkify tests: create only the minimal number of records each test needs.
  2. Remove methods that don’t add additional coverage.
  3. Use existing records in the database.

If you do any of these, it would be a good idea to structure the test code so you can “toggle” that behavior. E.g., store the number of records to create for bulk testing in a setting: test with the high number in preparation and set it to 1 for the actual deploy; or use a custom setting to conditionally execute tests that don't add coverage and skip them on deploy.
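A minimal sketch of that toggle, assuming a hierarchy custom setting named `Test_Config__c` with a Number field `Bulk_Record_Count__c` (both names are hypothetical; note that custom setting data is not visible to tests by default, so the code must handle the absent case):

```apex
@isTest
private class SiteContentBulkToggleTest {
    // Read the bulk volume from the custom setting, defaulting to
    // full bulk volume when the setting or field is absent. Set the
    // field to 1 in production just before a deploy to speed it up.
    static Integer bulkCount() {
        Test_Config__c cfg = Test_Config__c.getOrgDefaults();
        return (cfg != null && cfg.Bulk_Record_Count__c != null)
            ? Integer.valueOf(cfg.Bulk_Record_Count__c) : 200;
    }

    @isTest
    static void testBulkInsert() {
        List<Account> accts = new List<Account>();
        for (Integer i = 0; i < bulkCount(); i++) {
            accts.add(new Account(Name = 'Bulk ' + i));
        }
        insert accts;
        System.assertEquals(bulkCount(),
            [SELECT COUNT() FROM Account]);
    }
}
```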

Attribution
Source: Link, Question Author: Sathya, Answer Author: Peter Knolle
