Development Team Best Practices, or “Is this IDE for real? I am in disbelief” [closed]

I have a family of questions that I am piling into one post, because my suspicion is that they will all point to a short list of bad practices that cause these quandaries. The way I have been taught to develop for Salesforce is just unbelievable to me, and I don’t know what I’m doing wrong or what better options are available.

1) I think it’s fairly obvious that we should all want more than just a “Last modified by” name and date. How do you get the metadata to play nicely with version control? Is there any way to do a merge without crazy stuff happening? Has anyone gotten a sensible system in place for using branches on their various sandboxes and developer organizations, and been able to actually use the VCS history the way you would in a traditional development language? (e.g., to look at the code on production in “blame” view and learn anything useful from it). Which parts of the Metadata API do you put into version control? I am asking all these questions because it seems like Salesforce expects all version control to happen after the fact, making it more of an archaeological process than a “control.”

2) The IDE, day-to-day: Is there a way to speed it up, or is this even necessary?

How long does “Refresh from server” take for you? It’s a Loooooooong time with my organization, like 20 minutes or more. Compared to “svn up” or “git pull” this is just unbelievable. Do you just live with it, or is there something else?

Does it kill your PC’s performance while you’re using Eclipse/the IDE? It eats up a ton of memory for what it is. We have about 100 MB of code and resources, object & profile definitions, etc., so maybe if it used 500 MB of memory I wouldn’t be surprised, but I had to bump the Java heap size up to a gig or two, and during a refresh from server the CPU and I/O usage become very high and a system with 4 GB of RAM starts acting weird. Is there a way to speed it up, or should I just not be using this software?

3) Also about the IDE, but related to first-time per project setup. “Choose metadata components” is also horrifically slow, if I want to get the objects. Should I not even bother? Is there a better way to track changes to the objects?

4) Should multiple developers be sharing sandboxes? On one hand, it’s nice to have your own environment to work in, but on the other, keeping sandboxes up to date appears to be a trial of its own.

When I say in the subject that “I am in disbelief” I’m not exaggerating, nor am I necessarily taking aim just at the IDE. It is self-evident that many Salesforce customers have had great success, but the way I have been introduced to the system by the development team at my new job is blowing my mind as an experienced developer (10 years on other platforms). It’s like this system was designed by people who had never seen the kind of modern tools that developers have had for decades. So how do you all do it? Are the problems my team and I are having an indication that we are doing this all the wrong way?

Answer

See Team Development: Possible, Probable, and Painless and the Salesforce Development Life Cycle book; both are good references for developing in a team environment on the Force.com platform.

I come from a traditional web app background (Java, Spring, etc.) and use source control like SVN, git, CVS, etc. I have worked on traditional software where we have multiple branches going simultaneously and periodically merge and/or rebaseline, etc, etc.

I have tried the recommended best practice of each developer using their own sandbox, each one checking changes into a central repo (e.g., SVN) and getting each other’s changes by updating from the repo, merging changes all along. That simply did not work and was not feasible for a project-based contracting company to implement given time and budget constraints. That being said, I will outline what the best practice was, in theory, and then describe an alternative.

I’ll just use SVN as the repo in my illustration.

Theoretically…

  1. Each developer has their own sandbox and their own SVN account.
  2. Each developer makes changes in their own org and pulls down all of the metadata to their IDE.
  3. A package.xml file is set up and maintained (constantly) that specifies which metadata is being created by your project.
  4. Changes are checked into SVN.
  5. Manual changes (i.e., those not supported by the Metadata API) are tracked in a shared Google doc (the audit trail CSV can help as well, but the Google doc is the official record).
  6. Developers update from SVN to get each other’s changes. They need to be careful about the order of pulling down data from their org vs. updating from SVN.
  7. Developers will need to be very comfortable with the metadata XML file formats.
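For reference, the package.xml in step 3 is a manifest that names the metadata types and members your project owns. A minimal sketch might look like this (the class and object names are hypothetical; use whatever API version your org is on):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <!-- Apex classes the project owns (illustrative names) -->
    <types>
        <members>InvoiceController</members>
        <members>InvoiceControllerTest</members>
        <name>ApexClass</name>
    </types>
    <!-- Custom objects the project owns -->
    <types>
        <members>Invoice__c</members>
        <name>CustomObject</name>
    </types>
    <version>25.0</version>
</Package>
```

Keeping this file narrow (only your project’s components, never `<members>*</members>`) is what makes it possible to retrieve and deploy just your changes rather than the whole org.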

In theory, what is kept in SVN is just the changes that the developers make so that you could easily apply those changes to any of your sandboxes, rebaseline from production, look at revision history, reverse merge, etc.

Also keep a separate repo of data:

  1. Reference data that is needed to set up an org: think custom settings, portal accounts, product/event reference data, etc. This is data that would be considered a part of the installation of your org and/or project.
  2. Test data that can be used to populate an org: Useful for testing, getting orgs set up / recreated for development, etc.

Set up CI (Jenkins) to automate builds on check-ins and/or nightly if the manual problems are an issue:

  1. Jenkins uses the Ant migration tool to push changes.
  2. All unit tests are run.
  3. Jenkins with the Selenium plug-in runs the suite of Selenium tests.
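The push in step 1 can be sketched as an Ant target using the `sf:deploy` task from the migration tool’s ant-salesforce.jar. This is just a sketch: the environment-variable names are placeholders for however Jenkins injects credentials, and `src` is assumed to be the directory containing your package.xml:

```xml
<project xmlns:sf="antlib:com.salesforce" default="deployToSandbox">
    <!-- Credentials come from Jenkins environment variables;
         never commit them to the repo. -->
    <property environment="env"/>

    <target name="deployToSandbox">
        <!-- Deploy everything under src/ and run all unit tests
             as part of the deployment. -->
        <sf:deploy username="${env.SF_USERNAME}"
                   password="${env.SF_PASSWORD}"
                   serverurl="https://test.salesforce.com"
                   deployRoot="src"
                   runAllTests="true"
                   maxPoll="200"/>
    </target>
</project>
```

Because `runAllTests="true"` fails the deployment when any unit test fails, step 2 falls out of step 1 for free on a CI box.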

Reality…

There were issues with setting this up.

  1. There were bugs in the Metadata API that prevented us from automating pushes.
  2. Metadata XML files frequently got “whacked” or out of sync.
  3. Developers spent an inordinate amount of time resolving merge conflicts.
  4. Bugs in Salesforce and the IDE, such as the IDE mistakenly marking files as changed.
  5. Extra overhead of needing to know the metadata XML file formats.
  6. Frequently, contract work is done on existing orgs with existing problems.

All of that makes it cost prohibitive to set up development environments like this for short-term (e.g., 3-6 sprints) projects.

We’ve ended up developing in a less desirable way, but it gets the job done and we honestly haven’t had any real issues with it.

  1. Developers all work in the same office and/or have excellent communication with each other within the project. Each developer has their own login, so who changed what can be tracked.
  2. One dev sandbox. All developers work in it. A unit test job (in Apex) is set up to run periodically.
  3. A Jenkins job runs nightly to back up the sandbox to git, in case someone accidentally refreshes it.
  4. One QA sandbox. Change sets are periodically pushed to the QA env for the QA team to do their testing in a stable and controlled environment.
  5. One UAT sandbox. Change sets are pushed here periodically (e.g., end of sprint) for the users to test.
  6. One staging sandbox. Change sets are pushed here in preparation for production deployments.
  7. The audit log CSV is used to track who made what changes and to assist in change set construction when the Force.com IDE or Ant migration tool isn’t used.
  8. Data load CSVs are maintained in a central repo.
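The git half of the nightly backup job in step 3 can be sketched as a small shell function. This assumes the metadata has already been retrieved (e.g., by the Ant migration tool) into a working tree that is a git checkout; the function name and layout are illustrative, not a real tool:

```shell
# backup_snapshot DIR: commit whatever changed in the retrieved
# metadata working tree DIR, skipping the commit when nothing
# changed so the history stays readable. (Hypothetical helper.)
backup_snapshot() {
  dir="$1"
  # `status --porcelain` prints nothing when the tree is clean.
  if [ -n "$(git -C "$dir" status --porcelain)" ]; then
    git -C "$dir" add -A
    git -C "$dir" commit -q -m "Nightly sandbox snapshot $(date +%F)"
  fi
}
```

Jenkins would run the retrieve, call `backup_snapshot` on the checkout, and `git push` the result; re-running it on an unchanged tree adds no empty commits.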

All that being said, if I were working on longer term projects (2+ yrs), maintaining my own org, or maintaining an AppExchange app I would recommend trying to set up and maintain something more like the theoretical best practice.

Attribution
Source: Link, Question Author: Community, Answer Author: Peter Knolle
