Why you might not be using Drush make

In a previous blogpost, I detailed lots of reasons why you should be using Drush make. If it were that good, you might reply, then why isn't everyone already using it?

While the advantages of Drush make are in many ways immediately clear - you can watch it run and work - the disadvantages often appear only when you start trying to use it. After the hype, the discovery of hidden problems can be pretty demoralizing. Drupal's own Dries Buytaert has applied the idea of Gartner's Hype Cycle to Drupal core, and that same emotional process of excitement followed by disillusionment can happen in miniature with each person's impressions of any piece of software: including, of course, Drush make.

Below are some of the most likely reasons why you might have got disillusioned with Drush make, and some suggestions for how the underlying problems can be mitigated.

1. Fitting into an existing workflow

This is the immediate barrier I see to people using Drush make. Typically, your development, staging and (re-)deployment workflows will be oriented around something other than Drush make. Most likely, you'll be using the One Big Repository method, forking contributed code and putting it into your own bucket. But I've heard of people even doing things like building RPMs for (re-)deployment on a RedHat server: it takes all sorts.

As there are a lot of non-make solutions out there to the problem of "how do I build a site?", if you're looking at embedding Drush make into that existing workflow, the key really is to see what scripting you've already done, and then identify where Drush make can be injected. Typically there will be a single point - where you would usually e.g. fetch changes - at which you can run drush make instead. This might be run on your production server; it might be run when you build some kind of package (if that's what you're doing!)
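
As a rough sketch of what that injection point might look like - every path and filename here is invented for illustration - suppose your existing deploy script currently pulls One Big Repository into the webroot; the equivalent Drush make step rebuilds the codebase from a makefile instead:

    # Before: deploy by pulling One Big Repository into the webroot
    cd /var/www/example && git pull origin master

    # After: rebuild the whole codebase from a makefile into a fresh directory
    drush make /path/to/example.make /var/www/example-build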

If you're already committed to One Big Repository, you could try drush generate-makefile. This will generate a makefile from your existing site, referencing drupal.org projects wherever possible. You can use the excellent hacked module to determine which of those contributed modules have been (sometimes unintentionally) edited; anything that could usefully be submitted as a patch on an issue queue, you can extract by running hacked alongside the diff module, which will pin down exactly what changes you've made and represent them in diff/patch format. If you're left with only a handful of remaining custom projects, consider hosting them as separate git repositories, and you're almost done!
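
In terms of actual commands, that migration path looks something like the sketch below (the makefile name is just an example, and the hacked-* commands assume the hacked module's Drush integration is available in your version):

    # generate a makefile from the site in the current directory
    drush generate-makefile example.make

    # fetch and enable hacked (plus diff, for patch output)
    drush dl hacked diff
    drush en -y hacked diff

    # list contributed projects that have been locally edited
    drush hacked-list-projects

    # produce a patch of local changes to a given project, e.g. views
    drush hacked-diff views > views-local-changes.patch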

There will always be some (re-)deployment methods that don't really suit having to execute and then re-execute Drush make whenever you're doing a major contributed-project update: git-only deployment springs to mind. But unless you're really wedded to that (re-)deployment method - and it's worth looking critically at it before you say you are - you should consider switching to a method that can incorporate Drush make.

2. Speed of building a codebase from downloaded parts

If you have to download every project from drupal.org, every time, then the speed of your build process might suffer. That "pseudo-compression" factor I mentioned in my previous blogpost - version-controlling a single line rather than a megabyte of module code - only really makes sense when you think of the size of e.g. your git repository, or the code you need to pass to another developer or keep in your own head, if you're experimenting with a build.
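
To make that "pseudo-compression" concrete: a contributed module that would occupy a megabyte of tracked code in One Big Repository becomes a single pinned line in the makefile (the version numbers here are purely illustrative):

    core = 7.x
    api = 2
    ; one line under version control, instead of the whole views module
    projects[views][version] = "3.5"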

At Torchbox, we've historically mitigated most of the download problems at our network gateway, with a squid proxy on the gateway router caching any traffic that looks like a Drupal module zipfile. In addition, later versions of Drush maintain their own internal cache of downloaded modules; even so, having a gateway proxy cache means that one developer's downloads will automatically be available to any other developer until the cache expires.
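
For what it's worth, the squid side of that is only a line or two of configuration; something along these lines is the general idea, though the exact pattern and cache lifetimes are illustrative and will depend on your own setup:

    # squid.conf: hold on to drupal.org project tarballs for up to a week
    refresh_pattern -i ftp\.drupal\.org/files/projects/ 1440 80% 10080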

However, it's true that, generally speaking, download speed must always be a consideration when building a codebase via Drush make, and you should definitely have contingency plans in place. If drupal.org is down, and your local copies of modules are not entirely complete, then as things stand your (re-)deployment procedure might have to wait, or be worked around on live by downloading individual module changes into the codebase.

3. Drush make only does the bare minimum

What Drush make actually builds is only the bare bones of a site: which is fine for an initial build, but less suitable for rebuilding an existing site. At that point, the "codebase" includes such customizations as a local settings.php file, the contents of Drupal's files folder, any client-specific themes and modules - if they're all in one git repository at sites/default - and occasionally even random custom folders or files at the root folder (a hard habit for some site maintainers to break!)

Despite the fact that most sites will contain at least one of these non-contributed components, Drush make limits itself to: core, d.o-hosted projects, libraries, and projects hosted elsewhere using e.g. git; it's up to you to set up the rest. Where do your uploaded files go? What if you need to apply symlinks? What if you've uploaded some custom top-level file for some temporary campaign or similar? And what about when you need to somehow put your newly built codebase in place of the old codebase, without any loss of service of either your static files (maybe still in the old codebase) or your new code (probably not yet at the correct webroot)?
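
To be concrete about what that short list does cover, a makefile can declare core, d.o projects, third-party libraries and externally hosted projects along these lines (the names, versions and URLs below are placeholders) - and not much beyond that:

    core = 7.x
    api = 2

    ; core and a d.o-hosted project
    projects[drupal][version] = "7.22"
    projects[ctools][subdir] = "contrib"

    ; a third-party library, fetched over HTTP
    libraries[jquery_ui][download][type] = "get"
    libraries[jquery_ui][download][url] = "http://example.com/jquery-ui.zip"

    ; a custom module hosted in your own git repository
    projects[example_custom][type] = "module"
    projects[example_custom][download][type] = "git"
    projects[example_custom][download][url] = "git@example.com:example_custom.git"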

The answer from Drush make, at any rate, is a - not unfriendly - shrug: it simply doesn't concern itself with these problems, and leaves the solutions up to the developer. But I think that this lack of functionality is the main reason that most people abandon Drush make after the initial site build. Indeed, Drush make powers d.o's building of Drupal distributions from install profiles; yet, once a distribution is built, downloaded and installed, people rarely see the usefulness in continuing to use Drush make to redeploy it and bring in updates: they understandably ask "where's the process for doing that?" and find none available.

Any larger software house that wants to persist with Drush make builds a larger framework to sit it into. After all, once the problem is identified, such a "framework" need only be a few command-line scripts for moving files and folders around. I've even known companies build this kind of solution in C#, as that was the in-house language of choice that pre-dated Drupal adoption. I've been involved in building three such frameworks now - two of which were proprietary, in Bash and Python - and the third is set to be a native Drush command project, already available in an unstable alpha on d.o, under the GPL. More on that in a later blogpost; but if you want to use Drush make to maintain your codebases, expect to introduce at least some scripting, whether your own or someone else's.
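
As a flavour of what such scripting involves - every path, name and detail below is invented, and your own wrapper will differ - the heart of it need not be much more than this:

    #!/bin/bash
    # Illustrative rebuild wrapper around drush make; adapt paths to taste.
    set -e

    BUILD=/var/www/example-$(date +%Y%m%d%H%M)
    CURRENT=/var/www/example-current

    # 1. build the bare codebase from the makefile
    drush make /path/to/example.make "$BUILD"

    # 2. restore the pieces drush make knows nothing about
    ln -s /var/www/shared/settings.php "$BUILD/sites/default/settings.php"
    ln -s /var/www/shared/files "$BUILD/sites/default/files"

    # 3. point the webroot symlink at the new build
    ln -sfn "$BUILD" "$CURRENT"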

How you can work around these problems

Each of the problems above needs consideration if you want to make substantial, ongoing use of Drush make (and, as I've said previously, there are definite advantages to doing so). Right now I would say the most obvious solutions are:

  1. Fitting into an existing workflow. Use drush generate-makefile, and identify the point in your workflow scripting where you can neatly insert a drush make.
  2. Speed of building a codebase from downloaded parts. Ensure you have a version of Drush that caches locally; consider a local squid proxy for all drush dl traffic; or even a local mirror of the zipfiles. Have an agreed contingency plan for when code is not easily available, even if it's "wait."
  3. Drush make only does the bare minimum. Script your full build process, or consider using something like Drush instance, which is basically that scripting already packaged up for you.

As you can see, it's not going to be a one-click install. But you can introduce each of these fixes as and when you find that a particular aspect of Drush make is starting to get on your nerves. It's just best that you're aware of the problems in advance, and aware that others feel your pain!

Pain? So is Drush make really worth it?

Oh, yes. I definitely think it's worth using Drush make. In the absence of (among other things) good HTTP patching support in php-composer, it's a very elegant, targeted solution to a specific problem that arises when you're building a site from many contributed sources, mostly available from the same place. And I've built, or helped build, around a dozen sites with a Drush make build workflow, and it's a huge improvement on the One Big Git Repository approach we used to follow, especially where maintenance is concerned.

Seriously. Try using Drush make for your deployments. You'll never look back. It'll change your life. No pain, no gain. Right?