Thick imaging, thin imaging, and no imaging for macOS

Last year, TechRepublic published a quick rundown of three approaches to Mac deployment.

Thought I'd do my quick take on it, based on my experiences.

Thick Imaging

Among the leading Mac admins out there (the ones giving workshops at conferences, sitting on the tech panels, and serving as primary contributors to widely used GitHub projects that facilitate Mac admin'ing), there seems to be something approaching a consensus that admins should be moving away from the "golden master image" approach.

The idea of the "golden master" is that you have a Mac entirely configured exactly the way you want it and then image that to other machines, so they're completely identical.

In terms of the details of the imaging process, I have a tutorial here: Cloning an image using Thunderbolt and Disk Utility (post–El Capitan).
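For reference, the actual restore step boils down to a single asr invocation. This is a minimal sketch rather than the full tutorial; the volume paths are placeholders you'd look up with diskutil list:

```shell
#!/bin/bash
# Sketch: restore a "golden master" volume onto a target Mac in Target Disk
# Mode. Both volume paths are placeholders -- check yours with `diskutil list`.
clone_image() {
  local source="$1" target="$2"
  if [ -z "$source" ] || [ -z "$target" ]; then
    echo "usage: clone_image <source-volume> <target-volume>" >&2
    return 1
  fi
  # --erase wipes the target first; --noprompt skips the confirmation prompt.
  sudo asr restore --source "$source" --target "$target" --erase --noprompt
}

# Example (placeholder names): clone_image /Volumes/GoldenMaster /Volumes/Target
```

The 3-5 minute figure mentioned below is for a restore like this over Thunderbolt; USB will be slower.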

Pros

  • The imaging process itself is very quick per machine. We can restore one of our fully configured faculty laptop images over Thunderbolt with asr in 3-5 minutes.
  • Takes up less bandwidth. We're actually blessed with some hefty bandwidth here, but your organization may not be, and imaging over Thunderbolt or even USB 3.0 is a great way to keep the imaging process from taking forever or stealing bandwidth from your users.

Cons

  • If you build your "golden master" on an older Mac model and then try to image that over to a newer Mac model, you may get a do-not-enter sign when you boot up the newly imaged machine. So you'll always want to create the "golden master" on the newest Mac that you have.
  • You'll have to constantly update the "golden master" so that it doesn't quickly become a "silver master" or a "bronze master." At a certain point, if the source image is behind enough in updates, you'll be pulling so many updates post-image that you're not gaining any of the bandwidth reduction or speed-of-deployment benefits that you should get with this method.
  • If you have several different configurations, you have to create and maintain all of those different "golden master" images. So if you have a multimedia lab image and a faculty laptop image and a staff laptop image and a faculty desktop image and a staff desktop image and a library desktop image... that's a lot of separate images to create and maintain.

Thin Imaging

Historically, Mac admins have tended to favor DeployStudio for thin imaging over a network, but many Mac admins are eschewing Mac servers for Linux ones, so there's been increasing adoption of Imagr (which can be run on a Mac but also on Linux) instead.

If you want to set Imagr up on Linux, Getting started with BSDPy on Docker is a good place to start.

If you want to set up Imagr using OS X Server on a Mac, Amsys has a great step-by-step tutorial on how to do so: Part 1, Part 2, Part 3, and Part 4.

Whether you decide to go with DeployStudio, Imagr, or even a local Thunderbolt ("bronze master"), you'll probably want to look into using AutoDMG to create that thin, never-booted Mac image. Here's an example workflow using AutoDMG and Munki: AutoDMG / Outset / Munki bootstrap workflow.

Pros

  • Allows for flexibility in creating various workflows.
  • Since the thin image is never-booted, it will work with more hardware models (anything that supports the operating system version).

Cons

  • Requires a lot of infrastructure setup, particularly if you're using DeployStudio or Imagr.
  • Requires a lot of bandwidth (may be a non-issue at your organization).
  • May require netboot troubleshooting for particular laptop models or certain cables/adapters.
  • Netboot itself could take a while. And even if you're using AutoDMG over Thunderbolt, all the bootstrapped updates will pull over the network, so if you want to immediately deploy the machine, your user may end up waiting for a while for it to be fully usable.

No Imaging

"BUILDING" 2015 MACS describes a cool process of installing .pkg files onto a never-booted, non-imaged Mac over Thunderbolt and Target Disk Mode. Unfortunately, this appears to result in a machine that boots slowly or refuses to boot. Greg Neagle (author of the aforementioned blog post and primary developer of Munki) worked around this at the time by booting into recovery mode and using the terminal to install the .pkg files. I believe he's since moved on to using Imagr instead, but the no-imaging concept is still a good one to consider.

One way you can do it without recovery mode is to have an external drive (Thunderbolt or USB 3.0) with a version of macOS installed on it and an autologin account; boot the new Mac from that drive and then install the .pkg files you need to get things up and running (touching /var/db/.AppleSetupDone, installing Munki, setting the Munki bootstrap, etc.). It'll be quicker to boot than recovery mode.
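If you go the external-drive route, the installs themselves are just installer invocations against the internal volume. A minimal sketch, where the volume name and package list are placeholder assumptions:

```shell
#!/bin/bash
# Sketch: after booting from the external drive, install packages onto the
# internal volume. TARGET and the package names are placeholders.
TARGET="/Volumes/Macintosh HD"

install_pkgs() {
  local target="$1"; shift
  if [ ! -d "$target" ]; then
    echo "target volume not mounted: $target" >&2
    return 1
  fi
  local pkg
  for pkg in "$@"; do
    # `installer` is Apple's command-line package installer.
    sudo installer -pkg "$pkg" -target "$target"
  done
}

# Example (placeholder package names):
# install_pkgs "$TARGET" munkitools.pkg outset.pkg special-setup.pkg
```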

Pros

  • You don't have to create an image, even a thin one (which, with AutoDMG, has to be built on the exact same version of macOS as the image you're creating). You just need the packages you want to install.
  • The no-image boot is a lot faster than a netboot, and it's comparable to a "golden master" image in how quickly you can finish one machine (sans updates) and move on to the next.

Cons

  • Still consumes a bunch of bandwidth to pull all updates.
  • Requires a lot of booting to recovery mode (which takes a long time) or having a bunch of external drives to boot from (installing the packages over Thunderbolt and Target Disk Mode does not always work well).

AutoDMG / Outset / Munki bootstrap workflow

I wanted to create a workflow that involved pretty much just imaging a new machine with a thin image and then having the image itself pull updates. Sounds simple, but I had to do quite a bit of experimenting to figure out the exact flow.

What to include with AutoDMG

Include in the AutoDMG-created image only CreateUserPkg (for one default user), Outset (for boot and login scripts), the latest Munki tools, and a special .pkg that puts some scripts in place to run at boot.

The special .pkg

In addition to distributing various payloads, it's key that the special .pkg have a postinstall script that runs

sudo touch "$3"/var/db/.AppleSetupDone
This cannot be an Outset script. It has to be part of the AutoDMG-created never-booted image, because if you boot the previously-never-booted image without the .AppleSetupDone file in place, you'll be prompted to do all the Mac setup stuff (e.g., create a user, select the time zone, connect to a wireless network manually) at first boot.
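Put together, the special .pkg's postinstall can be as small as this sketch. Installer runs postinstall scripts as root and passes the target volume as the third argument, so the sudo in the one-liner above is harmless but unnecessary:

```shell
#!/bin/bash
# Sketch of the special .pkg's postinstall script. Installer runs it as
# root, so no sudo is needed.
mark_setup_done() {
  # "$1" is the target volume; the flag file suppresses Apple's
  # first-boot Setup Assistant.
  touch "$1/var/db/.AppleSetupDone"
}

# Installer passes the target volume to the script as "$3".
if [ -n "$3" ]; then
  mark_setup_done "$3"
fi
```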

One of the payloads should be a script that goes into the /usr/local/outset/boot-every directory, because, by default, Outset won't run boot-once scripts unless there's a network connection (you can change the preferences .plist and deploy it, but I find it easier to just use a boot-every script). This script will do several things:

  • Check for a Munki preferences file. If the file exists, self-delete (otherwise the script will run at every boot).
  • Create Munki preferences.
  • Create the Munki bootstrap file.
  • Connect to a wireless network to pull in updates.
  • Reboot after waiting a minute (just to give a little time for the wireless connection to finish).***

*** In real-world testing, even if your script waits only one minute before rebooting, the reboot may take longer than that to happen. In a recent test, it took about four minutes from first boot for the next reboot to occur, and the reboot after that (the one that triggered the Munki bootstrap) took about 90 seconds.
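The steps above can be sketched as follows. The repo URL, SSID, password, interface name, and the script's own filename are all placeholder assumptions, and the macOS check at the bottom is a defensive guard I've added to the sketch:

```shell
#!/bin/bash
# Hedged sketch of the boot-every script. Repo URL, SSID, password,
# interface, and filename below are placeholders for your environment.
SELF="/usr/local/outset/boot-every/firstboot-munki.sh"
PREFS="/Library/Preferences/ManagedInstalls.plist"
BOOTSTRAP="/Users/Shared/.com.googlecode.munki.checkandinstallatstartup"

first_boot_setup() {
  # 1. If Munki preferences already exist, remove this script so it stops
  #    running at every boot, and do nothing else.
  if [ -f "$PREFS" ]; then
    rm -f "$SELF"
    return 0
  fi

  # 2. Create the Munki preferences (placeholder repo URL).
  defaults write /Library/Preferences/ManagedInstalls SoftwareRepoURL \
    "https://munki.example.com/repo"

  # 3. Create the Munki bootstrap flag file.
  touch "$BOOTSTRAP"

  # 4. Join a wireless network so updates can pull down (placeholders).
  networksetup -setairportnetwork en0 "YourSSID" "YourPassword"

  # 5. Reboot after a minute, giving the wireless connection time to settle.
  shutdown -r +1
}

# Outset invokes this file at every boot; only proceed on an actual Mac.
if [ -x /usr/sbin/networksetup ]; then
  first_boot_setup
fi
```

The self-delete in step 1 is what keeps a boot-every script from behaving like an infinite loop: once Munki is configured, every subsequent boot is a no-op.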

After that, the Munki bootstrap file should take care of any subsequent reboots and updates until the machine is fully updated.