Script AutoPkg trust verification and trust update process

Starting with version 1.0, AutoPkg began evaluating trust info for recipe overrides, so you can see what changes (if any) were made to a recipe and then accept the changes if you want to. Here is what the typical trust verification workflow looks like.

Whether running a list of recipes via script or via an AutoPkgr schedule, I'd occasionally get errored recipes when trust was broken, and I'd have to manually run

autopkg verify-trust-info -vv NAMEOFRECIPE

then, after reviewing the changes, run

autopkg update-trust-info NAMEOFRECIPE

and then, after updating the trust info, run the recipe:

autopkg run -v NAMEOFRECIPE
So I thought I'd take a stab at scripting the whole process. Basically my script updates all the repos (to see if there are changes), verifies trust info on each of the recipes in the recipe list, and then prompts the user to review changes and approve them or not, before running all the approved or unchanged recipes.

It's still in the early testing phase, but it seems to work so far.
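For anyone curious about the general shape of the script, here's a minimal sketch of the per-recipe logic (the function name, recipe names, and prompt wording are mine, not the actual script's):

```shell
#!/bin/bash

# Verify trust for one recipe; if trust is broken, show the diff,
# prompt for approval, and update trust info before running.
process_recipe() {
    local recipe="$1"
    if autopkg verify-trust-info "$recipe" > /dev/null 2>&1; then
        # Trust intact: just run the recipe
        autopkg run -v "$recipe"
    else
        # Trust broken: show what changed and let the user decide
        autopkg verify-trust-info -vv "$recipe"
        read -r -p "Approve changes and update trust for ${recipe}? (y/n) " answer
        if [ "$answer" = "y" ]; then
            autopkg update-trust-info "$recipe"
            autopkg run -v "$recipe"
        else
            echo "Skipping ${recipe}"
        fi
    fi
}

# The full script would first run `autopkg repo-update all` and then
# loop over the recipe list, e.g.:
# for recipe in Firefox.munki GoogleChrome.munki; do
#     process_recipe "$recipe"
# done
```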

Guided Access mode after a reboot on iOS 9 vs. iOS 10

Just a quick observation based on testing:

If you're in Guided Access mode on an iPad running iOS 9.3.5 (say, an older model that can't install iOS 10 and above), and you do a forced reboot (hold home and power buttons until the Apple symbol appears), the device stays in Guided Access mode.

If, however, you're in Guided Access mode on an iPad running iOS 10 (and perhaps in future versions?) and do a forced reboot, the device gets out of Guided Access mode.

P.S. I was able to use this to help a student out who was stuck in Guided Access mode on an older iOS version—updated it to iOS 10, rebooted, and then the iPad was out of Guided Access mode, and a new Guided Access mode passcode could be set.

Basics of Crypt 2 and Crypt Server

Graham Gilbert created a pretty cool project called Crypt 2, which forces client machines to enable FileVault 2 encryption and then sends the recovery key to a Crypt Server.

So far the documentation on Crypt 2 is rather sparse, so this is what I was able to piece together based on the README, some asking around, and a lot of trial and error.

The server

The server bit was tricky for me to figure out. I happened to have an Ubuntu 16.04 LTS server I wanted to try it out on, but the Ubuntu 14.04 and Ubuntu 12.04 instructions did not work for 16.04. I got this error when trying to run the command to install the requirements:

Collecting django-extensions==1.6.8 (from -r crypt/setup/requirements.txt (line 4))
Could not find a version that satisfies the requirement django-extensions==1.6.8 (from -r crypt/setup/requirements.txt (line 4)) (from versions: 0.4, 0.4.1, 0.5, 0.6, 0.7, 0.7.1, 0.8, 0.9, 1.0.0, 1.0.1, 1.0.2, 1.0.3, 1.1.0, 1.1.1, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.3.8, 1.3.9, 1.3.10, 1.3.11, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.6.1, 1.6.2, 1.6.3, 1.6.5, 1.6.6, 1.6.7, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7)
No matching distribution found for django-extensions==1.6.8 (from -r crypt/setup/requirements.txt (line 4))

It seems a lot of people go the Docker route. I'm not really an experienced Docker user, but most of the instructions are fairly straightforward.

After you install Docker, you just pull Crypt Server:

docker pull macadmins/crypt-server
and then (as specified in the docs), go ahead and run the container:
docker run -d --name="Crypt" \
--restart="always" \
-v /somewhere/on/the/host:/home/docker/crypt/keyset \
-v /somewhere/else/on/the/host:/home/docker/crypt/crypt.db \
-p 8000:8000 \
macadmins/crypt-server
You should then be able to see your site at 0.0.0.0:8000.

There are a ton of ways, apparently, to make the site more secure. Since I'm most familiar with using Apache, I just put in a :443 VirtualHost entry with

ServerName SUBDOMAIN.MAINDOMAIN.COM:443
ProxyPass / http://0.0.0.0:8000/
ProxyPassReverse / http://0.0.0.0:8000/
and then that worked out for getting SSL going (as long as you've got SSL going for SUBDOMAIN.MAINDOMAIN.COM... that's too much to go into for one blog entry).
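For context, here's a fuller sketch of what that VirtualHost entry might look like (the domain and certificate paths are placeholders; adjust for your own setup):

```apache
<VirtualHost *:443>
    ServerName SUBDOMAIN.MAINDOMAIN.COM:443

    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/SUBDOMAIN.MAINDOMAIN.COM.crt
    SSLCertificateKeyFile /etc/ssl/private/SUBDOMAIN.MAINDOMAIN.COM.key

    # Hand everything off to the Crypt Server Docker container
    ProxyPass / http://0.0.0.0:8000/
    ProxyPassReverse / http://0.0.0.0:8000/
</VirtualHost>
```

On Ubuntu, you'd also need the relevant Apache modules enabled (a2enmod ssl proxy proxy_http) and Apache reloaded.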

Once you have that set up, go to https://SUBDOMAIN.MAINDOMAIN.COM and log in with admin and password, and then immediately change the password to a new one.

You'll then have the option to add other users with various types of permissions.

The client

The Crypt 2 client is a standard .pkg you can deploy with whatever you're using to manage your client machines. You can configure your clients' Crypt 2 preferences before deploying the .pkg.
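As an example of what those preferences might look like (assuming the com.grahamgilbert.crypt preference domain from the Crypt 2 README, a placeholder ServerURL, and a hypothetical local admin account to skip):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>ServerURL</key>
    <string>https://SUBDOMAIN.MAINDOMAIN.COM</string>
    <key>SkipUsers</key>
    <array>
        <string>localadmin</string>
    </array>
</dict>
</plist>
```

You could deploy that as /Library/Preferences/com.grahamgilbert.crypt.plist or via a configuration profile.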

When the user who's okay to enable encryption (one not in the SkipUsers array) logs in, a window will pop up that says "This machine must be encrypted." It will reboot shortly after the user clicks Continue.

Crypt will write a file to /var/root/crypt_output.plist with values for EnabledDate, EnabledUser, HardwareUUID, LVGUUID, LVUUID, PVUUID, RecoveryKey, and SerialNumber.

After that, probably nothing will happen for some time. Crypt 2 runs a LaunchDaemon that invokes /Library/Crypt/checkin every 900 seconds (15 minutes). After around that interval, you should see the client machine show up in the web interface the admin sees.
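For the curious, the LaunchDaemon's job definition presumably looks something like this (the label is my guess; the path and 900-second interval are from the behavior described above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.grahamgilbert.crypt.checkin</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Library/Crypt/checkin</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>StartInterval</key>
    <integer>900</integer>
</dict>
</plist>
```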

If you click Info..., then Info / Request..., and then Retrieve Key..., you should be able to see the recovery key.

You may have to ask permission of yourself to view the recovery key (presumably if you have users with different permission levels, a user with lesser permission would ask an admin user for temporary permissions to view the recovery key).

Other options

This isn't a comprehensive list.

Using startosinstall to install a macOS upgrade with Munki

Update: The instructions below will be obsolete once Munki 3 is released. More details on the Munki 3 implementation can be found on the Munki wiki.

createOSXinstallPkg is a great project for making an Apple macOS installer into a .pkg you can deploy with Munki.

Apple did some things to break that process for 10.12.4. People are in the process of finding workarounds for it.

One option is to use the built-in startosinstall tool that comes with the installer bundle.

If you import the bundle into Munki, you'll want to have both a preinstall_script and a postinstall_script.

The preinstall_script checks to make sure there aren't other updates pending, since startosinstall will trigger its own reboot independent of Munki. The pending update count should be 1 (this upgrade is the only pending item) or 0 (it was part of a set of updates that completed, the pending count cleared, and you're trying again):

#!/bin/bash

# Make sure there is only one pending update (this one)
pending_count=$(defaults read /Library/Preferences/ManagedInstalls PendingUpdateCount)

# If it's 1 or 0, we're good to go
if [ "$pending_count" == 1 ] || [ "$pending_count" == 0 ]; then
    exit 0
else
    # Otherwise, abort the installation
    exit 1
fi
The postinstall_script does the actual install:
#!/bin/bash

sudo "/Applications/Install macOS Sierra.app/Contents/Resources/startosinstall" --applicationpath "/Applications/Install macOS Sierra.app" --agreetolicense --nointeraction
Just as you would with a normal OS upgrade item, you want the installs array to reflect the OS version (not the presence of the installer bundle in the /Applications folder), and you want to mark this as an Apple item. (Check the Munki wiki for more details about those two things.)
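As a sketch of what that might look like in the pkginfo (the version number here is just an example; double-check the exact keys against the Munki wiki):

```xml
<key>installs</key>
<array>
    <dict>
        <key>type</key>
        <string>plist</string>
        <key>path</key>
        <string>/System/Library/CoreServices/SystemVersion.plist</string>
        <key>ProductVersion</key>
        <string>10.12.4</string>
        <key>version_comparison_key</key>
        <string>ProductVersion</string>
    </dict>
</array>
<key>apple_item</key>
<true/>
```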

P.S. There is now a recommendation on the createOSXinstallPkg README to upgrade using 10.12.3 or investigate using startosinstall.

P.P.S. It's possible that, instead of my funky workaround with the preinstall_script, you could use the --pidtosignal option with Munki. Here's an example using JAMF.

P.P.P.S. Looks as if Greg Neagle has started working on integrating startosinstall into Munki "natively"—yes!

Students unsubscribing from mailing lists within Google Apps for Education

GAM solution

Apparently, there is a GAM solution to this, which is

gam update group nameofgroup who_can_leave_group ALL_MANAGERS_CAN_LEAVE

I did a quick test, and it appears to work. Users will still see the option to "unsubscribe," but if they click it, they'll remain subscribed anyway.
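If you have more than a handful of lists to change, a quick loop does it (the function name and group addresses here are made up):

```shell
#!/bin/bash

# Set a group so that "leaving" it is restricted to managers
lock_down_group() {
    gam update group "$1" who_can_leave_group ALL_MANAGERS_CAN_LEAVE
}

# e.g.:
# for group in students2018@example.com students2019@example.com; do
#     lock_down_group "$group"
# done
```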

Thanks to Zack McCauley for the tip.

What's the problem?

We used to have students come in and say they'd accidentally unsubscribed from an important school mailing list and needed to be resubscribed. This confused us, because we didn't think students could unsubscribe themselves. Turns out not only can they (click on the arrow at the top of the email and then select Unsubscribe from this mailing-list), but we contacted Google directly, and they confirmed there isn't a way for Google Apps admins to prevent users from unsubscribing (and apparently this issue goes back at least as far as 2012, if not longer).

I get that people in general should be able to unsubscribe from mailing lists, but this is a controlled environment. These are email addresses provided by the institution (school, organization, company) and so the institution should be able to decide what mailing lists its own employees and students are on, right? Well, apparently not.

Fortunately, we don't have a ton of people who make it a habit of unsubscribing themselves from mailing lists. Most of the time when students do unsubscribe, they soon realize they're missing important messages, and then they ask to resubscribe.

Nevertheless, it can be handy to find which students are unsubscribed from the lists they should be subscribed to.

GAM?

This is something I thought GAM should be able to handle, and I even got some suggested commands from another Mac admin. Unfortunately, I couldn't get gam to work. I followed all the instructions and still ended up with this error, no matter how much "coffee" I got and despite enabling access to the suggested scopes:

Are you ready to authorize GAM to manage G Suite user data and settings? (yes or no) Y
Great! Checking service account scopes.This will fail the first time. Follow the steps to authorize and retry. It can take a few minutes for scopes to PASS after they've been authorized in the admin console.
User: USERNAME@EMAIL.COM
Scope: https://mail.google.com/ FAIL
Scope: https://www.googleapis.com/auth/activity FAIL
Scope: https://www.googleapis.com/auth/calendar FAIL
Scope: https://www.googleapis.com/auth/drive FAIL
Scope: https://www.googleapis.com/auth/gmail.settings.basic FAIL
Scope: https://www.googleapis.com/auth/gmail.settings.sharing FAIL
Scope: https://www.googleapis.com/auth/plus.me FAIL
ERROR: Some scopes failed! Please go to:
https://admin.google.com/siprep.org/AdminHome?#OGX:ManageOauthClients
and grant Client name:
CLIENTNAMERIGHTHERE
Access to scopes:
https://mail.google.com/,
https://www.googleapis.com/auth/activity,
https://www.googleapis.com/auth/calendar,
https://www.googleapis.com/auth/drive,
https://www.googleapis.com/auth/gmail.settings.basic,
https://www.googleapis.com/auth/gmail.settings.sharing,
https://www.googleapis.com/auth/plus.me
Service account authorization failed. Confirm you entered the scopes correctly in the admin console. It can take a few minutes for scopes to PASS after they are entered in the admin console so if you're sure you entered them correctly, go grab a coffee and then hit Y to try again. Say N to skip admin authorization.

Google Apps Script

I started looking into Google Apps Script, which is pretty cool, except the documentation isn't comprehensive enough to cover all the steps involved in finding unsubscribed users. For example, there isn't any documentation on how to construct a query that has two parts to it. Frankly, I couldn't even find official Google documentation on how to include a query at all; I had to find that on Stack Overflow.

In addition to missing pieces in the documentation, Google Apps Script also suffers from arbitrarily imposed limits on what you can do. For example, you can't fetch more than 500 user records at once. Also, even though there's a function to check whether a user is a member of a group, if you run it in a loop over several hundred users at once, you'll get this error:

Service invoked too many times in a short time: groups read. Try Utilities.sleep(1000) between calls.

I played around with scripting putting users back into groups, but it gave uneven results (some of that was our own fault—we had a couple of users in the incorrect organizational units, but it also seemed to sometimes not actually put users in groups successfully).

The working script

So ultimately, I didn't script this in the "ideal" way (due to limitations put in place by Google), but the script basically works. For each grade level, it will find all the students who aren't suspended users, put them into an array, find all the members of the appropriate mailing list, and then take each of those members out of the original array. Anyone still left in the array is potentially unsubscribed from a list she should be subscribed to.

Finally, an email is sent to specified recipients to let them know either everything's okay or something has to be investigated. You can script this to run every day or every week and then make fixes yourself afterwards.
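Incidentally, if you can export the full student list and the group membership to plain text files (with GAM, say), the same set-difference idea works in the shell too; the function and file names here are made up:

```shell
#!/bin/bash

# Print addresses that appear in the full student list but not in the
# group membership list. Both inputs are sorted first, since comm
# requires sorted input.
find_unsubscribed() {
    comm -23 <(sort "$1") <(sort "$2")
}

# e.g.: find_unsubscribed all_students.txt group_members.txt
```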

Google Groups / mailing list quirk

I also found one quirk in Google Groups. There was one student I had trouble adding back to a group. It would seem I could add the student in the admin console, but then the student still wouldn't be added (no error message). It was only when I went to Google Groups itself (not the admin console) and tried to add the student directly that I saw a message saying I couldn't add the student because the student already had a pending request that needed approval (the student had asked to re-join the mailing list). Once I approved that request, the student was back in.

Why you should use FileVault personal recovery keys instead of institutional recovery keys

In my previous blog posts on FileVault, I talked about or showed how to use an institutional recovery key for FileVault encryption:
Enabling FileVault Encryption for Client Macs
Setting up deferred FileVault encryption
Using a FileVault institutional recovery key to unlock an encrypted disk

But in exploring FileVault further, I've found it's much better to use personal recovery keys instead of a single institutional recovery key, and it's not for the reason you might think.

IRK not necessarily less secure than PRK

Yes, from a security standpoint, you could make the case that an institutional recovery key creates a single breach point (someone obtains that one recovery key and can thus decrypt all your institution's machines), but I don't think this necessarily makes personal recovery keys more secure. First of all, the personal recovery key by itself can unlock a machine, whereas the institutional recovery key has to be used in combination with a password to unlock the keychain. Secondly, most likely you're storing your personal recovery keys all in one place—it may be a secure place, but it's also a single breach point. If someone somehow accesses that one storage location (database, spreadsheet, whatever you're using to store the personal recovery keys), that person has access to the recovery keys for all the machines.

I suppose you could scatter the personal recovery keys across multiple storage locations. There is always an artistic (not scientific) balance between security and convenience, so how you store things is up to you. The point, though, is that an IRK is not necessarily less secure than a PRK.

IRK is less useful than a PRK, though

As I was rolling out encryption to our fleet using an institutional recovery key, I started to realize through testing (fortunately not through an actual emergency) how limited in functionality the institutional recovery key is compared to the personal recovery key.

First of all, unless you are physically in front of the machine or using ARD to remote into a virtual session, you cannot enable another FileVault user without storing that user's password in plain text. If you try to do so via SSH and the command line, you'll be prompted for the password of an FV-enabled user or for the personal recovery key, so having the IRK doesn't help there.

That's not really the worst part. The worst part is that, as far as I can tell (based on Google searches, asking other Mac admins, and just trial and error), there is no way to reset a forgotten user password with just the institutional recovery key. You can unlock the encrypted volume and save the data, but you can't just say "Reset this user's password." You can, as a horribly long workaround, decrypt the drive, log in as another admin user, reset the other user's forgotten password, wait for the decryption to finish completely, and then re-encrypt. That can take a really long time.

But if you just use personal recovery keys, you can have the user try to log in three times, and she'll be prompted to enter the recovery key to reset the forgotten password, and then be prompted to enter a new password.

Wesley Whetstone has created a neat little pkg that can generate/regenerate personal recovery keys: fde-rekey.

Once you've switched that over, you can also remove the institutional recovery key (yes, it's possible to have both an IRK and a PRK). If you're using Munki, I wrote a nopkg that will remove the IRK after fde-rekey is installed.

Automating an AutoPkg Munki import when vendors don’t package installers properly

You may have, when using (or creating) a .munki AutoPkg recipe, come across a situation in which you run it:

autopkg run -v NAMEOFITEM.munki
and then get something back like this:
Item NAMEOFITEM already exists in the munki repo as OLDNAMEOFITEM.
even though you're sure the item is newer than the one in the Munki repo.

That has to do with the find_matching_item_in_repo() function the MunkiImporter processor uses to determine whether the item exists already or not.

It compares a number of things between the to-be-imported item and what's already in the Munki repo—installer item hash, installs, receipts, files and paths, etc. If any of those matches up, MunkiImporter considers it a match.

So, for example, if you have BADLYPACKAGEDBYVENDOR 3.7.3, which is an update for BADLYPACKAGEDBYVENDOR 3.7.2, but the receipts for both are just 1 (yes, 1, and not 3.7.2 or 3.7.3), the MunkiImporter processor will see the two as the same and not do "another" import of the same item. Likewise, if the version in the app bundle is 3.7 and not 3.7.2 or 3.7.3, the MunkiImporter processor will see them as the same. I've even run into situations in which a vendor artificially ups the version number but the "new" package or .app bundle is exactly the same. In that case, the installer hash will be the same, and the MunkiImporter processor will see them as the same.

So what do you do, apart from complain to the vendor and pray it fixes the problem?

There may not be anything you can do apart from force an import. You may find a convoluted workaround, though. For LockDown Browser, I had to create an installs array based on the executable and also essentially override the useless receipts array. You might have to do something similar, depending on how bad the vendor package is.
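As a rough illustration (the path and checksum are placeholders, not the real LockDown Browser values), an installs item keyed on the executable might look like this:

```xml
<key>installs</key>
<array>
    <dict>
        <key>type</key>
        <string>file</string>
        <key>path</key>
        <string>/Applications/ExampleApp.app/Contents/MacOS/ExampleApp</string>
        <key>md5checksum</key>
        <string>0123456789abcdef0123456789abcdef</string>
    </dict>
</array>
```

The idea being that the executable's checksum changes with every release, so successive versions no longer look identical to MunkiImporter.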

Using an Outset boot-every script to add default applications via Munki

In Bash script to add optional installs for Munki, I introduced a script that uses PlistBuddy to add optional install items to the client machine's SelfServeManifest.

I thought at first I could use that as a boot-once script for Outset, but it seemed the script ran too early (at actual first boot) and then didn't actually write the values it should.

As a workaround, I've put the script in as an Outset boot-every script, with a check to see if one of the optional items is already in the Munki install log. Here's an example:

#!/bin/bash

# See if this has ever run before. We have to check the log, because Outset
# will delete the script once run, and we don't want this to re-run if we
# update the pkg version.
alreadyRun=$(grep "Firefox" "/Library/Managed Installs/Logs/Install.log")

if [ -z "$alreadyRun" ]; then

    # Self-serve manifest location
    manifestLocation='/Library/Managed Installs/manifests/SelfServeManifest'

    # PlistBuddy full path
    plistBuddy='/usr/libexec/PlistBuddy'

    # "Optional" default software to add
    optionalDefaults=("Firefox"
        "GoogleChrome"
        "MSExcel2016"
        "MSWord2016"
        "MSPowerPoint2016"
    )

    # Check the manifest exists. If it doesn't, create it with an empty
    # managed_installs array.
    if [ ! -f "$manifestLocation" ]; then
        sudo "$plistBuddy" -c "Add :managed_installs array" "$manifestLocation"
    fi

    for packageName in "${optionalDefaults[@]}"; do
        # Check it's not already in there
        alreadyExists=$("$plistBuddy" -c "Print :managed_installs" "$manifestLocation" | grep "$packageName")

        # Single-quote expansion of variables gets messy in bash, so
        # pre-wrap the package name in single quotes
        alteredPackageName="'""$packageName""'"

        if [ -z "$alreadyExists" ]; then
            sudo "$plistBuddy" -c "Add :managed_installs: string $alteredPackageName" "$manifestLocation"
        fi
    done

fi
So this basically checks for Firefox: if Firefox (one of the default optional installs) is already in the install log, the script won't run again.

When an AutoPkg recipe fails to import a .dmg

If you ever have an AutoPkg recipe that seems to be working fine for weeks or even months and then suddenly fails with a message like this one:

Error in local.munki.FileZilla: Processor: MunkiImporter: Error: creating pkginfo for
/Users/USERNAME/Library/AutoPkg/Cache/local.munki.FileZilla/FileZilla.dmg failed: Could not mount
/Users/USERNAME/Library/AutoPkg/Cache/local.munki.FileZilla/FileZilla.dmg!
(it doesn't have to be FileZilla—it could be anything), you may not see the .dmg mounted in Disk Utility (or even in diskutil list), but you can check whether it's a phantom mount by seeing if it shows up in the output of
hdiutil info
If it does show up there, then run
hdiutil detach /dev/diskFILLINLOCATION
and then re-run the recipe. It should be fine after that.

Acknowledgements: Thanks to Eric Holtam for the tip—just documenting it here for anyone else who may benefit from it.

Using DVD Flick to create DVDs from video files

DVD Flick is an open source Windows program that allows you to burn various video file types to DVD (as an actual DVD, not as a data file).

It's fairly simple to use, but there are a couple of weird nuances:

  1. By default, it doesn't actually burn to disc when you create the DVD. It creates a DVD-ready set of files in a folder on your computer. In order to actually make the DVD, you have to change the settings in your project by going to Project Settings > Burning > Burn project to disc.
  2. Also by default, there is a weird audio delay (similar to bad dubbing on Bruce Lee movies from the 70s). In order to get rid of that delay, you have to go to Edit title... > Audio tracks > Edit > Ignore audio delay for this track.