Building with nightly Swift toolchains on macOS

The Swift website provides nightly builds of the Swift compiler (called toolchains) for download. Building with a nightly compiler can be useful if you want to check if a bug has already been fixed on main, or if you want to experiment with upcoming language features such as Embedded Swift, as I’ve been doing lately.

A toolchain is distributed as a .pkg installer that installs itself into /Library/Developer/Toolchains (or the equivalent path in your home directory). After installation, you have several options to select the toolchain you want to build with:

In Xcode

In Xcode, select the toolchain from the main menu (Xcode > Toolchains), then build and/or run your code normally.

Not all Xcode features work with a custom toolchain. For example, playgrounds don’t work, and Xcode will always use its built-in copy of the Swift Package Manager, so you won’t be able to use unreleased SwiftPM features in this way. Also, Apple won’t accept apps built with a non-standard toolchain for submission to the App Store.

On the command line

When building on the command line there are multiple options, depending on your preferences and what tool you want to use.

The TOOLCHAINS environment variable

All of the various Swift build tools respect the TOOLCHAINS environment variable. This should be set to the desired toolchain’s bundle ID, which you can find in the Info.plist file in the toolchain’s directory.

Example (I’m using a nightly toolchain from 2024-03-03 here):

# My normal Swift version is 5.10
$ swift --version
swift-driver version: 1.90.11.1 Apple Swift version 5.10 (swiftlang-5.10.0.13 clang-1500.3.9.4)

# Make sure xcode-select points to Xcode, not to /Library/Developer/CommandLineTools
# The Command Line Tools will ignore the TOOLCHAINS variable.
$ xcode-select --print-path
/Applications/Xcode.app/Contents/Developer

# The nightly toolchain is 6.0-dev
$ export TOOLCHAINS=org.swift.59202403031a
$ swift --version
Apple Swift version 6.0-dev (LLVM 0c7823cab15dec9, Swift 0cc05909334c6f7)

Toolchain name vs. bundle ID

I think the TOOLCHAINS variable is also supposed to accept the toolchain’s name instead of the bundle ID, but this doesn’t work reliably for me. I tried passing:

  • the DisplayName from Info.plist (“Swift Development Snapshot 2024-03-03 (a)”),
  • the ShortDisplayName (“Swift Development Snapshot”; not unique if you have more than one toolchain installed!),
  • the directory name, both with and without the .xctoolchain suffix,

but none of them worked reliably, especially if you have multiple toolchains installed.

In my limited testing, it seems that Swift picks the first toolchain that matches the short name prefix (“Swift Development Snapshot”) and ignores the long name components. For example, when I select “Swift Development Snapshot 2024-03-03 (a)”, Swift picks swift-DEVELOPMENT-SNAPSHOT-2024-01-30-a, presumably because that’s the “first” one (in alphabetical order) I have installed.

My advice: stick to the bundle ID; it works reliably. Here’s a useful command to find the bundle ID of the latest toolchain you have installed (you may have to adjust the path if you install your toolchains in ~/Library instead of /Library):

$ plutil -extract CFBundleIdentifier raw /Library/Developer/Toolchains/swift-latest.xctoolchain/Info.plist
org.swift.59202403031a

# Set the toolchain to the latest installed:
export TOOLCHAINS=$(plutil -extract CFBundleIdentifier raw /Library/Developer/Toolchains/swift-latest.xctoolchain/Info.plist)

xcrun and xcodebuild

xcrun and xcodebuild respect the TOOLCHAINS variable too. As an alternative, they also provide an equivalent command line parameter named --toolchain. The parameter has the same semantics as the environment variable: you pass the toolchain’s bundle ID. Example:

$ xcrun --toolchain org.swift.59202403031a --find swiftc
/Library/Developer/Toolchains/swift-DEVELOPMENT-SNAPSHOT-2024-03-03-a.xctoolchain/usr/bin/swiftc

Swift Package Manager

SwiftPM also respects the TOOLCHAINS variable, and it has a --toolchains parameter as well, but this one expects the path to the toolchain, not its bundle ID. Example:

$ swift build --toolchain /Library/Developer/Toolchains/swift-latest.xctoolchain

Missing toolchains are (silently) ignored

Another thing to be aware of: if you specify a toolchain that isn’t installed (e.g. because of a typo or because you’re trying to run a script that was developed in a different environment), none of the tools will fail:

  • swift, xcrun, and xcodebuild silently ignore the toolchain setting and use the default Swift toolchain (set via xcode-select).
  • SwiftPM silently ignores a missing toolchain set via TOOLCHAINS. If you pass an invalid directory to the --toolchains parameter, it at least prints a warning before it continues building with the default toolchain.

I don’t like this. I’d much rather get an error if the build tool can’t find the toolchain I told it to use. It’s especially dangerous in scripts.
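Until the tools change, a build script can guard against this itself. Here’s a minimal Bash sketch (the `require_toolchain` helper is my own invention; adjust the path if your toolchains live in ~/Library):

```shell
#!/bin/bash
set -euo pipefail

# Fail fast if the toolchain directory doesn't exist, instead of
# letting the Swift tools silently fall back to the default toolchain.
require_toolchain() {
  local dir="$1"
  if [[ ! -d "$dir" ]]; then
    echo "error: toolchain not found at $dir" >&2
    return 1
  fi
}

# Usage in a build script (macOS):
#   toolchain="/Library/Developer/Toolchains/swift-latest.xctoolchain"
#   require_toolchain "$toolchain"
#   export TOOLCHAINS=$(plutil -extract CFBundleIdentifier raw "$toolchain/Info.plist")
#   swift build
```

This turns a silent fallback into a hard error, which is exactly what you want in CI.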

How the Swift compiler knows that DispatchQueue.main implies @MainActor

You may have noticed that the Swift compiler automatically treats the closure of a DispatchQueue.main.async call as @MainActor. In other words, we can call a main-actor-isolated function in the closure:

import Dispatch

@MainActor func mainActorFunc() { }

DispatchQueue.main.async {
    // The compiler lets us call this because
    // it knows we're on the main actor.
    mainActorFunc()
}

This behavior is welcome and very convenient, but it bugs me that it’s so hidden. As far as I know it isn’t documented, and neither Xcode nor any other editor/IDE I’ve used do a good job of showing me the actor context a function or closure will run in, even though the compiler has this information. I’ve written about a similar case before in Where View.task gets its main-actor isolation from, where Swift/Xcode hide essential information from the programmer by not showing certain attributes in declarations or the documentation.

It’s a syntax check

So how is the magic behavior for DispatchQueue.main.async implemented? It can’t be an attribute or other annotation on the closure parameter of the DispatchQueue.async method because the actual queue instance isn’t known at that point.

A bit of experimentation reveals that it is in fact a relatively coarse source-code-based check that singles out invocations on DispatchQueue.main, in exactly that spelling. For example, the following variations do produce warnings/errors (in Swift 5.10/6.0, respectively), even though they are just as safe as the previous code snippet. This is because we aren’t using the “correct” DispatchQueue.main.async spelling:

let queue = DispatchQueue.main
queue.async {
    // Error: Call to main actor-isolated global function
    // 'mainActorFunc()' in a synchronous nonisolated context
    mainActorFunc() // ❌
}

typealias DP = DispatchQueue
DP.main.async {
    // Error: Call to main actor-isolated global function
    // 'mainActorFunc()' in a synchronous nonisolated context
    mainActorFunc() // ❌
}

I found the place in the Swift compiler source code where the check happens. In the compiler’s semantic analysis stage (called “Sema”; this is the phase right after parsing), the type checker calls a function named adjustFunctionTypeForConcurrency, passing in a Boolean it obtained from isMainDispatchQueueMember, which returns true if the source code literally references DispatchQueue.main. In that case, the type checker adds the @_unsafeMainActor attribute to the function type. Good to know.

Fun fact: since this is a purely syntax-based check, if you define your own type named DispatchQueue, give it a static main property and a function named async that takes a closure, the compiler will apply the same “fix” to it. This is NOT recommended:

// Define our own `DispatchQueue.main.async`
struct DispatchQueue {
    static let main: Self = .init()
    func async(_ work: @escaping () -> Void) {}
}

// This calls our own DispatchQueue.main.async
DispatchQueue.main.async {
    // No error! Compiler has inserted `@_unsafeMainActor`
    mainActorFunc()
}

Perplexity through obscurity

I love that this automatic @MainActor inference for DispatchQueue.main exists. I do not love that it’s another piece of hidden, implicit behavior that makes Swift concurrency harder to learn. I want to see all the @_unsafeMainActor and @_unsafeInheritExecutor and @_inheritActorContext annotations! I believe Apple is doing the community a disservice by hiding these in Xcode.

The biggest benefit of Swift’s concurrency model over what we had before is that so many things are statically known at compile time. It’s a shame that the compiler knows on which executor a particular line of code will run, but none of the tools seem to be able to show me this. Instead, I’m forced to hunt for @MainActor annotations and hidden attributes in superclasses, protocols, etc. This feels especially problematic during the Swift 5-to-6 transition phase we’re currently in where it’s so easy to misuse concurrency and not get a compiler error (and sometimes not even a warning if you forget to enable strict concurrency checking).

The most impactful change Apple can make to make Swift concurrency less confusing is to show the inferred executor context for each line of code in Xcode. Make it really obvious what code runs on the main actor, some other actor, or the global cooperative pool. Use colors or whatnot! (Other Swift IDEs should do this too, of course. I’m just picking on Xcode because Apple has the most leverage.)

Keyboard shortcuts for Export Unmodified Original in Photos for Mac

Problem

  1. The Photos app on macOS doesn’t provide a keyboard shortcut for the Export Unmodified Original command.
  2. macOS allows you to add your own app-specific keyboard shortcuts via System Settings > Keyboard > Keyboard Shortcuts > App Shortcuts. You need to enter the exact spelling of the menu item you want to invoke.
  3. Photos renames the command depending on what’s selected: “Export Unmodified Original For 1 Photo” turns into “… Originals For 2 Videos” turns into “… For 3 Items” (for mixed selections), and so on. Argh!
  4. The System Settings UI for assigning keyboard shortcuts is extremely tedious to use if you want to add more than one or two shortcuts.
Screenshot of the File > Export submenu of the Photos app on macOS. The selected menu command is called 'Export Unmodified Originals For 16 Items'
Dynamically renaming menu commands is cute, but it becomes a problem when you want to assign keyboard shortcuts.

Solution: shell script

Here’s a Bash script1 that assigns Ctrl + Opt + Cmd + E to Export Unmodified Originals for up to 20 selected items:

#!/bin/bash

# Assigns a keyboard shortcut to the Export Unmodified Originals
# menu command in Photos.app on macOS.

# @ = Command
# ^ = Control
# ~ = Option
# $ = Shift
shortcut='@~^e'

# Set shortcut for 1 selected item
echo "Setting shortcut for 1 item"
defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Original For 1 Photo" "$shortcut"
defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Original For 1 Video" "$shortcut"

# Set shortcut for 2-20 selected items
objects=(Photos Videos Items)
for i in {2..20}
do
  echo "Setting shortcut for $i items"
  for object in "${objects[@]}"
  do
    defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Originals For $i $object" "$shortcut"
  done
done

# Use this command to verify the result:
# defaults read com.apple.Photos NSUserKeyEquivalents

The script is also available on GitHub.

Usage:

  1. Quit Photos.app.
  2. Run the script. Feel free to change the key combo or count higher than 20.
  3. Open Photos.app.

Note: There’s a bug in Photos.app on macOS 13.2 (and at least some earlier versions). Custom keyboard shortcuts don’t work until you’ve opened the menu of the respective command at least once. So you must manually open the File > Export submenu once before the shortcut will work. (For Apple folks: FB11967573.)

  1. I still write Bash scripts because Shellcheck doesn’t support Zsh. ↩︎

Swift Evolution proposals in Alfred

I rarely participate actively in the Swift Evolution process, but I frequently refer to evolution proposals for my work, often multiple times per week. The proposals aren’t always easy to read, but they’re the most comprehensive (and sometimes only) documentation we have for many Swift features.

For years, my tool of choice for searching Swift Evolution proposals has been Karoy Lorentey’s swift-evolution workflow for Alfred.

The workflow broke recently due to data format changes. Karoy was kind enough to add me as a maintainer so I could fix it.

The new version 2.1.0 is now available on GitHub. Download the .alfredworkflow file and double-click to install. Besides the fix, the update has a few other improvements:

  • The proposal title is now displayed more prominently.
  • New actions to copy the proposal title (hold down Command) or copy it as a Markdown link (hold down Shift + Command).
  • The script forwards the main metadata of the selected proposal (id, title, status, URL) to Alfred. If you want to extend the workflow with your own actions, you can refer to these variables.

Pattern matching on error codes

Foundation overloads the pattern matching operator ~= to enable matching against error codes in catch clauses.

catch clauses in Swift support pattern matching, using the same patterns you’d use in a case clause inside a switch or in an if case … statement. For example, to handle a file-not-found error you might write:

import Foundation

do {
    let fileURL = URL(filePath: "/abc") // non-existent file
    let data = try Data(contentsOf: fileURL)
} catch let error as CocoaError where error.code == .fileReadNoSuchFile {
    print("File doesn't exist")
} catch {
    print("Other error: \(error)")
}

This binds a value of type CocoaError to the variable error and then uses a where clause to check the specific error code.

However, if you don’t need access to the complete error instance, there’s a shorter way to write this, matching directly against the error code:

      let data = try Data(contentsOf: fileURL)
- } catch let error as CocoaError where error.code == .fileReadNoSuchFile {
+ } catch CocoaError.fileReadNoSuchFile {
      print("File doesn't exist")

Foundation overloads ~=

I was wondering why this shorter syntax works. Is there some special compiler magic for pattern matching against error codes of NSError instances? Turns out: no, the answer is much simpler. Foundation includes an overload for the pattern matching operator ~= that matches error values against error codes.1

The implementation looks something like this:

public func ~= (code: CocoaError.Code, error: any Error) -> Bool {
    guard let error = error as? CocoaError else { return false }
    return error.code == code
}

The actual code in Foundation is a little more complex because it goes through a hidden protocol named _ErrorCodeProtocol, but that’s not important. You can check out the code in the Foundation repository: Darwin version, swift-corelibs-foundation version.

This matching on error codes is available for CocoaError, URLError, POSIXError, and MachError (and possibly more types in other Apple frameworks, I haven’t checked).
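You can apply the same trick to your own error types by overloading ~= yourself. Here’s a minimal sketch (`NetworkError` and its code enum are made-up names for illustration):

```swift
// A made-up error type with an embedded error code, for illustration.
struct NetworkError: Error {
    enum Code { case timeout, offline }
    var code: Code
}

// Overload the pattern matching operator so that a `NetworkError.Code`
// pattern matches any error whose code is equal to it.
func ~= (code: NetworkError.Code, error: any Error) -> Bool {
    guard let error = error as? NetworkError else { return false }
    return error.code == code
}

func load() throws { throw NetworkError(code: .timeout) }

do {
    try load()
} catch NetworkError.Code.timeout {
    print("Request timed out")
} catch {
    print("Other error: \(error)")
}
```

With the overload in place, catch NetworkError.Code.timeout reads just like the CocoaError example above.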

  1. I wrote about the ~= operator before, way back in 2015(!): Pattern matching in Swift and More pattern matching examples↩︎

You should watch Double Fine Adventure

I know I’m almost a decade late to this party, but I’m probably not the only one, so here goes.

Double Fine Adventure was a wildly successful 2012 Kickstarter project to crowdfund the development of a point-and-click adventure game and, crucially, to document its development on video. The resulting game Broken Age was eventually released in two parts in 2014 and 2015. Broken Age is a beautiful game and I recommend you try it. It’s available for lots of platforms and is pretty cheap (10–15 euros/dollars or less). I played it on the Nintendo Switch, which worked very well.

Screenshot from Broken Age. A tall girl in a pink dress is talking to a shorter girl in a bird costume. They are standing on a cloud.
Broken Age.

But the real gem to me was watching the 12.5-hour documentary on YouTube. A video production team followed the entire three-year development process from start to finish. It provides a refreshingly candid and transparent insight into “how the sausage is made”, including sensitive topics such as financial problems, layoffs, and long work hours. Throughout all the ups and downs there’s a wonderful sense of fun and camaraderie among the team at Double Fine, which made watching the documentary even more enjoyable to me than playing Broken Age. You can tell these people love working with each other. I highly recommend taking a look if you find this mildly interesting.

Four people sitting at a conference table in an office. The wall in the background is covered in pencil drawings.
The Double Fine Adventure documentary.

The first major game spoilers don’t come until episode 15, so you can safely watch most of the documentary before playing the game (and this is how the original Kickstarter backers experienced it). However, I think it’s even more interesting to play the game first, or to experience both side-by-side. My suggestion: watch two or three episodes of the documentary. If you like it, start playing Broken Age alongside it.

How the relative size modifier interacts with stack views

And what it can teach us about SwiftUI’s stack layout algorithm

I have one more thing to say on the relative sizing view modifier from my previous post, Working with percentages in SwiftUI layout. I’m assuming you’ve read that article. The following is good to know if you want to use the modifier in your own code, but I hope you’ll also learn some general tidbits about SwiftUI’s layout algorithm for HStacks and VStacks.

Using relative sizing inside a stack view

Let’s apply the relativeProposed modifier to one of the subviews of an HStack:

HStack(spacing: 10) {
    Color.blue
        .relativeProposed(width: 0.5)
    Color.green
    Color.yellow
}
.border(.primary)
.frame(height: 80)

What do you expect to happen here? Will the blue view take up 50 % of the available width? The answer is no. In fact, the blue rectangle becomes narrower than the others:

This is because the HStack only proposes a proportion of its available width to each of its children. Here, the stack proposes one third of the available space to its first child, the relative sizing modifier. The modifier then halves this value, resulting in one sixth of the total width (minus spacing) for the blue color. The other two rectangles then become wider than one third because the first child view didn’t use up its full proposed width.

Update May 1, 2024: SwiftUI’s built-in containerRelativeFrame modifier (introduced after I wrote my modifier) doesn’t exhibit this behavior because it uses the size of the nearest container view as its reference, and stack views don’t count as containers in this context (which I find somewhat unintuitive, but that’s the way it is).

Order matters

Now let’s move the modifier to the green color in the middle:

HStack(spacing: 10) {
    Color.blue
    Color.green
        .relativeProposed(width: 0.5)
    Color.yellow
}

Naively, I’d expect an equivalent result: the green rectangle should become 100 pt wide, and blue and yellow should be 250 pt each. But that’s not what happens — the yellow view ends up being wider than the blue one:

I found this unintuitive at first, but it makes sense if you understand that the HStack processes its children in sequence:

  1. The HStack proposes one third of its available space to the blue view: (620 – 20) / 3 = 200. The blue view accepts the proposal and becomes 200 pt wide.

  2. Next up is the relativeProposed modifier. The HStack divides the remaining space by the number of remaining subviews and proposes that: 400 / 2 = 200. Our modifier halves this proposal and proposes 100 pt to the green view, which accepts it. The modifier in turn adopts the size of its child and returns 100 pt to the HStack.

  3. Since the second subview used less space than proposed, the HStack now has 300 pt left over to propose to its final child, the yellow color.

Important: the order in which the stack lays out its subviews happens to be from left to right in this example, but that’s not always the case. In general, HStacks and VStacks first group their subviews by layout priority (more on that below), and then order the views inside each group by flexibility such that the least flexible views are laid out first. For more on this, see How an HStack Lays out Its Children by Chris Eidhof. The views in our example are all equally flexible (they all can become any width between 0 and infinity), so the stack processes them in their “natural” order.

Leftover space isn’t redistributed

By now you may be able to guess how the layout turns out when we move our view modifier to the last child view:

HStack(spacing: 10) {
    Color.blue
    Color.green
    Color.yellow
        .relativeProposed(width: 0.5)
}
  • Blue and green each receive one third of the available width and become 200 pt wide. No surprises there.

  • When the HStack reaches the relativeProposed modifier, it has 200 pt left to distribute. Again, the modifier and the yellow rectangle only use half of this amount.

The end result is that the HStack ends up with 100 pt left over. The process stops here — the HStack does not start over in an attempt to find a “better” solution. The stack makes itself just big enough to contain its subviews (= 520 pt incl. spacing) and reports that size to its parent.

Layout priority

We can use the layoutPriority view modifier to influence how stacks and other containers lay out their children. Let’s give the subview with the relative sizing modifier a higher layout priority (the default priority is 0):

HStack(spacing: 10) {
    Color.blue
    Color.green
    Color.yellow
        .relativeProposed(width: 0.5)
        .layoutPriority(1)
}

This results in a layout where the yellow rectangle actually takes up 50 % of the available space:

Explanation:

  1. The HStack groups its children by layout priority and then processes each group in sequence, from highest to lowest priority. Each group is proposed the entire remaining space.

  2. The first layout group only contains a single view, our relative sizing modifier with the yellow color. The HStack proposes the entire available space (minus spacing) = 600 pt. Our modifier halves the proposal, resulting in 300 pt for the yellow view.

  3. There are 300 pt left over for the second layout group. These are distributed equally among the two children because each subview accepts the proposed size.

Conclusion

The code I used to generate the images in this article is available on GitHub. I only looked at HStacks here, but VStacks work in exactly the same way for the vertical dimension.

SwiftUI’s layout algorithm always follows this basic pattern of proposed sizes and responses. Each of the built-in “primitive” views (e.g. fixed and flexible frames, stacks, Text, Image, Spacer, shapes, padding, background, overlay) has a well-defined (if not always well-documented) layout behavior that can be expressed as a function (ProposedViewSize) -> CGSize. You’ll need to learn these behaviors to work effectively with SwiftUI.

A concrete lesson I’m taking away from this analysis: HStack and VStack don’t treat layout as an optimization problem that tries to find the optimal solution for a set of constraints (autolayout style). Rather, they sort their children in a particular way and then do a single proposal-and-response pass over them. If there’s space left over at the end, or if the available space isn’t enough, then so be it.

Working with percentages in SwiftUI layout

SwiftUI’s layout primitives generally don’t provide relative sizing options, e.g. “make this view 50 % of the width of its container”. Let’s build our own!

Use case: chat bubbles

Consider this chat conversation view as an example of what I want to build. The chat bubbles always remain 80 % as wide as their container as the view is resized:

The chat bubbles should become 80 % as wide as their container.

Building a proportional sizing modifier

1. The Layout

We can build our own relative sizing modifier on top of the Layout protocol. The layout multiplies its own proposed size (which it receives from its parent view) with the given factors for width and height. It then proposes this modified size to its only subview. Here’s the implementation (the full code, including the demo app, is on GitHub):

/// A custom layout that proposes a percentage of its
/// received proposed size to its subview.
///
/// - Precondition: must contain exactly one subview.
fileprivate struct RelativeSizeLayout: Layout {
    var relativeWidth: Double
    var relativeHeight: Double

    func sizeThatFits(
        proposal: ProposedViewSize, 
        subviews: Subviews, 
        cache: inout ()
    ) -> CGSize {
        assert(subviews.count == 1, "expects a single subview")
        let resizedProposal = ProposedViewSize(
            width: proposal.width.map { $0 * relativeWidth },
            height: proposal.height.map { $0 * relativeHeight }
        )
        return subviews[0].sizeThatFits(resizedProposal)
    }

    func placeSubviews(
        in bounds: CGRect, 
        proposal: ProposedViewSize, 
        subviews: Subviews, 
        cache: inout ()
    ) {
        assert(subviews.count == 1, "expects a single subview")
        let resizedProposal = ProposedViewSize(
            width: proposal.width.map { $0 * relativeWidth },
            height: proposal.height.map { $0 * relativeHeight }
        )
        subviews[0].place(
            at: CGPoint(x: bounds.midX, y: bounds.midY), 
            anchor: .center, 
            proposal: resizedProposal
        )
    }
}

Notes:

  • I made the type private because I want to control how it can be used. This is important for maintaining the assumption that the layout only ever has a single subview (which makes the math much simpler).

  • Proposed sizes in SwiftUI can be nil or infinity in either dimension. Our layout passes these special values through unchanged (infinity times a percentage is still infinity). I’ll discuss below what implications this has for users of the layout.

2. The View extension

Next, we’ll add an extension on View that uses the layout we just wrote. This becomes our public API:

extension View {
    /// Proposes a percentage of its received proposed size to `self`.
    public func relativeProposed(width: Double = 1, height: Double = 1) -> some View {
        RelativeSizeLayout(relativeWidth: width, relativeHeight: height) {
            // Wrap content view in a container to make sure the layout only
            // receives a single subview. Because views are lists!
            VStack { // alternatively: `_UnaryViewAdaptor(self)`
                self
            }
        }
    }
}

Notes:

  • I decided to go with a verbose name, relativeProposed(width:height:), to make the semantics clear: we’re changing the proposed size for the subview, which won’t always result in a different actual size. More on this below.

  • We’re wrapping the subview (self in the code above) in a VStack. This might seem redundant, but it’s necessary to make sure the layout only receives a single element in its subviews collection. See Chris Eidhof’s SwiftUI Views are Lists for an explanation.

Usage

The layout code for a single chat bubble in the demo video above looks like this:

let alignment: Alignment = message.sender == .me ? .trailing : .leading
chatBubble
    .relativeProposed(width: 0.8)
    .frame(maxWidth: .infinity, alignment: alignment)

The outermost flexible frame with maxWidth: .infinity is responsible for positioning the chat bubble with leading or trailing alignment, depending on who’s speaking.

You can even add another frame that limits the width to a maximum, say 400 points:

let alignment: Alignment = message.sender == .me ? .trailing : .leading
chatBubble
    .frame(maxWidth: 400)
    .relativeProposed(width: 0.8)
    .frame(maxWidth: .infinity, alignment: alignment)

Here, our relative sizing modifier only has an effect as the bubbles become narrower than 400 points. In a wider window the width-limiting frame takes precedence. I like how composable this is!

80 % won’t always result in 80 %

If you watch the debugging guides I’m drawing in the video above, you’ll notice that the relative sizing modifier never reports a width greater than 400, even if the window is wide enough:

A Mac window showing a mockup of a chat conversation with bubbles for the speakers. Overlaid on the chat bubbles are debugging views showing the widths of different components. The total container width is 753. The relW=80% debugging guide shows a width of 400.
The relative sizing modifier accepts the actual size of its subview as its own size.

This is because our layout only adjusts the proposed size for its subview but then accepts the subview’s actual size as its own. Since SwiftUI views always choose their own size (which the parent can’t override), the subview is free to ignore our proposal. In this example, the layout’s subview is the frame(maxWidth: 400) view, which sets its own width to the proposed width or 400, whichever is smaller.

Understanding the modifier’s behavior

Proposed size ≠ actual size

It’s important to internalize that the modifier works on the basis of proposed sizes. This means it depends on the cooperation of its subview to achieve its goal: views that ignore their proposed size will be unaffected by our modifier. I don’t find this particularly problematic because SwiftUI’s entire layout system works like this. Ultimately, SwiftUI views always determine their own size, so you can’t write a modifier that “does the right thing” (whatever that is) for an arbitrary subview hierarchy.

nil and infinity

I already mentioned another thing to be aware of: if the parent of the relative sizing modifier proposes nil or .infinity, the modifier will pass the proposal through unchanged. Again, I don’t think this is particularly bad, but it’s something to be aware of.

Proposing nil is SwiftUI’s way of telling a view to become its ideal size (fixedSize does this). Would you ever want to tell a view to become, say, 50 % of its ideal width? I’m not sure. Maybe it’d make sense for resizable images and similar views.

By the way, you could modify the layout to do something like this:

  1. If the proposal is nil or infinity, forward it to the subview unchanged.
  2. Take the reported size of the subview as the new basis and apply the scaling factors to that size (this still breaks down if the child returns infinity).
  3. Now propose the scaled size to the subview. The subview might respond with a different actual size.
  4. Return this latest reported size as your own size.

This process of sending multiple proposals to child views is called probing. Lots of built-in container views do this too, e.g. VStack and HStack.
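For illustration, here’s a sketch of what a probing sizeThatFits following the four steps above might look like. This is my own untested variation, not the modifier’s actual implementation; relativeWidth and relativeHeight are the layout’s stored properties from earlier:

```swift
// Sketch only: a probing variant of RelativeSizeLayout.sizeThatFits.
func sizeThatFits(
    proposal: ProposedViewSize,
    subviews: Subviews,
    cache: inout ()
) -> CGSize {
    assert(subviews.count == 1, "expects a single subview")
    func isSpecial(_ value: CGFloat?) -> Bool {
        value == nil || value == .infinity
    }
    guard isSpecial(proposal.width) || isSpecial(proposal.height) else {
        // Finite proposal: scale it directly, as before.
        let resized = ProposedViewSize(
            width: proposal.width.map { $0 * relativeWidth },
            height: proposal.height.map { $0 * relativeHeight }
        )
        return subviews[0].sizeThatFits(resized)
    }
    // 1. Forward the nil/infinite proposal unchanged and
    // 2. take the subview's reported size as the new basis.
    let base = subviews[0].sizeThatFits(proposal)
    // 3. Apply the scaling factors and propose the scaled size.
    let scaled = ProposedViewSize(
        width: base.width * relativeWidth,
        height: base.height * relativeHeight
    )
    // 4. The subview's latest reported size becomes our own size.
    return subviews[0].sizeThatFits(scaled)
}
```

Note that this sends up to two proposals per layout pass, so it’s slightly more expensive than the single-pass version in the article.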

Nesting in other container views

The relative sizing modifier interacts in an interesting way with stack views and other containers that distribute the available space among their children. I thought this was such an interesting topic that I wrote a separate article about it: How the relative size modifier interacts with stack views.

The code

The complete code is available in a Gist on GitHub.

Digression: Proportional sizing in early SwiftUI betas

The very first SwiftUI betas in 2019 did include proportional sizing modifiers, but they were taken out before the final release. Chris Eidhof preserved a copy of SwiftUI’s “header file” from that time that shows their API, including quite lengthy documentation.

I don’t know why these modifiers didn’t survive the beta phase. The release notes from 2019 don’t give a reason:

The relativeWidth(_:), relativeHeight(_:), and relativeSize(width:height:) modifiers are deprecated. Use other modifiers like frame(minWidth:idealWidth:maxWidth:minHeight:idealHeight:maxHeight:alignment:) instead. (51494692)

I also don’t remember how these modifiers worked. They probably had somewhat similar semantics to my solution, but I can’t be sure. The doc comments linked above sound straightforward (“Sets the width of this view to the specified proportion of its parent’s width.”), but they don’t mention the intricacies of the layout algorithm (proposals and responses) at all.

containerRelativeFrame

Update May 1, 2024: Apple introduced the containerRelativeFrame modifier for its 2023 OSes (iOS 17/macOS 14). If your deployment target permits it, this can be a good, built-in alternative.

Note that containerRelativeFrame behaves differently than my relativeProposed modifier as it computes the size relative to the nearest container view, whereas my modifier uses its proposed size as the reference. The SwiftUI documentation somewhat vaguely lists the views that count as a container for containerRelativeFrame. Notably, stack views don’t count!

Check out Jordan Morgan’s article Modifier Monday: .containerRelativeFrame(_ axes:) (2022-06-26) to learn more about containerRelativeFrame.

Keyboard shortcuts for Export Unmodified Original in Photos for Mac

Problem

  1. The Photos app on macOS doesn’t provide a keyboard shortcut for the Export Unmodified Original command.
  2. macOS allows you to add your own app-specific keyboard shortcuts via System Settings > Keyboard > Keyboard Shortcuts > App Shortcuts. You need to enter the exact spelling of the menu item you want to invoke.
  3. Photos renames the command depending on what’s selected: “Export Unmodified Original For 1 Photo” turns into “Export Unmodified Originals For 2 Videos” turns into “Export Unmodified Originals For 3 Items” (for mixed selections), and so on. Argh!
  4. The System Settings UI for assigning keyboard shortcuts is extremely tedious to use if you want to add more than one or two shortcuts.
Screenshot of the File > Export submenu of the Photos app on macOS. The selected menu command is called 'Export Unmodified Originals For 16 Items'
Dynamically renaming menu commands is cute, but it becomes a problem when you want to assign keyboard shortcuts.

Solution: shell script

Here’s a Bash script1 that assigns Ctrl + Opt + Cmd + E to Export Unmodified Originals for up to 20 selected items:

#!/bin/bash

# Assigns a keyboard shortcut to the Export Unmodified Originals
# menu command in Photos.app on macOS.

# @ = Command
# ^ = Control
# ~ = Option
# $ = Shift
shortcut='@~^e'

# Set shortcut for 1 selected item
echo "Setting shortcut for 1 item"
defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Original For 1 Photo" "$shortcut"
defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Original For 1 Video" "$shortcut"

# Set shortcut for 2-20 selected items
objects=(Photos Videos Items)
for i in {2..20}
do
  echo "Setting shortcut for $i items"
  for object in "${objects[@]}"
  do
    defaults write com.apple.Photos NSUserKeyEquivalents -dict-add "Export Unmodified Originals For $i $object" "$shortcut"
  done
done

# Use this command to verify the result:
# defaults read com.apple.Photos NSUserKeyEquivalents

The script is also available on GitHub.

Usage:

  1. Quit Photos.app.
  2. Run the script. Feel free to change the key combo or count higher than 20.
  3. Open Photos.app.

Note: There’s a bug in Photos.app on macOS 13.2 (and at least some earlier versions). Custom keyboard shortcuts don’t work until you’ve opened the menu of the respective command at least once. So you must manually open the File > Export submenu once before the shortcut will work. (For Apple folks: FB11967573.)

  1. I still write Bash scripts because ShellCheck doesn’t support Zsh. ↩︎

Swift Evolution proposals in Alfred

I rarely participate actively in the Swift Evolution process, but I frequently refer to evolution proposals for my work, often multiple times per week. The proposals aren’t always easy to read, but they’re the most comprehensive (and sometimes only) documentation we have for many Swift features.

For years, my tool of choice for searching Swift Evolution proposals has been Karoy Lorentey’s swift-evolution workflow for Alfred.

The workflow broke recently due to data format changes. Karoy was kind enough to add me as a maintainer so I could fix it.

The new version 2.1.0 is now available on GitHub. Download the .alfredworkflow file and double-click to install. Besides the fix, the update has a few other improvements:

  • The proposal title is now displayed more prominently.
  • New actions to copy the proposal title (hold down Command) or copy it as a Markdown link (hold down Shift + Command).
  • The script forwards the main metadata of the selected proposal (id, title, status, URL) to Alfred. If you want to extend the workflow with your own actions, you can refer to these variables.

Pattern matching on error codes

Foundation overloads the pattern matching operator ~= to enable matching against error codes in catch clauses.

catch clauses in Swift support pattern matching, using the same patterns you’d use in a case clause inside a switch or in an if case … statement. For example, to handle a file-not-found error you might write:

import Foundation

do {
    let fileURL = URL(filePath: "/abc") // non-existent file
    let data = try Data(contentsOf: fileURL)
} catch let error as CocoaError where error.code == .fileReadNoSuchFile {
    print("File doesn't exist")
} catch {
    print("Other error: \(error)")
}

This binds a value of type CocoaError to the variable error and then uses a where clause to check the specific error code.

However, if you don’t need access to the complete error instance, there’s a shorter way to write this, matching directly against the error code:

      let data = try Data(contentsOf: fileURL)
- } catch let error as CocoaError where error.code == .fileReadNoSuchFile {
+ } catch CocoaError.fileReadNoSuchFile {
      print("File doesn't exist")

Foundation overloads ~=

I was wondering why this shorter syntax works. Is there some special compiler magic for pattern matching against error codes of NSError instances? Turns out: no, the answer is much simpler. Foundation includes an overload for the pattern matching operator ~= that matches error values against error codes.1

The implementation looks something like this:

public func ~= (code: CocoaError.Code, error: any Error) -> Bool {
    guard let error = error as? CocoaError else { return false }
    return error.code == code
}

The actual code in Foundation is a little more complex because it goes through a hidden protocol named _ErrorCodeProtocol, but that’s not important. You can check out the code in the Foundation repository: Darwin version, swift-corelibs-foundation version.

This matching on error codes is available for CocoaError, URLError, POSIXError, and MachError (and possibly more types in other Apple frameworks, I haven’t checked).
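The same trick works for your own error types. Here’s a hedged sketch; NetworkError and the overload are my own inventions, not part of Foundation:

```swift
// My own example: an error type with an associated status code.
enum NetworkError: Error {
    case timeout
    case badResponse(statusCode: Int)
}

// Overload ~= so a bare Int can match a NetworkError in a pattern.
func ~= (code: Int, error: any Error) -> Bool {
    guard let error = error as? NetworkError,
          case .badResponse(let statusCode) = error
    else { return false }
    return statusCode == code
}

func describe(_ error: any Error) -> String {
    switch error {
    case 404: return "not found"            // uses the ~= overload above
    case is NetworkError: return "network error"
    default: return "other error"
    }
}
```

With the overload in place, `describe(NetworkError.badResponse(statusCode: 404))` takes the first branch, because the compiler rewrites `case 404` into a call to our `~=` function.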

  1. I wrote about the ~= operator before, way back in 2015(!): Pattern matching in Swift and More pattern matching examples↩︎

You should watch Double Fine Adventure

I know I’m almost a decade late to this party, but I’m probably not the only one, so here goes.

Double Fine Adventure was a wildly successful 2012 Kickstarter project to crowdfund the development of a point-and-click adventure game and, crucially, to document its development on video. The resulting game Broken Age was eventually released in two parts in 2014 and 2015. Broken Age is a beautiful game and I recommend you try it. It’s available for lots of platforms and is pretty cheap (10–15 euros/dollars or less). I played it on the Nintendo Switch, which worked very well.

Screenshot from Broken Age. A tall girl in a pink dress is talking to a shorter girl in a bird costume. They are standing on a cloud.
Broken Age.

But the real gem to me was watching the 12.5-hour documentary on YouTube. A video production team followed the entire three-year development process from start to finish. It provides a refreshingly candid and transparent insight into “how the sausage is made”, including sensitive topics such as financial problems, layoffs, and long work hours. Throughout all the ups and downs there’s a wonderful sense of fun and camaraderie among the team at Double Fine, which made watching the documentary even more enjoyable to me than playing Broken Age. You can tell these people love working with each other. I highly recommend taking a look if you find this mildly interesting.

Four people sitting at a conference table in an office. The wall in the background is covered in pencil drawings.
The Double Fine Adventure documentary.

The first major game spoilers don’t come until episode 15, so you can safely watch most of the documentary before playing the game (and this is how the original Kickstarter backers experienced it). However, I think it’s even more interesting to play the game first, or to experience both side-by-side. My suggestion: watch two or three episodes of the documentary. If you like it, start playing Broken Age alongside it.

Understanding SwiftUI view lifecycles

I wrote an app called SwiftUI View Lifecycle. The app allows you to observe how different SwiftUI constructs and containers affect a view’s lifecycle, including the lifetime of its state and when onAppear gets called. The code for the app is on GitHub. It can be built for iOS and macOS.

The view tree and the render tree

When we write SwiftUI code, we construct a view tree that consists of nested view values. Instances of the view tree are ephemeral: SwiftUI constantly destroys and recreates (parts of) the view tree as it processes state changes.

The view tree serves as a blueprint from which SwiftUI creates a second tree, which represents the actual view “objects” that are “on screen” at any given time (the “objects” could be actual UIView or NSView objects, but also other representations; the exact meaning of “on screen” can vary depending on context). Chris Eidhof likes to call this second tree the render tree (the link points to a 3-minute video where Chris demonstrates this duality, highly recommended).

The render tree persists across state changes and is used by SwiftUI to establish view identity. When a state change causes a change in a view’s value, SwiftUI will find the corresponding view object in the render tree and update it in place, rather than recreating a new view object from scratch. This is of course key to making SwiftUI efficient, but the render tree has another important function: it controls the lifetimes of views and their state.

View lifecycles and state

We can define a view’s lifetime as the timespan it exists in the render tree. The lifetime begins with the insertion into the render tree and ends with the removal. Importantly, the lifetime extends to view state defined with @State and @StateObject: when a view gets removed from the render tree, its state is lost; when the view gets inserted again later, the state will be recreated with its initial value.
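As a thought experiment (entirely my own, not SwiftUI API), the relationship between render-tree membership and state can be modeled as a dictionary whose entries live exactly as long as their views:

```swift
// Toy model, my own invention: state entries live exactly as long as
// their view is present in the "render tree".
final class RenderTreeModel {
    private var state: [String: Int] = [:]

    // Inserting a view creates its state with the initial value
    // (unless the view is already present).
    func insert(id: String, initialValue: Int) {
        if state[id] == nil { state[id] = initialValue }
    }

    // Removing a view discards its state.
    func remove(id: String) {
        state[id] = nil
    }

    func value(id: String) -> Int? { state[id] }
    func setValue(_ newValue: Int, id: String) { state[id] = newValue }
}
```

Remove a view and re-insert it, and any value it had accumulated is gone: the state comes back at its initial value, which is exactly the behavior the app lets you observe.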

The SwiftUI View Lifecycle app tracks three lifecycle events for a view and displays them as timestamps:

  • @State = when the view’s state was created (equivalent to the start of the view’s lifetime)
  • onAppear = when onAppear was last called
  • onDisappear = when onDisappear was last called
A table with three rows. @State: 1:26 ago. onAppear: 0:15 ago. onDisappear: 0:47 ago.
The lifecycle monitor view displays the timestamps when certain lifecycle events last occurred.

The app allows you to observe these events in different contexts. As you click your way through the examples, you’ll notice that the timing of these events changes depending on the context a view is embedded in. For example:

  • An if/else statement creates and destroys its child views every time the condition changes; state is not preserved.
  • A ScrollView eagerly inserts all of its children into the render tree, regardless of whether they’re inside the viewport or not. All children appear right away and never disappear.
  • A List with dynamic content (using ForEach) lazily inserts only the child views that are currently visible. But once a child view’s lifetime has started, the list will keep its state alive even when it gets scrolled offscreen again. onAppear and onDisappear get called repeatedly as views are scrolled into and out of the viewport.
  • A NavigationStack calls onAppear and onDisappear as views are pushed and popped. State for parent levels in the stack is preserved when a child view is pushed.
  • A TabView starts the lifetime of all child views right away, even the non-visible tabs. onAppear and onDisappear get called repeatedly as the user switches tabs, but the tab view keeps the state alive for all tabs.

Lessons

Here are a few lessons to take away from this:

  • Different container views may have different performance and memory usage behaviors, depending on how long they keep child views alive.
  • onAppear isn’t necessarily called when the state is created. It can happen later (but never earlier).
  • onAppear can be called multiple times in some container views. If you need a side effect to happen exactly once in a view’s lifetime, consider writing yourself an onFirstAppear helper, as shown by Ian Keen and Jordan Morgan in Running Code Only Once in SwiftUI (2022-11-01).
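Stripped of SwiftUI, the core of such an onFirstAppear helper is just a one-shot guard; in the real modifier, the flag would live in @State on a ViewModifier. A minimal sketch (names are mine):

```swift
// Non-SwiftUI sketch of a "run once" guard. In a real onFirstAppear
// helper, `hasAppeared` would be an @State property on a ViewModifier.
final class FirstAppearGuard {
    private var hasAppeared = false

    // Call this from every onAppear; the action runs only the first time.
    func runOnce(_ action: () -> Void) {
        guard !hasAppeared else { return }
        hasAppeared = true
        action()
    }
}
```

The important subtlety is where the flag lives: because @State is tied to the view’s lifetime in the render tree, the guard resets exactly when the state does.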

I’m sure you’ll find more interesting tidbits when you play with the app. Feedback is welcome!

clipped() doesn’t affect hit testing

The clipped() modifier in SwiftUI clips a view to its bounds, hiding any out-of-bounds content. But note that clipping doesn’t affect hit testing; the clipped view can still receive taps/clicks outside the visible area.

I tested this on iOS 16.1 and macOS 13.0.

Example

Here’s a 300×300 square, which we then constrain to a 100×100 frame. I also added a border around the outer frame to visualize the views:

Rectangle()
  .fill(.orange.gradient)
  .frame(width: 300, height: 300)
  // Set view to 100×100 → renders out of bounds
  .frame(width: 100, height: 100)
  .border(.blue)

SwiftUI views don’t clip their content by default, hence the full 300×300 square remains visible. Notice the blue border that indicates the 100×100 outer frame:

Now let’s add .clipped() to clip the large square to the 100×100 frame. I also made the square tappable and added a button:

VStack {
  Button("You can't tap me!") {
    buttonTapCount += 1
  }
  .buttonStyle(.borderedProminent)

  Rectangle()
    .fill(.orange.gradient)
    .frame(width: 300, height: 300)
    .frame(width: 100, height: 100)
    .clipped()
    .onTapGesture {
      rectTapCount += 1
    }
}

When you run this code, you’ll discover that the button isn’t tappable at all. This is because the (unclipped) square, despite not being fully visible, obscures the button and “steals” all taps.

Xcode preview displaying a blue button and a small orange square. A larger dashed orange outline covers both the smaller square and the button.
The dashed outline indicates the hit area of the orange square. The button isn’t tappable because it’s covered by the clipped view with respect to hit testing.

The fix: .contentShape()

The contentShape(_:) modifier defines the hit testing area for a view. By adding .contentShape(Rectangle()) to the 100×100 frame, we limit hit testing to that area, making the button tappable again:

  Rectangle()
    .fill(.orange.gradient)
    .frame(width: 300, height: 300)
    .frame(width: 100, height: 100)
    .contentShape(Rectangle())
    .clipped()

Note that the order of .contentShape(Rectangle()) and .clipped() could be swapped. The important thing is that contentShape is an (indirect) parent of the 100×100 frame modifier that defines the size of the hit testing area.

Video demo

I made a short video that demonstrates the effect:

  • Initially, taps on the button, or even on the surrounding whitespace, register as taps on the square.
  • The top switch toggles display of the square before clipping. This illustrates its hit testing area.
  • The second switch adds .contentShape(Rectangle()) to limit hit testing to the visible area. Now tapping the button increments the button’s tap count.

The full code for this demo is available on GitHub.

Summary

The clipped() modifier doesn’t affect the clipped view’s hit testing region. The same is true for clipShape(_:). It’s often a good idea to combine these modifiers with .contentShape(Rectangle()) to bring the hit testing logic in sync with the UI.

When .animation animates more (or less) than it’s supposed to

On the positioning of the .animation modifier in the view tree, or: “Rendering” vs. “non-rendering” view modifiers

The documentation for SwiftUI’s animation modifier says:

Applies the given animation to this view when the specified value changes.

This sounds unambiguous to me: it sets the animation for “this view”, i.e. the part of the view tree that .animation is being applied to. This should give us complete control over which modifiers we want to animate, right? Unfortunately, it’s not that simple: it’s easy to run into situations where a view change inside an animated subtree doesn’t get animated, or vice versa.

Unsurprising examples

Let me give you some examples, starting with those that do work as documented. I tested all examples on iOS 16.1 and macOS 13.0.

1. Sibling views can have different animations

Independent subtrees of the view tree can be animated independently. In this example we have three sibling views, two of which are animated with different durations, and one that isn’t animated at all:

struct Example1: View {
  var flag: Bool

  var body: some View {
    HStack(spacing: 40) {
      Rectangle()
        .frame(width: 80, height: 80)
        .foregroundColor(.green)
        .scaleEffect(flag ? 1 : 1.5)
        .animation(.easeOut(duration: 0.5), value: flag)

      Rectangle()
        .frame(width: 80, height: 80)
        .foregroundColor(flag ? .yellow : .red)
        .rotationEffect(flag ? .zero : .degrees(45))
        .animation(.easeOut(duration: 2.0), value: flag)

      Rectangle()
        .frame(width: 80, height: 80)
        .foregroundColor(flag ? .pink : .mint)
    }
  }
}

The two animation modifiers each apply to their own subtree. They don’t interfere with each other and have no effect on the rest of the view hierarchy:

2. Nested animation modifiers

When two animation modifiers are nested in a single view tree such that one is an indirect parent of the other, the inner modifier can override the outer animation for its subviews. The outer animation applies to view modifiers that are placed between the two animation modifiers.

In this example we have one rectangle view with animated scale and rotation effects. The outer animation applies to the entire subtree, including both effects. The inner animation modifier overrides the outer animation only for what’s nested below it in the view tree, i.e. the scale effect:

struct Example2: View {
  var flag: Bool
  
  var body: some View {
    Rectangle()
      .frame(width: 80, height: 80)
      .foregroundColor(.green)
      .scaleEffect(flag ? 1 : 1.5)
      .animation(.default, value: flag) // inner
      .rotationEffect(flag ? .zero : .degrees(45))
      .animation(.default.speed(0.3), value: flag) // outer
  }
}

As a result, the scale and rotation changes animate at different speeds:

Note that we could also pass .animation(nil, value: flag) to selectively disable animations for a subtree, overriding a non-nil animation further up the view tree.

3. animation only animates its children (with exceptions)

As a general rule, the animation modifier only applies to its subviews. In other words, views and modifiers that are direct or indirect parents of an animation modifier should not be animated. As we’ll see below, it doesn’t always work like that, but here’s an example where it does. This is a slight variation of the previous code snippet where I removed the outer animation modifier (and changed the color for good measure):

struct Example3: View {
  var flag: Bool

  var body: some View {
    Rectangle()
      .frame(width: 80, height: 80)
      .foregroundColor(.orange)
      .scaleEffect(flag ? 1 : 1.5)
      .animation(.default, value: flag)
      // Don't animate the rotation
      .rotationEffect(flag ? .zero : .degrees(45))
  }
}

Recall that the order in which view modifiers are written in code is inverted with respect to the actual view tree hierarchy. Each view modifier is a new view that wraps the view it’s being applied to. So in our example, the scale effect is the child of the animation modifier, whereas the rotation effect is its parent. Accordingly, only the scale change gets animated:
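To make this inversion concrete, here’s a toy model, entirely my own and not real SwiftUI types, in which each “modifier” wraps the value it’s applied to:

```swift
// Toy model, not SwiftUI: a "view" is just a description string, and
// each "modifier" returns a new value wrapping the one it's applied to.
struct ModifiedView {
    var description: String
}

func scaleEffect(_ view: ModifiedView) -> ModifiedView {
    ModifiedView(description: "scale(\(view.description))")
}

func animation(_ view: ModifiedView) -> ModifiedView {
    ModifiedView(description: "animation(\(view.description))")
}

func rotationEffect(_ view: ModifiedView) -> ModifiedView {
    ModifiedView(description: "rotation(\(view.description))")
}

// Written order: Rectangle().scaleEffect().animation().rotationEffect()
// Actual nesting: rotation is the outermost parent, scale the innermost child.
let tree = rotationEffect(animation(scaleEffect(ModifiedView(description: "Rectangle"))))
// tree.description == "rotation(animation(scale(Rectangle)))"
```

The modifier written last ends up outermost, which is why the rotation effect sits above the animation modifier and doesn’t animate.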

Surprising examples

Now it’s time for the “fun” part. It turns out not all view modifiers behave as intuitively as scaleEffect and rotationEffect when combined with the animation modifier.

4. Some modifiers don’t respect the rules

In this example we’re changing the color, size, and alignment of the rectangle. Only the size change should be animated, which is why we’ve placed the alignment and color mutations outside the animation modifier:

struct Example4: View {
  var flag: Bool

  var body: some View {
    let size: CGFloat = flag ? 80 : 120
    Rectangle()
      .frame(width: size, height: size)
      .animation(.default, value: flag)
      .frame(maxWidth: .infinity, alignment: flag ? .leading : .trailing)
      .foregroundColor(flag ? .pink : .indigo)
  }
}

Unfortunately, this doesn’t work as intended, as all three changes are animated:

It behaves as if the animation modifier were the outermost element of this view subtree.

5. padding and border

This one’s sort of the inverse of the previous example because a change we want to animate doesn’t get animated. The padding is a child of the animation modifier, so I’d expect changes to it to be animated, i.e. the border should grow and shrink smoothly:

struct Example5: View {
  var flag: Bool

  var body: some View {
    Rectangle()
      .frame(width: 80, height: 80)
      .padding(flag ? 20 : 40)
      .animation(.default, value: flag)
      .border(.primary)
      .foregroundColor(.cyan)
  }
}

But that’s not what happens:

6. Font modifiers

Font modifiers also behave seemingly erratically with respect to the animation modifier. In this example, we want to animate the font width, but not the size or weight (smooth text animation is a new feature in iOS 16):

struct Example6: View {
  var flag: Bool

  var body: some View {
    Text("Hello!")
      .fontWidth(flag ? .condensed : .expanded)
      .animation(.default, value: flag)
      .font(.system(
        size: flag ? 40 : 60,
        weight: flag ? .regular : .heavy)
      )
  }
}

You guessed it, this doesn’t work as intended. Instead, all text properties animate smoothly:

Why does it work like this?

In summary, the placement of the animation modifier in the view tree allows some control over which changes get animated, but it isn’t perfect. Some modifiers, such as scaleEffect and rotationEffect, behave as expected, whereas others (frame, padding, foregroundColor, font) are less controllable.

I don’t fully understand the rules, but the important factor seems to be whether a view modifier actually “renders” something or not. For instance, foregroundColor just writes a color into the environment; the modifier itself doesn’t draw anything. I suppose this is why its position with respect to animation is irrelevant:

RoundedRectangle(cornerRadius: flag ? 0 : 40)
  .animation(.default, value: flag)
  // Color change still animates, even though we’re outside .animation
  .foregroundColor(flag ? .pink : .indigo)

The rendering presumably takes place on the level of the RoundedRectangle, which reads the color from the environment. At this point the animation modifier is active, so SwiftUI will animate all changes that affect how the rectangle is rendered, regardless of where in the view tree they’re coming from.

The same explanation makes intuitive sense for the font modifiers in example 6. The actual rendering, and therefore the animation, occurs on the level of the Text view. The various font modifiers affect how the text is drawn, but they don’t render anything themselves.

Similarly, padding and frame (including the frame’s alignment) are “non-rendering” modifiers too. They don’t use the environment, but they influence the layout algorithm, which ultimately affects the size and position of one or more “rendering” views, such as the rectangle in example 4. That rectangle sees a combined change in its geometry, but it can’t tell where the change came from, so it’ll animate the full geometry change.

In example 5, the “rendering” view that’s affected by the padding change is the border (which is implemented as a stroked rectangle in an overlay). Since the border is a parent of the animation modifier, its geometry change is not animated.

In contrast to frame and padding, scaleEffect and rotationEffect are “rendering” modifiers. They apparently perform the animations themselves.

Conclusion

SwiftUI views and view modifiers can be divided into “rendering” and “non-rendering” groups (I wish I had better terms for these). In iOS 16/macOS 13, the placement of the animation modifier with respect to non-rendering modifiers is irrelevant for deciding if a change gets animated or not.

Non-rendering modifiers include (non-exhaustive list):

  • Layout modifiers (frame, padding, position, offset)
  • Font modifiers (font, bold, italic, fontWeight, fontWidth)
  • Other modifiers that write data into the environment, e.g. foregroundColor, foregroundStyle, symbolRenderingMode, symbolVariant

Rendering modifiers include (non-exhaustive list):

  • clipShape, cornerRadius
  • Geometry effects, e.g. scaleEffect, rotationEffect, projectionEffect
  • Graphical effects, e.g. blur, brightness, hueRotation, opacity, saturation, shadow

Xcode 14.0 generates wrong concurrency code for macOS targets

Mac apps built with Xcode 14.0 and 14.0.1 may contain concurrency bugs because the Swift 5.7 compiler can generate invalid code when targeting the macOS 12.3 SDK. If you distribute Mac apps, you should build them with Xcode 13.4.1 until Xcode 14.1 is released.

Here’s what happened:

  1. Swift 5.7 implements SE-0338: Clarify the Execution of Non-Actor-Isolated Async Functions, which introduces new rules for how async functions hop between executors. Because of SE-0338, when compiling concurrency code, the Swift 5.7 compiler places executor hops in different places than Swift 5.6.

  2. Some standard library functions need to opt out of the new rules. They are annotated with a new, unofficial attribute @_unsafeInheritExecutor, which was introduced for this purpose. When the Swift 5.7 compiler sees this attribute, it generates different executor hops.

  3. The attribute is only present in the Swift 5.7 standard library, i.e. in the iOS 16 and macOS 13 SDKs. This is fine for iOS because the compiler version and the SDK’s standard library version match in Xcode 14.0. But for macOS targets, Xcode 14.0 uses the Swift 5.7 compiler with the standard library from Swift 5.6, which doesn’t contain the @_unsafeInheritExecutor attribute. This is what causes the bugs.

    Note that the issue is caused purely by the version mismatch at compile-time. The standard library version used by the compiled app at run-time (which depends on the OS version the app runs on) isn’t relevant. As soon as Xcode 14.1 gets released with the macOS 13 SDK, the version mismatch will go away, and Mac targets built with Xcode 14.1 won’t exhibit these bugs.

  4. Third-party developers had little chance of discovering the bug during the Xcode 14.0 beta phase because the betas ship with the new beta macOS SDK. The version mismatch occurs when the final Xcode release in September reverts back to the old macOS SDK to accommodate the different release schedules of iOS and macOS.

Sources

Breaking concurrency invariants is a serious issue, though I’m not sure how much of a problem this is in actual production apps. Here are all related bug reports that I know of:

And explanations of the cause from John McCall of the Swift team at Apple:

John McCall (2022-10-07):

This guarantee is unfortunately broken with Xcode 14 when compiling for macOS because it’s shipping with an old macOS SDK that doesn’t declare that withUnsafeContinuation inherits its caller’s execution context. And yes, there is a related actor-isolation issue because of this bug. That will be fixed by the release of the new macOS SDK.

John McCall (2022-10-07):

Now, there is a bug in Xcode 14 when compiling for the macOS SDK because it ships with an old SDK. That bug doesn’t actually break any of the ordering properties above. It does, however, break Swift’s data isolation guarantees because it causes withUnsafeContinuation, when called from an actor-isolated context, to send a non-Sendable function to a non-isolated executor and then call it, which is completely against the rules. And in fact, if you turn strict sendability checking on when compiling against that SDK, you will get a diagnostic about calling withUnsafeContinuation because it thinks that you’re violating the rules (because withUnsafeContinuation doesn’t properly inherit the execution context of its caller).

Poor communication from Apple

What bugs me most about the situation is Apple’s poor communication. When the official, current release of your programming language ships with a broken compiler for one of your most important platforms, the least I’d expect is a big red warning at the top of the release notes. I can’t find any mention of this issue in the Xcode 14.0 release notes or Xcode 14.0.1 release notes, however.

Even better: the warning should be displayed prominently in Xcode, or Xcode 14.0 should outright refuse to build Mac apps. I’m sure the latter option isn’t practical for all sorts of reasons, although it sounds logical to me: if the only safe compiler/SDK combinations are either 5.6 with the macOS 12 SDK or 5.7 with the macOS 13 SDK, there shouldn’t be an official Xcode version that combines the 5.7 compiler with the macOS 12 SDK.

Where View.task gets its main-actor isolation from

SwiftUI’s .task modifier inherits its actor context from the surrounding function. If you call .task inside a view’s body property, the async operation will run on the main actor because View.body is (semi-secretly) annotated with @MainActor. However, if you call .task from a helper property or function that isn’t @MainActor-annotated, the async operation will run in the cooperative thread pool.

Example

Here’s an example. Notice the two .task modifiers in body and helperView. The code is identical in both, yet only one of them compiles — in helperView, the call to a main-actor-isolated function fails because we’re not on the main actor in that context:

Xcode showing the compiler diagnostic 'Expression is 'async' but is not marked with await'
We can call a main-actor-isolated function from inside body, but not from a helper property.
import SwiftUI

@MainActor func onMainActor() {
  print("on MainActor")
}

struct ContentView: View {
  var body: some View {
    VStack {
      helperView
      Text("in body")
        .task {
          // We can call a @MainActor func without await
          onMainActor()
        }
    }
  }

  var helperView: some View {
    Text("in helperView")
      .task {
        // ❗️ Error: Expression is 'async' but is not marked with 'await'
        onMainActor()
      }
  }
}

Why does it work like this?

This behavior is caused by two (semi-)hidden annotations in the SwiftUI framework:

  1. The View protocol annotates its body property with @MainActor. This transfers to all conforming types.

  2. View.task annotates its action parameter with @_inheritActorContext, causing it to adopt the actor context from its use site.

Sadly, neither of these annotations is visible in the SwiftUI documentation, making it very difficult to understand what’s going on. The @MainActor annotation on View.body is present in Xcode’s generated Swift interface for SwiftUI (Jump to Definition of View), but that feature doesn’t work reliably for me, and as we’ll see, it doesn’t show the whole truth, either.

Xcode showing the generated interface for SwiftUI’s View protocol. The @MainActor annotation on View.body is selected.
View.body is annotated with @MainActor in Xcode’s generated interface for SwiftUI.

SwiftUI’s module interface

To really see the declarations the compiler sees, we need to look at SwiftUI’s module interface file. A module interface is like a header file for Swift modules. It lists the module’s public declarations and even the implementations of inlinable functions. Module interfaces use normal Swift syntax and have the .swiftinterface file extension.

SwiftUI’s module interface is located at:

[Path to Xcode.app]/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS.sdk/System/Library/Frameworks/SwiftUI.framework/Modules/SwiftUI.swiftmodule/arm64e-apple-ios.swiftinterface

(There can be multiple .swiftinterface files in that directory, one per CPU architecture. Pick any one of them. Pro tip for viewing the file in Xcode: Editor > Syntax Coloring > Swift enables syntax highlighting.)

Inside, you’ll find that View.body has the @MainActor(unsafe) attribute:

@available(iOS 13.0, macOS 10.15, tvOS 13.0, watchOS 6.0, *)
@_typeEraser(AnyView) public protocol View {
  // …
  @SwiftUI.ViewBuilder @_Concurrency.MainActor(unsafe) var body: Self.Body { get }
}

And you’ll find this declaration for .task, including the @_inheritActorContext attribute:

@available(iOS 15.0, macOS 12.0, tvOS 15.0, watchOS 8.0, *)
extension SwiftUI.View {
  #if compiler(>=5.3) && $AsyncAwait && $Sendable && $InheritActorContext
    @inlinable public func task(
      priority: _Concurrency.TaskPriority = .userInitiated,
      @_inheritActorContext _ action: @escaping @Sendable () async -> Swift.Void
    ) -> some SwiftUI.View {
      modifier(_TaskModifier(priority: priority, action: action))
    }
  #endif
  // …
}
Xcode showing the declaration for the View.task method in the SwiftUI.swiftinterface file. The @_inheritActorContext annotation is selected.
SwiftUI’s module interface file shows the @_inheritActorContext annotation on View.task.

Putting it all together

Armed with this knowledge, everything makes more sense:

  • When used inside body, task inherits the @MainActor context from body.
  • When used outside of body, there is no implicit @MainActor annotation, so task will run its operation on the cooperative thread pool by default.
  • Unless the view contains an @ObservedObject or @StateObject property, which makes the entire view @MainActor via this obscure rule for property wrappers whose wrappedValue property is bound to a global actor:

    A struct or class containing a wrapped instance property with a global actor-qualified wrappedValue infers actor isolation from that property wrapper

    Update May 1, 2024: SE-0401: Remove Actor Isolation Inference caused by Property Wrappers removes the above rule when compiling in Swift 6 language mode. This is a good change because it makes reasoning about actor isolation simpler. In the Swift 5 language mode, you can opt into the better behavior with the -enable-upcoming-feature DisableOutwardActorInference compiler flag. I recommend you do.

The lesson: if you use helper properties or functions in your view, consider annotating them with @MainActor to get the same semantics as body.
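Applied to the example from the beginning of the article, the fix is a single annotation on the helper property (a sketch; `onMainActor` is the hypothetical function from the earlier example):

```swift
import SwiftUI

@MainActor func onMainActor() {
  print("on MainActor")
}

struct ContentView: View {
  var body: some View {
    helperView
  }

  // Annotating the helper with @MainActor gives it the same
  // semantics as body: .task now inherits the main-actor
  // context, and the call compiles without 'await'.
  @MainActor var helperView: some View {
    Text("in helperView")
      .task {
        onMainActor() // OK now
      }
  }
}
```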

By the way, note that the actor context only applies to code that is placed directly inside the async closure, as well as to synchronous functions the closure calls. Async functions choose their own execution context, so any call to an async function can switch to a different executor. For example, if you call URLSession.data(from:) inside a main-actor-annotated function, the runtime will hop to the global cooperative executor to execute that method. See SE-0338: Clarify the Execution of Non-Actor-Isolated Async Functions for the precise rules.

On Apple’s policy to hide annotations in documentation

I understand Apple’s impetus not to show unofficial APIs or language features in the documentation lest developers get the preposterous idea of using these features in their own code!

But it makes understanding so much harder. Before I saw the annotations in the .swiftinterface file, the behavior of the code at the beginning of this article never made sense to me. Hiding the details makes things seem like magic when they actually aren’t. And that’s not good, either.

Experimenting with Live Activities

iOS 16 beta 4 is the first SDK release that supports Live Activities. A Live Activity is a widget-like view an app can place on your lock screen and update in real time. Examples where this can be useful include live sports scores or train departure times.

These are my notes on playing with the API and implementing my first Live Activity.

A bike computer on your lock screen

My Live Activity is a display for a bike computer that I’ve been developing with a group of friends. Here’s a video of it in action:

And here with simulated data:

I haven’t talked much about our bike computer project publicly yet; that will hopefully change someday. In short, a group of friends and I designed a little box that connects to your bike’s hub dynamo, measures speed and distance, and sends the data via Bluetooth to an iOS app. The app records all your rides and can also act as a live speedometer when mounted on your bike’s handlebar. It’s this last feature that I wanted to replicate in the Live Activity.

Follow Apple’s guide

Adding a Live Activity to the app wasn’t hard. I found Apple’s guide Displaying live data on the Lock Screen with Live Activities easy to follow and quite comprehensive.

No explicit user approval

iOS doesn’t ask the user for approval when an app wants to show a Live Activity. I found this odd since it seems to invite developers to abuse the feature, but maybe it’s OK because of the foreground requirement (see below). Plus, users can disallow Live Activities on a per-app basis in Settings.

Users can dismiss an active Live Activity from the lock screen by swiping (like a notification).

Most apps will probably need to ask the user for notification permissions to update their Live Activities.

The app must be in the foreground to start an activity

To start a Live Activity, an app must be open in the foreground. This isn’t ideal for the bike computer because the speedometer can’t appear magically on the lock screen when the user starts riding (even though iOS wakes up the app in the background at this point to deliver the Bluetooth events from the bike). The user has to open the app manually at least once.

On the other hand, this limitation may not be an issue for most use cases and will probably cut down on spamming/abuse significantly.

The app must keep running in the background to update the activity (or use push notifications)

As long as the app keeps running (in the foreground or background), it can update the Live Activity as often as it wants (I think). This is ideal for the bike computer as the app keeps running in the background processing Bluetooth events while the bike is in motion. I assume the same applies to other apps that can remain alive in the background, such as audio players or navigation apps doing continuous location monitoring.

Updating the Live Activity once per second was no problem in my testing, and I didn’t experience any rate limiting.

Most apps get suspended in the background, however. They must use push notifications to update their Live Activity (or background tasks or some other mechanism to have the system wake them up). Apple introduced a new kind of push notification that is delivered directly to the Live Activity, bypassing the app altogether. I haven’t played with push notification updates, so I don’t know the benefits of using this method over sending a silent push notification to wake the app and updating the Live Activity from there. Probably less aggressive rate limiting?

Lock screen color matching

I haven’t found a good way to match my Live Activity’s colors to the current system colors on the lock screen. By default, text in a Live Activity is black in light mode, whereas the built-in lock screen themes seem to favor white or other light text colors. If there is an API or environment value that allows apps to match the color style of the current lock screen, I haven’t found it. I experimented with various foreground styles, such as materials, without success.

I ended up hardcoding the foreground color, but I’m not satisfied with the result. Depending on the user’s lock screen theme, the Live Activity can look out of place.

The lock screen of an iPhone running iOS 16. The system text (clock, date) is in a light, whitish color. The Live Activity at the bottom of the screen has black text.
The default text color of a Live Activity in light mode is black. This doesn’t match most lock screen themes.

Animations can’t be disabled

Apple’s guide clearly states that developers have little control over animations in a Live Activity:

Animate content updates

When you define the user interface of your Live Activity, the system ignores any animation modifiers — for example, withAnimation(_:_:) and animation(_:value:) — and uses the system’s animation timing instead. However, the system performs some animation when the dynamic content of the Live Activity changes. Text views animate content changes with blurred content transitions, and the system animates content transitions for images and SF Symbols. If you add or remove views from the user interface based on content or state changes, views fade in and out. Use the following view transitions to configure these built-in transitions: opacity, move(edge:), slide, push(from:), or combinations of them. Additionally, request animations for timer text with numericText(countsDown:).

It makes total sense to me that Apple doesn’t want developers to go crazy with animations on the lock screen, and perhaps having full control over animations also makes it easier for Apple to integrate Live Activities into the always-on display that’s probably coming on the next iPhone.

What surprised me is that I couldn’t find a way to disable the text change animations altogether. I find the blurred text transitions for the large speed value quite distracting and I think this label would look better without any animations. But no combination of .animation(nil), .contentTransition(.identity), and .transition(.identity) would do this.

Sharing code between app and widget

A Live Activity is very much like a widget: the UI must live in your app’s widget extension. You start the Live Activity with code that runs in your app, though. Both targets (the app and the widget extension) need access to a common data type that represents the data the widget displays. You should have a third target (a framework or SwiftPM package) that contains such shared types and APIs and that the downstream targets import.

Availability annotations

Update September 22, 2022: This limitation no longer applies. The iOS 16.1 SDK added the ability to have availability conditions in WidgetBundle. Source: Tweet from Luca Bernardi (2022-09-20).

WidgetBundle apparently doesn’t support widgets with different minimum deployment targets. If your widget extension has a deployment target of iOS 14 or 15 for an existing widget and you now want to add a Live Activity, I’d expect your widget bundle to look like this:

@main
struct MyWidgets: WidgetBundle {
  var body: some Widget {
    MyNormalWidget()
    // Error: Closure containing control flow statement cannot
    // be used with result builder 'WidgetBundleBuilder'
    if #available(iOSApplicationExtension 16.0, *) {
      MyLiveActivityWidget()
    }
  }
}

But this doesn’t compile because the result builder type used by WidgetBundle doesn’t support availability conditions. I hope Apple fixes this.

This wasn’t a problem for me because our app didn’t have any widgets until now, so I just set the deployment target of the widget extension to iOS 16.0. If you have existing widgets and can’t require iOS 16 yet, a workaround is to add a second widget extension target just for the Live Activity. I haven’t tried this, but WidgetKit explicitly supports having multiple widget extensions, so it should work:

Typically, you include all your widgets in a single widget extension, although your app can contain multiple extensions.

How @MainActor works

@MainActor is a Swift annotation to coerce a function to always run on the main thread and to enable the compiler to verify this. How does this work? In this article, I’m going to reimplement @MainActor in a slightly simplified form for illustration purposes, mainly to show how little “magic” there is to it. The code of the real implementation in the Swift standard library is available in the Swift repository.

@MainActor relies on two Swift features, one of them unofficial: global actors and custom executors.

Global actors

MainActor is a global actor. That is, it provides a single actor instance that is shared between all places in the code that are annotated with @MainActor.

All global actors must implement the shared property that’s defined in the GlobalActor protocol (every global actor implicitly conforms to this protocol):

@globalActor
final actor MyMainActor {
  // Requirements from the implicit GlobalActor conformance
  typealias ActorType = MyMainActor
  static var shared: ActorType = MyMainActor()

  // Don’t allow others to create instances
  private init() {}
}

At this point, we have a global actor that has the same semantics as any other actor. That is, functions annotated with @MyMainActor will run on a thread in the cooperative thread pool managed by the Swift runtime. To move the work to the main thread, we need another concept, custom executors.

Executors

A bit of terminology:

  • The compiler splits async code into jobs. A job roughly corresponds to the code from one await (= potential suspension point) to the next.
  • The runtime submits each job to an executor. The executor is the object that decides in which order and in which context (i.e. which thread or dispatch queue) to run the jobs.

Swift ships with two built-in executors: the default concurrent executor, used for “normal”, non-actor-isolated async functions, and a default serial executor. Every actor instance has its own instance of this default serial executor and runs its code on it. Since the serial executor, like a serial dispatch queue, only runs a single job at a time, this prevents concurrent accesses to the actor’s state.

Custom executors

As of Swift 5.6, executors are an implementation detail of Swift’s concurrency system, but it’s almost certain that they will become an official feature fairly soon. Why? Because it can sometimes be useful to have more control over the execution context of async code. Some examples are listed in a draft proposal for allowing developers to implement custom executors that was first pitched in February 2021 but then didn’t make the cut for Swift 5.5.

@MainActor already uses the unofficial ability for an actor to provide a custom executor, and we’re going to do the same for our reimplementation. A serial executor that runs its job on the main dispatch queue is implemented as follows. The interesting bit is the enqueue method, where we tell the job to run on the main dispatch queue:

final class MainExecutor: SerialExecutor {
  func asUnownedSerialExecutor() -> UnownedSerialExecutor {
    UnownedSerialExecutor(ordinary: self)
  }

  func enqueue(_ job: UnownedJob) {
    DispatchQueue.main.async {
      job._runSynchronously(on: self.asUnownedSerialExecutor())
    }
  }
}

We’re responsible for keeping an instance of the executor alive, so let’s store it in a global:

private let mainExecutor = MainExecutor()

Finally, we need to tell our global actor to use the new executor:

import Dispatch

@globalActor
final actor MyMainActor {
  // ...
  
  // Requirement from the implicit GlobalActor conformance
  static var sharedUnownedExecutor: UnownedSerialExecutor {
    mainExecutor.asUnownedSerialExecutor()
  }

  // Requirement from the implicit Actor conformance
  nonisolated var unownedExecutor: UnownedSerialExecutor {
    mainExecutor.asUnownedSerialExecutor()
  }
}

That’s all there is to reimplement the basics of @MainActor.

Conclusion

The full code is on GitHub, including a usage example to demonstrate that the @MyMainActor annotations work.

John McCall’s draft proposal for custom executors is worth reading, particularly the philosophy section. It’s an easy-to-read summary of some of the design principles behind Swift’s concurrency system:

Swift’s concurrency design sees system threads as expensive and rather precious resources. …

It is therefore best if the system allocates a small number of threads — just enough to saturate the available cores — and for those threads [to] only block for extended periods when there is no pending work in the program. Individual functions cannot effectively make this decision about blocking, because they lack a holistic understanding of the state of the program. Instead, the decision must be made by a centralized system which manages most of the execution resources in the program.

This basic philosophy of how best to use system threads drives some of the most basic aspects of Swift’s concurrency design. In particular, the main reason to add async functions is to make it far easier to write functions that, unlike standard functions, will reliably abandon a thread when they need to wait for something to complete.

And:

The default concurrent executor is used to run jobs that don’t need to run somewhere more specific. It is based on a fixed-width thread pool that scales to the number of available cores. Programmers therefore do not need to worry that creating too many jobs at once will cause a thread explosion that will starve the program of resources.

AttributedString’s Codable format and what it has to do with Unicode

Here’s a simple AttributedString with some formatting:

import Foundation

let str = try! AttributedString(
  markdown: "Café **Sol**",
  options: .init(interpretedSyntax: .inlineOnly)
)

AttributedString is Codable. If your task was to design the encoding format for an attributed string, what would you come up with? Something like this seems reasonable (in JSON with comments):

{
  "text": "Café Sol",
  "runs": [
    {
      // start..<end in Character offsets
      "range": [5, 8],
      "attrs": {
        "strong": true
      }
    }
  ]
}

This stores the text alongside an array of runs of formatting attributes. Each run consists of a character range and an attribute dictionary.

Unicode is complicated

But this format is bad and can break in various ways. The problem is that the character offsets that define the runs aren’t guaranteed to be stable. The definition of what constitutes a Character, i.e. a user-perceived character, or a Unicode grapheme cluster, can and does change in new Unicode versions. If we decoded an attributed string that had been serialized

  • on a different OS version (before Swift 5.6, Swift used the OS’s Unicode library for determining character boundaries),
  • or by code compiled with a different Swift version (since Swift 5.6, Swift uses its own grapheme breaking algorithm that will be updated alongside the Unicode standard)1,

the character ranges might no longer represent the original intent, or even become invalid.

Update April 11, 2024: See this Swift forum post I wrote for an example where the Unicode rules for grapheme cluster segmentation changed for flag emoji. This change caused a corresponding change in how Swift counts the Characters in a string containing consecutive flags, such as "🇦🇷🇯🇵".
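To make the flag example concrete: each flag consists of two regional-indicator scalars, and current Swift versions segment consecutive flags into one Character per pair (a sketch; the exact character count depends on the Unicode rules your Swift runtime implements):

```swift
let flags = "🇦🇷🇯🇵" // Argentina + Japan

// Four Unicode scalars: two regional indicators per flag.
flags.unicodeScalars.count // → 4

// Two user-perceived characters on current Swift versions;
// older grapheme-breaking rules segmented runs of flags differently.
flags.count // → 2

// Byte counts are stable: each regional indicator is 4 bytes in UTF-8.
flags.utf8.count // → 16
```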

Normalization forms

So let’s use UTF-8 byte offsets for the ranges, I hear you say. This avoids the first issue but still isn’t safe, because some characters, such as the é in the example string, have more than one representation in Unicode: it can be either the standalone character é (Latin small letter e with acute) or the combination of e + ◌́ (Combining acute accent). The Unicode standard calls these variants normalization forms.2 The first form needs 2 bytes in UTF-8, whereas the second uses 3 bytes, so subsequent ranges would be off by one if the string and the ranges used different normalization forms.

Now in theory, the string itself and the ranges should use the same normalization form upon serialization, avoiding the problem. But this is almost impossible to guarantee if the serialized data passes through other systems that may (inadvertently or not) change the Unicode normalization of the strings that pass through them.
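The off-by-one problem is easy to demonstrate. This sketch stores a UTF-8 byte offset for the word “Sol” and then applies that offset to a canonically equivalent string in the other normalization form:

```swift
// The same text in two normalization forms. Swift's == compares
// canonical equivalence, so the strings are equal.
let precomposed = "Caf\u{E9} Sol"   // é as one scalar (2 UTF-8 bytes)
let decomposed  = "Cafe\u{301} Sol" // e + combining acute (3 UTF-8 bytes)
precomposed == decomposed // → true

// "Sol" starts at byte offset 6 in the precomposed form…
let bytes = Array(precomposed.utf8)
String(decoding: bytes[6...], as: UTF8.self) // → "Sol"

// …but applying the same offset to the decomposed form is off by one.
let otherBytes = Array(decomposed.utf8)
String(decoding: otherBytes[6...], as: UTF8.self) // → " Sol"
```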

A safer option would be to store the text not as a string but as a blob of UTF-8 bytes, because serialization/networking/storage layers generally don’t mess with binary data. But even then you’d have to be careful in the encoding and decoding code to apply the formatting attributes before any normalization takes place. Depending on how your programming language handles Unicode, this may not be so easy.

Foundation’s solution

The people on the Foundation team know all this, of course, and chose a better encoding format for AttributedString. Let’s take a look.3

let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
let jsonData = try encoder.encode(str)
let json = String(decoding: jsonData, as: UTF8.self)

This is how our sample string is encoded:

[
  "Café ",
  {

  },
  "Sol",
  {
    "NSInlinePresentationIntent" : 2
  }
]

This is an array of runs, where each run consists of a text segment and a dictionary of formatting attributes. The important point is that the formatting attributes are directly associated with the text segments they belong to, not indirectly via brittle byte or character offsets. (This encoding format is also more space-efficient and possibly better represents the in-memory layout of AttributedString, but that’s beside the point for this discussion.)

There’s still a (smaller) potential problem here if the character boundary rules change for code points that span two adjacent text segments: the last character of run N and the first character of run N+1 might suddenly form a single character (grapheme cluster) in a new Unicode version. In that case, the decoding code will have to decide which formatting attributes to apply to this new character. But this is a much smaller issue because it only affects the characters in question. Unlike in our original format, where an off-by-one error in run N would shift all subsequent runs, all other runs here remain untouched.

Related forum discussion: Itai Ferber on why Character isn’t Codable.

Storing string offsets is a bad idea

We can extract a general lesson out of this: Don’t store string indices or offsets if possible. They aren’t stable over time or across runtime environments.

  1. On Apple platforms, the Swift standard library ships as part of the OS so I’d guess that the standard library’s grapheme breaking algorithm will be based on the same Unicode version that ships with the corresponding OS version. This is effectively no change in behavior compared to the pre-Swift 5.6 world (where the OS’s ICU library determined the Unicode version).

    On non-ABI-stable platforms (e.g. Linux and Windows), the Unicode version used by your program is determined by the version of the Swift compiler your program is compiled with, if my understanding is correct. ↩︎

  2. The Swift standard library doesn’t have APIs for Unicode normalization yet, but you can use the corresponding NSString APIs, which are automatically added to String when you import Foundation:

    import Foundation
    
    let precomposed = "é".precomposedStringWithCanonicalMapping
    let decomposed  = "é".decomposedStringWithCanonicalMapping
    precomposed == decomposed // → true
    precomposed.unicodeScalars.count // → 1
    decomposed.unicodeScalars.count  // → 2
    precomposed.utf8.count // → 2
    decomposed.utf8.count  // → 3
    

    ↩︎

  3. By the way, I see a lot of code using String(data: jsonData, encoding: .utf8)! to create a string from UTF-8 data. String(decoding: jsonData, as: UTF8.self) saves you a force-unwrap and is arguably “cleaner” because it doesn’t depend on Foundation. Since it never fails, it’ll insert replacement characters into the string if it encounters invalid byte sequences. ↩︎
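A quick demonstration of the difference footnote 3 describes, using plain standard-library and Foundation APIs:

```swift
import Foundation

let valid: [UInt8] = [0x48, 0x69]         // "Hi"
let invalid: [UInt8] = [0x48, 0x69, 0xFF] // "Hi" + an invalid UTF-8 byte

// The Foundation initializer is failable and returns nil for invalid input.
String(data: Data(invalid), encoding: .utf8) // → nil

// The standard library initializer never fails; it substitutes
// U+FFFD (the replacement character) for invalid byte sequences.
String(decoding: invalid, as: UTF8.self) // → "Hi�"
String(decoding: valid, as: UTF8.self)   // → "Hi"
```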

A heterogeneous dictionary with strong types in Swift

The environment in SwiftUI is sort of like a global dictionary but with stronger types: each key (represented by a key path) can have its own specific value type. For example, the \.isEnabled key stores a boolean value, whereas the \.font key stores an Optional<Font>.

I wrote a custom dictionary type that can do the same thing. The HeterogeneousDictionary struct I show in this article stores mixed key-value pairs where each key defines the type of value it stores. The public API is fully type-safe, no casting required.

Usage

I’ll start with an example of the finished API. Here’s a dictionary for storing text formatting attributes:

import AppKit

var dict = HeterogeneousDictionary<TextAttributes>()

dict[ForegroundColor.self] // → nil
// The value type of this key is NSColor
dict[ForegroundColor.self] = NSColor.systemRed
dict[ForegroundColor.self] // → NSColor.systemRed

dict[FontSize.self] // → nil
// The value type of this key is Double
dict[FontSize.self] = 24
dict[FontSize.self] // → 24 (type: Optional<Double>)

We also need some boilerplate to define the set of keys and their associated value types. The code to do this for three keys (font, font size, foreground color) looks like this:

// The domain (aka "keyspace")
enum TextAttributes {}

struct FontSize: HeterogeneousDictionaryKey {
  typealias Domain = TextAttributes
  typealias Value = Double
}

struct Font: HeterogeneousDictionaryKey {
  typealias Domain = TextAttributes
  typealias Value = NSFont
}

struct ForegroundColor: HeterogeneousDictionaryKey {
  typealias Domain = TextAttributes
  typealias Value = NSColor
}

Yes, this is fairly long, which is one of the downsides of this approach. At least you only have to write it once per “keyspace”. I’ll walk you through it step by step.

Notes on the API

Using types as keys

As you can see in this line, the dictionary keys are types (more precisely, metatype values):

dict[FontSize.self] = 24

This is another parallel with the SwiftUI environment, which also uses types as keys (the public environment API uses key paths as keys, but you’ll see the types underneath if you ever define your own environment key).

Why use types as keys? We want to establish a relationship between a key and the type of values it stores, and we want to make this connection known to the type system. The way to do this is by defining a type that sets up this link.

Domains aka “keyspaces”

A standard Dictionary is generic over its key and value types. This doesn’t work for our heterogeneous dictionary because we have multiple value types (and we want more type safety than Any provides). Instead, a HeterogeneousDictionary is parameterized with a domain:

// The domain (aka "keyspace")
enum TextAttributes {}

var dict = HeterogeneousDictionary<TextAttributes>()

The domain is the “keyspace” that defines the set of legal keys for this dictionary. Only keys that belong to the domain can be put into the dictionary. The domain type has no protocol constraints; you can use any type for this.

Defining keys

A key is a type that conforms to the HeterogeneousDictionaryKey protocol. The protocol has two associated types that define the relationships between the key and its domain and value type:

protocol HeterogeneousDictionaryKey {
  /// The "namespace" the key belongs to.
  associatedtype Domain
  /// The type of values that can be stored
  /// under this key in the dictionary.
  associatedtype Value
}

You define a key by creating a type and adding the conformance:

struct Font: HeterogeneousDictionaryKey {
  typealias Domain = TextAttributes
  typealias Value = NSFont
}

Implementation notes

A minimal implementation of the dictionary type is quite short:

struct HeterogeneousDictionary<Domain> {
  private var storage: [ObjectIdentifier: Any] = [:]

  var count: Int { self.storage.count }

  subscript<Key>(key: Key.Type) -> Key.Value?
    where Key: HeterogeneousDictionaryKey, Key.Domain == Domain
  {
    get { self.storage[ObjectIdentifier(key)] as! Key.Value? }
    set { self.storage[ObjectIdentifier(key)] = newValue }
  }
}

Internal storage

private var storage: [ObjectIdentifier: Any] = [:]

Internally, HeterogeneousDictionary uses a dictionary of type [ObjectIdentifier: Any] for storage. We can’t use a metatype such as Font.self directly as a dictionary key because metatypes aren’t hashable. But we can use the metatype’s ObjectIdentifier, which is essentially the address of the type’s representation in memory.
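A quick illustration of why this works: ObjectIdentifiers of the same metatype compare equal and are hashable, so they make valid dictionary keys (a standalone sketch):

```swift
// Two ObjectIdentifiers of the same metatype are equal…
let a = ObjectIdentifier(String.self)
let b = ObjectIdentifier(String.self)
a == b // → true

// …and hashable, so they can key a dictionary.
var storage: [ObjectIdentifier: Any] = [:]
storage[ObjectIdentifier(String.self)] = "hello"
storage[ObjectIdentifier(Int.self)] = 42
storage[a] as? String // → "hello"
```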

Subscript

subscript<Key>(key: Key.Type) -> Key.Value?
  where Key: HeterogeneousDictionaryKey, Key.Domain == Domain
{
  get { self.storage[ObjectIdentifier(key)] as! Key.Value? }
  set { self.storage[ObjectIdentifier(key)] = newValue }
}

The subscript implementation constrains its arguments to keys in the same domain as the dictionary’s domain. This ensures that you can’t subscript a dictionary for text attributes with some other unrelated key. If you find this too restrictive, you could also remove all references to the Domain type from the code; it would still work.
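To see the constraint in action, here’s a self-contained sketch (repeating the minimal definitions from above) in which a key from an unrelated domain is rejected at compile time:

```swift
protocol HeterogeneousDictionaryKey {
  associatedtype Domain
  associatedtype Value
}

struct HeterogeneousDictionary<Domain> {
  private var storage: [ObjectIdentifier: Any] = [:]

  subscript<Key>(key: Key.Type) -> Key.Value?
    where Key: HeterogeneousDictionaryKey, Key.Domain == Domain
  {
    get { self.storage[ObjectIdentifier(key)] as! Key.Value? }
    set { self.storage[ObjectIdentifier(key)] = newValue }
  }
}

enum TextAttributes {}
struct FontSize: HeterogeneousDictionaryKey {
  typealias Domain = TextAttributes
  typealias Value = Double
}

enum OtherDomain {}
struct UnrelatedKey: HeterogeneousDictionaryKey {
  typealias Domain = OtherDomain
  typealias Value = Int
}

var dict = HeterogeneousDictionary<TextAttributes>()
dict[FontSize.self] = 24 // OK: FontSize belongs to TextAttributes

// dict[UnrelatedKey.self] = 1
// ✗ Does not compile: Key.Domain (OtherDomain) must equal TextAttributes.
```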

Using key paths as keys

Types as keys don’t have the best syntax. I think you’ll agree that dict[FontSize.self] doesn’t read as nicely as dict[\.fontSize], so I looked into providing a convenience API based on key paths.

My preferred solution would be if users could define static helper properties on the domain type, which the dictionary subscript would then accept as key paths, like so:

extension TextAttributes {
  static var fontSize: FontSize.Type { FontSize.self }
  // Same for font and foregroundColor
}

Sadly, this doesn’t work because Swift 5.6 doesn’t (yet?) support key paths to static properties (relevant forum thread).

We have to introduce a separate helper type that acts as a namespace for these helper properties. Since the dictionary type can create an instance of the helper type, it can access the non-static helper properties. This doesn’t feel as clean to me, but it works. I called the helper type HeterogeneousDictionaryValues as a parallel with EnvironmentValues, which serves the same purpose in SwiftUI.

The code for this is included in the Gist.

Drawbacks

Is the HeterogeneousDictionary type useful? I’m not sure. I wrote this mostly as an exercise and haven’t used it yet in a real project. In most cases, if you need a heterogeneous record with full type safety, it’s probably easier to just write a new struct where each property is optional — the boilerplate for defining the dictionary keys is certainly longer and harder to read.
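For comparison, the plain-struct alternative is just a few lines. This sketch uses stand-in value types so it stays platform-independent; the property names are hypothetical:

```swift
// Each property is optional; "no value" plays the role of a missing key.
struct TextAttributeRecord {
  var fontSize: Double?
  var fontName: String?
  var foregroundColorName: String?
}

var record = TextAttributeRecord()
record.fontSize = 24
record.fontSize            // → Optional(24.0)
record.foregroundColorName // → nil
```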

For representing partial values, i.e. struct-like records where some but not all properties have values, take a look at these two approaches from 2018:

These use a similar storage approach (a dictionary of Any values with custom accessors to make it type-safe), but they use an existing struct as the domain/keyspace, combined with partial key paths into that struct as the keys. I honestly think that this is the better design for most situations.

Aside from the boilerplate, here are a few more weaknesses of HeterogeneousDictionary:

  • Storage is inefficient because values are boxed in Any containers
  • Accessing values is inefficient: every access requires unboxing
  • HeterogeneousDictionary can’t easily conform to Sequence and Collection because these protocols require a uniform element type

The code

The full code is available in a Gist.

Advanced Swift, fifth edition

We released the fifth edition of our book Advanced Swift a few days ago. You can buy the ebook on the objc.io site. The hardcover print edition is printed and sold by Amazon (amazon.com, amazon.co.uk, amazon.de).

Highlights of the new edition:

  • Fully updated for Swift 5.6
  • A new Concurrency chapter covering async/await, structured concurrency, and actors
  • New content on property wrappers, result builders, protocols, and generics
  • The print edition is now a hardcover (for the same price)
  • Free update for owners of the ebook

A growing book for a growing language

Updating the book always turns out to be more work than I expect. Swift has grown substantially since our last release (for Swift 5.0), and the size of the book reflects this. The fifth edition is 76 % longer than the first edition from 2016. This time, we barely stayed under 1 million characters:

Bar chart of the character count growth of the first five editions of Advanced Swift, from 537k (first edition) to 947k characters (fifth edition)
Character counts of Advanced Swift editions from 2016–2022.

Many thanks to our editor, Natalye, for reading all this and improving our Dutch/German dialect of English.

Hardcover

For the first time, the print edition comes in hardcover (for the same price). Being able to offer this makes me very happy. The hardcover book looks much better and is more likely to stay open when laid flat on a table.

We also increased the page size from 15×23 cm (6×9 in) to 18×25 cm (7×10 in) to keep the page count manageable (Amazon’s print on demand service limits hardcover books to 550 pages).

I hope you enjoy the new edition. If you decide to buy the book or if you bought it in the past, thank you very much! And if you’re willing to write a review on Amazon, we’d appreciate it.

Synchronous functions can support cancellation too

Cancellation is a Swift concurrency feature, but this doesn’t mean it’s only available in async functions. Synchronous functions can also support cancellation, and by doing so they’ll become better concurrency citizens when called from async code.

Motivating example: JSONDecoder

Supporting cancellation makes sense for functions that can block for significant amounts of time (say, more than a few milliseconds). Take JSON decoding as an example. Suppose we wrote an async function that performs a network request and decodes the downloaded JSON data:

import Foundation

func loadJSON<T: Decodable>(_ type: T.Type, from url: URL) async throws -> T {
  let (data, _) = try await URLSession.shared.data(from: url)
  return try JSONDecoder().decode(type, from: data)
}

The JSONDecoder.decode call is synchronous: it will block its thread until it completes. And if the download is large, decoding may take hundreds of milliseconds or even longer.

Avoid blocking if possible

In general, async code should avoid calling blocking APIs if possible. Instead, async functions are expected to suspend regularly to give waiting tasks a chance to run. But JSONDecoder doesn’t have an async API (yet?), and I’m not even sure it can provide one that works with the existing Codable protocols, so let’s work with what we have. And if you think about it, it’s not totally unreasonable for JSONDecoder to block. After all, it is performing CPU-intensive work (assuming the data it’s working on doesn’t have to be paged in), and this work has to happen on some thread.

Async/await works best for I/O-bound functions that spend most of their time waiting for the disk or the network. If an I/O-bound function suspends, the runtime can give the function’s thread to another task that can make more productive use of the CPU.

Responding to cancellation

Cancellation is a cooperative process. Canceling a task only sets a flag in the task’s metadata. It’s up to individual functions to periodically check for cancellation and abort if necessary. If a function doesn’t respond promptly to cancellation or outright ignores the cancellation flag, the program may appear to the user to be stalling.

Now, if the task is canceled while JSONDecoder.decode is running, our loadJSON function can’t react properly because it can’t interrupt the decoding process. To fix this, the decode method would have to perform its own periodic cancellation checks, using the usual APIs, Task.isCancelled or Task.checkCancellation(). These can be called from anywhere, including synchronous code.
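For illustration, here’s a sketch of what such a periodic check could look like in a synchronous, CPU-bound function (the function and its check interval are made up):

```swift
// Squaring numbers stands in for expensive work. The function works
// unchanged when called outside any task, where the cancellation check
// is simply a no-op.
func processValues(_ values: [Int]) throws -> [Int] {
    var result: [Int] = []
    result.reserveCapacity(values.count)
    for (i, value) in values.enumerated() {
        // Check periodically rather than on every iteration to keep
        // the overhead negligible.
        if i.isMultiple(of: 1000) {
            try Task.checkCancellation() // throws CancellationError if cancelled
        }
        result.append(value * value)
    }
    return result
}
```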

Internals

How does this work? How can synchronous code access task-specific metadata? Here’s the code for Task.isCancelled in the standard library:

extension Task where Success == Never, Failure == Never {
  public static var isCancelled: Bool {
    withUnsafeCurrentTask { task in
      task?.isCancelled ?? false
    }
  }
}

This calls withUnsafeCurrentTask to get a handle to the current task. When the runtime schedules a task to run on a particular thread, it stores a pointer to the task object in that thread’s thread-local storage, where any code running on that thread – sync or async – can access it.

If task == nil, there is no current task, i.e. we haven’t been called (directly or indirectly) from an async function. In this case, cancellation doesn’t apply, so we can return false.

If we do have a task handle, we ask the task for its isCancelled flag and return that. Reading the flag is an atomic (thread-safe) operation because other threads may be writing to it concurrently.

Conclusion

I hope we’ll see cancellation support in the Foundation encoders and decoders in the future. If you have written synchronous functions that can potentially block their thread for a significant amount of time, consider adding periodic cancellation checks. It’s a quick way to make your code work better with the concurrency system, and you don’t even have to change your API to do it.

Update February 2, 2022: Jordan Rose argues that cancellation support for synchronous functions should be opt-in because it introduces a failure mode that’s hard to reason about locally as the “source” of the failure (the async context) may be several levels removed from the call site. Definitely something to consider!

Cancellation can come in many forms

In Swift’s concurrency model, cancellation is cooperative. To be a good concurrency citizen, code must periodically check if the current task has been cancelled, and react accordingly.

You can check for cancellation by calling Task.isCancelled or with try Task.checkCancellation() — the latter will exit by throwing a CancellationError if the task has been cancelled.

By convention, functions should react to cancellation by throwing a CancellationError. But this convention isn’t enforced, so callers must be aware that cancellation can manifest itself in other forms. Here are some other ways functions might respond to cancellation:

  • Throw a different error. For example, the async networking APIs in Foundation, such as URLSession.data(from: URL), throw a URLError with the code URLError.Code.cancelled on cancellation. It’d be nice if URLSession translated this error to CancellationError, but it doesn’t.

  • Return a partial result. A function that has completed part of its work when cancellation occurs may choose to return a partial result rather than throwing the work away and aborting. In fact, this may be the best choice for a non-throwing function. But note that this behavior can be extremely surprising to callers, so be sure to document it clearly.

  • Do nothing. Functions are supposed to react promptly to cancellation, but callers must assume the worst. Even if cancelled, a function might run to completion and finish normally. Or it might eventually respond to cancellation by aborting, but not promptly because it doesn’t perform its cancellation checks often enough.

So as the caller of a function, you can’t really rely on specific cancellation behavior unless you know how the callee is implemented. Code that wants to know if its task has been cancelled should itself call Task.isCancelled, rather than counting on catching a CancellationError from a callee.
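One way to cope as a caller is to normalize the callee’s error into the conventional form. For example, a small helper (my own sketch, not a Foundation API) that translates URLSession’s cancellation error into CancellationError:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking // URLError lives here on Linux
#endif

// Translate URLSession's URLError(.cancelled) into the conventional
// CancellationError; pass every other error through unchanged.
func normalizingCancellation(_ error: Error) -> Error {
    if let urlError = error as? URLError, urlError.code == .cancelled {
        return CancellationError()
    }
    return error
}
```

A caller would use this in a catch block, e.g. `catch { throw normalizingCancellation(error) }`, so that downstream code only has to handle one cancellation shape.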

What is structured concurrency?

Structured concurrency is a new term for most Swift developers. This is an attempt to decipher its meaning.

What’s the difference between structured concurrency and async/await?

Structured concurrency has two aspects, structured and concurrency. async/await on its own is structured but not concurrent.

What do you mean by “async/await is structured”?

It’s an analogy to structured programming of the 1950s and 1960s, when the now-ubiquitous control flow structures — if/then/else, loops, subroutines, lexical scopes for variables — were being invented. Today, nearly all programming is structured programming, so we take it as a given.

The async/await syntax lets us write asynchronous code using the same control flow structures we use for synchronous code:

  • Sequential control flow from top to bottom
  • Async functions can return the results of asynchronous computations
  • Error handling with throws/try/catch
  • Loops (using AsyncSequence), including support for break and continue

In contrast, using completion handlers for async programming is unstructured: control flow jumps all over the place, asynchronous functions can’t return their results directly, and native error handling doesn’t work.
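A small sketch illustrates the point: the same pipeline written with completion handlers would be nested callbacks with manual error propagation (step1 and step2 are stand-ins for real async work):

```swift
struct FetchError: Error {}

// Stand-ins for real asynchronous work.
func step1(_ input: Int) async throws -> Int {
    guard input >= 0 else { throw FetchError() }
    return input + 1
}
func step2(_ input: Int) async throws -> Int { input * 2 }

func pipeline(_ input: Int) async -> Int? {
    do {
        let a = try await step1(input) // sequential, top to bottom
        let b = try await step2(a)     // results returned directly
        return b
    } catch {
        return nil                     // ordinary try/catch error handling
    }
}
```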

What do you mean by “async/await is not concurrent”?

async/await doesn’t introduce concurrency, i.e. the execution of multiple tasks at the same time. In Swift’s model, an async function always executes in a task, and every task has a single path of execution. That is, when an async function suspends, it’s really the task that gets suspended until the function can resume.

Concurrency is achieved by having more than one task, allowing the runtime to run another task while the first one is suspended. And task creation is always explicit in Swift: just calling an async function with await will never create a new task. (That’s not to say that the called function won’t create any new tasks in its implementation, but the function will always start executing on the calling task.)

Structured concurrency

This is where structured concurrency comes in. The two structured concurrency constructs, async let and task groups, create new tasks that are then executed concurrently, with each other and the originating task.
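Sketched with a toy workload (compute is a stand-in for real async work), the two constructs look like this:

```swift
func compute(_ n: Int) async -> Int { n * n } // stand-in for real work

// async let: a fixed number of child tasks, one per binding.
func loadPair() async -> (Int, Int) {
    async let a = compute(1)     // child task starts immediately
    async let b = compute(2)     // runs concurrently with `a`
    return (await a, await b)    // parent waits for both children here
}

// Task group: a dynamic number of child tasks.
func sumOfSquares(_ inputs: [Int]) async -> Int {
    await withTaskGroup(of: Int.self) { group in
        for input in inputs {
            group.addTask { await compute(input) } // one child per input
        }
        return await group.reduce(0, +) // parent collects all results
    }
}
```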

So we have concurrency, but why “structured”?

Tasks created with async let or in a task group become child tasks of the originating task. This hierarchy has some nice properties, such as automatic cancellation propagation from parent to children.

But the fundamental rule that puts the structure in structured concurrency is this: Child tasks can’t outlive their parent’s lifetime.

Like in structured programming, where a called function must return before its caller can return, and where a local variable can’t outlive the scope it’s defined in, a parent task always waits for its child tasks to complete before leaving the scope they’re defined in.

This simple rule makes control flow in concurrent code much easier to follow, just like structured programming made programs easier to reason about compared to code littered with goto statements. Nathaniel J. Smith explores this analogy between structured programming and structured concurrency in his fantastic article, Notes on structured concurrency, or: Go statement considered harmful, which I believe also influenced the design of Swift’s concurrency model.

There’s also unstructured concurrency

In contrast, the Task { … } and Task.detached { … } APIs create unstructured tasks: new top-level tasks for long-running jobs that should outlive the current scope or fire-and-forget style work the current task doesn’t need to wait for to complete. You can mix and match structured and unstructured concurrency, e.g. an unstructured task will often use structured child tasks to perform its work.

How Swift runs an async executable

Here’s one of the simplest async/await programs you can write in Swift (using an @main program entry point):

@main struct Main {
  static func main() async {
    print("Sleeping")
    await Task.sleep(1_000_000_000) // Sleep for 1 second
    print("Done")
  }
}

How is this program executed? Async functions, like our main() method, can only be called from an async context. Who creates this context on launch, and how?

The Swift runtime does, with the help of the compiler. The compiler creates an entry point for the program that looks something like this:

_runAsyncMain {
  await Main.main()
}

_runAsyncMain is a function that’s defined in the Swift concurrency runtime library. You can read the full implementation in the Swift repo. Here, I’ve simplified it to its essence (without error handling or OS differences):

public func _runAsyncMain(_ asyncFun: @escaping () async -> ()) {
  Task.detached {
    await asyncFun()
    exit(0)
  }
  _asyncMainDrainQueue()
}

So the Swift runtime creates a detached task as the execution context for our main method. It’s this hidden task that becomes the parent task of all other structured child tasks our code might create.

_runAsyncMain then proceeds by calling _asyncMainDrainQueue (implementation), another runtime function that passes control to GCD by calling dispatchMain or CFRunLoopRun (on Apple platforms).

_asyncMainDrainQueue never returns. The program will run until our main method returns, and then exit. That’s it.

Swift needs a better language reference

In August 2020, I posted a rant on the Swift forums about the poor state of Swift documentation. Nothing came of it, but I want to reiterate one point I made then: the Swift project sorely needs a searchable, linkable language reference.

To be fair, Swift does have a language reference: the eponymous section in The Swift Programming Language (TSPL) contains most of the information I’d expect from such a resource. But that section isn’t well structured to serve as an actual reference:

TSPL is not searchable

The TSPL website doesn’t have a search field. Even if it had one, I imagine it would be a full-text search over the entire site, as is common (and appropriate) for a book. A language reference needs a different search engine:

  • Searching for keywords (if, case, where) must reliably find the documentation for the keyword as the top result. I don’t want to see the hundreds of pages that contain the word “if” in their body text.

  • I’d love to be able to search for punctuation. Imagine if you could search for a symbol such as # and it would show you a list of all syntax elements that use this symbol.1 This would be very informative and a great way to explore the language, not just for beginners — especially with good IDE integration (see below). Swift is such a big and complex language that most people won’t know every language feature.

A language reference needs a search engine that knows to handle keywords and punctuation.

TSPL is not linkable

Pages in TSPL tend to be long, with many separate items crammed into a single page. For example, all compiler attributes are documented on a single page.

Sharing a link to a specific attribute, such as @resultBuilder, is difficult if you know your way around HTML and pretty much impossible if you don’t (not to mention the bad URL).

As a reader, opening such a link is disorienting as it drops you in the middle of a very long page, 95 % of which is irrelevant to you.

The reader experience is even poorer when you arrive from a search engine (as most people would because the site has no search function): TSPL is one of the top results for swift resultbuilder on Google, but it drops you at the top of the superlong page on Attributes, with no indication where to find the information you’re looking for.

Every language construct, keyword, attribute, and compiler directive should have its own, linkable page.

TSPL is structured wrong

The Language Reference section in TSPL is organized as if it were written for parser or compiler developers. It uses the language’s grammar as a starting point and branches out into expressions, statements, declarations, and so on.

I don’t know about you, but as a user of the language, that’s not how I think about Swift or how I search for documentation.

In addition to a good search engine, a language reference needs an alphabetical index of every keyword or other syntax element, with links to the respective detail page.

IDE integration

I was careful to make this a complaint about the documentation for Swift and not about the (equally poor) state of Apple’s developer documentation. Swift is not limited to app development for Apple devices, and I believe it’s essential for Swift to position itself as a standalone project if it wants to be perceived as a viable general-purpose language.

It’s good that TSPL is hosted on swift.org and not developer.apple.com, and that’s also where this new language reference I’m envisioning should live. (I also think it’s wrong to host the Swift API documentation on developer.apple.com.)

But once we have this language reference, Apple should of course integrate it into Xcode for offline search and context-sensitive help. Imagine if you could Option-click not only identifiers but any token in a source file to see its documentation.

A few examples:

  • Clicking on if case let would explain the pattern matching syntax.
  • Clicking on in would explain the various closure expression syntax variants.
  • Clicking on #fileID would show you an example of the resulting string and compare it to #file and #filePath.
  • Clicking on @propertyWrapper would explain what a property wrapper is and how you can implement one.
  • Clicking on @dynamicMemberLookup would explain its purpose and what you have to do to implement it.
  • Clicking on < in a generic declaration would explain what generic parameters are and how they are used.
  • Clicking on ? would show all language elements that use a question mark (shorthand for Optionals, optional chaining, Optional pattern matching, try?).
  • Clicking on /// would list the magic words Xcode understands in doc comments.

You get the idea. This would be such a big help, not only for beginners.

To summarize, this is the sad state of searching for language features in Xcode’s documentation viewer:

Xcode documentation viewer showing meaningless results when searching for 'guard'
guardian let me watch youtube else { throw fit }
Xcode documentation viewer showing meaningless results when searching for 'associatedtype'
Nope, this isn’t what I was looking for.

And this mockup shows how it could be:

Mockup of an imagined Xcode documentation popover for #fileID
Yes, I rebuilt Xcode’s documentation popover in SwiftUI for this mockup, syntax highlighting and all.
  1. The # symbol is used in these places:

    Did I miss anything? ↩︎

How OrderedSet works

The new Swift Collections library provides efficient Swift implementations for useful data structures, starting with Deque, OrderedSet, and OrderedDictionary in the initial release.

The documentation for OrderedSet says this about the implementation:

An OrderedSet stores its members in a regular Array value (exposed by the elements property). It also maintains a standalone hash table containing array indices alongside the array; this is used to implement fast membership tests.

I didn’t understand how this standalone hash table can get away with only storing array indices — wouldn’t it also need to store the corresponding elements? So I checked out the source code to find out.

I imagined a pseudo implementation of OrderedSet that uses an array (for the ordered elements) and a dictionary that maps elements to array indices (for fast lookups), like this:

struct OrderedSet<Element: Hashable> {
  private var elements: [Element]
  /// For fast lookups. Maps from Element to Array.Index.
  private var hashTable: [Element: Int]
}

This is conceptually correct, but storing every element twice is obviously not memory-efficient. So how does OrderedSet do it? My error was to equate “hash table” with “set or dictionary” in my head, but it turns out there are other kinds of hash tables.

OrderedSet implements a custom hash table that stores one set of values (the array indices), but uses entirely different values (the set’s elements) for hashing and equality checks.

For testing if an item is in the set, OrderedSet performs these steps:

  1. Compute the item’s hash value.
  2. Find the bucket for this hash value in the hash table.
  3. If the bucket is empty, the item is not in the set ➡ return false.
  4. Otherwise, use the array index in the bucket to access the corresponding element in the elements array.
  5. Compare the item to be tested with the stored element.
  6. If they’re equal, we found the item ➡ return true.
  7. Otherwise, advance to the next bucket and go back to step 3.

For inserting an item at a specific index (appending an item is a special case of this):

  1. Check if the item is already in the set (see above). If yes, we’re done because set elements must be unique. If no, we now have an empty bucket in the hash table to store the item’s array index in.
  2. Store the array index in the empty bucket we found in step 1.
  3. Adjust the existing array indices in the hash table to account for the inserted item. Conceptually, we need to increment every array index that’s larger than the index of the inserted item. The actual implementation is cleverer, but that’s not relevant for this discussion.
  4. Insert the item into the elements array at the specified index.
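To make these steps concrete, here’s a toy Swift sketch of such a table (linear probing, fixed bucket count, no removal or growth; the real implementation is far more refined):

```swift
struct TinyOrderedSet<Element: Hashable> {
    private(set) var elements: [Element] = []
    // Each bucket holds only an index into `elements`; nil means empty.
    private var buckets: [Int?] = Array(repeating: nil, count: 64)

    // The membership steps: returns the bucket that either holds the
    // element's array index or is the empty slot where it would go.
    private func bucket(for element: Element) -> Int {
        var i = element.hashValue % buckets.count
        if i < 0 { i += buckets.count }
        while let arrayIndex = buckets[i] {
            // Follow the stored index into `elements` for the comparison.
            if elements[arrayIndex] == element { return i }
            i = (i + 1) % buckets.count // linear probing: next bucket
        }
        return i
    }

    func contains(_ element: Element) -> Bool {
        buckets[bucket(for: element)] != nil
    }

    mutating func append(_ element: Element) {
        let i = bucket(for: element)
        guard buckets[i] == nil else { return } // already present
        buckets[i] = elements.count // store the index, not the element
        elements.append(element)
    }
}
```

Appending to the end keeps things simple here; inserting in the middle would additionally require the index-adjustment step described above.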

This strategy saves a ton of memory, especially for large element types.

In addition, the hash table saves memory by only allocating as many bits per bucket as needed for the current capacity. For example, an OrderedSet with a current capacity of 180 elements only needs 8 bits per bucket to store every possible array index. As elements are added, OrderedSet will regenerate the hash table with larger buckets before it becomes too full.
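As a rough illustration of the arithmetic (not the library’s actual sizing code), the minimum bucket width is the number of bits needed to represent the largest possible array index:

```swift
// Bits needed to store any array index in 0..<capacity.
// Illustration only; the real implementation rounds capacities and
// bucket widths differently.
func bitsPerBucket(capacity: Int) -> Int {
    precondition(capacity > 1)
    let maxIndex = capacity - 1
    return Int.bitWidth - maxIndex.leadingZeroBitCount
}
```

For a capacity of 180, the largest index is 179, which fits in 8 bits; a capacity of 257 would already require 9-bit buckets.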

The source code for the custom hash table implementation is in the OrderedCollections/HashTable directory.

My favorite books 2020

I read 45 books in 2020. These were my favorites. (Thinking in SwiftUI was not in the running because I’m biased.)

The Age of Surveillance Capitalism

The Age of Surveillance Capitalism by Shoshana Zuboff (2019). If you pick up one book from this list, make it this one. I think everyone working in the tech industry should read it, it’s so important. Shoshana Zuboff takes apart the business model of the big tech giants and the many unethical (and at times possibly illegal) things they have done and continue to do to preserve it.

If we ever manage to regulate the digital surveillance industry that pervades modern life (as we eventually did with the factory owners who exploited workers during the industrialization), this book will become a classic as a major contribution to that effort.


Circe

Circe by Madeline Miller. A beautiful reimagination of a Greek myth. I bought this book because I fell in love with the cover design of the UK edition without knowing what it was about and I’m glad I did. Madeline Miller’s writing was a challenge for my English skills, I had to look up so many words. But it was totally worth it.


Mythos

Mythos: The Greek Myths Reimagined by Stephen Fry (2017). Circe led me to Stephen Fry’s delightful retelling of the Greek myths. This is part one in a series. The sequels, Heroes and Troy, are on my list for next year. I bought the US edition because I liked the cover design better than the UK version. It’s a very pretty book.


Permanent Record

Permanent Record by Edward Snowden (2019). Snowden’s memoirs, recounting his work for NSA contractors up until the covert meeting with journalists in Hong Kong that led to his exile.


A Philosophy of Software Design

A Philosophy of Software Design by John Ousterhout (2018). Very approachable (short and easy to read) and contains a lot of solid advice for designing and maintaining complex systems. Two good reviews about the book: Gergely Orosz’s review matches my own thoughts. James Koppel’s take is more critical (but still positive) and very insightful.


The Art of Doing Science and Engineering

The Art of Doing Science and Engineering: Learning to Learn by Richard Hamming (1996, new edition 2020). Beautiful new edition of Hamming’s 1970s lecture series converted to prose. Themes include some technical topics, such as Hamming’s invention of the error-correcting code named after him, but the majority of the book is advice on how to approach scientific work and how to build a successful career. The final chapter is a version of Hamming’s famous talk “You and Your Research”, which gives a good impression of the character of the book. The 2020 Stripe Press version is very pretty. Typography, illustrations, and paper quality are all top-notch.


Deutschland Schwarz Weiß

Deutschland Schwarz Weiß by Noah Sow (2008, updated 2018). Racism in Germany. How white people advance and strengthen racism by not thinking about our actions. Bad typography, but I otherwise liked it.


Fake Facts

Fake Facts: Wie Verschwörungstheorien unser Denken bestimmen by Katharina Nocun und Pia Lamberty (2020). An inside look into several conspiracist milieus in German society, from Holocaust deniers and antisemitic ideologists to anti-vaxxers and esoterics. Highly relevant these days.


March

March (Book One to Three) by John Lewis, Andrew Aydin, and Nate Powell (2013–2016). The late US Congressman John Lewis recounts his extraordinary life in the US civil rights movement in comic book form. Reading this deeply impressed me. The struggle Black people fought then (and are still fighting today)! The courage and determination it took to take humiliation after humiliation, beating after beating, arrest after arrest, murder after murder, without fighting back with violence. I had known this before, but seeing it from the perspective of someone who experienced it firsthand was something else.


MetaMaus

MetaMaus by Art Spiegelman (2011). Reading March led me to a comic classic: Maus (1985 and 1991), Art Spiegelman’s two-part story of how his parents survived the Holocaust in Poland. I liked the comics, but I felt I didn’t really understand why they were so famous.

MetaMaus is essentially a 230-page interview with the author in which he takes his work apart in detail: how it came to be, how he approached certain scenes (including tons of draft sketches), Art’s complicated relationship with his father (which plays a major role in Maus), the many layers of hidden meaning in his drawings (most of which I missed when reading the comics). Reading this gave me a much deeper appreciation for the original work. Kudos to Hillary Chute, who went through Spiegelman’s archives and asked fantastic questions.


Berlin

Berlin by Jason Lutes (2018). Another comic classic, published as a magazine series in 2000, 2008, and 2018. A 550-page tome that describes life in Berlin from 1928 to 1933, during the decline of the Weimar Republic and the rise of the Nazis. Recommended if you’re interested in that period.


Brüder

Brüder by Jackie Thomae (2019). The story of two brothers who get separated as children and grow up to live very different lives, each a tragic figure in their own way. Themes: racism, East and West, 1980s and 1990s Berlin, provincialism vs. cosmopolitanism, how a single mistake can destroy a life.


Grünmantel

Grünmantel by Manfred Maurenbrecher (2019). Continuing my theme from last year, this is another novel set in rural East Germany, 25 years after reunification.


Im Sommer wieder Fahrrad

Im Sommer wieder Fahrrad by Lea Streisand (2016). Autobiographical novel. The author (in her thirties) narrates her fight against cancer and memories of her grandmother who led a nonconformist, emancipated life in pre-, mid-, and postwar Germany.


Side note: publisher websites are terrible

I dread compiling this list every year because finding good links and cover images for the books is a pain. Most publisher websites are terrible. It’s very likely many links on this page will no longer work in a few months or years. The idea of not breaking URLs seems foreign to most companies that have products to sell. When a book goes out of print, it may well just disappear from the catalog. I preemptively apologize for that.

Nonetheless, I don’t want to link to Amazon if I can help it. If you decide to buy a book from this list, please buy it from a local bookshop or another retailer that isn’t one of the most powerful companies in the world. Let’s try to not make Jeff Bezos any richer. (Yes, I see the irony that the print version of my own book is only available on Amazon. Sorry.)

Thinking in SwiftUI

Chris and Florian’s new book Thinking in SwiftUI came out back in March 2020, so I’m late to the party with this announcement. But I still want to mention it here because (a) it’s a good book and (b) I had a small part in its creation as a reviewer. This week’s release of an updated version for the latest SwiftUI version is a good opportunity.

The book is short (150 pages), and that’s one of its strengths. It doesn’t want to be a comprehensive reference manual for every SwiftUI API — rather, it wants to help readers build a mental model of how SwiftUI works on a fundamental level.

The first edition received a lot of praise on Twitter (1, 2, 3, 4, 5, 6). The second edition includes new content on some of the new SwiftUI features that are part of iOS 14/macOS 11, as well as refined explanations on SwiftUI’s layout system. If you’re a Swift Talk subscriber, you already know that Florian and Chris did a deep dive into the layout system this year.

I’m a big fan of the exercises included in the book. Each chapter ends with one or more exercises that encourage you to solve a realistic problem using the things you just learned. Personally, doing the exercises definitely gave me a deeper understanding of the material than just reading about it.

Looking for a technical reviewer? Hire me

On a side note, I love this work as a reviewer, and I think I’m good at it. If you’re planning to write a book about Swift or iOS and are looking for a technical reviewer, hit me up.

Where is end-to-end encryption for iCloud?

Update December 31, 2022: On December 7, 2022, Apple announced that it will finally allow users to opt into end-to-end encryption for most iCloud data categories, including photos, notes, backups, and iCloud Drive. This “Advanced Data Protection” has been rolled out to U.S. users with iOS 16.2/macOS 13.1 and will hopefully arrive soon for the rest of the world.


In a December 2020 video recorded for the European Data Protection & Privacy Conference, Apple’s Craig Federighi touts end-to-end encryption for iMessage (starting at 55:36):

iPhone users don’t have to worry their private conversations, using iMessage and FaceTime, will be intercepted. We’ve designed these features so that bad actors can’t listen to these communications, and neither can anyone at Apple.

Apple has been using this self-congratulatory tone about their encryption efforts for years and I find it increasingly disingenuous. What Federighi fails to mention: if you have iCloud Backup enabled, that last claim (emphasis mine) is not the whole truth. Apple may not be able to listen in on your conversations, but they can decrypt the messages stored in your backups, because data in iCloud backups is not end-to-end encrypted.1

Screenshot: 'Messages are only seen by who you send them to. Apple can’t read your iMessages while they’re being sent between you and the person you’re texting.'
Screenshot from Apple’s privacy marketing page. The key phrase is while they’re being sent.

And it’s not just iCloud backups. Here’s an incomplete list of data sources in iCloud that are not end-to-end encrypted:

  • iCloud backups
  • Messages (de facto when iCloud Backup is enabled because the backup contains a decryption key for the messages)
  • Photos
  • Files in iCloud Drive
  • Notes
  • Contacts
  • Reminders
  • Calendars
  • Voice memos
  • Bookmarks (your Safari history and open tabs are end-to-end encrypted)

Source: Apple, iCloud security overview

In other words, if you use Apple services as intended and recommended by Apple, a large portion of your most sensitive data is in fact not securely encrypted. Both Apple and U.S. government agencies (and possibly other governments?) can potentially access it.

At least give us the option

I understand that using end-to-end encryption for everything comes with its own problems:

  • Accessing your iCloud data through a web browser on icloud.com may become impossible.
  • Some users will lose their most precious data when they lose their devices and decryption keys, and Apple won’t be able to recover it for them.

These are real tradeoffs, but I don’t think they’re reason enough for Apple not to offer end-to-end encryption, at least as an option. If you believe a January 2020 Reuters report, the tradeoffs sound more like convenient excuses not to risk another confrontation with U.S. law enforcement and lawmakers:

Apple dropped plans to let iPhone users fully encrypt backups of their devices in the company’s iCloud service after the FBI complained that the move would harm investigations, six sources familiar with the matter told Reuters.

I hope this is wrong and Apple gets its act together to use end-to-end encryption for all user data — it’s long overdue. Until then, I won’t be taking their privacy claims at face value.

  1. If you enable iMessage syncing via iCloud, Apple will use end-to-end encryption to store your messages and will no longer include Messages data in iCloud backups. But that doesn’t change the fundamental problem because iCloud backups will still include a decryption key for your “end-to-end encrypted” messages:

    Messages in iCloud also uses end-to-end encryption. If you have iCloud Backup turned on, your backup includes a copy of the key protecting your Messages. — Apple, iCloud security overview

    ↩︎
