WWDC NOTES

Meet Reality Composer Pro

March 14, 2024 07:48

Chapters

00:00 - Introduction
01:15 - Project setup
02:47 - UI navigation
04:08 - Composing scenes
07:08 - Particle emitters
13:23 - Audio authoring
17:39 - Statistics
19:26 - On-device preview
19:59 - Wrap-up

Session is a walkthrough of Reality Composer Pro, using Diorama sample app.

You can launch RCP directly from Xcode's Developer Tools menu, or create an RCP project by creating a new xrOS project in Xcode, which creates a Reality Composer Pro package.

Reality Composer Pro UI navigation:

  • 3D viewport (center): navigate with WASD and arrow keys, or with a paired game controller.
  • Hierarchy panel (left): object selection and reorganization.
  • Inspector panel (right): edit properties of selected objects. The "Add component" button at the bottom shows the available built-in components.
  • Editor panel (bottom): Project browser, Shader Graph, Audio Mixer, Statistics.

3 ways to add assets:

  • Import button in the project browser: import existing assets (USDZ, wav, more).
  • Content library ("+" button at top right): curated assets (USDZ, materials, more) from Apple.
  • Object Capture. See the "Meet Object Capture for iOS" session.

05:03: Walkthrough of building diorama from imported assets, plus library assets.

Imported assets can be replaced with new version, e.g. change the style of all location pins by updating one file.

Particle emitters

Demo: Clouds composition. Add particle emitters (as a freestanding asset, or attached to an existing asset).

Build a Cloud Chunk. Particle Emitter tinkering, starting with the "Impact" particle emitter preset. Change 3D viewport background color. Live playback (top of inspector panel). Large number of particles slows performance. Check "isLocalSpace" so that parent's translation/rotation/scaling will also apply to the emitter.

New scene, Cloud A. Add Cloud Chunk multiple times, positioned for realism.

Back to diorama. Add group Clouds, which has Cloud A, Cloud B, Cloud C. Preview just the "Clouds" group with Playback.

Audio authoring

Audio files can be played on one or more objects, and an object can play one or more audio files.

Audio sources: spatial (comes from a particular object), ambient (e.g. wind from the east, no matter how far east you travel), channel (background music).

Example: animated bird (USDZ), with two bird call audio files attached as Spatial audio source. Audio File Group randomly selects one of its members to play back. Preview: playback of animation and audio of all birds and calls in scene. Additional work needed to control from Swift; see Work with Reality Composer Pro content in Xcode.
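A hedged RealityKit sketch of driving that playback from Swift; the entity, asset name, and gain are hypothetical, and the exact hookup depends on how the scene is authored:

import RealityKit

// Attach a spatial audio source to an entity and play one of the bird calls.
@discardableResult
func playBirdCall(on bird: Entity) throws -> AudioPlaybackController {
    let call = try AudioFileResource.load(named: "BirdCall1")   // hypothetical asset name
    bird.components.set(SpatialAudioComponent(gain: -3))        // relative gain, in decibels
    return bird.playAudio(call)                                 // controller for pause/stop
}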

Statistics

For performance optimization.

Note that diorama base has many more triangles than the terrain itself. Replace base asset with much simpler version. Reduces triangle count by over half.

On-device preview

Drop-down at upper right, select actual device. Object appears in AR, can pinch, drag, zoom the scene.

Wrap-up

Explore testing in-app purchases

February 28, 2024 04:05

Speaker: Hemant Sawle, Commerce Developer Advocate

StoreKit Testing in Xcode

StoreKit Testing in Xcode allows local testing. Added in WWDC20.

  • StoreKit configuration file to create and manage IAPs.
  • Perform local testing of in app purchases
  • Use simulator or device to test.
  • Leverage the StoreKitTest framework to automate testing IAPs (see the sketch after this list).
  • Sync products from App Store Connect to Xcode, so the configuration file doesn't need to be created manually.
  • Supports advanced subscription use cases, like offer code redemption, the price increase sheet, and subscriptions entering and exiting billing retry.
  • Flexible subscription renewal rate, from real time to every 2 seconds.
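A minimal sketch of such an automated test with SKTestSession, assuming a hypothetical configuration file name and product identifier:

import StoreKit
import StoreKitTest
import XCTest

final class PurchaseTests: XCTestCase {
    func testPurchaseFlow() async throws {
        // Point the test at a local StoreKit configuration file (hypothetical name).
        let session = try SKTestSession(configurationFileNamed: "Products")
        session.disableDialogs = true   // skip purchase confirmation sheets
        session.clearTransactions()     // start from a clean slate

        // Fetch and buy a product defined in the configuration (hypothetical ID).
        guard let product = try await Product.products(for: ["com.example.premium"]).first else {
            return XCTFail("Product not found in the local configuration")
        }
        let result = try await product.purchase()

        guard case .success = result else {
            return XCTFail("Expected a successful purchase in the local test session")
        }
    }
}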

New in Xcode 15:

  • Static renewal rates, independent of subscription duration.
  • Simulate StoreKit errors to build better error handling.
  • If running multiple instances of your app, the transaction manager will display transactions for each app instance.
  • Transactions manager also allows buying IAPs directly, to test external transactions.

Learn more in the session “What’s new in StoreKit 2 and StoreKit Testing in Xcode” from WWDC23

App Store Sandbox

App Store Sandbox allows testing products set up in App Store Connect on client and server.

Prerequisites:

  • Accept Paid Applications agreement
  • Registered device with your developer account
  • Create a Sandbox Apple ID in App Store Connect > Users and Access to make test purchases
  • Enable developer mode on device, in Privacy Settings

Sandbox enables testing production-like scenarios like purchases, restores, and subscription offers. Distribute the app directly to the device from Xcode, or using a distribution method like Release Testing, Debugging, or Custom (to generate an IPA file).

Billing Problem message simulation (new)

  • Available in sandbox now; in production, customers will see it when they enter billing retry.
  • Uses StoreKit 2 message API with reason billingIssue.
  • Implement a message listener in views to defer or suppress the message
  • Simulate billingIssue message in sandbox to test how your app handles the message presentation.
  • To trigger this, the sandbox Apple ID needs to be subscribed to an auto-renewable subscription. In the device, go to App Store Settings > Account Settings and disable "Allow Purchases & Renewals". Then, go back to your app and the billing issue message will appear.

Learn more about implementing StoreKit 2 Message API in "What's new with in-app purchase" from WWDC22
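A hedged sketch of handling these messages with the StoreKit 2 Message API; the deferral policy and window-scene plumbing are assumptions:

import StoreKit
import UIKit

// Observe incoming App Store messages and present billing-issue messages
// at a moment that won't interrupt the user.
@MainActor
func observeStoreMessages(in scene: UIWindowScene) {
    Task {
        for await message in Message.messages {
            guard message.reason == .billingIssue else { continue }
            try? message.display(in: scene)
        }
    }
}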

Billing Grace Period (new)

Grace period allows users to maintain access to paid features while payment is being collected, without interruption.

  • Go to App Store Connect > App > Subscriptions > Billing Grace Period.
  • The durations shown only apply to production; in testing, the sandbox account's renewal rate is used.

Family Sharing (new)

Allow customers to share digital purchases with their family members.

  • Go to App Store Connect > App > Subscriptions / Non-Consumable Products > enable Family Sharing
  • Organize Sandbox Family Sharing in App Store Connect
  • Make a purchase with sandbox Apple ID
  • Go to App Store Connect > Users and access > Family Sharing
  • To stop sharing, go to the device's settings, Account Settings > Family Sharing > Stop sharing

Family Sharing in Sandbox allows validating:

  • Merchandise family-shareable products using StoreKit's isFamilyShareable
  • Validate app logic to entitle service for family members
  • Revoke access for family members, validate with revocationDate available in JWSTransactions
  • Receive App Store Server Notifications for family members

Learn more in the Tech Talk session "Explore Family Sharing for in-app purchases."

iOS sandbox Account Settings (new)

Options only available in App Store Connect are now available on-device for testing. Go to App Store settings > Sandbox Account > Manage, to find Renewal Rate, Test Interrupted Purchases, and Clear Purchase History.

TestFlight

TestFlight allows end to end beta testing and feedback from testers. Overview:

  • Distribute app across all platforms
  • Add internal and external testers
  • Automatic updates
  • Builds valid for 90 days

Learn more in the Tech Talk session "Get started with TestFlight"

When testing in-app purchases:

  • Builds are downloaded using TestFlight app
  • Uses Apple ID signed into Media & Purchases
  • In-app purchases are free
  • Subscription renewal rates are accelerated, equivalent to Sandbox
  • If the app has implemented StoreKit's showManageSubscription, you can test subscription cancellation or change subscription.
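A hedged sketch of presenting that sheet with StoreKit 2 (the window scene is assumed to be available from the current UI):

import StoreKit
import UIKit

// Present the system sheet where a tester can cancel or change a subscription.
@MainActor
func presentManageSubscriptions(in scene: UIWindowScene) async {
    do {
        try await AppStore.showManageSubscriptions(in: scene)
    } catch {
        print("Unable to show manage subscriptions: \(error)")
    }
}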

New:

  • Manage TestFlight testers: filter tester data like status, sessions, and bulk selection.
  • Internal Only distribution makes the build only available to internal testers, and it cannot be submitted to App Store. Learn more in the sessions "What's new in App Store Connect" and "Simplify distribution in Xcode and Xcode Cloud" from WWDC23.

Explore 3D body pose and person segmentation in Vision

February 4, 2024 17:02

Speaker: Andrew Rauh, Software Engineer

Human Body Pose in Vision

Previous Version

  • In the initial version of the Vision framework, body pose detection uses a 2D coordinate system to locate and track the positions of body parts.
  • It specifically uses x and y coordinates to detect body parts in an image or video.

See more in the session "Detect Body and Hand Pose with Vision" from WWDC20.

Human Body Pose in 3D

  • The Vision framework now also supports detecting the human body in 3D with VNDetectHumanBodyPose3DRequest.
  • This 3D request can detect 17 joints, which can be accessed individually by joint name or collectively by joint group name.
  • The position of each 3D joint is captured in meters, relative to the root joint.
  • When multiple people are present, this initial version detects the most prominent person in the frame.

    VNHumanBodyPose3DObservation.JointName

    • The 3D human body pose is modeled as a skeleton structure made up of joint groups.
    • The groups are head, torso, left arm, right arm, left leg, and right leg (VNHumanBodyPose3DObservation.JointsGroupName).

.head group

  • .centerHead - A joint name that represents the center of the head.
  • .topHead - A joint name that represents the top of the head.

.torso group

  • .leftShoulder - A joint name that represents the left shoulder.
  • .centerShoulder - A joint name that represents the point between the shoulders.
  • .rightShoulder - A joint name that represents the right shoulder.
  • .spine - A joint name that represents the spine.
  • .root - A joint name that represents the point between the left hip and right hip.
  • .leftHip - A joint name that represents the left hip.
  • .rightHip - A joint name that represents the right hip.

.leftArm group

  • .leftWrist - A joint name that represents the left wrist.
  • .leftShoulder - A joint name that represents the left shoulder.
  • .leftElbow - A joint name that represents the left elbow.

.rightArm group

  • .rightWrist - A joint name that represents the right wrist.
  • .rightShoulder - A joint name that represents the right shoulder.
  • .rightElbow - A joint name that represents the right elbow.

.leftLeg group

  • .leftHip - A joint name that represents the left hip.
  • .leftKnee - A joint name that represents the left knee.
  • .leftAnkle - A joint name that represents the left ankle.

.rightLeg group

  • .rightHip - A joint name that represents the right hip.
  • .rightKnee - A joint name that represents the right knee.
  • .rightAnkle - A joint name that represents the right ankle.

Snippet: converting a 2D image to a 3D observation

  • You initialize an image asset and call the request's perform function.
  • If the request is successful, a VNHumanBodyPose3DObservation is returned without error.

There are two ways to retrieve positions from a VNHumanBodyPose3DObservation:

  • Access a specific joint's position:

    let recognizedPoint = try observation.recognizedPoint(.centerHead)

  • Access a collection of joints with a specified group name:

    let recognizedPoints = try observation.recognizedPoints(.torso)
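Putting the pieces together, a minimal sketch of running the 3D request on an image and reading joints back (the image URL is hypothetical):

import Vision

func detectBodyPose3D(at imageURL: URL) throws -> VNHumanBodyPose3DObservation? {
    // Create the 3D body pose request and run it over the image.
    let request = VNDetectHumanBodyPose3DRequest()
    let handler = VNImageRequestHandler(url: imageURL)
    try handler.perform([request])

    // The results are 3D observations; take the most prominent person.
    guard let observation = request.results?.first else { return nil }

    // Access joints individually or by group.
    let head = try observation.recognizedPoint(.centerHead)
    let torso = try observation.recognizedPoints(.torso)
    print("Center head position:", head.position, "- torso joints:", torso.count)
    return observation
}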

Other advantages

  • bodyHeight and heightEstimation provide an estimate of the person's height.
  • The observation also describes where the camera was relative to the person when the frame was captured.

3D Positions in Vision

  • VNPoint3D is the base class; it defines a 4x4 matrix for storing a 3D position. This notation is consistent with ARKit and can express all rotations and translations.
  • VNRecognizedPoint3D inherits the position and adds an identifier, used to store corresponding information like the joint name.
  • VNHumanBodyRecognizedPoint3D adds specifics for working with the point, like the local position and the parent joint.
  • point.position - the position relative to the skeleton's root joint at the center of the hip (for example, for .leftWrist).
  • point.localPosition - the position relative to a parent joint. It works within one area of the body, e.g. determining the angle between child and parent joints:

    public func calculateLocalAngleToParent(joint: VNHumanBodyPose3DObservation.JointName) -> simd_float3 {
        var angleVector: simd_float3 = simd_float3()
        do {
            if let observation = self.humanObservation {
                let recognizedPoint = try observation.recognizedPoint(joint)
                let childPosition = recognizedPoint.localPosition
                let translationC = childPosition.translationVector
                // The rotation for x, y, z.
                // Rotate 90 degrees from the default orientation of the node.
                // Add yaw and pitch, and connect the child to the parent.
                let pitch = (Float.pi / 2)
                let yaw = acos(translationC.z / simd_length(translationC))
                let roll = atan2((translationC.y), (translationC.x))
                angleVector = simd_float3(pitch, yaw, roll)
            }
        } catch {
            print("Unable to return point: \(error).")
        }
        return angleVector
    }

        Depth in Vision

  • The Vision framework now accepts depth as input along with the image or frame buffer.
  • New request handler initializers accept depth as a parameter:

    let requestHandler = VNImageRequestHandler(cmSampleBuffer: frameBuffer, depthData: depthData, orientation: .up, options: [:])
    let requestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, depthData: depthData, orientation: .up, options: [:])

  • When using the URL-based initializer, the API will fetch depth from the file if it is present:

    let requestHandler = VNImageRequestHandler(url: imageURL)

        Working with Depth

  • AVDepthData is the container class that interfaces with all depth metadata (captured from the camera sensors).
  • A depth map uses either a disparity format (DisparityFloat16, DisparityFloat32) or a depth format (DepthFloat16, DepthFloat32).
  • Depth map data is interchangeable and can be converted to the other format using AVFoundation.
  • Depth metadata is used to construct the 3D scene. It also carries camera calibration data (intrinsics, extrinsics, and lens distortion). See more in the session "Discover advancements in iOS camera capture" from WWDC22.

        Sources of Depth

  • A camera capture session or file.
  • Images captured by the Camera app, like portrait images, which store disparity data.
  • LiDAR, which enables high-accuracy measurement of the scene.

        Person Instance Mask

  • The Vision API now supports images containing more than one person.
  • Person segmentation is used to separate people from their surroundings.

        VNGeneratePersonSegmentationRequest

  • Uses a single mask for all people in the frame.

        VNGeneratePersonInstanceMaskRequest

  • This API allows you to be more selective.
  • You can also select and lift subjects other than people.
  • It supports up to 4 people in the foreground, each with an individual mask.

        See more in the session "Lift subjects from images in your app" from WWDC23.

Selecting the person instance masks you want from an image

        result.createMattedImage(
          ofInstances: result.allInstances, 
          from: requestHandler,
          croppedToInstancesExtent: false
        )
        

Selecting person instance masks with many people

When more than four people are present, the request may miss people in the background or merge people in close contact. In that case, you can use either technique:

  • Use the face detection API to check whether there are four or more people.
  • Use the person segmentation request and work with one mask for everyone.

        Wrap-up

  • The Vision framework now offers powerful ways to understand people and the environment, supporting depth, 3D human body pose, and person instance masks.

See also the session "Detect animal poses in Vision" from WWDC23.

        Explore SwiftUI animation

February 3, 2024 01:20

        Speaker: Kyle Macomber, SwiftUI Engineer

        This is an overview of SwiftUI's animation capabilities.

        Anatomy of an update

        SwiftUI tracks a view's dependencies, like state variables. When an event happens, an update transaction is opened. If any dependencies change, the view is invalidated, and at the end of the transaction, the framework calls body to refresh the rendering.

        SwiftUI maintains a dependency graph that manages the lifetimes of views and their data, storing each piece of the UI in attributes. When state changes, the value of each attribute becomes stale, and the new view value is unwrapped to refresh the view.

        Attribute graph in SwiftUI.

        Then the view's body value is discarded, and drawing commands are emitted to update the rendering.

        If the state change is wrapped in withAnimation, any attributes that are animatable will check if an animation is set for the transaction, and if so, it makes a copy and uses the animation to interpolate between old and new value over time. Built-in animatable attributes like scaleEffect are very efficient.

        Animation timeline in SwiftUI.

        Animatable

        Animatable attributes (like scaleEffect) determine the data being animated.

        SwiftUI builds an animatable attribute for any view conforming to the Animatable protocol — which requires that the view define a readwrite vector of the data it wants to animate. The data must conform to VectorArithmetic.

        VectorArithmetic deals in vectors, a fixed-length list of numbers that support vector addition and scalar multiplication. They allow abstracting over the length of that list. (CGFloat and Double are one-dimensional vectors; CGPoint and CGSize are two-dimensional; and CGRect is a four-dimensional vector). Using vectors, SwiftUI can animate with a single generic implementation.

scaleEffect might seem really simple, but its animatable data is actually a four-dimensional vector: it fuses together width, height, and anchor point using AnimatablePair. AnimatablePair is what you'll want to use if you need to conform a view to Animatable.
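For custom views, a minimal sketch of an Animatable conformance; the view and its two properties are hypothetical, and AnimatablePair fuses them into a single vector:

import SwiftUI

struct ProgressArc: View, Animatable {
    var progress: Double
    var thickness: Double

    // The read-write vector of data SwiftUI interpolates each frame.
    var animatableData: AnimatablePair<Double, Double> {
        get { AnimatablePair(progress, thickness) }
        set {
            progress = newValue.first
            thickness = newValue.second
        }
    }

    var body: some View {
        Circle()
            .trim(from: 0, to: progress)
            .stroke(lineWidth: thickness)
    }
}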

        Most of the time, it's best to use SwiftUI's built-in animatable visual effects, since it's far more expensive to create our own. When animating a custom layout or drawing code, this might be the only way to achieve the desired effect.

        The example shown is for animating three images moving along a custom RadialLayout: with automatic animation, the images move to their end positions in a straight line, while with custom animation, they go around the perimeter. The main difference between the two is that the default version animates each of the child subviews independently, moving each one's position, while the custom version moves the body itself instead of the position: body is called each frame with a new offset angle.

        Animation timeline for animatable position and animatable body.

        Animation

        Animation determines how data changes over time (or, the generic algorithms that interpolate animatable data over time).

        Choose an animation by passing it to withAnimation:

        struct Avatar: View {
        	var pet: Pet
        	@State private var selected: Bool = false
        	
        	var body: some View {
        		Image(pet.type)
        			.scaleEffect(selected ? 1.5 : 1)
        			.onTapGesture {
        				withAnimation(.bouncy) { // HERE
        					selected.toggle()
        				}
        			}
        	}
        }
        

        Timing curve

        The most commonly seen animations. All timing curve animations take a curve and a duration. Curves are defined with bezier control points.

SwiftUI comes with:

  • linear
  • easeIn
  • easeOut
  • easeInOut

They can all take an optional custom duration.

        Spring (recommended)

        Springs determine the value at a given point in time by running a spring simulation. They are traditionally specified using mass, stiffness, and damping, but in SwiftUI, duration and bounce are preferred.

SwiftUI comes with:

  • smooth (no bounce, the default in withAnimation since iOS 17)
  • snappy (small bounce)
  • bouncy (medium bounce)

They can all take an optional duration and extraBounce.

        Higher order

Higher-order animations modify a base animation: slow it down or speed it up, add a delay, repeat it, or play it forwards or in reverse.
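A few hedged examples of choosing among these animations, using a toy view with a selected state (mirroring the Avatar example above):

import SwiftUI

struct AnimationChoices: View {
    @State private var selected = false

    var body: some View {
        Image(systemName: "pawprint")
            .scaleEffect(selected ? 1.5 : 1)
            .onTapGesture {
                // Timing curve with a custom duration:
                // withAnimation(.easeInOut(duration: 0.3)) { selected.toggle() }

                // Spring preset, or a spring specified by duration and bounce:
                // withAnimation(.snappy) { selected.toggle() }
                // withAnimation(.spring(duration: 0.6, bounce: 0.3)) { selected.toggle() }

                // A higher-order modifier layered on a base animation:
                withAnimation(.bouncy.delay(0.2).speed(1.5)) {
                    selected.toggle()
                }
            }
    }
}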

        NEW: Custom animations

        Gives developers access to the same entry points used to implement the animations included in SwiftUI.

        The protocol has three requirements: animate, shouldMerge (optional), velocity (optional).

        public protocol CustomAnimation: Hashable {
            func animate<V: VectorArithmetic>(
                value: V, // vector to animate towards
                          // comes from view's animatable data
                time: TimeInterval, // time elapsed since animation began
                context: inout AnimationContext<V> // additional animation state
            ) -> V? // current value of the animation, or nil if finished
            
            func shouldMerge<V: VectorArithmetic>(
                previous: Animation,
                value: V,
                time: TimeInterval,
                context: inout AnimationContext<V>
            ) -> Bool
            
            func velocity<V: VectorArithmetic>(
                value: V, time: TimeInterval, context: AnimationContext<V>
            ) -> V?
        }
        

With vector addition and scalar multiplication, animations don't actually run from the start value to the end value, but over the delta between the two. For instance, if an animation starts at 1 and ends at 1.5, the actual animated delta is 0.5. This makes the animate method more convenient to implement.
  • shouldMerge comes in when the animation is interrupted: for instance, if the user taps again while a toggle animation is running. In timing curve animations, it returns false, and the vectors are added together. In spring animations, it returns true, so the spring preserves velocity and retargets to the new value, which feels more natural.
  • velocity: implementing it preserves velocity when a running animation is merged with a new one.
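A minimal sketch of a CustomAnimation conformance, assuming a simple fixed-duration linear ramp; the type name is hypothetical, and the Animation(_:) wrapper is how a custom animation is handed to withAnimation:

import SwiftUI

// Linearly interpolates over the delta vector and finishes after `duration`.
struct LinearRamp: CustomAnimation, Hashable {
    var duration: TimeInterval

    func animate<V: VectorArithmetic>(
        value: V,                           // the delta vector to animate towards
        time: TimeInterval,                 // time elapsed since the animation began
        context: inout AnimationContext<V>
    ) -> V? {
        guard time < duration else { return nil }  // nil means the animation has finished
        var progress = value
        progress.scale(by: time / duration)        // fraction of the delta at this time
        return progress
    }
}

extension Animation {
    static var linearRamp: Animation { Animation(LinearRamp(duration: 0.4)) }
}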

        Transaction

        In this talk, transaction has meant "the set of work that's performed for a given update to the UI". It also refers to a related data-flow construct and family of APIs. It's a dictionary SwiftUI uses to implicitly propagate all the context for the current update, most notably the animation.

Let's look at the earlier example of how an animatable attribute reads the value, in more detail:

  • withAnimation sets the animation in the root transaction dictionary.
  • body updates the attribute values.
  • The transaction dictionary is propagated.
  • When it reaches an animatable attribute, the attribute checks for an animation: if it finds one, it makes a copy for presentation.
  • The transaction is discarded at the end of the update.

        To change state programmatically, and make that change still be animated, add the transaction modifier:

        struct Avatar: View {
        	var pet: Pet
        	// @State changed to @Binding to change it externally
        	@Binding var selected: Bool
        	
        	var body: some View {
        		Image(pet.type)
        			.scaleEffect(selected ? 1.5 : 1.0)
			.transaction { // this modifier overrides the animation
        				$0.animation = .bouncy
        			}
        			.onTapGesture {
        				withAnimation(.bouncy) {
        					selected.toggle()
        				}
        			}
        	}
        }
        

        [!warning] This can lead to accidental animation.

        To fix that, SwiftUI provides the animation view modifier. In the example, the animation will only run if selected has changed. withAnimation is no longer needed, it can be removed.

        struct Avatar: View {
        	var pet: Pet
        	@Binding var selected: Bool
        	
        	var body: some View {
        		Image(pet.type)
        			.scaleEffect(selected ? 1.5 : 1.0)
        			.animation(.bouncy, value: selected) // HERE
        			.onTapGesture {
        				selected.toggle()
        			}
        	}
        }
        

        The animation modifier is also useful to apply different animations to different parts of a view. In the example, a shadow is added, which has a different animation. A different animation modifier, with a value of smooth, is added immediately after the shadow.

        struct Avatar: View {
        	var pet: Pet
        	@Binding var selected: Bool
        	
        	var body: some View {
        		Image(pet.type)
        			.shadow(radius: selected ? 12 : 8)
        			.animation(.smooth, value: selected) // HERE
        			.scaleEffect(selected ? 1.5 : 1.0)
        			.animation(.bouncy, value: selected)
        			.onTapGesture {
        				selected.toggle()
        			}
        	}
        }
        

        Animation modifiers are only active when their value changes, reducing the odds of accidental animation. But if another change happens in the same transaction, it would inherit the same animation. Depending on the component structure, this can be a problem: if the component may contain arbitrary child content, accidental animations may happen. In this case, we can use a new version of the animation modifier.

        struct Avatar: View {
        	var pet: Pet
        	@Binding var selected: Bool
        	
        	var body: some View {
        		Image(pet.type)
        			.animation(.smooth) {
        				$0.shadow(radius: selected ? 12 : 8)
        			}
        			.animation(.bouncy) {
        				$0.scaleEffect(selected ? 1.5 : 1.0)
        			}
        			.onTapGesture {
        				selected.toggle()
        			}
        	}
        }
        

When the transaction propagates through the attributes and reaches an animation view modifier, a copy is made that is populated with the specified animation. Once past the modifier's scope, the copy is discarded and the original transaction continues down the attributes.

        New: Custom transaction keys can be defined, to leverage the transaction dictionary and implicitly propagate custom update-specific data. It's similar to declaring a custom environment key, and the only requirement is a defaultValue. Then, declare a computed property as an extension on Transaction, that reads and writes from the transaction dictionary using the custom key.

        In this example, a boolean key is defined to track whether the image was tapped or not, which will determine which animation is used.

        private struct AvatarTappedKey: TransactionKey {
	static let defaultValue = false
        }
        
        extension Transaction {
        	var avatarTapped: Bool {
        		get { self[AvatarTappedKey.self] }
        		set { self[AvatarTappedKey.self] = newValue }
        	}
        }
        
        struct Avatar: View {
        	var pet: Pet
        	@Binding var selected: Bool
        	
        	var body: some View {
        		Image(pet.type)
        			.scaleEffect(selected ? 1.5 : 1.0)
        			.transaction {
        				// If the image was tapped, the animation will be more
        				// lively than if it was changed programmatically.
        				$0.animation = $0.avatarTapped
        					? .bouncy : .smooth
        			}
        			.onTapGesture {
        				withTransaction(\.avatarTapped, true) { // HERE
        					selected.toggle()
        				}
        			}
        	}
        }
        

        withAnimation is a wrapper around withTransaction: The arguments passed to withTransaction are a key path to a computed property on the Transaction and the value to set.
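As a sketch of that equivalence (using the selected binding from the Avatar examples above), these two calls set the same animation on the transaction:

withAnimation(.bouncy) {
    selected.toggle()
}

withTransaction(\.animation, .bouncy) {
    selected.toggle()
}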

        This can again lead to accidental animations, which is why the transaction modifier has two new variants: One to scope using a value...

        struct Avatar: View {
        	var pet: Pet
        	@Binding var selected: Bool
        	
        	var body: some View {
        		Image(pet.type)
        			.scaleEffect(selected ? 1.5 : 1.0)
        			.transaction(value: selected) { // HERE
        				$0.animation = $0.avatarTapped
        					? .bouncy : .smooth
        			}
        			.onTapGesture {
        				withTransaction(\.avatarTapped, true) {
        					selected.toggle()
        				}
        			}
        	}
        }
        

        And another to scope to a sub-hierarchy defined in a body closure:

        struct Avatar: View {
        	var pet: Pet
        	@Binding var selected: Bool
        	
        	var body: some View {
        		content
        			.transaction {
        				$0.animation = $0.avatarTapped
        					? .bouncy : .smooth
        			} body: {
				$0.scaleEffect(selected ? 1.5 : 1.0)
        			}
        			.onTapGesture {
        				withTransaction(\.avatarTapped, true) { // HERE
        					selected.toggle()
        				}
        			}
        	}
        }
        

Recommended sessions:

  • WWDC23 Animate with springs
  • WWDC23 Wind your way through advanced animations in SwiftUI

        Deploy passkeys at work

December 2, 2023 02:30


        Passkeys

Passkeys are replacements for passwords. Benefits of passkeys:

1. Much faster and easier to sign in with
2. Guaranteed to be more secure because they are strong and unique
3. Phishing resistant
4. Available on all devices via secure syncing
5. Great user experience

        "passkey" is lowercase because it is an industry standard term like "password"

        For more information https://developer.apple.com/videos/play/wwdc2022/10092

        Phishing

Phishing attacks on employees are one of the top attacks that enterprises have to defend against, and are typically the initial foothold for attackers in major breaches. Because there is nothing to type, users cannot be tricked into entering their information in the wrong place. A passkey is intrinsically linked to the website or app it's used for.

        Stolen Credentials

        Typically passwords are stored as a hash on a server. This leads to a possibility of hashes being stolen and cracked. Passkeys, on the other hand, are stored as a public key. For hackers, this is not worth stealing as it does not provide any benefit.

        2FA

        Attackers are increasingly tricking users to bypass the three most popular forms of 2FA.

Type of 2FA and the attack used to bypass it:

  • SMS: Phishing
  • TOTP: Phishing
  • Push notifications: Push fatigue

        With passkeys, layering on SMS, time-based one-time password, and push notifications adds no extra security.

        User Experience

Using passkeys provides the user with a much better experience: creating a passkey and signing in are as simple as using Face ID.

        Managing Passkeys at Work

Requirements in managed environments:

1. Manage the Apple IDs used with iCloud Keychain and passkeys
2. Ensure passkeys only sync to managed devices
3. Store passkeys created for work in the iCloud Keychain of managed accounts
4. Prove to relying parties that passkey creation happens on managed devices
5. Turn off sharing of passkeys between employees

        Access Management functionality

        There are two different controls administrators can use.

Administrators can allow managed Apple IDs on:

  • Any Device
  • Managed Devices Only
  • Supervised Devices Only

In addition, they have the same options for allowing iCloud on:

  • Any Device
  • Managed Devices Only
  • Supervised Devices Only

A new configuration to provision passkeys was added: com.apple.configuration.security.passkey.attestation. This new configuration:

  • References an identity asset
  • Uses that identity to attest enterprise passkeys
  • Is available on iOS, iPadOS, and macOS

        Example passkey attestation configuration

        
        // Example configuration: com.apple.configuration.security.passkey.attestation
        
        {
            "Type": "com.apple.configuration.security.passkey.attestation",
            "Identifier": "B1DC0125-D380-433C-913A-89D98D68BA9C",
            "ServerToken": "8EAB1785-6FC4-4B4D-BD63-1D1D2A085106",
            "Payload": {
                "AttestationIdentityAssetReference": "88999A94-B8D6-481A-8323-BF2F029F4EF9",
                "RelyingParties": [
                    "www.example.com"
                ]
            }
        }
        

This is how the process works:

1. The MDM server sends the passkey attestation configuration and identity asset to the device
2. The identity certificate is provisioned from the corporate certificate authority server
3. The website a user connects to requests a passkey for access
4. The device generates a new passkey, attests to it using the provisioned identity certificate, and returns it to the website
5. The website verifies the attestation by checking that the device certificate inside the WebAuthn enterprise attestation payload chains back to the organization

Relying parties will need to verify the attestation statement inside the WebAuthn passkey creation response, checking that:

1. The AAGUID came from an Apple device
2. The algorithm is set to -7 for ES256 on Apple platforms
3. There is a byte string containing the attestation signature
4. There is an array with the attestation certificate and its certificate chain

        Example

        
        // WebAuthn Packed Attestation Statement Format
        
        attestationObject: {
            "fmt": "packed",
            "attStmt": {
                "alg": -7, // for ES256
                "sig": bytes,
                "x5c": [ attestnCert: bytes, * (caCert: bytes) ]
            }
            "authData": {
                "attestedCredentialData": {
                    "aaguid": “dd4ec289-e01d-41c9-bb89-70fa845d4bf2”, // for Apple devices
                    
                }
                
            }
            
        }
        

        Meet device management for Apple Watch

December 1, 2023 09:40

        Enrolling Apple Watch

        Considerations

There are a few things to consider when Apple Watch enrolls into MDM:

  • iPhone and Apple Watch are managed together
  • Apps and restrictions can be shared
  • Enrollment begins with iPhone
  • Supervision is required
  • Apple Watch is paired as a new device
  • Existing Apple Watches will need to be reset to be enrolled

        The Apple Watch enrollment flow utilizes declarative device management so your server will need to support both Apple Watch and Declarative Device Management to enroll Apple Watch. More info on declarative device management here

        https://developer.apple.com/videos/play/wwdc2023/10041

        Enrollment Flow

Starting with a managed iPhone, the administrator sends a new declaration to the phone. This example shows the new Watch Enrollment configuration.

        This signifies that any Watch paired to the iPhone needs to be enrolled in MDM.

The payload includes the following:

        In this payload: - EnrollmentProfileURL delivers the MDM profile that the Apple Watch will download and install - AnchorCertificateAssetReferences is an optional item that specifies an array of anchor certificates

Once the user initiates pairing from the phone, they will be prompted to accept Remote Management. The pairing flow will end if the user does not accept.

        Secure Enrollment Process

There are two key pieces to ensuring security:

1. The administrator needs to verify that the host iPhone is enrolled in an MDM server managed by their organization
2. They then need to identify the iPhone the Apple Watch is pairing to

The new enrollment flow is as follows:

1. During Apple Watch pairing, the iPhone sends info from its configuration to the watch
2. The Apple Watch uses the URL and the provided anchor certificates to make contact with the server
3. The server inspects the machine info data and looks for the new pairing token key
4. The key will not be available during the first attempt, so the server returns an HTTP 403 response
5. A random UUID string inside the 403 response is used by the Apple Watch to start the pairing token retrieval flow
6. The iPhone receives the security token from the Apple Watch
7. The iPhone uses the security token to do a gettoken check-in request with the server
8. The server creates a secure pairing token and sends it to the iPhone
9. The iPhone sends the pairing token to the watch
10. The Apple Watch adds the pairing token to its machine info
11. The watch once again sends a request to the server, which now succeeds since it contains a pairing key
12. The watch receives the MDM enrollment profile
13. The MDM profile is installed at the end of the pairing flow

        Managing Device

In watchOS 10, all declaration types are supported:

  • Configurations
  • Activations
  • Assets
  • Status
  • Management

        Payloads, restrictions, commands, and queries can all be sent to the Apple Watch.

        Network Configurations

The watch supports the following network configurations:

  • Wi-Fi payload
  • Cellular payload
  • Per-app VPN payload

        Security Configurations

The following payloads are available on watchOS:

  • SCEP and ACME
  • Password policy
  • Restrictions

Restrictions and passcode rules that are applied on iPhone are synced to the paired Apple Watch.

        Restrictions applied directly to the Apple Watch will not be synced to the paired iPhone

        Apple Watch Commands

        • Clear passcode
        • Lock Apple Watch
        • Erase Apple Watch
        • Unenroll from MDM

        Deployment

Apple Watch has three deployment types for applications:

1. Paired apps - share data with an iPhone app but can run alone
2. Dependent apps - require a companion iPhone app to be functional
3. Standalone apps - exist only on watchOS

        Administrators will need to install paired and dependent apps on iPhone first before installing them on the Apple Watch.

        Add accessibility to your Unity games

November 30, 2023 12:17

        Apple Accessibility Plug-in for Unity Developers

        This presentation involves a plugin available on Unity's GitHub.

        https://github.com/apple/unityplugins

        Accessibility Elements

In this demo, cards can be flipped by tapping a button. However, VoiceOver would not read the text on the screen, and an external switch would not tap the button.

The text, cards, and button need to be accessibility elements so the user can understand what is on the screen.

        If the app supports multiple languages, the labels should also be localized.

        With the labels added as accessibility elements, VoiceOver would now be able to read what is on the screen. However it would be unable to tell that there is a button.

        By adding an accessibility trait, VoiceOver would read the button as "Flip Button" and an external switch would be able to control the button.


        There are many different types of traits, full list can be found here:

        https://developer.apple.com/documentation/uikit/uiaccessibilitytraits

        In this example, the cards would need a value trait to be able to provide the face value of the cards.


        Unity Implementation

Accessibility elements are added using the Accessibility Node component. This component is added to any GameObject the user wishes to make accessible. The script provides the following fields:

  • Traits
  • Label
  • Value
  • Hint
  • Identifier


        Buttons in Unity UI already have the Accessibility Node component by default.

        Creating custom C# scripts using Apple's Accessibility requires using Apple.Accessibility

        In this example, the accessibility information is returned in the accessibilityValueDelegate lambda expression.

        
using Apple.Accessibility;
using UnityEngine;

public class AccessibleCard : MonoBehaviour 
{
    public PlayingCard cardType;
    public bool isCovered;

    void Start()
    {
        // Component type assumed to match the Accessibility Node component described above.
        var accessibilityNode = GetComponent<AccessibilityNode>();
        accessibilityNode.accessibilityValueDelegate = () => {
            if (isCovered) {
                return "covered";
            }
            if (cardType == PlayingCard.AceOfSpades) {
                return "Ace of Spades";
            }
            return ""; // fallback so every path returns a value
        };
    }
}
        

        Dynamic Type

        Dynamic Type allows users to adjust their font size.

        To implement this in Unity, a new C# script can be made. The script can subscribe to the onPreferredTextSizesChanged event, and modify its font size when new events occur.

        
public class DynamicCardFaces : MonoBehaviour
{
    public Material RegularMaterial;
    public Material LargeMaterial;
    private int originalSize; // base font size, captured elsewhere (assumed)

    void OnEnable()
    {
        AccessibilitySettings.onPreferredTextSizesChanged += _settingsChanged;
    }

    void _settingsChanged() 
    {
        // Text component and size property assumed; scale by the user's preferred content size.
        GetComponent<UnityEngine.UI.Text>().fontSize = (int)(originalSize * AccessibilitySettings.PreferredContentSizeMultiplier);
    }
}
        

        In a similar way, the face of the cards could also be modified during these accessibility events.

        
    void _settingsChanged() 
    {
        var shouldUseLarge = AccessibilitySettings.PreferredContentSizeCategory >= 
            ContentSizeCategory.AccessibilityMedium;
        // Component type assumed (Renderer exposes .material); pick the large material
        // when an accessibility text size is selected.
        GetComponent<Renderer>().material = shouldUseLarge ? LargeMaterial :
            RegularMaterial;
    }
        

        UI Accommodations

        Reduce Transparency

• Makes transparent objects more opaque
        • Helps improve legibility
        • Can be checked with AccessibilitySettings.IsReduceTransparencyEnabled

        Increase Contrast

        • Colors stand out more
        • Makes controls easier to recognize
        • Can be checked with AccessibilitySettings.IsIncreaseContrastEnabled

        Reduce Motion

        • Animations should be removed if this is enabled
        • Can be checked with AccessibilitySettings.IsReduceMotionEnabled

        Target and optimize GPU binaries with Metal 3

November 30, 2023 09:04

        Offline Compilation

Offline compilation can help reduce app stutters, first-launch time, and new-level load times. This is accomplished by moving GPU binary generation to project build time.

        Previous Way to Generate GPU Binaries

• A Metal library is instantiated from source at runtime using AIR (Apple's Intermediate Representation); this operation is CPU intensive.
        • Library generation can be moved to build time by precompiling the source file and instantiating it.
        • When the Metal library is in memory, a Pipeline State Descriptor and Pipeline State Object (PSO) are created.
        • Creating a PSO is CPU intensive.
        • After the PSO is created, just-in-time GPU binary generation takes place.
        • When PSOs are created, Metal stores the GPU binaries in its file system cache.
        • Binary archives let users control when and where GPU binaries are cached.
• PSO creation can become a lightweight operation by using PSO descriptors to cache GPU binaries in an archive.

What's New

Offline binary generation allows a Metal pipeline script to be specified at project build time. This new artifact is equivalent to a collection of Pipeline State Descriptors in the API. The output is a binary archive that can be loaded to accelerate PSO creation.

        Creating Metal Pipeline Script

        A Metal pipeline script is a JSON formatted description of one or more API Pipeline State Descriptors and can be created in a JSON editor or harvested from the binary archives.

Using a JSON editor:

1. Specify the API Metal library file path.
2. Add API render descriptor function names as render pipeline properties.
3. Add pipeline state information (such as raster_sample_count or pixel formats).

        Metal code for generating a render pipeline script

        
// An existing Obj-C render pipeline descriptor
NSError *error = nil;
id<MTLDevice> device = MTLCreateSystemDefaultDevice();

id<MTLLibrary> library = [device newLibraryWithFile:@"default.metallib" error:&error];

MTLRenderPipelineDescriptor *desc = [MTLRenderPipelineDescriptor new];
desc.vertexFunction = [library newFunctionWithName:@"vert_main"];
desc.fragmentFunction = [library newFunctionWithName:@"frag_main"];
desc.rasterSampleCount = 2;
desc.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
desc.depthAttachmentPixelFormat = MTLPixelFormatDepth32Float;
        

        JSON equivalent

        
          "//comment": "Its equivalent new JSON script",
          "libraries": {
            "paths": [
              {
                "path": "default.metallib"
              }
            ]
          },
          "pipelines": {
            "render_pipelines": [
              {
                "vertex_function": "vert_main",
                "fragment_function": "frag_main",
                "raster_sample_count": 2,
                "color_attachments": [
                  {
                    "pixel_format": "BGRA8Unorm"
                  },
                ],
                "depth_attachment_pixel_format": "Depth32Float"
              }
            ]
          }
        }
        

        Further schema details can be found in Metal's developer documentation https://developer.apple.com/documentation/metal

        Using Metal Runtime

This is done at runtime:

1. Create a Pipeline Descriptor with state and functions
2. Add the descriptor to a binary archive
3. Serialize the binary archive to be imported by the app

        Harvesting sample

        
MTLRenderPipelineDescriptor *pipeline_desc = [MTLRenderPipelineDescriptor new];
pipeline_desc.vertexFunction = [library newFunctionWithName:@"vert_main"];
pipeline_desc.fragmentFunction = [library newFunctionWithName:@"frag_main"];
pipeline_desc.rasterSampleCount = 2;
pipeline_desc.colorAttachments[0].pixelFormat = MTLPixelFormatBGRA8Unorm;
pipeline_desc.depthAttachmentPixelFormat = MTLPixelFormatDepth32Float;

// Add pipeline descriptor to new archive
MTLBinaryArchiveDescriptor *archive_desc = [MTLBinaryArchiveDescriptor new];
id<MTLBinaryArchive> archive = [device newBinaryArchiveWithDescriptor:archive_desc error:&error];
BOOL success = [archive addRenderPipelineFunctionsWithDescriptor:pipeline_desc error:&error];

// Serialize archive to file system
NSURL *url = [NSURL fileURLWithPath:@"harvested-binaryArchive.metallib"];
success = [archive serializeToURL:url error:&error];
        

Extracting the JSON pipeline script from a binary archive can be done to move generation from runtime to build time. This is done with the metal-source tool, specifying the buffers and output directory options.

Generating Offline GPU Binaries

Generating a GPU binary from source can be done by invoking metal with the source, the pipeline script, and an output file:

    metal shaders.metal -N descriptors.mtlp-json -o archive.metallib

Generating a GPU binary from a Metal library can be done by invoking metal-tt with the source, the pipeline script, and an output file.

Loading Offline Binaries

1. Provide the binary archive URL when creating the archive descriptor
2. Use the URL to instantiate the archive

For more information regarding the API, see last year's talk:

https://developer.apple.com/videos/play/wwdc2021/10229

Optimize for Size

The Metal compiler optimizes aggressively for runtime performance. These optimizations may expand the GPU program size, which can have unexpected costs. Xcode 14 provides a new optimization mode for Metal: optimize for size. This setting prevents optimizations such as inlining and loop unrolling, which should lower application size and compile time. It is useful when the user encounters long compilation times. Optimize for size may hurt runtime performance, but it can also improve it if the program was incurring runtime penalties associated with large size.

Enabling Optimize for Size

This feature can be enabled in 3 ways:

1. In Xcode build settings, under Metal Compiler - Build Options, by selecting Size [-Os] under Optimization Level
2. In Terminal, with the option -Os
3. In the Metal framework, by setting MTLLibraryOptimizationLevelSize in an MTLCompileOptions object
        
        
        
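A hedged Swift sketch of the third option, compiling a library with the size optimization level; the shader source is assumed to be available as a string:

import Metal

func makeSizeOptimizedLibrary(source: String) throws -> MTLLibrary {
    guard let device = MTLCreateSystemDefaultDevice() else {
        fatalError("No Metal device available")
    }
    let options = MTLCompileOptions()
    options.optimizationLevel = .size   // MTLLibraryOptimizationLevelSize
    return try device.makeLibrary(source: source, options: options)
}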
        

        Simplify distribution in Xcode and Xcode Cloud

June 10, 2023 03:03

        Simplify Distribution in Xcode and Xcode Cloud

        Distribution Tools

        Methods for building and sharing your app

        Xcode Organizer Window

        Provides streamlined one-click distribution options

        Xcode Cloud Workflow

        Provides the ability to create a workflow to automate the building and sharing of an app

        Express TestFlight Distribution

        Archives

The first step to distribution is to create an archive. An archive:

  • Is a record of the app build
  • Contains an optimized release build
  • Contains debug symbols (.dSYM)
  • Has its contents repackaged when uploading
  • Has an .xcarchive extension

Creating an archive can be done by going to the Product menu in Xcode and selecting Archive. When the archive is created, it can be found in the Organizer window (Window > Organizer). From there, the archive can be distributed by clicking the "Distribute App" button. Xcode 15 has added new streamlined options.

These options include:

  • TestFlight & App Store - has the full capabilities of TestFlight and allows App Store submission
  • TestFlight internal only - can be shared with the team but does not allow App Store submission
  • Debugging - an optimized build that can be installed on registered devices
  • Release - similar to the debugging build but signed with a distribution certificate
  • Custom - allows configuration of all the settings

The streamlined options come with the recommended settings:

  • Automatic signing
  • Symbols for crash reports
  • Auto-incrementing build number
  • Stripped Swift symbols

Once the app has been distributed, it will be visible in App Store Connect. TestFlight users will be able to see details about the new build on their device. Users can also share feedback about the build, which is available inside the Feedback tab in Xcode Organizer.

        Xcode Cloud

Xcode Cloud is the continuous integration and delivery service built by Apple. In the presentation, a workflow is created that shares a build with the team when new features are added. This is done by:

1. Going to the menu Integrate > Create Workflow (or Integrate > Manage Workflows if one already exists)
2. Editing the Archive action to add TestFlight (Internal Testing Only)
3. Adding a TestFlight Internal Testing post-action to add a TestFlight group

        Xcode Cloud can automatically update the notes for what to test. More information about this can be found here:

        https://developer.apple.com/documentation/Xcode/including-notes-for-testers-with-a-beta-release-of-your-app

        Automating Notarization

        Notarization is the process by which Apple verifies software. This allows developers to directly distribute their Mac application to users while giving them confidence that it is safe.

The notarization process is as follows:

1. An archive is created for the app and uploaded to Apple's Notary Service
2. The Notary Service scans the app for malicious content
3. A ticket is provided that can be stapled to the app
4. The app is distributed to customers
5. When the app is launched for the first time, the system verifies the stapled ticket or a ticket provided by the Notary Service

        More information about notarization can be found here:

        https://developer.apple.com/videos/play/wwdc2022/10109

        Choosing the Direct Distribution option from Organizer will notarize the app. Xcode Cloud also supports notarization during automation. This can be done by adding a notarize post action inside the workflow editor.

        Design dynamic Live Activities

November 11, 2023 06:42

        What are Live Activities?

        • Use rich graphical layouts to display their information and update seamlessly inline
        • On the lock screen, Live Activities live at the top of the list alongside notifications

        What can be a Live Activity?

        Anything someone wants to keep track of for a few minutes to a couple hours can be a Live Activity

Examples:

  • Sports
  • Ridesharing
  • Delivery tracking
  • Live workouts

        Lock Screen

        Live Activity: lock screen style

        Lock Screen Design Tips

Live Activity: lock screen margins

• Be aware of the 14pt margins added to all notifications.

Live Activity: show unique layout in lock screen

• Try not to replicate the notification layouts themselves. Rather, create a layout that is unique and specific to the information you are displaying.
• Buttons should only be used if they're controlling an essential part of the activity itself.

Live Activity: match app style in lock screen

• The best result is when your Live Activity and app feel like they share the same visual aesthetic and personality.
  • Consider the colors, iconography, typefaces, and other attributes from your app that you can use.
  • Don't alter your colors between light and dark mode if it breaks this visual association.

Live Activity: change content colors in lock screen

• You can also change the colors based on the content being represented, to create a more dynamic and engaging visual of your information.

Live Activity: integrate logo in lock screen

• If you do use a logo mark as part of your brand, make sure it's integrated uncontained into the layout itself, rather than just using your whole app icon.

Live Activity: dismiss button in lock screen

• The background and foreground colors your Live Activity provides are used to automatically generate a matching dismiss button when swiping.
  • Make sure to check that the resulting button looks correct.

Spacing

        IMPORTANT: Ensure your Live Activities are not too tall, since this area is shared with notifications, music player, etc

        Live Activity: grow and shrink in lock screen
        • Look for ways to reduce the height of your design by adjusting the size and placement of elements so they can better fit together and be more compact.
• Dynamically change the height of your Live Activity between different moments as you have more or less information to display.

Transitions
        • Use the numeric content transition to count up or down important numbers in your Activity.
        • For animating in and out graphic elements and text, use the content replace transition.
        • You can also create your own by combining different animations of the scale, opacity, and position of elements.

        Alerting

        • Do not send secondary notifications to alert user of something related to your Live Activity, instead use your Live Activity
          • Alerting lights up the screen and plays the standard notification sound
  • Emphasize the information that caused the alert in your layout during this transition (see the sketch below).
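For reference, a minimal ActivityKit sketch of alerting through the Live Activity itself instead of a separate notification; the activity and updatedState values are assumed to already exist:

// Update the Live Activity and alert the user in one call.
let content = ActivityContent(state: updatedState, staleDate: nil)
await activity.update(
    content,
    alertConfiguration: AlertConfiguration(
        title: "Driver is arriving",
        body: "Meet them at the pickup point",
        sound: .default
    )
)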

        Removal

• If your Live Activity has ended and is no longer relevant, make sure to remove it from the lock screen after a short duration of time.

StandBy Design Tips

Live Activity: standby view
• StandBy is a new feature that lets you use your iPhone as an ambient informational display.
• Your layout is scaled up 200% to maximize its size.
  • Ensure assets and images you are using in your design are high enough resolution to be displayed at this larger size.

Live Activity: avoid edge styling
• Avoid using graphic elements that extend to the edge of your Live Activity; use dividing lines or a containing shape instead.

Live Activity: remove background in standby mode
• When in StandBy mode, consider removing your background and blending your layout seamlessly into the device bezel.

Live Activity: night mode in standby
• Live Activities in StandBy automatically gain "night mode", which transitions the display to a red tint in low light.
  • Check to make sure your colors have enough contrast while in night mode.

Dynamic Island Design Tips
        Live Activity: dynamic island
        • Use of animation and continuously updating data here makes it feel more alive
        • Extra rounded, thicker shapes, as well as large, heavier weight, easy-to-read text works well
• Use color to convey identity.

Live Activity: dynamic island concentric shapes
• Dynamic Island is very sensitive to the shape and placement of things inside it. It’s really important to place objects and information in it in a way that stays in harmony with this shape.
  • Be concentric with its shape. This is when rounded shapes nest inside of each other with even margins all the way around.

Different Size Classes

        Compact View

        Live Activity: compact view in dynamic island
• Compact is the most common view. It's used to help people keep an eye on an Activity while using their phone.
• It’s meant to be informational, communicating the most essential things about an activity.
  • When you want to show multiple sessions for your app going on at once, consider alternating (ticking) between the display of them.

Live Activity: keep things snug in dynamic island for compact view
• Be as narrow as possible with no wasted space.
• Ensure content is snug against the sensor region.

Live Activity: dynamic island alerts from compact view
• If you need to alert users of an event during your session, expand the island to present that information when possible, rather than sending a push notification.

Expanded View
        Live Activity: expanded view in dynamic island
        • In addition to alerting you, people can press into the Dynamic Island to zoom into this view and see more information, and access essential controls.
        • Try to get to the essence of your activity here, not showing too little or too much.
• Emphasize rounded, thicker shapes, and a liberal use of color to establish identity.

Live Activity: keep cohesiveness in mind between expanded and compact
• Try to maintain the relative placement of things between the expanded/compact views for cohesiveness.

Live Activity: wrap around sensor in expanded view
• Try to keep the height of your expanded view within reason.
• Avoid having a “forehead” at the top that calls attention to the sensor region.

Minimal View
        Live Activity: minimal view in dynamic island
• This view is shown when juggling between multiple sessions going on at once.

Live Activity: showing data in minimal view for dynamic island
        • Avoid reverting to purely just a logo here, and think about how your session can continue to convey information even in this tiny state.

        Unleash the UIKit trait system

        2023年11月11日 02:42

        UIKit Trait System

        UIKit provides many built-in system traits, such as user interface style, horizontal size class, and preferred content size category. In iOS 17, we can define our own custom traits as well. This unlocks a powerful new way to provide data to the app's view controllers and views. The main way to work with traits in UIKit is using trait collections.

        UITraitCollection

        A trait collection contains traits and their associated values. The traitCollection property of the UITraitEnvironment protocol contains traits that describe the state of various elements of the iOS user interface, such as size class, display scale, and layout direction. Together, these traits compose the UIKit trait environment.

        There are some new APIs in iOS 17 that make it easier to work with trait collections.

        Trait environments and hierarchy

        Trait environments and hierarchy

        Example of how trait hierarchy worked prior to iOS 17. Flow of traits in the view hierarchy stopped at each view owned by a view controller. This behavior could be surprising.

        Trait hierarchy before iOS 17

        Updated trait hierarchy on iOS 17. View controllers inherit their trait collection from their view’s superview, instead of directly from their parent view controller. This creates a simple linear flow of traits through view controllers and views.

        Trait hierarchy after iOS 17

        View controller trait updates

• Traits not up-to-date in viewWillAppear(_:)
        • Use viewIsAppearing(_:) instead
          • View controller and view traits up-to-date
          • View added to hierarchy, has accurate geometry

        Learn more about viewIsAppearing(_:) in What's new in UIKit
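A minimal sketch of the recommended callback inside a UIViewController subclass; the updateLayout(for:) helper is hypothetical:

// Traits and view geometry are accurate here, unlike in viewWillAppear(_:).
override func viewIsAppearing(_ animated: Bool) {
    super.viewIsAppearing(animated)
    updateLayout(for: traitCollection) // hypothetical helper that reads size class, etc.
}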

        View trait updates

        • Views only update traits when in the hierarchy
        • Views update traits before layout
• layoutSubviews() is the best place to use traits (see the sketch below)
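For example, a minimal sketch of reading traits during layout in a UIView subclass; the stackView property is an assumed subview:

override func layoutSubviews() {
    super.layoutSubviews()
    // View traits are guaranteed to be up to date before layout runs.
    let isCompact = traitCollection.horizontalSizeClass == .compact
    stackView.axis = isCompact ? .vertical : .horizontal
}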

        Working with trait collections

        // Build a new trait collection instance from scratch
        let myTraits = UITraitCollection { mutableTraits in
            mutableTraits.userInterfaceIdiom = .phone
            mutableTraits.horizontalSizeClass = .regular
        }
        

In the code above, the mutableTraits variable inside the closure conforms to a new protocol called UIMutableTraits. When the closure finishes executing, the initializer returns an immutable UITraitCollection instance that contains all of the trait values set inside the closure.

        There’s also a new modifyingTraits method that allows you to create a new instance by modifying values from the original trait collection inside the closure.

        // Get a new instance by modifying traits of an existing one
        let otherTraits = myTraits.modifyingTraits { mutableTraits in
            mutableTraits.horizontalSizeClass = .compact
            mutableTraits.userInterfaceStyle = .dark
        }
        

        Implementing a simple custom trait

        Conform to UITraitDefinition protocol with one required static property defaultValue. This is default value for the trait when no value has been set. Each trait definition has an associated value type, which is inferred from the defaultValue.

struct ContainedInSettingsTrait: UITraitDefinition {
    static let defaultValue = false
}

let traitCollection = UITraitCollection { mutableTraits in
    mutableTraits[ContainedInSettingsTrait.self] = true
}
        
        let value = traitCollection[ContainedInSettingsTrait.self]
        // true
        

        Adding property syntax with simple extension

        extension UITraitCollection {
            var isContainedInSettings: Bool { self[ContainedInSettingsTrait.self] }
        }
        
extension UIMutableTraits {
            var isContainedInSettings: Bool {
                get { self[ContainedInSettingsTrait.self] }
                set { self[ContainedInSettingsTrait.self] = newValue }
            }
        }
        
        let traitCollection = UITraitCollection { mutableTraits in
            mutableTraits.isContainedInSettings = true
        }
        
        let value = traitCollection.isContainedInSettings
        // true
        

        UIMutableTraits

        The UIMutableTraits protocol provides read-write access to get and set trait values on an underlying container. UIKit uses this protocol to facilitate working with instances of UITraitCollection, which are immutable and read-only. The UITraitCollection initializer init(mutations:) uses an instance of UIMutableTraits, which enables you to set a batch of trait values in one method call. UITraitOverrides conforms to UIMutableTraits, making it easy to set trait overrides on trait environments such as views and view controllers.

        // ThemeTrait conforms to UITraitDefinition, and has a defaultValue type of Theme
        extension UIMutableTraits {
            var theme: Theme {
                get { self[ThemeTrait.self] }
                set { self[ThemeTrait.self] = newValue }
            }
        }
        
        // Apply an override for the custom theme trait.
        view.traitOverrides.theme = .monochrome
        

        Trait overrides are the mechanism you use to modify data within the trait hierarchy. In iOS 17, it’s easier than ever to apply trait overrides. There’s a new traitOverrides property on each of the trait environment classes, including window scenes, windows, views, view controllers, and presentation controllers.

        Trait overrides applied to the parent affect the parent’s own trait collection. And then the values from the parent’s trait collection are inherited to the child. Finally, the child's trait overrides are applied to the values it inherited to produce its own trait collection.

        /// Managing trait overrides
        
        func toggleThemeOverride(_ overrideTheme: MyAppTheme) {
            if view.traitOverrides.contains(MyAppThemeTrait.self) {
                // There's an existing theme override; remove it
                view.traitOverrides.remove(MyAppThemeTrait.self)
            } else {
                // There's no existing theme override; apply one
                view.traitOverrides.myAppTheme = overrideTheme
            }
        }
        

        Note: traitCollectionDidChange is deprecated in iOS 17. When you implement traitCollectionDidChange, the system doesn’t know which traits you actually care about, so it has to call that method every time that any trait changes value. However, most classes only use a handful of traits and don’t care about changes to any others. This is why traitCollectionDidChange doesn’t scale as you add more and more custom traits.

In place of traitCollectionDidChange, iOS 17 adds new trait registration methods, with both target-action and closure-based variants.

        Call registerForTraitChanges and pass an array of traits to register for as well as the target and action method to call on changes. The target parameter is optional. If you omit it, the target will be the same object that registerForTraitChanges is called on.

        // Register for horizontal size class changes on self
        registerForTraitChanges(
            [UITraitHorizontalSizeClass.self],
            action: #selector(UIView.setNeedsLayout)
        )
        
        // Register for changes to multiple traits on another view
        let anotherView: MyView
        anotherView.registerForTraitChanges(
            [UITraitHorizontalSizeClass.self, ContainedInSettingsTrait.self],
            target: self,
            action: #selector(handleTraitChange(view:previousTraitCollection:))
        )
        
        @objc func handleTraitChange(view: MyView, previousTraitCollection: UITraitCollection) {
            // Handle the trait change for this view...
        }
        

        The first parameter is always the object whose traits are changing. Use this parameter to get the new traitCollection. The second parameter will always be the previous trait collection for that object before the change. In addition to registering for individual traits, you can also register using new semantic sets of system traits.
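As a sketch of the closure-based variant mentioned above, assuming the code runs inside the hypothetical MyView (UIView subclass) used earlier; the registration token and unregister call are shown for completeness:

// Closure-based registration; the closure receives the changed object and the previous traits.
let registration = registerForTraitChanges(
    [UITraitHorizontalSizeClass.self, UITraitUserInterfaceStyle.self]
) { (view: MyView, previousTraitCollection: UITraitCollection) in
    view.setNeedsLayout()
}

// Later, when the changes are no longer interesting:
unregisterForTraitChanges(registration)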

        SwiftUI bridging

        • Bridge UIKit traits and SwiftUI environment keys
        • Data propagates across UIKit and SwiftUI boundaries
        • Works in both directions

        Implementing a bridged UIKit trait and SwiftUI environment key

        // Custom UIKit trait
        struct MyAppThemeTrait: UITraitDefinition {...}
        
        // Custom SwiftUI environment key
        struct MyAppThemeKey: EnvironmentKey {...}
        
        // Bridge SwiftUI environment key with UIKit trait
        extension MyAppThemeKey: UITraitBridgedEnvironmentKey {
            static func read(from traitCollection: UITraitCollection) -> MyAppTheme {
               traitCollection.myAppTheme
            }
        
            static func write(to mutableTraits: inout UIMutableTraits, value: MyAppTheme) {
                mutableTraits.myAppTheme = value
            }
        }
        

        Using a bridged UIKit trait and SwiftUI environment key

        // UIKit trait override applied to the window scene
windowScene.traitOverrides.myAppTheme = .monochrome
        
        // Cell in a UICollectionView configured to display a SwiftUI view
        cell.contentConfiguration = UIHostingConfiguration {
            CellView()
        }
        
        // SwiftUI view displayed in the cell, which reads the bridged value from the environment
        struct CellView: View {
            @Environment (\.myAppTheme) var theme: MyAppTheme
            var body: some View {
                Text("Settings")
            .foregroundStyle(theme == .monochrome ? .gray : .blue)
            }
        }
        
        // SwiftUI environment value applied to a UIViewControllerRepresentable
        struct SettingsView: View {
            var body: some View {
                SettingsControllerRepresentable()
                    .environment(\.myAppTheme, .standard)
            }
        }
        
        // UIKit view controller contained in the SettingsControllerRepresentable
        class SettingsViewController: UIViewController {
            override func viewWillLayoutSubviews() {
                super.viewWillLayoutSubviews()
                title = settingsTitle(for: traitCollection.myAppTheme)
            }
        }
        

        Spotlight your app with App Shortcuts

        2023年11月11日 02:57

        App Intents

        Extend your app’s custom functionality to support system-level services like Siri, Spotlight, the Shortcuts app, and the Action button using App Intents.

        The App Intents framework offers a programmatic way to make your app’s content and functionality available to system services like Siri and the Shortcuts app. The programmatic approach lets you expose any of your app’s capabilities, and not just ones that fall into specific categories. You also use this programmatic approach to supply metadata, UI information, activation phrases, and other information the system needs to initiate your app’s actions.

        App Actions

        App intents

        Define the custom actions your app exposes to the system, and incorporate support for existing SiriKit intents.

        Parameter resolution
        Define the required parameters for your app intents and specify how to resolve those parameters at runtime.

        Resolvers
        Resolve the parameters of your app intents, and extend the standard resolution types to include your app’s custom types.

        Data introspection

        App entities

        Make core types or concepts discoverable to the system by declaring them as app entities.

        Entity queries

        Help the system find the entities your app defines and use them to resolve parameters.

        System integration

        App Shortcuts

        Integrate your app’s intents and entities with the Shortcuts app, Siri, Spotlight, and the Action button on supported iPhone and Apple Watch models.

        Intent discovery

        Donate your app’s intents to the system to help it identify trends and predict future behaviors.

        Focus
        Adjust your app’s behavior and filter incoming notifications when the current Focus changes.

        Action button on iPhone and Apple Watch
        Enable people to run your App Shortcuts with the Action button on iPhone or to start your app’s workout or dive sessions using the Action button on Apple Watch.

        Utility types

        Common types
        Specify common types that your app supports, including currencies, files, and contacts.

        App Shortcuts

• App Shortcuts are built with the App Intents framework, a Swift-only framework built from the ground up to make it faster and easier to build great intents right in the Swift source code. All App Shortcuts begin by defining an intent in the source code.
• Intents represent individual tasks that can be completed with the app, like creating a to-do list, summarizing its contents, or checking off an item (a minimal intent sketch follows this list).
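A minimal sketch of such an intent; the CreateList name matches the provider code further below, and the actual list-creation logic is assumed:

import AppIntents

struct CreateList: AppIntent {
    static var title: LocalizedStringResource = "Create List"

    func perform() async throws -> some IntentResult {
        // Create the to-do list in the app's model here (assumed).
        return .result()
    }
}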

        After creating an app intent, create an app shortcut with it, so it can be used from Spotlight or Siri. This associates the app intent with Siri trigger phrases, titles, and symbols that are needed.

        
        
        struct DemoAppShortcutsProvider: AppShortcutsProvider {
            static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: CreateList(),
            phrases: ["Create a new \(.applicationName) list"],
            shortTitle: "Create List",
            systemImageName: "checklist"
                )
            }
        }
        
        

        By running the app you can immediately start creating to-do lists right from Siri or the Shortcuts app, all by only creating two structs in the code.

        Update Live Activities with push notifications

        2023年11月11日 01:42

        Update Live Activities with push notifications

Update Live Activities remotely when you push content through the Apple Push Notification service (APNs). This section explains how to configure a Live Activity and test pushes locally. You should already be familiar with ActivityKit and Live Activities.

This assumes you already know how an app and a server interact with the Apple Push Notification service (APNs). When a new Live Activity is started, ActivityKit obtains a push token from APNs. This push token is unique for each Live Activity you request, which is why your app needs to send it to your server before the server can start sending push updates. Then, whenever you need to update the Live Activity, your server sends a push request with that token to APNs. Finally, APNs sends the payload to the device, which wakes your widget extension to render the UI (see the sketch below).
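A minimal sketch of that flow from the app side, assuming an existing ActivityAttributes type and initial content state; the adventureAttributes and initialState names are hypothetical:

// Request the Live Activity with a push token, then forward token updates to your server.
let activity = try Activity.request(
    attributes: adventureAttributes,
    content: .init(state: initialState, staleDate: nil),
    pushType: .token
)

Task {
    for await tokenData in activity.pushTokenUpdates {
        let token = tokenData.map { String(format: "%02x", $0) }.joined()
        // Send `token` to your server; it identifies this specific Live Activity to APNs.
    }
}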

ActivityKit Framework

With the ActivityKit framework, you can display a Live Activity to share live updates from your app on the Lock Screen. Especially for apps that push the limit of notifications to provide updated information, Live Activities offer a richer, interactive, and highly glanceable way for people to keep track of an event or activity over a couple of hours.

        Live Activity Type

        Live Activities display your app’s most current data on the iPhone or iPad Lock Screen and in the Dynamic Island, allowing people to see live information at a glance and perform quick actions that are related to the displayed information.

        To support Live Activities:

1. Create a widget extension if you haven’t added one, and make sure to select “Include Live Activity” when you add the widget extension target to your Xcode project.
2. Add the Supports Live Activities entry to the Info.plist, and set its Boolean value to YES. Alternatively, open the Info.plist file as source code, add the NSSupportsLiveActivities key, then set the type to Boolean and its value to YES. If your project doesn’t have an Info.plist file, add the Supports Live Activities entry to the list of custom iOS target properties for your iOS app target and set its value to YES.
3. Add code that defines an ActivityAttributes structure to describe the static and dynamic data of your Live Activity.
4. Use the ActivityAttributes you defined to create the ActivityConfiguration (a minimal sketch of steps 3 and 4 follows this list).
5. Add code to configure, start, update, and end your Live Activities.
6. Make the Live Activity interactive with Button or Toggle as described in Adding interactivity to widgets and Live Activities.
7. Add animations to bring attention to content updates as described in Animating data updates in widgets and Live Activities.
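A minimal sketch of steps 3 and 4, assuming a hypothetical AdventureAttributes type whose content state matches the push payload shown later in this section:

import ActivityKit
import WidgetKit
import SwiftUI

struct AdventureAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var currentHealthLevel: Double
        var eventDescription: String
    }
    var heroName: String
}

struct AdventureActivityWidget: Widget {
    var body: some WidgetConfiguration {
        ActivityConfiguration(for: AdventureAttributes.self) { context in
            // Lock Screen / banner presentation
            Text(context.state.eventDescription)
        } dynamicIsland: { context in
            DynamicIsland {
                DynamicIslandExpandedRegion(.center) {
                    Text(context.state.eventDescription)
                }
            } compactLeading: {
                Image(systemName: "heart.fill")
            } compactTrailing: {
                Text("\(Int(context.state.currentHealthLevel * 100))%")
            } minimal: {
                Image(systemName: "heart.fill")
            }
        }
    }
}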

        Construct the ActivityKit push notification payload

        To successfully update or end a Live Activity with an ActivityKit push notification, send an HTTP request to APNs that conforms to the following

Requirements:

• Set the value for the apns-push-type header field to liveactivity.
• Set the apns-topic header field using the following format: <your app's bundle ID>.push-type.liveactivity.
• Set the value for the apns-priority header field to 5 or 10.

Sample payload for APNs to update a Live Activity:

/// APNs sample body for updating a Live Activity
{
    "aps": {
        "timestamp": 1685952000,
        "event": "update",
        "content-state": {
            "currentHealthLevel": 0.0,
            "eventDescription": "Power Panda has been knocked down!"
        },
        "alert": {
            "title": "Power Panda is knocked down!",
            "body": "Use a potion to heal Power Panda!",
            "sound": "default"
        }
    }
}

/// Sample curl command to update a Live Activity
curl \
  --header "apns-topic: com.example.apple-samplecode.Emoji-Rangers.push-type.liveactivity" \
  --header "apns-push-type: liveactivity" \
  --header "apns-priority: 10" \
  --header "authorization: bearer $AUTHENTICATION_TOKEN" \
  --data '{
      "aps": {
          "timestamp": '$(date +%s)',
          "event": "update",
          "content-state": {
              "currentHealthLevel": 0.941,
              "eventDescription": "Power Panda found a sword!"
          }
      }
  }' \
  --http2 https://api.sandbox.push.apple.com/3/device/$ACTIVITY_PUSH_TOKEN

Determine the update frequency

The system allows for a certain budget of ActivityKit push notifications per hour. Set the HTTP header field apns-priority for your requests to specify the priority of an ActivityKit push notification:

• If you don’t specify the apns-priority value, APNs delivers the ActivityKit push notification immediately with the default priority of 10 and counts it toward the notification budget that the system imposes.
• If you exceed the budget, the system may throttle your ActivityKit push notifications.

Note: To avoid throttling, you can send a low-priority ActivityKit push notification that doesn’t count toward the budget by setting the HTTP header field apns-priority to 5. Consider this lower priority first before using the priority of 10. In many cases, choosing a mix of priority 5 and 10 for updates prevents your Live Activity updates from being throttled.

        What's new in Background Assets

        2023年11月8日 01:32

        Background Assets

        With the Background Assets framework, you can improve the experience of your app or game by reducing or eliminating the time people have to wait while your app downloads any required assets at first launch

Add a Background Assets extension to your app’s target, and let the system notify that extension about an app installation or subsequent update. Then use the download manager to schedule background downloads of required content from your servers or content delivery network (CDN), and have those downloads finish even when the app isn’t running. Check for updated content when the system periodically launches the extension (dependent on app usage) and, when content is available, schedule it for immediate download.

        [!Important] Use the framework only to download additional assets for your app; don’t use it for any other purposes. For example, don’t collect or transmit data to identify a user or device or to perform advertising or advertising measurement.

        The framework leverages ExtensionKit and common types like URLRequest.

        What’s new in Background Assets

With the new Background Assets approach, assets can be downloaded while the app is being installed or updated, unlike the old version where downloads happened at first launch. This means that downloads are completely integrated into the iOS Home Screen, macOS Launchpad, and the App Store. To the end user, the asset downloads appear as if the app is still being downloaded from the App Store. This also means that while essential downloads are in flight, the app cannot be launched by the user. All the user can do is cancel or pause the installation.

        Changes required

Below are the essential changes required compared to Background Assets as introduced last year.

        
        
/// Old keys in Info.plist

|Key|Type|Description|
|-|-|-|
|BAInitialDownloadRestrictions|Dictionary|The restrictions that apply to the set of assets that download prior to first app launch.|
|BADownloadAllowance|Number|The combined size of the initial set of non-essential asset downloads. Stored inside the BAInitialDownloadRestrictions dictionary.|
|BADownloadDomainAllowList|Array|Array of domains that assets can be downloaded from prior to first app launch. Stored inside the BAInitialDownloadRestrictions dictionary.|
|BAMaxInstallSize|Number|The combined size (in bytes) on disk of the non-essential assets that download immediately after app installation.|
|BAManifestURL|String|URL of the application's manifest.|


/// New keys in Info.plist

|Key|Type|Description|
|-|-|-|
|BAInitialDownloadRestrictions|Dictionary|The restrictions that apply to the set of assets that download prior to first app launch.|
|BADownloadAllowance|Number|The combined size of the initial set of non-essential asset downloads. Stored inside the BAInitialDownloadRestrictions dictionary.|
|**BAEssentialDownloadAllowance**|Number|The combined size (in bytes) of the initial set of essential asset downloads, including your manifest. Stored inside the BAInitialDownloadRestrictions dictionary.|
|BADownloadDomainAllowList|Array|Array of domains that assets can be downloaded from prior to first app launch. Stored inside the BAInitialDownloadRestrictions dictionary.|
|BAMaxInstallSize|Number|The combined size (in bytes) on disk of the non-essential assets that download immediately after app installation.|
|**BAEssentialMaxInstallSize**|Number|The combined size (in bytes) on disk of the essential downloads that occur during app installation.|
|BAManifestURL|String|URL of the application's manifest.|
        
        

[!Important] The above keys are not only essential but are also necessary in order to submit the app to the App Store.

There are two new keys that are required to support Essential Assets: BAEssentialDownloadAllowance and BAEssentialMaxInstallSize. The essential download allowance is expressed in bytes and defines an upper bound on how large the sum of all of your essential asset downloads can be. It's important to get this number as close as possible to the size of the essential assets you enqueue so that download progress is smooth for the user when they install your app. The other new key, BAEssentialMaxInstallSize, represents the maximum size of those assets once extracted onto the user's device.

        Your guide to keyboard layout

        2023年11月6日 00:51

        UIKeyboardLayoutGuide

        Here is the old way of listening for keyboard notifications and adjusting your layout:

            //...
            keyboardGuide.bottomAnchor.constraint(
                equalTo: view.bottomAnchor
            ).isActive = true
            keyboardGuide.topAnchor.constraint(
                equalTo: textView.bottomAnchor
            ).isActive = true
            keyboardHeight = keyboardGuide.heightAnchor.constraint(
                equalToConstant: view.safeAreaInsets.bottom
            )
            NotificationCenter.default.addObserver(
                self,
                selector: #selector(respondToKeyboard),
                name: UIResponder.keyboardWillShowNotification,
                object: nil
            )
        }
        
        @objc func respondToKeyboard(notification: Notification) {
            let info = notification.userInfo
    if let endRect = info?[UIResponder.keyboardFrameEndUserInfoKey] as? CGRect {
                var offset = view.bounds.size.height - endRect.origin.y
                if offset == 0.0 {
                    offset = view.safeAreaInsets.bottom
                }
        let duration = info?[UIResponder.keyboardAnimationDurationUserInfoKey] as? TimeInterval ?? 2.0
                UIView.animate(
                    withDuration: duration,
                    animations: {
                        self.keyboardHeight.constant = offset
                        self.view.layoutIfNeeded()
                    }
                )
            }
        }
        

        New in iOS15. Here is the suggested new way using UIKeyboardLayoutGuide:

        view.keyboardLayoutGuide.topAnchor.constraint(
            equalToSystemSpacingBelow: textView.bottomAnchor,
            multiplier: 1.0
        ).isActive = true
        
        • Use view.keyboardLayoutGuide
        • For most common use cases, simply update to use .topAnchor
• Matches keyboard animations like bringing up and dismissing the keyboard
        • Follows height changes as the keyboard can be taller or shorter based on content (showing emojis, etc)
        • When the keyboard is undocked the guide will drop to the bottom of the screen and be the width of your window
          • Anything you've tied to the top anchor will follow
          • It accounts for safe-area insets

        Why did Apple create a new custom layout guide vs a generic layout guide?

        Answer: giving the developer more control over how their layout responds to keyboards

        • You have the ability to fully follow the keyboard in all its incarnations, if you so choose, by using a new property: .followsUndockedKeyboard.
          • If you set it to true, the guide will follow the keyboard when it's undocked or floating, giving you a lot of control over how your layout responds to wherever the keyboard may be.
  • No more automatic drop-to-bottom. No listening for hide keyboard notifications when undocking. The layout guide is where the keyboard is (see the one-line sketch below).
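Opting in is a single line, shown here for a view controller's root view:

// Follow the keyboard even when it is undocked, split, or floating.
view.keyboardLayoutGuide.followsUndockedKeyboard = true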

        UITrackingLayoutGuide

        UIKeyboardLayoutGuide is a subclass of the new UITrackingLayoutGuide

        • Tracks the constraints you want to change when it moves around the screen.
        • You can give it an array of constraints that activate when near a specific edge, and deactivate when leaving it
        • You can give it an array that activates when specifically away from an edge, and deactivates when near it.

        Example Code

        Default keyboard with custom top actions:

        Default keyboard with custom top actions

        Example 1

        Moving keyboard up causes top actions to shift to bottom:

        Moving the keyboard up causes top actions to shift to bottom
        let awayFromTopConstraints = [
            view.keyboardLayoutGuide.topAnchor.constraint(
                equalTo: editView.bottomAnchor
            )
        ]
        view.keyboardLayoutGuide.setConstraints(
            awayFromTopConstraints,
            activeWhenAwayFrom: .top
        )
        

This code specifies that the constraint anchoring the custom keyboard actions to the top of the keyboard is only active when the keyboard is away from the top.

        let nearTopConstraints = [
            view.safeAreaLayoutGuide.bottomAnchor.constraint(
                equalTo: editView.bottomAnchor
            )
        ]
        view.keyboardLayoutGuide.setConstraints(
            nearTopConstraints,
            activeWhenNearEdge: .top
        )
        

This code specifies that the constraint anchoring the custom keyboard actions to the bottom safe area of the view is only active when the keyboard is near the top edge.

        Example 2

        Moving keyboard to the right or left edge causes underlying content to shift into open area

        Moving keyboard to the right causes underlying content to shift to left
        //set default behavior
        let awayFromSides = [
            imageView.centerXAnchor.constraint(
                equalTo: view.centerXAnchor
            )
        ]
        view.keyboardLayoutGuide.setConstraints(
            awayFromSides,
            activeWhenAwayFrom: [
                .leading,
                .trailing
            ]
        )
        
        //when nearing sides, shift imageView appropriately
        let nearTrailingConstraints = [
            imageView.leadingAnchor.constraint(
                equalToSystemSpacingAfter: view.safeAreaLayoutGuide.leadingAnchor,
                multiplier: 1.0
            )
        ]
        view.keyboardLayoutGuide.setConstraints(
            nearTrailingConstraints,
            activeWhenNearEdge: .trailing
        )
        
        let nearLeadingConstraints = [
            view.safeAreaLayoutGuide.trailingAnchor.constraint(
                equalToSystemSpacingAfter: imageView.trailingAnchor, 
                multiplier: 1.0
            )
        ]
        view.keyboardLayoutGuide.setConstraints(
            nearLeadingConstraints,
            activeWhenNearEdge: .leading
        )
        

        What is .near and .awayFrom?

        • A docked keyboard is considered to be near the bottom and awayFrom the other edges
        • Undocked and split keyboards can be awayFrom all edges, or they can get near the top edge.
        • When it's the floating keyboard, it can be near or awayFrom any edge, and it can even be near two adjacent edges at the same time.

        Above only applies when you set followsUndockedKeyboard to true.

        Things to Know

• Camera text input can be launched from most keyboards. This camera UI can be full screen, and its behaviour conforms to UIKeyboardLayoutGuide.
• New in iOS 15, the shortcuts bar shown at the bottom while a hardware keyboard is attached is no longer full width. Its size now adapts to the language and how many buttons are present.
          • If you're following the undocked keyboard, you can actually use the real leading and trailing edges of the bar
          • This is always .near the bottom and, in its normal position, it's .awayFrom the other three edges.
          • If you collapse it into a small launch button, it can be moved around so it can be .near the leading or trailing edge.

        Animate symbols in your app

        2023年11月4日 01:22

        Animate symbols in your app

        Bring delight to your app with animated symbols. Explore the new Symbols framework, which features a unified API to create and configure symbol effects. Learn how SwiftUI, AppKit, and UIKit make it easy to animate symbols in user interfaces. Discover tips and tricks to seamlessly integrate the new animations alongside other app content. To get the most from this session, we recommend first watching “What's new in SF Symbols 5.”.

SF Symbols are an iconic part of Apple interfaces.

They look gorgeous in menus, toolbars, sidebars, and more. And because people are familiar with symbols, they make your app more intuitive to use. In iOS 17 and macOS Sonoma, we're enhancing symbols with animation, bringing more life into your apps than ever before.

        I recommend checking out the "What's new in SF Symbols 5" session to dive deeper into the animations themselves, including best practices for designing interfaces with them.

        Symbol Effects

        In the API, these animations are called "symbol effects," and the new Symbols framework is home to all of them. It's included for free when you use SwiftUI, AppKit, or UIKit to build your app. A really cool feature of the Symbols framework is that each effect has a simple dot-separated name. So to create a bounce effect, you can simply write ".bounce" in your code.

        These dot-separated names also extend to the way you configure effects. For example, you can specify that the symbol should bounce upwards or downwards, but most of the time, you won't need to specify anything. The frameworks will automatically use the most appropriate direction. Some effects feature many configuration options. For example, Variable Color has three different settings. By chaining options together, you can configure very specific effects with ease.

        The effect names are real Swift code. There's no strings attached. Xcode will autocomplete each part of the name, and if an effect is configured incorrectly, you'll get an error at compile time. The best way to explore all the new animations is the SF Symbols app. In the new animation tab, you can learn about all the available configuration options for each effect. You can even copy a dot-separated effect name to be used directly in your code. With all of the effect types and configuration options, there's a massive variety of animations available. But all of these effects actually encompass a small set of behaviors.

        Bounce, for example, plays a one-off animation on the symbol. This is considered discrete behavior. Adding a Scale effect, on the other hand, changes the symbol's scale level and keeps it there indefinitely. Scale is said to support indefinite behavior. Unlike discrete effects, indefinite effects only end when explicitly removed.

        Appear and Disappear support transition behavior. They can transition the symbol in and out of view.

        And finally, Replace is a content transition. It animates from one symbol to another.

        So that's the four different behaviors: discrete, indefinite, transition, and content transition. In the Symbols framework, each behavior corresponds to a protocol. Effects declare their supported behaviors by conforming to these protocols. Here is a breakdown of all available effects, as well as their supported behaviors. I'll cover this in more detail in this session. Just know that an effect's behavior determines which UI framework APIs can work with them. And speaking of UI framework APIs, let's talk about how to add all of these cool effects in your SwiftUI, UIKit, and AppKit apps.

        Symbol effects in SwiftUI

        In SwiftUI, there is a new view modifier, symbolEffect.

        // Symbol effects in SwiftUI
        Image(systemName: "wifi.router")
            .symbolEffect(.variableColor.iterative.reversing)
            .symbolEffect(.scale.up)
        

        Simply add the modifier and pass in the desired effect. Here, I pass in variableColor, and now the symbol is playing the default variable color animation. It's easy to do this in AppKit and UIKit too. Just use the new addSymbolEffect method on an image view to add a variable color effect. I can configure the variable color effect using the dot syntax. Here, I change the effect to variableColor.iterative.reversing, resulting in a different variable color animation. It's a great way to show that my app is connecting to the network. It's even possible to combine different effects. Here, I add a scale.up effect. Now the symbol is animating variable color while also scaled up.

        These APIs provide a simple way to add indefinite effects to symbol images. Recall that indefinite effects change some aspect of a symbol indefinitely, until the effect is removed.

        So using the symbolEffect modifier, I can apply a variable color effect, which continuously plays an animation.

        Symbol effects in AppKit and UIKit

In AppKit and UIKit, use the new addSymbolEffect method on an image view and pass in the desired effect, for example .variableColor.iterative.reversing. Calling addSymbolEffect again with another effect, such as .scale.up, layers the two effects on the same symbol.

        // Symbol effects in AppKit and UIKit
        let imageView: NSImageView = ...
        
        imageView.addSymbolEffect(.variableColor.iterative.reversing)
        imageView.addSymbolEffect(.scale.up)
        

        Indefinite symbol effects in SwiftUI

        But I also need a way to control when the effect is active. I wouldn't want this animation to keep playing after my app successfully connects to the network.

        This can be done by adding the boolean isActive parameter. Here, I apply the effect only when connecting to the internet. Once the app finishes connecting, the symbol animation seamlessly ends.

In AppKit and UIKit, use the removeSymbolEffect method to end indefinite effects.

        // SwiftUI
        struct ContentView: View {
            @State var isConnectingToInternet: Bool = true
            
            var body: some View {
                Image(systemName: "wifi.router")
                    .symbolEffect(
                        .variableColor.iterative.reversing,
                        isActive: isConnectingToInternet
                    )
            }
        }
        
// AppKit and UIKit
        let imageView: NSImageView = ...
        
        imageView.addSymbolEffect(.variableColor.iterative.reversing)
        
        // Later, remove the effect
        imageView.removeSymbolEffect(ofType: .variableColor)
        

        Discrete Effects

        What about Discrete effects, which perform one-off animations? I mentioned Bounce as an example of this earlier. Your app may trigger Bounce effects in response to certain events.

        In SwiftUI, I can use the same symbolEffect modifier to add discrete effects. However, I must also provide SwiftUI a value. Whenever the value changes, SwiftUI triggers the discrete effect.

        Let's add a button that, when pressed, bounces the symbol. The button's handler simply needs to increment bounceValue. SwiftUI will see the change in bounceValue and trigger the bounce. I can do this in AppKit and UIKit by adding a Bounce effect to the image view. Because Bounce only supports discrete behavior, then adding the effect performs a single bounce. There's no need to remove the effect afterwards.

         // Discrete symbol effects in SwiftUI
        
         struct ContentView: View {
            @State var bounceValue: Int = 0
            
            var body: some View {
                VStack {
                    Image(systemName: "antenna.radiowaves.left.and.right")
                        .symbolEffect(
                            .bounce,
                            options: .repeat(2),
                            value: bounceValue
                        )
                    
                    Button("Animate") {
                        bounceValue += 1
                    }
                }
            }
        }
        
        
        // Discrete symbol effects in AppKit and UIKit
        let imageView: NSImageView = ...
        
        // Bounce
        imageView.addSymbolEffect(.bounce, options: .repeat(2))
        

        Now, let's say I don't want the symbol to bounce just once. How about bouncing twice? SwiftUI, AppKit, and UIKit support an options parameter, where I can specify a preferred repeat count. Now, the symbol bounces twice when the effect is triggered. Bounce isn't the only effect which can have discrete behavior. Two of the effects I covered earlier, Pulse and Variable Color, support not only indefinite behavior, but also discrete behavior. In other words, they can play one-off animations, just like Bounce. That means I can take the earlier Bounce example and change it to variableColor. Variable Color switches to use its discrete behavior, since it's applied in a non-repeating fashion.
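That change is a one-line swap in the earlier SwiftUI example; a sketch, reusing the bounceValue trigger from before:

// Variable Color used with a trigger value plays its discrete, one-off animation.
Image(systemName: "antenna.radiowaves.left.and.right")
    .symbolEffect(.variableColor, value: bounceValue)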

        Content transition effects

        Next, let's talk about content transition effects.
        The Replace effect, which animates between two different symbol images, is the main example of this. Here, I have an image that switches between a pause symbol and a play symbol.

        SwiftUI has a new contentTransition type called symbolEffect, which can be used with Replace. So if I put the Image in a Button that toggles which symbol is displayed, the change is now animated. In AppKit and UIKit, you can use the new setSymbolImage method to change the image using a symbol content transition.

        // Content transition symbol effects in SwiftUI
            struct ContentView: View {
                @State var isPaused: Bool = false
            
                var body: some View {
                    Button {
                       isPaused.toggle()
                    } label: {
                       Image(systemName: isPaused ? "pause.fill" : "play.fill")
                           .contentTransition(.symbolEffect(.replace.offUp))
                    }
                }
            }
        

        Finally, we have Appear and Disappear, which can show and hide symbols with unique animations. These effects are uniquely classified as transition effects. But before we get into that, we need to talk about parallel universes. Don't worry, though. It's not as complicated as it seems. In one universe, the image disappears, but the image view is still in the hierarchy. In other words, there's no change to the layout. The square and circle remain the same distance to each other. In the parallel universe, the image view is truly added and removed from the hierarchy. As a result, the layout of surrounding views may change.

        The great news is that Appear and Disappear support both behaviors.

        The first behavior is possible because Appear and Disappear are indefinite effects.

        You know how to use indefinite effects already. In SwiftUI, use the .symbolEffect modifier and pass in .disappear. As the value of isMoonHidden updates, the Disappear effect is applied.

        In AppKit and UIKit, use addSymbolEffect and pass in .disappear or .appear.

The takeaway here is that indefinite effects don't change the layout at all. They only alter the rendering of the symbol within the image view. So that covers the first behavior. How do I jump to the parallel universe, where the surrounding layout changes? This is where the transition behavior comes in. Transition effects can be used with SwiftUI's built-in transition modifier, which animates a view's insertion or removal from the view hierarchy. Instead of conditionally applying a Disappear effect, you conditionally add the symbol to the view hierarchy. You can also use a unique transition effect called Automatic, which automatically performs the most appropriate transition animation for the symbol.

            // Content transition symbol effects in AppKit and UIKit
                let imageView: UIImageView = ...
                imageView.image = UIImage(systemName: "play.fill")
        
            // Change the image with a Replace effect
                let pauseImage = UIImage(systemName: "pause.fill")!
                imageView.setSymbolImage(pauseImage, contentTransition: .replace.offUp)
        

Variable value

        iOS 16 and macOS Ventura introduced variable value as another dimension for symbols, representing concepts like volume levels and signal strengths.

        In iOS 17 and macOS Sonoma, we are making it super easy to crossfade between arbitrary variable values.

        In SwiftUI, you don't need to do anything at all. Here, I have a Wi-Fi symbol whose variable value is based on some state– in this case, the current signal strength. As the signal strength changes, the Wi-Fi symbol automatically updates, while also animating across variable values. In AppKit and UIKit, use the automatic symbol content transition. It detects if the new symbol image just has a different variable value, and, if so, crossfades to the new value.

            // Variable value animations in SwiftUI
                struct ContentView: View {
                    @State var signalLevel: Double = 0.5
            
                    var body: some View {
                       Image(systemName: "wifi", variableValue: signalLevel)
                    }
                }
        
            // Variable value animations in AppKit and UIKit
                
                let imageView: UIImageView = ...
                imageView.image = UIImage(systemName: "wifi", variableValue: 1.0)
        
            // Animate to a different Wi-Fi level
                let currentSignalImage = UIImage(
                    systemName: "wifi",
                    variableValue: signalLevel
                )!
                imageView.setSymbolImage(currentSignalImage, contentTransition: .automatic)
        

        Thanks so much. There's a lot of ways to animate symbols, so use the SF Symbols app to discover what's possible. Explore the Symbols framework, and try the new symbol effect APIs in SwiftUI, AppKit, and UIKit. And finally, adopt the animations to make your app's interface more delightful than ever.

        Check out the other symbols sessions, too, for Human Interface guidelines on symbol animation, as well as updating custom symbols to support all the effects.

        Create animated symbols

        Thanks.

        Share files with SharePlay

        2023年11月5日 01:49

Limitations of GroupSessionMessenger

Previously, if you were creating a group drawing app and wanted to drop a photo onto the canvas, this wasn't possible due to the size limitations of GroupSessionMessenger.

New in iOS 17, file attachment transfers are not only possible but are also fast and end-to-end encrypted.

        GroupSessionJournal

New in iOS 17, GroupSessionJournal is an object that stays consistent for everyone across the GroupSession. Actions you take on the journal affect everyone, and properties on the journal are similarly synced for everyone.

        public final class GroupSessionJournal {
            public lazy var attachments: Attachments
    public func add<ItemType: Transferable>(_ item: ItemType) async throws -> Attachment
            public func remove(attachment: Attachment) async throws
        }
        

        You can upload any of your custom attachments with the "add" function. Just be sure your type conforms to the Transferable protocol.

When calling add or remove, everyone in the GroupSession will observe their "attachments" AsyncSequence being updated with the event (see the sketch below).
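A minimal sketch of uploading and removing an attachment, assuming journal is the GroupSessionJournal from the setup code further below and imageData is a Data value (Data conforms to Transferable); the exact add variant you use may differ if you also attach metadata:

// Upload an attachment; every participant sees it arrive in journal.attachments.
let attachment = try await journal.add(imageData)

// Remove it again when it is no longer needed.
try await journal.remove(attachment: attachment)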

        • Attachments are limited to 100MB.
        • The lifecycle of the attachment is as long as members of the GroupSession are connected
          • If the person who uploaded the attachment leaves the GroupSession the attachment remains until everyone disconnects

        GroupSessionJournal: Late Joiners

        Another advantage of GroupSessionJournal over GroupSessionMessenger is how it handles late joiners.

        In GroupSessionMessenger, late joiners are "caught up" on the session by having each session member re-upload their interactions behind the scenes. Obviously, having each member re-upload 100MB file attachments is not efficient so GroupSessionJournal ensures that late joiners receive attachments without any re-uploads.

Code Example: Syncing an image and its location across devices

        1. Create a struct for the image data and add it to your Canvas model
        struct CanvasImage: Identifiable {
            var id: UUID
            let location: CGPoint
            let imageData: Data
        }
        
        class Canvas: ObservableObject {
            @Published var images = [CanvasImage]()
            //...
        }
        
2. In your Canvas model, set up your GroupSessionJournal and listen for changes to its attachments
        func configureGroupSession(_ groupSession: GroupSession<DrawTogether>) {
            self.groupSession = groupSession
            let messenger = GroupSessionMessenger(session: groupSession)
            self.messenger = messenger
            let journal = GroupSessionJournal(session: groupSession)
            self.journal = journal
            
            //set up groupSession, handle messenger events...
            
    let task = Task {
                for await images in journal.attachments {
                    await handle(images)
                }
            }
            tasks.insert(task)
        
            //...
        }
        
3. In your Canvas model, create the image handler to convert journal attachments to CanvasImage
func handle(_ attachments: GroupSessionJournal.Attachments.Element) async {
    // Now make sure that our canvas always has all the images from this sequence.
    self.images = await withTaskGroup(of: CanvasImage?.self) { group in
        var images = [CanvasImage]()
        attachments.forEach { attachment in
            group.addTask {
                do {
                    let metadata = try await attachment.loadMetadata(
                        of: ImageMetadataMessage.self
                    )
                    let imageData = try await attachment.load(
                        Data.self
                    )
                    return .init(
                        id: attachment.id,
                        location: metadata.location,
                        imageData: imageData
                    )
                } catch { return nil }
            }
        }
        for await image in group {
            if let image {
                images.append(image)
            }
        }
        return images
    }
}
        
4. At this stage, simply update your UI to reflect Canvas.images
        Example: Sync image across devices

        Customize on-device speech recognition

        2023年11月3日 06:35

        Previous Iteration of Speech Recognizer

        By default, when you embed Apple's Speech framework into your app it uses a general language model to reject transcription candidates that it feels are less likely. This doesn't work well if your app is geared toward less common verbiage.

        Default Speech Recognition Workflow

        For example, in a chess app, you may want to tell the app "Play the Albin counter gambit" but this verbiage is so rare in the general language model that it incorrectly interprets this as "Play the album Counter Gambit".

        Custom Speech Recognition Workflow

        Language Model Customization

        New in iOS 17, you'll be able to customize the behavior of the SFSpeechRecognizer's language model, tailor it to your application, and improve its accuracy.

Steps:

1. Create a collection of training data (during the development process)
2. Prepare the training data
3. Configure a recognition request
4. Run it

        Data Generation

        Training data will consist of bits of text that represent phrases your app's users are likely to speak.

          import Speech
        
          let data = SFCustomLanguageModelData(
              locale: Locale(identifier: "en_US"),
              identifier: "com.apple.SampleApp",
              version: "1.0"
          ) {
              SFCustomLanguageModelData.PhraseCount(
                  phrase: "Play the Albin counter gambit",
                  count: 10
              )
          }
        

In the above example, we feed our custom phrase into the model 10 times. Experiment often; you could be surprised at how quickly the model learns your phrases.

        Only so much data can be accepted by the system, so balance your need to boost phrases against your overall training data budget.

        Furthermore, you can declare classes of words and put them into a pattern to represent every possible combination.

          SFCustomLanguageModelData.PhraseCountsFromTemplates(
              classes: [
                  "piece" : ["pawn", "rook", "knight", "bishop", "queen", "king"],
                  "royal" : ["queen", "king"],
                  "rank" : Array(1...8).map({String($0)})
              ]
          ) {
              SFCustomLanguageModelData.TemplatePhraseCountGenerator.Template(
                  "‹piece> to <royal> <piece> <rank>",
                  count: 10000
              )
          }
        

        When you are done building up the data object, export it to a file and deploy it into your app like any other asset.

          try await data.export(to: URL(filePath: "/var/tmp/SampleApp.bin"))
        

        If your app makes use of specialized terminology, for example, a medical app that includes the names of pharmaceuticals, you can define both the spelling and pronunciations of those terms, and provide phrase counts that demonstrate their usage.

        • Pronunciations are accepted in the form of X-SAMPA strings
        • Each locale supports a unique subset of pronunciation symbols
          SFCustomLanguageModelData.CustomPronunciation(
              grapheme: "Winawer",
              phonemes: ["w I n aU @r"]
          )
          SFCustomLanguageModelData.PhraseCount(
              phrase: "Play the Winawer variation",
              count: 10
          )
        

        The model can also be trained at runtime, for example, if you want to train it on commonly used names from the user's contacts.

          func scrapeDataForLmCustomization() {
              Task.detached {
                  let data = SFCustomLanguageModelData(
                      locale: Locale(identifier: "en_US"),
                      identifier: "SampleApp", 
                      version: "1.0"
                  ) {
                      for (name, timesCalled) in getCallHistory() {
                          SFCustomLanguageModelData.PhraseCount(
                              phrase: "Call \(name)",
                              count: timesCalled
                          )
                      }
                      //...
                  }
              }
          }
        

        Once the training data is generated, it is bound to a single locale. If you want to support multiple locales within a single script, you can use standard localization facilities like NSLocalizedString to do so.
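
        A rough sketch of that idea (not from the session): wrap each phrase in NSLocalizedString so one generation script can emit data for whichever localization it runs under. The string key and the wrapping function are hypothetical.

          import Foundation
          import Speech

          func makeTrainingData(for localeIdentifier: String) -> SFCustomLanguageModelData {
              SFCustomLanguageModelData(
                  locale: Locale(identifier: localeIdentifier),
                  identifier: "com.apple.SampleApp",
                  version: "1.0"
              ) {
                  SFCustomLanguageModelData.PhraseCount(
                      // Resolves to the localized phrase for the active localization,
                      // e.g. "Play the Albin counter gambit" in English.
                      phrase: NSLocalizedString("phrase.albin-counter-gambit",
                                                comment: "Spoken chess command"),
                      count: 10
                  )
              }
          }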

        Deploying Your Model

          public func prepCustomLm() {
              self.customLmTask = Task.detached {
                  self.hasBuiltLm = false
                  try await SFSpeechLanguageModel.prepareCustomLanguageModel(
                      for: self.assetPath,
                      clientIdentifier: "com.apple.SampleApp",
                      configuration: self.lmConfiguration
                  )
                  self.hasBuiltLm = true
              }
          }
        

        This method call can have a large amount of associated latency, so it's best to call it off the main thread, and hide the latency behind some UI, such as a loading screen.

          public func startRecording(updateRecognitionText: @escaping (String) -> Void) throws {
              recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
              // keep recognition data on device
              recognitionRequest.requiresOnDeviceRecognition = true
              recognitionRequest.customizedLanguageModel = self.lmConfiguration
              //...
          }
        

        When your app constructs the speech recognition request, first require that recognition run on device; otherwise, requests will be serviced without customization.

        Explore enhancements to App Intents

        November 3, 2023 02:26

        Speaker: Roman Efimov, Shortcuts Engineering

        Widgets

        New options to connect App Intents with Widgets through interactivity and configuration.

        Widget configuration

        The options found on the back of a configurable widget are called Parameters, and they're added with Intents. Previously Intents had to be declared in an Intent Definition file, but now they can be declared directly in the Widget extension code.

        • Use the new AppIntentConfiguration WidgetConfiguration type, instead of IntentConfiguration
        • Define a type that conforms to the WidgetConfigurationIntent protocol
        • Use @Parameter to add widget configurations
        // App Intents widget configuration
        @main
        struct UpNextWidget: Widget {
        	let kind: String = "UpNext"
        	var body: some WidgetConfiguration {
        		AppIntentConfiguration( // NEW, instead of IntentConfiguration()
        			kind: kind,
        			intent: UpNextConfiguration.self,
        			provider: Provider()
        		) { entry in
        			UpNextWidgetView(entry: entry)
        		}
        	}
        }
        
        struct UpNextConfiguration: AppIntent, WidgetConfigurationIntent {
        	static var title: LocalizedStringResource = "Up Next"
        	
        	@Parameter(title: "Example")
        	var example: Example
        }
        

        Dynamic options can be provided right here too, instead of in a separate Intents extension, by implementing queries and dynamic option providers.

        struct ExampleQuery: EntityStringQuery {
        	func entities(
        		matching string: String
        	) async throws -> [Example] { ... }
        }
        

        See more in the session "Dive into App Intents" from WWDC22.

        Migrating widgets from SiriKit to App Intents

        • Support latest and previous OS
        • Enable continued use of existing widgets
        • Remove SiriKit Intent Definition file (do not do this if you plan to support previous OS versions)

        Migration is automatic. In the Intent definition file, go to the SiriKit widget configuration Intent, and click "Convert to App Intent...". Make sure to test.

        Interactive widgets

        Widgets now support button taps and toggles. SwiftUI buttons and toggles now support intents.

        struct SetAlarm: AppIntent {
        	static var title: LocalizedStringResource = "Set Alarm"
        	
        	@Parameter(title: "Bus Stop")
        	var busStop: BusStop
        	
        	// Other parameters...
        	
        	func perform() async throws -> some IntentResult {
        		AlarmManager.shared.addAlarm(forTime: arrivalTime)
        		return .result()
        	}
        }
        
        struct NextBusView: View {
        	var body: some View {
        		Button(intent: SetAlarm(arrivalTime: arrivalTime)) {
        			Text(arrivalTime.asString)
        		}
        	}
        }
        

        App Intents are also available outside of widgets, in regular SwiftUI apps. An App Intent can serve as the widget configuration, so sharing that code reduces redundancy and ensures consistent behavior. WidgetConfigurationIntents can also serve as Shortcuts actions, as sketched below.
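
        A hedged sketch of exposing the widget's configuration intent as an App Shortcut (the phrase wording and image name are illustrative, not from the session):

        import AppIntents

        struct UpNextShortcuts: AppShortcutsProvider {
        	static var appShortcuts: [AppShortcut] {
        		AppShortcut(
        			intent: UpNextConfiguration(),
        			phrases: ["Show up next in \(.applicationName)"],
        			shortTitle: "Up Next",
        			systemImageName: "calendar"
        		)
        	}
        }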

        See more in the session "Bring your widget to life" from WWDC23.

        Dynamic options

        Conform to DynamicOptionsProvider or the EntityQuery protocols to provide the available values of a parameter in the App Intent.

        struct BusStopQuery: EntityStringQuery {
        	func entities(
        		matching string: String
        	) async throws -> [BusStop] {
        		BusStop.allStops.filter {
        			$0.name.contains(string)
        		}
        	}
        	
        	func entities(
        		for identifiers: [BusStop.ID]
        	) async throws -> [BusStop] {
        		BusStop.allStops.filter {
        			identifiers.contains($0.id)
        		}
        	}
        }
        

        Conditionally show options based on another parameter with @IntentParameterDependency.

        struct BusRouteQuery: EntityQuery {
        	@IntentParameterDependency<ShowNextBus>(
        		\.$busStop
        	)
        	var showNextBus
        	
        	func suggestedEntities() async throws -> [Route] {
        		guard let showNextBus else { return [] }
        		return Route.allRoutes.filter {
        			$0.busStops.contains(showNextBus.busStop)
        		}
        	}
        }
        

        Limit the size of array parameters for different widget sizes.

        struct ShowFavoriteRoutes: AppIntent, WidgetConfigurationIntent {
        	// Pass an int for a fixed array size
        	@Parameter(title: "Favorite Routes", size: 3)
        	var routes: [Route]
        	
        	// Or pass a dictionary keyed by widget family for multiple widget sizes
        	@Parameter(title: "Favorite Routes", size: [
        		.systemSmall: 3, .systemLarge: 5
        	])
        	var routes: [Route]
        }
        

        Define which parameters are shown, and when, with ParameterSummary. Use When to display conditionally based on widget size.

        struct ShowFavoriteRoutes: AppIntent, WidgetConfigurationIntent {
        	@Parameter(title: "Favorite routes", size: 3)
        	var routes: [Route]
        	
        	@Parameter(title: "Include weather info")
        	var includeWeatherInfo: Bool?
        	
        	static var parameterSummary: some ParameterSummary {
        		When(widgetFamily: .equalTo, .systemLarge) {
        			Summary("Show favorite \(\.$routes)") {
        				\.$includeWeatherInfo
        			}
        		} otherwise: {
        			Summary("Show favorite \(\.$routes)")
        		}
        	}
        }
        

        In this case, the routes array and the includeWeatherInfo toggle are shown, in that order, on a large widget, and only routes is shown on other widget sizes.

        Continue user activity

        Show relevant information when the user taps on the widget.

        • Call the widgetConfigurationIntent on the user activity to get the configuration Intent.
        • Use that configuration data to display relevant information in the app.
        WindowGroup {
        	ContentView()
        		.onContinueUserActivity("NextBus") { userActivity in
        			let configuration: Configuration? =
        				userActivity.widgetConfigurationIntent()
        			
        			// Navigate to the corresponding view
        			navigate(
        				toView: .busStopView,
        				busStop: configuration?.busStop,
        				route: configuration?.route
        			)
        		}
        }
        

        Use the RelevantContext APIs to suggest when to display the widget in a Smart Stack. The new RelevantIntentManager and RelevantIntent are more Swift-friendly and work seamlessly with App Intents.

        let relevantIntents = gameTimes.map {
        	RelevantIntent(SportsWidgetIntent(), "SportsWidget", .date(from: $0.start, to: $0.end))
        }
        RelevantIntentManager.shared.updateRelevantIntents(relevantIntents)
        

        See more about Relevance in "Build widgets for the Smart Stack on Apple Watch" from WWDC23.

        Developer experience

        Framework support

        In iOS 17 and Xcode 15, frameworks can now expose App Intents. This reduces code duplication. The AppIntentsPackage APIs can recursively import dependencies. By conforming types to the AppIntentsPackage protocol, both your app and frameworks can re-export metadata from other frameworks.
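
        A rough sketch of that conformance (the framework and type names here are hypothetical, not from the session):

        import AppIntents
        import BusSchedulingKit // hypothetical framework that defines its own App Intents

        // The framework would declare its own AppIntentsPackage type (here, BusSchedulingKitPackage);
        // the app re-exports that framework's metadata by listing it in includedPackages.
        struct SampleAppPackage: AppIntentsPackage {
        	static var includedPackages: [any AppIntentsPackage.Type] {
        		[BusSchedulingKitPackage.self]
        	}
        }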

        The session's example connects several frameworks across multiple snippets; watch from 15:45 to 17:00 for the details.

        AppShortcutsProvider and App Shortcuts can now be created in App Intents extensions; previously they could only be defined in the main app bundle. This keeps code modular and helps performance, since the app doesn't have to launch in the background every time an App Shortcut runs.

        Static metadata extraction

        All these features rely on static metadata extraction, which has been significantly improved in Xcode 15. Errors are shown directly during this process, so problems can be fixed faster.

        Continue execution

        • The ForegroundContinuableIntent protocol lets an Intent that started in the background continue its execution in the foreground app (see the sketch after this list).
        • Use needsToContinueInForegroundError to stop the Intent execution and require action to continue.
        • Use requestToContinueInForeground to get a result from the person and use it to complete the App Intent's perform.
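
        A rough sketch of that flow (the intent, RouteStore, and its methods are hypothetical), using the needsToContinueInForegroundError helper that ForegroundContinuableIntent provides:

        import AppIntents

        struct ExportRoutes: AppIntent, ForegroundContinuableIntent {
        	static var title: LocalizedStringResource = "Export Routes"

        	@MainActor
        	func perform() async throws -> some IntentResult {
        		guard RouteStore.shared.isUnlocked else {
        			// Stop execution and ask the person to open the app;
        			// the closure runs once the app is in the foreground.
        			throw needsToContinueInForegroundError("Open the app to unlock your routes.") {
        				RouteStore.shared.promptForUnlock()
        			}
        		}
        		RouteStore.shared.exportAll()
        		return .result()
        	}
        }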

        Apple Pay

        Initiate an Apple Pay transaction directly in the perform method with PKPaymentRequest and PKPaymentAuthorizationController.

        struct RequestPayment: AppIntent {
        	static var title: LocalizedStringResource = "Request Payment"
        
        	func perform() async throws -> some IntentResult {
        		let paymentRequest = PKPaymentRequest()
        		// Configure your payment request
        		let controller = PKPaymentAuthorizationController(
        			paymentRequest: paymentRequest
        		)
        		guard await controller.present() else {
        			return .result(dialog: "Unable to process payment")
        		}
        		return .result(dialog: "Payment Processed")
        	}
        }
        

        Shortcuts app integration

        • App Intents have long been used to build Shortcuts actions for Siri and the Shortcuts app, as well as Focus Filters and the Action button on Apple Watch Ultra. In iOS 17, they are also integrated with interactive Live Activities, widget configuration and interactivity, and SwiftUI.
        • App Shortcuts now include support for Spotlight Top Hits and Automations.
        • With all this integration, it's important to make sure parameter summaries are well written.
        • If an App Intent is only for use inside an app or widget, set isDiscoverable to false to hide it elsewhere.
        • For App Intents that run more slowly, make them conform to the ProgressReportingIntent protocol and update progress by setting progress.totalUnitCount and progress.completedUnitCount (see the sketch after this list).
        • EntityPropertyQuery is joined by the new EnumerableEntityQuery for integrating Find actions in Shortcuts. To use EnumerableEntityQuery, return all possible values for the entity in the allEntities() method, and Shortcuts and App Intents generate Find actions automatically. Prefer EnumerableEntityQuery when the number of entities is small. When dealing with a large number of entities, use EntityPropertyQuery and run the search on behalf of the user.
        • IntentDescription, which is used to show action information in the Shortcuts UI, now has a property called resultValueName so you can add a more descriptive name for the output of the action.
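
        A minimal sketch of the progress-reporting item above (the intent and its work are illustrative, not from the session):

        import AppIntents

        struct ImportRoutes: AppIntent, ProgressReportingIntent {
        	static var title: LocalizedStringResource = "Import Routes"

        	func perform() async throws -> some IntentResult {
        		let stops = BusStop.allStops
        		// `progress` is provided by ProgressReportingIntent; Shortcuts
        		// displays it while the action runs.
        		progress.totalUnitCount = Int64(stops.count)
        		for _ in stops {
        			// ... import one stop's schedule here ...
        			progress.completedUnitCount += 1
        		}
        		return .result()
        	}
        }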

        See more in the session "Spotlight your app with App Shortcuts" from WWDC23.
