Writing chapters to an audio asset in AVFoundation

I'm creating a library for adding and editing metadata and chapter data in .m4a/.m4b audio files (such as audiobooks and podcasts).

I was doing fine handling metadata and writing it to disk using AVAssetExportSession, but there appears to be no way to pass an [AVTimedMetadataGroup] array to an export session.

        let exportSession = AVAssetExportSession(
            asset: self.asset,
            presetName: AVAssetExportPresetPassthrough)
        exportSession?.outputURL = url
        exportSession?.outputFileType = fileType
        exportSession?.metadata = tag.metadata
        exportSession?.exportAsynchronously(completionHandler: { })

I've found several Objective-C posts about adding chapters to video assets, but I'm not familiar enough with Objective-C to implement them successfully. They also rely on AVAssetWriter, which I'm also not familiar with — in particular, how to create AVAssetWriterInput items, or how to access the CMFormatDescription needed as a parameter when creating one.
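For the CMFormatDescription part specifically, here is my current understanding, expressed as an untested sketch (the function names are mine, not from any API): a passthrough audio input can borrow the format description from the source track, and a timed-metadata input can get one synthesized by AVTimedMetadataGroup itself.

```swift
import AVFoundation
import CoreMedia

// Passthrough audio input: outputSettings nil means "don't re-encode",
// so the source track's own format description serves as the hint.
func makeAudioInput(for asset: AVAsset) -> AVAssetWriterInput? {
    guard let track = asset.tracks(withMediaType: .audio).first,
          let desc = track.formatDescriptions.first as? CMFormatDescription
    else { return nil }
    return AVAssetWriterInput(mediaType: .audio,
                              outputSettings: nil,
                              sourceFormatHint: desc)
}

// Timed-metadata input: AVTimedMetadataGroup can produce a metadata
// format description from its own items. Note the docs say
// AVAssetWriterInputMetadataAdaptor requires a .metadata input, not .text.
func makeMetadataInput(for group: AVTimedMetadataGroup) -> AVAssetWriterInput? {
    guard let desc = group.copyFormatDescription() else { return nil }
    return AVAssetWriterInput(mediaType: .metadata,
                              outputSettings: nil,
                              sourceFormatHint: desc)
}
```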

This is the closest I've come, and it produces a zero-byte, corrupt file.

        var error = false
        var inProgress = true
        let audioTrack = asset.tracks(withMediaType: .audio).first
        let audioDesc = audioTrack?.formatDescriptions.first as! CMFormatDescription
        let audioInput = AVAssetWriterInput(
            mediaType: .audio,
            outputSettings: nil,
            sourceFormatHint: audioDesc)
        let writer = try AVAssetWriter(outputURL: url, fileType: .m4a)
        for group in tag.tableOfContents.timedMetadataGroups ?? [] {
            let desc = group.copyFormatDescription()
            // create text input
            let textInput = AVAssetWriterInput(
                mediaType: .text,
                outputSettings: nil,
                sourceFormatHint: desc)
            textInput.marksOutputTrackAsEnabled = false
            textInput.expectsMediaDataInRealTime = false
            let metadataAdaptor = AVAssetWriterInputMetadataAdaptor(
                assetWriterInput: textInput)
            textInput.requestMediaDataWhenReady(
                on: DispatchQueue(label: "metadataqueue", qos: .userInitiated),
                using: { }) // no idea if I'm using this correctly
            if audioInput.canAddTrackAssociation(
                withTrackOf: textInput,
                type: AVAssetTrack.AssociationType.chapterList.rawValue) {
                audioInput.addTrackAssociation(
                    withTrackOf: textInput,
                    type: AVAssetTrack.AssociationType.chapterList.rawValue)
            }
            if writer.canAdd(textInput) {
                writer.add(textInput)
            }
        }
        if writer.canAdd(audioInput) {
            writer.add(audioInput)
        }
        writer.finishWriting(completionHandler: {
            if writer.status == .failed {
                error = true
            }
            inProgress = false
        })
        while inProgress {
            RunLoop.current.run(until: Date(timeIntervalSinceNow: 0.1))
        }
        if error == true {
            throw Mp4File.Error.WritingError
        }
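Piecing things together from the Objective-C posts, I think the overall shape should be something like the untested sketch below. What seems to be missing from my attempt is calling startWriting() and startSession(atSourceTime:), actually pumping the audio samples through via an AVAssetReader, and using a .metadata input (the docs say the metadata adaptor rejects .text inputs). The function name and error handling here are placeholders of mine:

```swift
import AVFoundation
import CoreMedia

func export(asset: AVAsset,
            chapterGroups: [AVTimedMetadataGroup],
            to url: URL) throws {
    guard let audioTrack = asset.tracks(withMediaType: .audio).first,
          let audioDesc = audioTrack.formatDescriptions.first as? CMFormatDescription,
          let metaDesc = chapterGroups.first?.copyFormatDescription()
    else { throw NSError(domain: "ChapterWriter", code: 1) }

    // Reader side: outputSettings nil hands back the original compressed samples.
    let reader = try AVAssetReader(asset: asset)
    let readerOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
    reader.add(readerOutput)

    // Writer side: passthrough audio plus one timed-metadata track.
    let writer = try AVAssetWriter(outputURL: url, fileType: .m4a)
    let audioInput = AVAssetWriterInput(mediaType: .audio,
                                        outputSettings: nil,
                                        sourceFormatHint: audioDesc)
    let metadataInput = AVAssetWriterInput(mediaType: .metadata,
                                           outputSettings: nil,
                                           sourceFormatHint: metaDesc)
    metadataInput.marksOutputTrackAsEnabled = false
    let adaptor = AVAssetWriterInputMetadataAdaptor(assetWriterInput: metadataInput)

    writer.add(audioInput)
    writer.add(metadataInput)
    // The association is what marks the metadata track as a chapter list;
    // it has to be set up before startWriting().
    audioInput.addTrackAssociation(withTrackOf: metadataInput,
                                   type: AVAssetTrack.AssociationType.chapterList.rawValue)

    guard writer.startWriting(), reader.startReading() else {
        throw writer.error ?? reader.error ?? NSError(domain: "ChapterWriter", code: 2)
    }
    writer.startSession(atSourceTime: .zero)

    // Each AVTimedMetadataGroup carries its own timeRange, so the chapter
    // groups can all be appended up front.
    for group in chapterGroups { adaptor.append(group) }
    metadataInput.markAsFinished()

    // Pump the audio samples through on a background queue.
    let queue = DispatchQueue(label: "audio.passthrough")
    let drained = DispatchSemaphore(value: 0)
    audioInput.requestMediaDataWhenReady(on: queue) {
        while audioInput.isReadyForMoreMediaData {
            guard let buffer = readerOutput.copyNextSampleBuffer() else {
                audioInput.markAsFinished()
                drained.signal()
                return
            }
            audioInput.append(buffer)
        }
    }
    drained.wait()

    let finished = DispatchSemaphore(value: 0)
    writer.finishWriting { finished.signal() }
    finished.wait()
    guard writer.status == .completed else {
        throw writer.error ?? NSError(domain: "ChapterWriter", code: 3)
    }
}
```

Is this roughly the right structure, or am I still missing a step?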

Does anyone have some samples of working code that attempts to do this sort of thing?