Scanning through the async/await and concurrency manifestos, it looks like the proposed design is basically thin syntactic sugar that gets rid of the pyramid of doom.
To recap, given this synchronous code to start with:
func processImageData1() -> Image {
    let dataResource = loadWebResource("dataprofile.txt")
    let imageResource = loadWebResource("imagedata.dat")
    let imageTmp = decodeImage(dataResource, imageResource)
    let imageResult = dewarpAndCleanupImage(imageTmp)
    return imageResult
}
One adds some minor syntactic markers here and there:
func processImageData1() async -> Image {
    let dataResource = await loadWebResource("dataprofile.txt")
    let imageResource = await loadWebResource("imagedata.dat")
    let imageTmp = await decodeImage(dataResource, imageResource)
    let imageResult = await dewarpAndCleanupImage(imageTmp)
    return imageResult
}
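(The awaited functions would themselves carry the async marker; presumably they are declared along these lines, with Resource standing in for whatever type the loader returns:)

func loadWebResource(_ path: String) async -> Resource
func decodeImage(_ dataResource: Resource, _ imageResource: Resource) async -> Image
func dewarpAndCleanupImage(_ image: Image) async -> Image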
This code is then internally desugared into the following callback-based async code:
func processImageData1(completionBlock: @escaping (_ result: Image) -> Void) {
    loadWebResource("dataprofile.txt") { dataResource in
        loadWebResource("imagedata.dat") { imageResource in
            decodeImage(dataResource, imageResource) { imageTmp in
                dewarpAndCleanupImage(imageTmp) { imageResult in
                    completionBlock(imageResult)
                }
            }
        }
    }
}
The end result is that the pyramid is still there internally; we just don't see it. So far, so good.
I wonder: what happens if the source synchronous fragment is more complicated, e.g. contains loops?
Consider this fragment:
func processImageData2(iterationCount: Int) -> Image {
    var image: Image = #imageLiteral(resourceName: "image")
    for _ in 0 ..< iterationCount {
        image = sharpenImage(image)
        image = blurImage(image)
    }
    return image
}
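Presumably (assuming async counterparts of sharpenImage and blurImage exist, which is my own assumption), the annotated form would simply be:

func processImageData2(iterationCount: Int) async -> Image {
    var image: Image = #imageLiteral(resourceName: "image")
    for _ in 0 ..< iterationCount {
        image = await sharpenImage(image)
        image = await blurImage(image)
    }
    return image
}

But what would the desugared, callback-based form of that look like?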
That desugaring is a simple way to think about what happens with async/await, but in reality the transformation is more sophisticated and, yes, it works with loops.
To build on what @John_McCall said, what it would probably do (at least, what C# does) is turn processImageData1 into a state machine, so that it can keep track of local variables and of where it is in the function. For an explanation of how C# does this, search for "async await state machine".
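As a rough illustration (not the actual compiler output, and assuming hypothetical completion-handler versions of sharpenImage and blurImage), the loop example can be hand-desugared into a small state machine in which captured locals play the role of saved state and each callback resumes execution at the right point:

// Assumed for the sketch (not from the proposals):
// func sharpenImage(_ image: Image, completion: @escaping (Image) -> Void)
// func blurImage(_ image: Image, completion: @escaping (Image) -> Void)

func processImageData2(iterationCount: Int,
                       completionBlock: @escaping (Image) -> Void) {
    // Locals that must survive across suspension points become captured
    // state instead of stack variables.
    var image: Image = #imageLiteral(resourceName: "image")
    var remaining = iterationCount

    // Each call to step() is one "resumption": it either re-enters the loop
    // body or finishes the function.
    func step() {
        guard remaining > 0 else {
            completionBlock(image)
            return
        }
        remaining -= 1
        sharpenImage(image) { sharpened in      // first suspension point
            blurImage(sharpened) { blurred in   // second suspension point
                image = blurred
                step()                          // back to the top of the loop
            }
        }
    }
    step()
}

A production compiler would typically lower this to a switch over an explicit state enum rather than recursive closures, but the bookkeeping it has to do is the same.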
Correct me if I am wrong: from today's standpoint, I/O-bound tasks are best served by coroutines / async-await / cooperative multitasking, and CPU-bound tasks are best served by real threads (e.g. with the number of threads equal to the number of cores). Assuming this observation is correct, one would obviously want the decode+dewarp call pairs above (presumably the CPU-bound ones) to execute in parallel with the corresponding pairs from other invocations of processImageData1. For example, if my machine has 16 cores and I run 16 processImageData1 calls "simultaneously", I do not mind if the loadWebResource portions run sequentially on a single thread, but when it comes to decode+dewarp I'd prefer all 16 cores working on it. Equally, if the number of tasks is much larger than 16, I still want the CPU-bound portion to run on 16 cores; that would make the whole process somewhat faster by making its CPU-bound part 16x faster. Is there anything in the coroutine and/or async/await proposals facilitating this? If not, what would be the best manual approach, so that I/O-bound work is done in async/await style while CPU-bound work (like in the example above) runs in a truly parallel manner?
In real-life code the situation would be similar, I believe: e.g. a web server needs to read a request from a socket (I/O-bound), convert the raw bytes into structures (CPU-bound), do something with those structures, e.g. a search or a merge (CPU-bound as well), convert the result from structs back to raw bytes (CPU-bound), and write the response back to the client (I/O-bound).
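For illustration, here is a rough manual sketch of the kind of split I have in mind, using GCD; the completion-handler signatures and the queue are my own assumptions, not something from the proposals. Only the CPU-heavy decode+dewarp stage is pushed onto a concurrent queue, so that many simultaneous invocations can spread across the cores:

import Dispatch

// Assumed completion-handler / synchronous APIs, analogous to the earlier
// examples (these signatures are made up for the sketch):
// func loadWebResource(_ path: String, completion: @escaping (Resource) -> Void)
// func decodeImage(_ data: Resource, _ image: Resource) -> Image   // synchronous, CPU-bound
// func dewarpAndCleanupImage(_ image: Image) -> Image              // synchronous, CPU-bound

// A concurrent queue lets the CPU-bound stages of many invocations run in parallel.
let cpuQueue = DispatchQueue(label: "cpu", attributes: .concurrent)

func processImageData1(completionBlock: @escaping (Image) -> Void) {
    loadWebResource("dataprofile.txt") { dataResource in
        loadWebResource("imagedata.dat") { imageResource in
            // Hop onto the concurrent queue for the CPU-heavy stages, so 16
            // simultaneous invocations can keep 16 cores busy here.
            cpuQueue.async {
                let imageTmp = decodeImage(dataResource, imageResource)
                let imageResult = dewarpAndCleanupImage(imageTmp)
                completionBlock(imageResult)
            }
        }
    }
}

In practice one would probably also bound the width of the CPU-bound stage (e.g. with an OperationQueue whose maxConcurrentOperationCount equals the core count, or with a DispatchSemaphore) so that far more than 16 simultaneous tasks don't oversubscribe the CPU.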
async/await is basically just a compiler transform for coroutines, which can simplify callback-based async code. It's not about multithreading per se, and it's not a generic solution to parallelism; that's typically a library feature, not a language feature. However, the library feature can interoperate with async/await. For example, see .NET's Task.WhenAny and Task.WhenAll, which are themselves awaitable.
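As a sketch of what such a library-level primitive could look like in today's Swift (the name, signature, and queue choice here are hypothetical, not from any proposal), here is a completion-handler analogue of Task.WhenAll built on a DispatchGroup; once async/await exists, a library could expose the same shape through an awaitable interface:

import Dispatch

// Hypothetical helper: run two independent pieces of work on a concurrent
// queue and call back once both have finished, in the spirit of .NET's
// Task.WhenAll. Name and signature are made up for illustration.
func whenBoth<A, B>(_ makeA: @escaping () -> A,
                    _ makeB: @escaping () -> B,
                    queue: DispatchQueue = .global(),
                    completion: @escaping (A, B) -> Void) {
    let group = DispatchGroup()
    var a: A?
    var b: B?
    queue.async(group: group) { a = makeA() }
    queue.async(group: group) { b = makeB() }
    // Runs only after both work items have completed, so the force-unwraps are safe.
    group.notify(queue: queue) { completion(a!, b!) }
}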
I think an important measure of how well designed the async/await feature is will be how well these higher-level constructs can be built on top of it. C#'s design allows for a lot of flexibility, customization, and composability. I hope that Swift's design works as well as that.