Yes, I showed the older version of the expression to make it clear what all the pieces were under the covers. The use of lambdas and type inference simplifies what you have to type, but doesn’t change the fact that the interface is still instantiated as an anonymous class.
I am using Java as an example because it exists, and Apple was, for some time, shipping Java and promoting its use on their platform. To me, this implies that Apple engineers should know this aspect of Java, so that I could use it to describe what I wanted Protocols to do.
We’ve spent 30 years with Java now, and many different languages have sprung up after and around it, changing what you type but not adding many genuinely new features. Languages in the ML family do some things to simplify lifecycles and ownership in particular. My specific frustration here is that something like interfaces should exist. The fact that enums have become a stand-in for some of the details missing from protocols further demonstrates, to me, that there’s not a lot of actual language design going on here. Rather, it feels like plastic pieces that are bent and melted into place wherever they go most easily.
Protocols do what you want them to. The problem here is that they do more than you want them to, and in particular, that ObservableObject uses that "excess" of features, and it then (with good reason) doesn't work like you want it to.
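For context, a minimal sketch of that "excess" (the Counter name here is hypothetical): ObservableObject declares an associated type, ObjectWillChangePublisher, which is why the protocol cannot be used as a plain type the way a simple Java interface reference can.

import Combine

// Conforming is easy, but ObservableObject carries an associated type
// (ObjectWillChangePublisher), so the protocol can't be used as a bare
// existential the way a plain Java-style interface reference would be.
final class Counter: ObservableObject {
    @Published var count = 0
}

// Requires the explicit `any` spelling (Swift 5.7+); before that, this
// line would not compile at all because of the associated type.
let model: any ObservableObject = Counter()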
There's nothing of substance in this thread other than a personal vendetta against associated types & Self constraints, a lot of inflammatory language, and, I feel, a resistance to actually learning how to use Swift.
I suggest you use ObjC to program for Apple platforms; it's still supported and doesn't have the feature you hate.
If it provided abstract classes or classes with pure virtual functions, rather than dogmatically dictating the use of protocols, Swift would be a more pleasant language to speak, especially for those coming from C++.
I hate doing this whenever I must use classes:
class Bar {
    func f() {
        fatalError("\(#function) not implemented")
    }
    func g() {
        fatalError("\(#function) not implemented")
    }
    ...
}
Compare:
class Bar {
    virtual f() -> void = 0
    virtual g() -> void = 0
    ...
}
Or
abstract class Bar {
    virtual f() -> void
    virtual g() -> void
    ...
}
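For comparison, a minimal sketch of the spelling Swift does support today: a protocol covers the pure-virtual case, and an extension can supply a shared default body, which is the closest Swift gets to an abstract base class.

protocol Bar {
    func f()
    func g()
}

// A default implementation can be shared via an extension; conformers
// that don't provide their own g() inherit this body.
extension Bar {
    func g() {
        print("default g")
    }
}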
Yeah, you’re confirming what Keith wrote. There is no domain in which Swift is mandatory. Objective-C is still supported on every Apple platform and every other platform that has Swift support has other languages and ecosystems that are better-supported.
Speaking as one of Swift's language designers and a former professional Java programmer, I am quite aware of Java interfaces. They are pretty simple tools — almost liberatingly so, because there are a lot of simple type relationships that you simply cannot express in any way in Java, and so you have no choice but to fall back on type erasure instead. It is one of the things that pushes Java systems inevitably towards teetering towers of abstractions where every implementation is hidden behind three layers of interfaces and adapters.
The language tools that Swift provides are much more geared towards preserving type information. If you give Swift's idioms a little more of a chance, rather than just trying to write Java programs in Swift, you might find that helpful in some ways.
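A small illustration of that point, with hypothetical names: a generic constraint keeps the concrete type visible to the compiler, while an existential erases it, which is roughly the only option a Java interface reference gives you.

protocol Renderer {
    func render() -> String
}

struct SVGRenderer: Renderer {
    func render() -> String { "<svg/>" }
}

// Generic: R is known at each call site, so the compiler can specialize
// and no type information is lost.
func draw<R: Renderer>(_ renderer: R) -> String {
    renderer.render()
}

// Existential: the concrete type is erased behind `any Renderer`,
// which is closer to how a Java interface reference behaves.
func drawErased(_ renderer: any Renderer) -> String {
    renderer.render()
}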
In the example you've described here, you really don't need a protocol at all, though. You have an abstract interface for audio handling that you'd like to give different implementations on different platforms. You're never going to have multiple implementations active in the same process; #if around the core implementation is a perfectly good way to implement this rather than building up all these hierarchies of unnecessary protocols.
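A minimal sketch of that shape, with hypothetical type and method names: one concrete class, with only the platform-divergent setup behind #if.

import AVFoundation

// One concrete type shared by both targets; only the divergent
// configuration is conditionally compiled.
final class MicStreamer {
    let engine = AVAudioEngine()

    func configure() {
        #if os(iOS)
        // iOS-specific audio session setup would go here.
        #else
        // macOS-specific device setup would go here.
        #endif
        // Common tap/processing setup continues here on both platforms.
    }
}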
I am, in fact, currently using conditional compilation, and it complicates readability. Ultimately, I will need to copy all the code to create separate files where I can focus on what I have to do for MacOS vs iOS. AVAudioEngine and AVAudioUnits are all in common, and my compressor/EQ/FFT processing is all the same. What isn’t common is what I need to make real-time mic samples stream. The AudioEngine in iOS seems to solve my problems with jitter and stuttering audio sampling. It seems it’s going to be best to just use “Designed for iPad” on MacOS instead and give up.
Java’s type erasure was a problem before generics. I’ve largely used generics everywhere now so that I can have types in place and not risk runtime issues. But Java also teaches the benefits of having RuntimeException, so that “features” break rather than the whole application when something happens that you don’t have an easy way to prohibit. When you use things like URLClassLoader and dynamic binding a lot, you learn pretty quickly that you can manage better with exceptions as Java has done them than in many other places. Swift throws exceptions in code where they can’t be caught, and that makes some parts of the platform pretty fragile.
Swift code does not throw exceptions because there are no exceptions in Swift. There are errors, which must be caught eventually. There is no such thing as an uncaught error in Swift.
What you are probably seeing are C++ exceptions being thrown by the audio stack. Core Audio and AVFoundation use C++ exceptions to report failures. It is not safe to throw an exception through a Swift frame, so all you can do at that point is crash.
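To make the distinction concrete, a minimal sketch with a hypothetical error type: Swift errors are values that must be handled or explicitly propagated at every call site, rather than exceptions that unwind through unsuspecting frames.

enum AudioSetupError: Error {
    case deviceUnavailable
}

func startCapture() throws {
    throw AudioSetupError.deviceUnavailable
}

do {
    try startCapture()
} catch {
    // Every `try` must sit in a do/catch, be converted with `try?`/`try!`,
    // or appear inside another `throws` function; an error cannot silently
    // escape past a frame the way a C++ exception can.
    print("capture failed: \(error)")
}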
I did a bunch of low-level real-time audio projects on mac and iOS, and from my experience the mac is more capable with regard to keeping up with real time and keeping the latency as low as 3-10 ms. DM me if you want to discuss any particular pitfalls or need advice.
The relative viability of iOS vs MacOS with these AVAudioEngine APIs pretty much points out that MacOS was not designed for streaming at this layer. The mobile environment requires a lot more schedule-keeping because streaming is in everything. Documentation everywhere seems to suggest that only the lowest-level MacOS APIs are going to create a solution there, and thus my desire to create a protocol for the various domain-level functions I’d like to have as my public API.
But again, I just have no reason to keep fighting with this problem. I can get rid of two problems by just using “designed for iPad.” My users will probably not actually care about the slight differences they will experience.
It seems like your complaint has more to do with the different media-processing APIs on each platform than with the language you are using to write your code. This is not the proper forum for such a discussion.
In terms of the language itself, I think the easiest way to manage different implementations is to have them in different files and then include the appropriate file in each target. You can have the same interface in each implementation without the use of protocols, or you could enforce uniformity via a simple protocol if you’d like. In this instance, you’d be using protocols in much the same way you would use a Java interface.
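A sketch of that arrangement, with hypothetical names: one small protocol compiled into both targets, and one conforming type per platform-specific file.

// Shared.swift — compiled into both targets:
protocol AudioStreaming {
    func startMicStream() throws
    func stop()
}

// AudioStreamer+macOS.swift — included only in the macOS target:
final class AudioStreamer: AudioStreaming {
    func startMicStream() throws {
        // macOS-specific capture setup goes here.
    }
    func stop() {}
}

// An AudioStreamer+iOS.swift file, included only in the iOS target,
// would declare the same class conforming to the same protocol.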
> You have an abstract interface for audio handling that you'd like to give different implementations on different platforms. You're never going to have multiple implementations active in the same process; #if around the core implementation is a perfectly good way to implement this rather than building up all these hierarchies of unnecessary protocols.
Speaking of the JVM, Kotlin has the keywords expect and actual for this exact case. Instead of an interface, a library declares what it wants a concrete class (or just a function) to do, and each per-platform implementation (i.e. a different library) does something different. This leaves no additional information at runtime, and it discourages multiple implementations on a single platform (which is what interfaces and Swift's protocols are designed to allow).
I suppose Swift could get something similar, but conditional compilation works for the time being.
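The closest Swift spelling to expect/actual today is probably a conditional typealias (all names here are hypothetical): one name for the rest of the code to use, one implementation per platform, and no runtime dispatch.

struct MacStreamer {
    func start() { /* macOS capture path */ }
}

struct MobileStreamer {
    func start() { /* iOS capture path */ }
}

// The rest of the code only ever names PlatformStreamer, much like
// Kotlin callers only ever name the `expect` declaration.
#if os(macOS)
typealias PlatformStreamer = MacStreamer
#else
typealias PlatformStreamer = MobileStreamer
#endif

let streamer = PlatformStreamer()
streamer.start()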