Giant footguns, typealias extensions, and the C OpaquePointer

Hello again :slightly_smiling_face:

I unfortunately ran into a pretty annoying issue; it can probably be solved by rebinding memory but I thought I'd mention it anyway, because it's something unintuitive I noticed.

The context looks something like this:
I'm creating some abstractions for SDL2 (again) because the default API looks quite unidiomatic (and it's just a cool way of testing Swift features). I'm trying to do this with as little overhead as possible, extending the SDL types directly rather than wrapping them in some way, and especially avoiding classes.

Consider this example usage of an SDL window api:

let window = SDL_CreateWindow(
    "Window",
    Int32(SDL_WINDOWPOS_CENTERED_MASK), Int32(SDL_WINDOWPOS_CENTERED_MASK),
    800, 600,
    SDL_WINDOW_SHOWN.rawValue | SDL_WINDOW_RESIZABLE.rawValue | SDL_WINDOW_ALLOW_HIGHDPI.rawValue
)
defer { SDL_DestroyWindow(window) }

The somewhat more idiomatic extension:

let window = try! SDL.Window(title: "Window", [.resizable, .hidpi])
defer { window.destroy() }

It takes advantage of Swift features, providing defaults and labels for all the settings except the title, centering the window and allowing hidpi by default, and using an OptionSet as a safer interface for the bitmask. It also makes error handling easier, which in the C API usually requires a separate call to SDL_GetError().

Unfortunately, structs have no deinit, so memory still has to be managed manually; the window also has reference semantics. I've heard this should be addressed in the future by move-only types.

Neither of these is why I'm writing this post, however.
Something extremely common in C APIs is the use of opaque pointers. This is a problem here, because SDL.Window is just a typealias for OpaquePointer; I went with a typealias so I wouldn't have to wrap everything in structs, possibly allocating even more unnecessary metadata.

The first issue is that I didn't even notice this behavior at first; the code worked.
However, once I got to extending SDL.Renderer, which is also an OpaquePointer, I became really confused when I realised that extension SDL.Window was actually extending the underlying type OpaquePointer, creating a conflict between the two "types". I'm not sure what the point of allowing typealias extensions is in the first place.

It seems to do nothing more than add confusion about what is actually being extended, unless you check the definition of the extended type.
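A minimal repro of what I mean (the names here are just for illustration):

```swift
// Both typealiases name the same underlying type, so extending one
// silently extends the other (and every other OpaquePointer).
enum SDL {
    typealias Window = OpaquePointer
    typealias Renderer = OpaquePointer
}

extension SDL.Window {
    // This member is really added to OpaquePointer itself...
    var isWindow: Bool { true }
}
// ...so it is also available on SDL.Renderer and any other OpaquePointer.
```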

I feel like this should either be deprecated and made an error in Swift 6, or implemented in a more intuitive way, because currently it's a pretty big footgun :slight_smile:

extending typealiases is useful in at least 2 scenarios:

  1. you have an underscored protocol (_SmurfFooable) that you want to reference by its namespaced typealias (Smurf.Fooable).

  2. you want to extend a type that varies across platforms (e.g., extension CChar).

i used to do 1) a lot, but much of the IDE/documentation tooling in the swift world does not support these kinds of APIs, so 1) is not that useful in practice.

i think 2) is still a legitimate use case, and i would not like for this to become deprecated or require additional ceremony.
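a tiny sketch of 2), with an assumed isASCIIDigit helper:

```swift
// CChar's underlying type varies by platform (Int8 on x86 and apple
// platforms, UInt8 on some others), so extending the typealias is the
// only way to cover all of them at once.
extension CChar {
    var isASCIIDigit: Bool { self >= 0x30 && self <= 0x39 }
}
```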

to make the footgun a little safer, can you configure your IDE to highlight typealiases differently from direct type references? modern sourcekit-based tooling distinguishes between:

  • typealiases,
  • references to allocated/reference-counted types (actor, class), and
  • references to unallocated types (struct, enum)

which i find very helpful for avoiding this kind of mistake.


Unfortunately, when I'm on macOS I like to use Xcode, which does not seem to make such a distinction. I've already made classes and structs look different, but that's as far as the customization goes, and VS Code seems to get very confused by my code and highlights everything incorrectly.

Makes sense, I didn't think of these use cases. Still, it would be nice to have some way of distinguishing whether an extension targets the typealias or the underlying type, and with it the ability to restrict functionality to a subset of that type.

I had to work around it with a very boilerplate-heavy implementation, using a marker protocol, empty structs, pointer typealiases, and code like this:

extension SDL.Window where Pointee == SDL._Window { ... }

and this:

extension UnsafePointer {
    func erase() -> OpaquePointer { unsafeBitCast(self, to: OpaquePointer.self) }
}

extension OpaquePointer {
    func recover<T: OpaqueRecoverable>() -> UnsafePointer<T> {
        unsafeBitCast(self, to: UnsafePointer<T>.self)
    }
}
This is not ideal anyway, because it exposes all the UnsafePointer methods and properties pointing at that empty struct.
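For completeness, a rough sketch of the supporting declarations this implies (OpaqueRecoverable and SDL._Window are from my code above; the rest is simplified):

```swift
// Marker protocol for the empty stand-in structs backing opaque pointers.
protocol OpaqueRecoverable {}

enum SDL {
    struct _Window: OpaqueRecoverable {}       // empty stand-in for the C struct
    typealias Window = UnsafePointer<_Window>  // now distinct from other pointers
}

extension UnsafePointer where Pointee == SDL._Window {
    // Window-only functionality can be constrained to this Pointee.
    var isValid: Bool { true }
}
```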

the last time i checked, semantic highlighting was disabled by default (i do not know why!) and you have to manually enable it in the vscode-swift extension. there are steps for how to do this in this issue:


structs with just a single stored property are guaranteed to have the same memory layout as the type of the property:

You can therefore wrap an OpaquePointer in a struct without introducing any additional memory footprint.
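For instance, a minimal sketch (the Window wrapper here is hypothetical):

```swift
// A single-field struct wrapping an OpaquePointer: guaranteed the same
// size, stride, and alignment as the bare pointer, so no extra footprint.
struct Window {
    let handle: OpaquePointer
}
```

MemoryLayout<Window> reports the same size, stride, and alignment as MemoryLayout<OpaquePointer>.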


Oh, thanks, that's a really cool feature! Makes this a lot easier :slight_smile:

I do think the conflation of opaque pointers was a mistake—at least partly my mistake! But it would be extremely source-breaking to change now. :-(


I've been working on some similar stuff with SDL, and I've come to the conclusion that these APIs become much more pleasant to work with if they are simply wrapped in a final class with a deinit that calls SDL_DestroyWindow. I suspect that the performance costs will be rather negligible (though I welcome being proven wrong :sweat_smile:).
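A sketch of that shape, with a plain heap allocation standing in for the SDL calls so it runs without SDL:

```swift
// The wrapper owns the handle; deinit releases it automatically when the
// last reference goes away (SDL_DestroyWindow in the real version).
final class Window {
    let handle: UnsafeMutablePointer<Int32>

    init(value: Int32) {
        handle = .allocate(capacity: 1)
        handle.initialize(to: value)
    }

    deinit {
        handle.deinitialize(count: 1)
        handle.deallocate()
    }
}
```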


Yeah, I doubt the performance cost would be very noticeable; I'd only really expect issues when passing the renderer around during a frame.
I haven't tested this though; maybe the compiler can perform some optimisations, but Swift seems to do reference counting even when just passing a class to print(), so I doubt it.

I avoid classes mostly because I just don't like the feeling of knowing my code isn't as fast as it possibly can be, not for any practical reason :slight_smile:

And managing the class manually defeats the point of deinit here in the first place :smile:

This pattern is exactly the right pattern for working with C APIs in Swift. Wrap them in safe APIs as quickly as possible, and have those safe APIs manage their lifetimes automatically as we do in Swift, which today requires using classes. You can see my version of this in the BoringSSL layers of swift-crypto, which define abstraction types that wrap the underlying BoringSSL types. Some examples:


Are there any good existing benchmarks of class vs struct performance? I'm curious what exactly the cost is, but I've never found anything.

classes aren't slow; allocating and "copying" (meaning: incrementing and decrementing the reference count of) classes is slow.

as long as you make sure you never pass the object __owned to a non-inlined initializer or setter, i would expect them to have similar performance to a struct.

A simple benchmark is hard to produce because the costs aren't all incurred in one place. The answer for performance trade-offs is always "it depends": in particular, it depends on what is stored in your struct/class and how it's used.

Within a resilience domain (i.e. within a single module), a struct is “free”: it is exactly the same as passing around its fields. But this means all its fields: a struct with 30 refcounted fields requires 30 refcount operations to copy!

Remember, though, if you want a struct that performs like a class you can create a struct that stores a simple backing class as its only stored property, and implement CoW yourself. This struct will now perform identically to a class (within the same resilience domain, or if you apply enough attributes across resilience domains), but behave like a struct.

Choose between struct and class in the API of your types for semantics, not for performance.
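A minimal sketch of that CoW pattern, assuming a single Int payload:

```swift
// A struct with value semantics backed by a private class; the storage is
// copied only when it is shared (copy-on-write).
struct Counter {
    private final class Storage {
        var value: Int
        init(_ value: Int) { self.value = value }
    }
    private var storage: Storage

    init(value: Int = 0) { storage = Storage(value) }

    var value: Int {
        get { storage.value }
        set {
            if !isKnownUniquelyReferenced(&storage) {
                storage = Storage(storage.value)  // copy before mutating
            }
            storage.value = newValue
        }
    }
}
```

Copies of Counter share storage until one of them is mutated, so it behaves like a struct while being backed by a single class allocation.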


for OP’s benefit, the attributes in question are:

  • @inlinable
  • @frozen

use @frozen if all of your struct fields are public and mutable, and you don’t mind if other modules can bypass your designated inits.

@inlinable is the microplastic of high-performance swift. but you really only need it for generics; if you are just trying to reduce ARC traffic, it is way better to use the correct passing conventions:

  • __shared
  • __owned
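a sketch of how the conventions are spelled (in newer compilers these correspond to borrowing/consuming):

```swift
final class Resource {
    var value = 0
}

// __shared passes a borrowed reference: no retain/release at the call site.
func read(_ resource: __shared Resource) -> Int {
    resource.value
}

// __owned transfers ownership of the caller's reference into the callee.
func consume(_ resource: __owned Resource) {
    _ = resource.value
}
```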

If they aren't noticeably slower, then I find it even more overwhelming: there are so many conflicting ways of designing my code, and the choice is less obvious.

The general impression I get from watching different talks and interviews is that Swift classes are more of a compatibility thing for the [square brackets] language.

I think the Val research language (which I happened to find out about today) perfectly captures how I see Swift. I do understand why classes are there; Apple's platforms were built with them.
But why should someone use classes for what is essentially just a reference-counted smart pointer with the memory layout of an Obj-C class and usability issues like not getting a memberwise init for free?
Wouldn't it make more sense to just implement a smart pointer as a generic struct?
Because that's pretty much the only use case I see recommended, not using them to design UIViewControllers, hard-to-read delegate patterns, etc. There are protocols now, and they're much simpler.

this is exactly what classes are for, this is how you allocate (managed) memory in swift without falling back to an UnsafeMutableBufferPointer wrapped in a final class.

ManagedBuffer may be of interest to you; this is how you implement “allocated things” without double indirection.

this is what a final class is.
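a sketch of the ManagedBuffer shape (IntBuffer and its helpers are assumed names):

```swift
// The header stores the count; the elements live in the same allocation as
// the class instance, avoiding a second pointer indirection.
final class IntBuffer: ManagedBuffer<Int, Int> {
    static func make(count: Int, repeating value: Int) -> IntBuffer {
        let raw = create(minimumCapacity: count) { _ in count }
        let buffer = unsafeDowncast(raw, to: IntBuffer.self)
        buffer.withUnsafeMutablePointerToElements {
            $0.initialize(repeating: value, count: count)
        }
        return buffer
    }

    subscript(index: Int) -> Int {
        withUnsafeMutablePointerToElements { $0[index] }
    }

    deinit {
        // Int is trivial, but in general the elements must be deinitialized.
        _ = withUnsafeMutablePointerToElements { $0.deinitialize(count: header) }
    }
}
```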
