Swift Assembly Makes No Sense

In general, I never understand Swift assembly. Something as simple as

let a = 6 / 2

gives (see https://godbolt.org/z/svqzTM)

        jmp     .LBB0_7
        jmp     .LBB0_8
        lea     rdi, [rip + .L__unnamed_1]
        mov     esi, 11
        mov     eax, 2
        mov     edx, eax
        lea     rcx, [rip + .L__unnamed_2]
        mov     r8d, 31
        mov     r9d, eax
        lea     r10, [rip + .L__unnamed_3]
        mov     qword ptr [rsp], r10
        mov     qword ptr [rsp + 8], 154
        mov     dword ptr [rsp + 16], 2
        mov     qword ptr [rsp + 24], 13089
        mov     dword ptr [rsp + 32], 1
        call    ($ss17_assertionFailure__4file4line5flagss5NeverOs12StaticStringV_A2HSus6UInt32VtF)@PLT

...just for the division. Personally, I am unsure why it doesn't just store the constant 3 at some offset from rbp (past the return address, guard, etc.).

Also, we shouldn't need -O to optimize out

        jmp     .LBB0_7
        jmp     .LBB0_8

With optimization enabled (-O), it makes even less sense:

        mov     qword ptr [rip + (output.a : Swift.Int)], 3

Why is the instruction pointer used in the first place?


Have you taken a look at what those string constants are doing?

        .asciz  "Division results in an overflow"

        .asciz  "/home/buildnode/jenkins/workspace/oss-swift-5.2-package-linux-ubuntu-16_04/build/buildbot_linux/swift-linux-x86_64/stdlib/public/core/8/IntegerTypes.swift"

        .asciz  "Fatal error"

What you're seeing is the error path for the denominator being 0. Why does that show up when the denominator is a constant? Because / is a function in Swift, with the real primitive operations hidden in the standard library, and -Onone doesn't do more than the bare minimum (as you already noticed with the redundant jumps from composing primitive operations).
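To make the error path concrete (my own sketch, not from the original posts): the trap in that assembly is the same one `/` raises at runtime for a zero divisor, and the standard library also exposes a non-trapping variant, `dividedReportingOverflow(by:)`, which returns a flag instead of calling the assertion-failure routine:

```swift
let a = 6
let b = 2

// `/` is a standard-library function that traps on overflow or division by zero.
print(a / b)  // 3

// The non-trapping variant returns (partialValue, overflow) instead of trapping.
print(a.dividedReportingOverflow(by: 0).overflow)        // true: division by zero
print(Int.min.dividedReportingOverflow(by: -1).overflow) // true: result doesn't fit in Int
```

At -Onone, each of those `/` calls carries the full check-and-trap sequence you quoted; at -O, the constant case folds away entirely.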

Now, there have been calls here and there for a proper -Odebug mode that would eliminate all this really simple stuff without significantly affecting compile time (or debuggability), but it hasn't happened yet. There's only so much engineer-time to go around, and since most code isn't performance-sensitive, the overwrought code you get in -Onone isn't necessarily a problem for testing apps that are mostly UI-based and operate at the speed of users.


Just to add onto this a little bit.

The idea behind -Odebug is that most users do not really care about running optimizations vs. not running them. What they care about is the ability to debug at the source level without issue. There are optimizations like devirtualization, partial-apply elimination, and ARC removal that do not affect the ability to debug at the source level, so those could be run.

The way I put it is that -Odebug obeys a modified "as-if" rule: "the debugger as-if rule". This rule says the user should not be able to tell that any optimization happened when debugging at the source level. (Note: this does not apply at the assembly level, so one would see better codegen there.)


Why is the instruction pointer used in the first place?

Because a is a global variable and thus is being accessed PC relative.
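A minimal illustration (the function name here is mine, not from the thread): a file-scope `var` is a global with a fixed address in the binary's data section, which the compiler reaches relative to the instruction pointer (rip + offset), while a local lives in the function's stack frame or a register:

```swift
var a = 6 / 2  // global: fixed storage, accessed rip-relative in the generated code

func addOne() -> Int {
    let local = a + 1  // local: lives in addOne's frame (or just a register)
    return local
}

print(addOne())  // 4
```

This is why the optimized store you quoted writes to `[rip + (output.a : Swift.Int)]` rather than to a frame slot.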

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple


I see...and local variables would be offset from rbp

local variables would be offset from rbp

Right. Intel code typically uses rbp (or ebp for 32-bit) as a traditional frame pointer [1], with locals at a negative offset and parameters at a positive offset (remember that the stack grows down).

Share and Enjoy

Quinn “The Eskimo!” @ DTS @ Apple

[1] This is not guaranteed. The compiler is free to optimise the frame away in a variety of situations.
