JIT compilation for server-side Swift

Hi,

Last year a small group of developers from the IBM Runtimes compiler team
undertook a project to explore JIT compilation for Swift, primarily aimed
at server-side Swift. The compilation model we settled on was a hybrid
approach that combined static compilation via swiftc with dynamic
compilation via a prototype JIT compiler based on Eclipse OMR.[1]

This prototype JIT compiler (targeting Linux specifically) worked by
having itself loaded by a Swift process at runtime, patching Swift
functions so that they could be intercepted, recompiling them from their
SIL representations, and redirecting callers to the JIT-compiled versions.
In order to accomplish this we needed to make some changes to the static
compiler and the target program's build process.

* First, we modified the compiler to emit code at the beginning of main()
that attempts to dlopen() the JIT compiler and, if successful, calls its
initialization routine. If unsuccessful, the program simply carries on
executing the rest of main().

* Second, we modified all Swift functions to be patchable by giving them
the "patchable-function" LLVM attribute (making the first instruction
suitable to be patched over with a short jump) and attaching 32 bytes of
prefix data (suitable to hold a long jump to a JIT hook function and some
extra data) to the function's code. This was controlled by a frontend
"-enable-jit" switch.

* Third, when building the target program we first compiled the Swift
sources to a .sib (binary SIL) file, then via ld and objcopy turned the
.sib into a .o containing a .sib data section, then compiled the sources
again into an executable, this time linking with the .o containing the
binary SIL. This embedded SIL is what the JIT compiler consumed at runtime
in order to recompile Swift functions on the fly. (Ideally this step would
be done by the static compiler itself, not unlike the embedding of LLVM
bitcode in a .llvmbc section, but that would have been a significant
undertaking, so for prototyping purposes we did it at target program build
time.)

That's the brief, high-level description of what we did, particularly as
it relates to the static side of this hybrid approach. The resulting prototype
JIT was able to run and fully recompile a non-trivial (but constrained)
program at comparable performance to the purely static version. For anyone
interested in more details about the project as a whole, including how the
prototype JIT functioned, the overhead it introduced, and the quality of
code it emitted, I'll point you to Mark Stoodley's recent tech talk.[2]

Having said that, it is with the static side in mind that I'm writing this
email. Despite the prototype JIT being built on OMR, the changes to the
static side outlined above are largely compiler-agnostic APIs/ABIs that
anyone can use to build similar hybrid JITs or other runtime tools that
make sense for the server space. As such, we felt it was a topic worth
discussing early and in public, in order to give any and all potentially
interested parties an opportunity to weigh in. With this email we wanted
to introduce ourselves to the wider Swift community and solicit feedback
on 1) the general idea of JIT compilation for server-side Swift, 2) the
hybrid approach in particular, and 3) the changes mentioned above and
future work in the static compiler to facilitate 1) and 2). To that end,
we'd be happy to take questions and welcome any discussion on this
subject.

(As for the prototype itself, we intend to open source it either in its
current state [based on Swift 3.0 and an early version of OMR] or in a more
up-to-date state in the very near future.)

Thank you kindly,
Younes Manton

[1] Eclipse OMR: https://github.com/eclipse/omr (cross-platform components for building reliable, high-performance language runtimes)
[2] http://www.ustream.tv/recorded/105013815 (Swift JIT starts at ~28:20)

No, this is completely unrelated. This is about runtime optimization of already-running Swift programs.

···

On Jul 10, 2017, at 10:40 AM, Jacob Williams via swift-evolution <swift-evolution@swift.org> wrote:

Pardon my lack of knowledge about JIT compilation, but does this open the realm of possibilities to a client-side Swift that would allow web developers to write Swift code rather than JavaScript?

On Jul 10, 2017, at 10:40 AM, Younes Manton via swift-evolution <swift-evolution@swift.org> wrote:


_______________________________________________
swift-evolution mailing list
swift-evolution@swift.org <mailto:swift-evolution@swift.org>
https://lists.swift.org/mailman/listinfo/swift-evolution



Having said that, it is with the static side in mind that I'm writing this email. Despite the prototype JIT being built on OMR, the changes to the static side outlined above are largely compiler agnostic APIs/ABIs that anyone can use to build similar hybrid JITs or other runtime tools that make sense for the server space.

Do you have example APIs to discuss in more detail?

As such, we felt that it was a topic that was worth discussing early and in public in order to allow any and all potentially interested parties an opportunity to weigh in. With this email we wanted to introduce ourselves to the wider Swift community and solicit feedback on 1) the general idea of JIT compilation for server-side Swift, 2) the hybrid approach in particular, and 3) the changes mentioned above and future work in the static compiler to facilitate 1) and 2). To that end, we'd be happy to take questions and welcome any discussion on this subject.

I think that there are a lot of potential gains for runtime optimization of Swift programs, but the vast majority of benefits will likely fall out from:

1. Smashing resilience barriers at runtime.
2. Specializing frequently executed generic code, enabling subsequent inlining and further optimization.

These involve deep knowledge of Swift-specific semantics. They are probably better handled by running Swift’s own optimizer at runtime rather than teaching OMR or some other system about Swift. This is because Swift’s SIL representation is constantly evolving, and the optimizations already in the compiler are always up to date. I’m curious, what benefits of OMR are you hoping to gain, and how does that weigh against the complexity of making the two systems interact?

···

On Jul 10, 2017, at 9:40 AM, Younes Manton via swift-evolution <swift-evolution@swift.org> wrote:


Pardon my lack of knowledge about JIT compilation, but does this open the realm of possibilities to a client-side Swift that would allow web developers to write Swift code rather than JavaScript?

···

On Jul 10, 2017, at 10:40 AM, Younes Manton via swift-evolution <swift-evolution@swift.org> wrote:


I'm aware that Java re-compiles sections to make incremental performance
improvements based on a statistical analysis of usage. I'm not familiar
enough with other uses of JIT on the backend to know what advantages it
would have, beyond less time from begin compile to launch. Could you list
a few benefits?

The primary benefit is better performance, by being able to exploit
knowledge about the program's behaviour and the hardware and software
environment it's running in that is only (or at least more easily and
accurately) available at runtime. In the case we're interested in, "less
time from begin compile to launch" would not be a benefit, since we're
interested in JIT compiling an already-built program.

One of the goals of the Swift team appears to have been to achieve
predictable performance, to the point of finding object deallocations too
unpredictable. So would you envision this as being opt-in per compile?

Variations in behaviour and performance are, unfortunately, a common hazard
with JIT compilation. It depends on the implementation of the JIT in
question of course; if you put emphasis on predictability you can certainly
engineer a JIT that favours predictability over peak performance. I
personally think anyone embarking on this sort of thing would hopefully
keep their users' concerns in mind and try not to subvert the language's
design goals and principles if it can be helped.

Opt-in/out is an interesting idea (one that I've personally considered,
more so for my own debugging purposes). An annotation that works like the
familiar inline/alwaysinline/neverinline might be useful, with the hope
being that a JIT would do "the right thing" and free you from having to
care.

···

On Mon, Jul 10, 2017 at 1:53 PM, Michael Ilseman <milseman@apple.com> wrote:

Having said that, it is with the static side in mind that I'm writing this
email. Despite the prototype JIT being built on OMR, the changes to the
static side outlined above are largely compiler agnostic APIs/ABIs that
anyone can use to build similar hybrid JITs or other runtime tools that
make sense for the server space.

Do you have example APIs to discuss in more detail?

Yes, I've prepared patches for the 3 items I discussed in my initial email.
I've rebased onto swift/master the patches that we think are a decent
starting point: a high level -enable-jit-support frontend option [1] and
patchable function support.[2]

Another patch (still based on Swift 3.0 because it needs to be implemented
differently for master) for inserting in main() a call to a stdlib routine
that will attempt to dlopen() an external "runtime" library, e.g. a JIT, is
on another branch.[3] If ported to master as-is it would probably emit an
apply of the stdlib routine at the beginning of main(), before argc/argv
are captured. Having said that, there are other ways to inject yourself
into a process (I've been looking into LD_PRELOAD/exec(), for example,
which wouldn't require changes to swiftc), so alternatives are welcome for
discussion.

I think that there’s a lot of potential gains for runtime optimization of
Swift programs, but the vast majority of benefits will likely fall out from:

1. Smashing resilience barriers at runtime.
2. Specializing frequently executed generic code, enabling subsequent
inlining and further optimization.

These involve deep knowledge of Swift-specific semantics. They are
probably better handled by running Swift’s own optimizer at runtime rather
than teaching OMR or some other system about Swift. This is because Swift’s
SIL representation is constantly evolving, and the optimizations already in
the compiler are always up to date. I’m curious, what benefits of OMR are
you hoping to gain, and how does that weigh against the complexity of
making the two systems interact?

Yes, #1 and #2 are prime candidates.

We're not so interested in retreading the same ground as the SIL optimizer
if we can help it; ideally we would consume optimized SIL and be able to
further optimize it without overlapping significantly with the SIL
optimizer, but I think some level of overlap and a non-trivial coupling
with the SIL representation are unfortunately likely.

Having access to, and being able to re-run, the SIL optimizer at runtime,
perhaps after feeding it runtime information and new constraints and
thereby enabling opportunities that weren't available at build time, is a
naturally interesting idea. I haven't actually looked at that part of the
Swift code base in detail, but I imagine it's not really in the form of an
easily consumable library for an out-of-tree code base; our prototype
re-used the SIL deserializer at runtime and that was painful and hacky, so
I imagine a similar experience with the SIL optimizer as it currently
stands.

The benefit of the OMR compiler is that it is a JIT compiler first and
foremost and has evolved over the years for that role. More practically,
it's a code base we're much more familiar with, so our knowledge currently
goes a lot farther and it was a quicker path to prototyping something in a
reasonable amount of time. The learning curve for Swift the language +
swiftc & std libs + SIL was already significant in and of itself. Having
said that, I fully recognize that there are obvious and natural reasons to
consider a SIL optimizer + LLVM JIT in place of what we've been hacking
away on. I don't think we're at a point where we can answer your last
question. It might turn out that a SIL-consuming out-of-tree compiler
based on a different IL will have a hard time keeping up with Swift
internals and will therefore not be able to do the sorts of things we
think a JIT would excel at, but we're open to a little exploration to see
how well it works out. At the very least the changes to the static side of
the equation are/will be useful to any other hybrid JIT or whatever other
runtime tools people can envision, so from the Swift community's
perspective I hope there will at least be some benefits.

Thanks for taking the time.

[1] Add a frontend -enable-JIT-support flag (ymanton/swift@8f5f53c)

[2] Emit patchable funcs under -enable-jit-support (ymanton/swift@54e7736)

[3] Load external runtime lib on entry to main() (ymanton/swift@f59a232)

···


First of all, let me welcome the project. My knowledge of JITs is limited,
but I come from the Java world, where JITs take a major role. Let me share
my initial thoughts on this:

1. Runtime code optimization. The Java JIT does this pretty well. But how
can Swift code that is already optimized at compile time benefit from it?
2. Hot code swap. This is an interesting area. This feature would enable
rapid development by letting developers see their changes as soon as the
server JIT replaces modified code blocks.
3. Code injection. Java already enjoys this for things like AOP, runtime
dependency injection, code instrumentation, etc.

Regards,

Gábor

Younes Manton via swift-evolution <swift-evolution@swift.org> wrote (on
11 Jul 2017 at 00:02):
