[Pitch] Version-pinned patching of public declarations


(Joe Groff) #1

A lot of the discussion around the final/sealed-by-default issue focused on the ability in ObjC to extend frameworks or fix bugs in unforeseen ways. Framework developers aren't perfect, and being able to patch a broken framework method can be the difference between shipping and not. On the other hand, these patches become compatibility liabilities for libraries, which have to contend not only with preserving their own designed interface but all the undesigned interactions with shipping apps based on those libraries. The Objective-C model of monkey-patchable-everything has problems, but so does the buttoned-down everything-is-static C++ world many of us rightly fear.

However, with the work we're putting into Swift for resilience and strong versioning support, I think we're in a good position to try to find a reasonable compromise. I'd like to sketch out a rough idea of how that might look.

Public interfaces fundamentally correspond to one or more dynamic library symbols; the same resilience that lets a new framework version interact with older apps gives us an opportunity to patch resilient interfaces at process load time. We could embrace this by allowing applications to provide `@patch` implementations overriding imported non-fragile public APIs at specific versions:

import Foundation

extension NSFoo {
  @patch(OSX 10.22, iOS 17)
  func foo() { ... }
}

By tying the patch to a specific framework version, we lessen the compatibility liability for the framework; it's clear that, in most cases, the app developer is responsible for testing their app with new framework versions to see if their patch is still needed with each new version. Of course, that's not always possible. If the framework developer determines during compatibility testing that their new version breaks a must-not-break app, and they aren't able to adopt a fix on their end for whatever reason (it breaks other apps, or the app's patch is flawed), the framework could declare that their new version accepts patches for other framework versions too:

// in Foundation, OSX 10.23
public class NSFoo {
  // Compatibility: AwesomeApp patched the 10.22 version of NSFoo.foo.
  // However, RadicalApp and BodaciousApp rely on the unpatched 10.22 behavior, so
  // we can't change it.
  @accepts_patch_from(AwesomeApp, OSX 10.22)
  public func foo() { ... }
}

A sufficiently smart dynamic linker could perhaps resolve these patches at process load time (and probably summarily reject patches for dylibs loaded dynamically with dlopen), avoiding some of the security issues with arbitrary runtime patching. For public entry points to be effectively patchable, we'd also have to avoid any interprocedural optimization of the implementations within the originating module, so there is a performance cost to allowing this patching by default. Sufficiently mature (or arrogant) interfaces could perhaps declare themselves "unpatchable" to admit IPO within their own module. (Note that 'fragile' interfaces which admit cross-module inlining would inherently be unpatchable, and those are likely to be the most performance-sensitive interfaces to begin with.)
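The load-time resolution step described above could be modeled roughly as follows. This is a hand-written C sketch, not real dyld machinery; the record layout and all names (`patch_record`, `resolve_patch`, `NSFoo.foo`) are invented for illustration:

```c
#include <string.h>

/* Original framework implementation of NSFoo.foo, and the app's
 * @patch replacement for it. */
static const char *foo_original(void) { return "original"; }
static const char *foo_patched(void)  { return "patched"; }

/* Every resilient entry point is reached through one level of
 * indirection, so a loader can rebind it before main() runs. */
static const char *(*foo_entry)(void) = foo_original;

/* A patch record as the patching image might encode it: which symbol,
 * which framework version the patch was written against, and the
 * replacement implementation. */
struct patch_record {
    const char *symbol;
    int fw_major, fw_minor;          /* e.g. OSX 10.22 */
    const char *(*replacement)(void);
};

/* Load-time resolution: apply the patch only when the installed
 * framework version is exactly the one the patch targets. */
static void resolve_patch(const struct patch_record *p,
                          int installed_major, int installed_minor) {
    if (strcmp(p->symbol, "NSFoo.foo") == 0 &&
        p->fw_major == installed_major &&
        p->fw_minor == installed_minor)
        foo_entry = p->replacement;
}
```

Running against 10.22, `resolve_patch` rebinds `foo_entry` to the patched body; against 10.23 the patch is silently dropped, which is the version-pinning behavior the proposal hinges on.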

-Joe


(Dave Abrahams) #2

Hi Joe,

Can you compare the developer experience with/without this feature, e.g. paint some scenarios and describe what one would have to do to deal with it?

Thanks,
Dave



(Rod Brown) #3

I definitely feel this is a great direction for a compromise. This outlines the issues involved well.

Rod



(Félix Cloutier) #4

How would this extend to third-party libraries that evolve independently of Apple's release schedule and to the Linux compiler?

Is a patch scoped to the executable object that declares it? For instance, if I have a patch in a library, do applications that link against it see the patch? If I have a Mach-O plugin, does it see the patches that my program made (and vice-versa)? (I'm assuming that the patches go at the PLT/stub level, but let's be sure that this is what we need and that it doesn't cause any security interference)

Should patching be allowed in contexts where DYLD_INSERT_LIBRARIES/LD_PRELOAD are currently disallowed?

Félix



(Matthew Johnson) #5

Sorry if this should be clear already, but are you suggesting that @patch would allow patching of final or sealed types, thus making them more palatable as defaults? That would surprise me, but if that is not what you are suggesting, I don't follow how it relates to the final/sealed discussion.

Matthew



(Joe Groff) #6

If a binary framework ships with a bug, and that bug is exercised as a second-order effect of other framework functionality, then without dynamic patching, your choices amount to trying to avoid or work around the bug, or reimplementing the functionality between you and the bug yourself (or replacing it with a third party library). If the bug lies somewhere deep like in text layout or affine transform math, or in something complex like PDF rendering (all bugs that have really shipped), replacing the functionality might not be practical, and avoiding the bug might not be possible.

-Joe



(Louis Gerbarg) #7

As one of the engineers working on dyld I have a few thoughts on this:

1) I am not comfortable with one image altering the lazy pointers/PLT/stubs used to call between other images, at least during normal operation. While most OSes don’t actively prevent people from doing this today (if you want to walk through the image lists and the symbol data you can totally find where another image’s function pointers are and rewrite them), allowing it is a way to subvert control flow (think ROP/JOP). There is a good write-up about it at <https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/di-frederico>. Suffice it to say, I think most platforms are going to end up having to (further) restrict the ability to interpose in order to protect against these sorts of attacks, so building a language feature around it seems like a bad idea in terms of future-proofing/portability.

2) I am conceptually fine with the semantics that would be exposed by a library rewriting its own lazy pointers/PLT/stubs, though at a practical level I am not comfortable guaranteeing enough stability in the internal interfaces to allow a binary image to embed the machinery necessary to do that inside itself. Of course, those semantics could be achieved purely in the compiler/static linker by having @patch generate glue that wraps the patched function and decides whether to call through or use the patched implementation. This has the virtue of being insulated from the semantic differences between dyld and ld.so, though it is admittedly a much weaker form of patching than what can be achieved by directing the dynamic linkers to rewrite the pointers of other images.
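A minimal C sketch of that glue model, with invented names throughout (`patch_active` stands in for whatever runtime call the compiler would actually emit):

```c
/* The framework's real implementation of NSFoo.foo, reached through
 * its normal (unmodified) entry point. */
static int NSFoo_foo_impl(int x) { return x + 1; }

/* The app's @patch body. */
static int NSFoo_foo_patch(int x) { return x + 100; }

/* Hypothetical runtime query: is the patch active for the framework
 * version we are running against? Here: active only for 10.22. */
static int patch_active(int major, int minor) {
    return major == 10 && minor == 22;
}

/* Compiler-generated glue: every call site in the patching image is
 * rewritten to call this wrapper instead of the framework symbol.
 * Other images are untouched, which is why this form of patching is
 * naturally scoped to the image that declares it. */
static int NSFoo_foo_glue(int x, int fw_major, int fw_minor) {
    return patch_active(fw_major, fw_minor) ? NSFoo_foo_patch(x)
                                            : NSFoo_foo_impl(x);
}
```

Because the glue is ordinary code in the patching image, nothing outside that image is modified, and no dynamic-linker cooperation is required.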

More comments inline:

How would this extend to third-party libraries that evolve independently of Apple's release schedule and to the Linux compiler?

Is a patch scoped to the executable object that declares it? For instance, if I have a patch in a library, do applications that link against it see the patch? If I have a Mach-O plugin, does it see the patches that my program made (and vice-versa)? (I'm assuming that the patches go at the PLT/stub level, but let's be sure that this is what we need and that it doesn't cause any security interference)

It has to be scoped to the executable image declaring it if you implement it as glue. If it were done through a dynamic linker feature it could conceivably allow other images to be patched, but as stated above, in my view that makes the feature a lot more fragile and insecure.

Should patching be allowed in contexts where DYLD_INSERT_LIBRARIES/LD_PRELOAD are currently disallowed?

If it is done as glue then this falls out naturally: it should be perfectly safe to allow patches in restricted contexts, since a patch only affects the code that explicitly requested it, though it would require more thought. It also provides rational behavior for dlopen()'d dylibs that contain patches.


By tying the patch to a specific framework version, we lessen the compatibility liability for the framework; it's clear that, in most cases, the app developer is responsible for testing their app with new framework versions to see if their patch is still needed with each new version. Of course, that's not always possible—If the framework developer determines during compatibility testing that their new version breaks a must-not-break app, and they aren't able to adopt a fix on their end for whatever reason (it breaks other apps, or the app's patch is flawed), the framework could declare that their new version accepts patches for other framework versions too:

// in Foundation, OSX 10.23
public class NSFoo {
  // Compatibility: AwesomeApp patched the 10.22 version of NSFoo.foo.
  // However, RadicalApp and BodaciousApp rely on the unpatched 10.22 behavior, so
  // we can't change it.
  @accepts_patch_from(AwesomeApp, OSX 10.22)
  public func foo() { ... }
}

In a glue model, @accepts_patch_from can still be implemented, but rather than the dynamic linker deciding what is patched, it would be used to feed data into a runtime call that the @patch-generated glue would call into. Again, this has the plus side of insulating us from the underlying platforms.

I would really want @accepts_patch_from to also take the version of AwesomeApp, so that it conditionally turns off when the app version is revved, causing the developer to re-evaluate their code and determine whether they can do away with the patch or need to explicitly extend it to the new version.
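That acceptance check could look roughly like this. A C sketch under stated assumptions: the record layout, the `patch_applies` function, and the inclusion of a vetted app version (Louis's suggestion, not part of Joe's original pitch) are all hypothetical:

```c
#include <string.h>

/* One @accepts_patch_from entry as the framework might encode it,
 * extended with the app version that was vetted, so a new app
 * release re-triggers evaluation of the patch. */
struct accepts_entry {
    const char *app;            /* e.g. "AwesomeApp"                  */
    int fw_major, fw_minor;     /* patched framework version: 10.22   */
    int app_major, app_minor;   /* app version the patch was vetted at */
};

/* Decide whether a patch written by `app` (at app_major.app_minor)
 * against framework version patched_fw should apply when running
 * against current_fw. */
static int patch_applies(const struct accepts_entry *e,
                         const char *app, int app_major, int app_minor,
                         int patched_fw_major, int patched_fw_minor,
                         int current_fw_major, int current_fw_minor) {
    /* Exact framework-version match: always honored. */
    if (patched_fw_major == current_fw_major &&
        patched_fw_minor == current_fw_minor)
        return 1;
    /* Otherwise, only if the framework explicitly accepts this app's
     * patch for that older version, at the app version it vetted. */
    return e != NULL &&
           strcmp(e->app, app) == 0 &&
           e->fw_major == patched_fw_major &&
           e->fw_minor == patched_fw_minor &&
           e->app_major == app_major &&
           e->app_minor == app_minor;
}
```

Note how revving either the framework (without an accepts entry) or the app (past the vetted version) silently deactivates the patch, forcing re-evaluation.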

A sufficiently smart dynamic linker could perhaps resolve these patches at process load time (and probably summarily reject patches for dylibs loaded dynamically with dlopen), avoiding some of the security issues with arbitrary runtime patching. For public entry points to be effectively patchable, we'd have to also avoid any interprocedural optimization of the implementations within the originating module, so there is a performance cost to allowing this patching by default. Sufficiently mature (or arrogant) interfaces could perhaps declare themselves "unpatchable" to admit IPO within their own module. (Note that 'fragile' interfaces which admit cross-module inlining would inherently be unpatchable, and those are likely to be the most performance-sensitive interfaces to begin with.)

Sufficiently smart dynamic linkers! If you want this implemented as a dynamic linker feature, it has to be done in dyld for Darwin and in ld.so for Linux, and then in the dynamic linkers of every other port people do. Various environments also have runtimes that complicate this. For example, in some cases (most symbols shipped with iOS/tvOS/watchOS, for instance) the jumps into the PLT may be replaced by direct cross-image jumps, which breaks the ability to interpose a symbol between them. As you mentioned above, patchable interfaces would require jumps through the PLT not just between images, but even within a single dynamic image. While that optimization could be turned off, in practice it is a large one, and disabling it would result in significant performance regressions.
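The distinction can be illustrated with a function pointer standing in for a PLT slot (purely illustrative; a real PLT is loader-managed, not a C variable):

```c
/* Two candidate implementations of a framework entry point. */
static int impl_v1(int x) { return x * 2; }
static int impl_v2(int x) { return x * 3; }

/* "PLT-style" call: one pointer indirection per call. Because the
 * target lives in a writable slot, a loader (or glue) can rebind it. */
static int (*plt_slot)(int) = impl_v1;
static int call_via_plt(int x) { return plt_slot(x); }

/* "Direct-jump" call, as produced by the optimization described
 * above: the target is baked into the call site, so there is no slot
 * left to rewrite and the call cannot be interposed. */
static int call_direct(int x) { return impl_v1(x); }
```

Once `plt_slot` is rebound to `impl_v2`, `call_via_plt` observes the new target while `call_direct` keeps calling the old one, which is exactly why direct-jump optimization and patchability are in tension.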

In a glue model, dlopen()'d images behave just like all other images (their patches only affect themselves). In a model where the dynamic linker rewrites other images' pointers, this poses a security risk, but simply disabling it may not be viable either. What happens when an image is both dlopen()'d and dynamically linked by another dylib (that is itself perhaps dlopen()'d)? I think it would require a lot of effort to figure out how it would work, how to do it safely, what the semantics would be, and how it would cope with differences between OSes (two-level namespaces vs flat namespaces, etc.).

Again, the glue model is a substantially weaker form of patching (really, it is just sugar for writing wrappers, plus runtime management for activating them), but as I said above, building in support for patching out our stubs and adding all the machinery to support general interposing for all APIs seems like it would present portability, performance, and security issues.

Louis



(Dave Abrahams) #8

Sorry, I had something much more specific, including code examples, in mind. You’re not telling me how you’d use this feature to solve the problem, or what the alternative solutions would look like without this feature. That’s what I need to see in order to understand the proposal.

-Dave
