Building resilience ... on Windows

I took a quick stab at this and realized that we are missing a critical bit of information to do this :confused:

The problem is that symbols are strongly bound with two-level namespaces on Windows. This means that we need to know the name of the runtime module that we should be binding to (even if weakly). The import library specifies the runtime name of the module, which may differ from the library name - e.g. linking against ucrt.lib introduces a dependency on api-ms-win-crt-stdio-l1-1-0.dll. Symbols cannot be looked up by name alone, as all imports are (module, ordinal) tuples. Linking by function name is permissible (and recommended): the loader resolves the name to an ordinal, and using the name is preferable because it avoids breakage from ordinal renumbering due to incorrect binding.

Even if we restrict this to Swift modules (as we do not have sufficient information for symbols imported through the clang importer), we still face the problem that -module-name and -module-link-name can be different values (e.g. libdispatch). If we serialize the module link name into the swiftmodule and thread that information into the IR somehow, I suppose that we should be able to construct the appropriate constructor.

Also, is my understanding correct that the table approach is to avoid size costs? Emitting a constructor per symbol is possible, but would be expensive in terms of code size.

The Windows model doesn't sound that different from the Darwin model. Dynamic symbols are also two-level-namespaced with dyld, and the linker resolves which symbols come from which libraries at link time. The compiler still sees an effectively flat namespace for the most part. If a symbol migrates to a different dynamic library in the future, then compatibility symbols are needed for existing binaries. IIUC, the effect of the Windows process should be pretty much the same, except that the import libraries encode the symbol-to-dll mapping rather than the linker directly consulting dylib symbol tables. It's not clear to me what this changes from the compiler's point of view.

I think Saleem is saying that there's no API like dlsym that just looks up a symbol name at runtime without knowing what DLL is supposed to provide it, because GetProcAddress takes a module handle. So we'd have to mark the defining DLL name on the IR declaration somehow before we could write that IR pass. That's tenable but difficult, and in any case it would prevent symbols from migrating between DLLs in the future, although that might be forced by Windows anyway unless PE supports a reexport feature like MachO does.

I don't know how acceptable it would be to write our own function that searches all loaded modules.

Ah, yeah, that makes sense. That would certainly be something that'd ideally be handled at "link time".

@Joe_Groff, exactly as @John_McCall said - I meant that I need the DLL name. Sorry if I did a poor job of explaining that.

As to @John_McCall's question - yes, PE/COFF does have a re-export-like concept known as forwarding, and in fact it is used heavily by Microsoft to provide backwards compatibility. It does require some special handling; in particular, you need to add a DEF file that specifies the forwarding information.
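For reference, forwarding can be expressed in a module-definition (.def) file; a minimal sketch, with the library and symbol names invented for the example:

```
LIBRARY SwiftCompat
EXPORTS
    ; The loader transparently redirects callers that bound to
    ; SwiftCompat.dll!renamedEntryPoint over to SwiftCore.dll.
    renamedEntryPoint = SwiftCore.renamedEntryPoint
```

Microsoft uses this mechanism extensively - e.g. a number of kernel32 exports are forwarders into ntdll.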

A custom search is certainly plausible; however, the problem is race conditions (i.e. a module being loaded while you are scanning the module list).

I don't think we're concerned about race conditions here. If the library isn't loaded before we're resolving weak references into it, we've got serious problems.


Actually, now, I'm even more confused. I was taking a look at what the @_weakLinked attribute does on a Swift definition. On ELFish targets it seems to be doing the following:

public func f() { }

define protected swiftcc void @"$s7reduced1fyyF"() #0 {
  ret void
}
As per the LangRef, this corresponds to __attribute__((__visibility__("protected"))), which means the symbol is non-interposable but still takes part in symbol resolution; however, it will be exported with strong linkage, not weak. Is the attribute simply ignored on definitions?

I have no idea what that attribute does. Are you sure it's meant to be used on definitions?

It's there for testing, but not intended to be used for anything directly.

One additional problem with the manual iteration for flattening the namespace: weak linking means that multiple modules may define the symbol. On Linux, the load order determines the symbol resolution, whereas here we would have non-determinism and would need to do a topological sort to guarantee the ordering (which is effectively what the GNU loader does anyway).

Weak imports wouldn't have any impact on the definition, since that's purely a decision by the client referencing the symbol to allow it to be resolved to null if no definition is found. The defining image would still provide a strong definition.

Ah, okay, it's just not an error to place it on the definition then? That works well enough. It seemed odd that it was getting dropped. I suppose it serves to annotate the (serialized swiftmodule) declaration.

Well, @_weakLinked really isn't intended to be used for anything but testing. Normally the decision to weak-link or not is made based on the availability of the declaration and the deployment target.

Oh, that's actually a pretty critical bit of information, especially as it relates to overall priority. I think it's better to temporarily disable the tests on Windows and continue the work that uncovered this issue - getting the extended test suite running on Windows. This is something we will need to address subsequently, but it seems less critical than previously thought.

Cleaning up any accidental leaks of the weak linking would be prudent nonetheless.

Thank you @Joe_Groff for explaining this and pointing it out to me; I wasn't aware of the intended usage of @_weakLinked. I've been trying to finish the work to reach functional parity, and it seems that we are slowly nearing it.
