A partial-word search for "safe" would find both "safe" and "unsafe".
Put differently, if you have only one rule ("only safe functions can be called from a safe function") and one built-in exception hidden inside the standard library:
func safe(execute: unsafe () -> Void) {
    execute() // some compiler magic here to make it possible
}
then from the POV of normal safe functions, no exceptions are being made at all; they are only ever calling safe functions:
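(a minimal sketch of what I mean; someUnsafeOperation() is just a placeholder for some hypothetical unsafe function)

func ordinarySafeFunction() {
    safe {
        someUnsafeOperation() // allowed only because the built-in safe { } wraps it
    }
}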
and if "safe {" here was actually spelled "unsafe {" it would feel like these exceptions are scattered everywhere rather than being localised in one built-in function inside the standard library.
Edit: note that there will be no "safe" comments in actual code (well, normally), so the only occurrences of "safe" that the search would reveal are "safe {" blocks and "unsafe func" functions.
I appreciate that's how it is done now. IMHO it would make sense to make a distinction between these two colorisation modes... While it is possible to make a safe call from unsafe code (with enough care, or with compiler verification support where possible, etc.), it is absolutely impossible to make a "noLock" function out of a function that takes locks. If we had this distinction, a whole class of mistakes would become impossible: people simply wouldn't be able "to lie" to the compiler to begin with!
This is not actually true. There can be locks incurred by runtime calls that are present at the SIL level but which the author can verify are eliminated by later optimizer stages, or which are present on branches which the author knows will never be taken but the compiler does not have sufficient information to prove are statically dead. Ditto for allocations.
I believe using the same marker for both is actually doing you a disservice: instead of focusing on just the latter (the things that require attention), the search results would be polluted with the former. Why would you want to be constantly reminded that a function marked unsafe is unsafe, and why would you want to spend time verifying it? The function is unsafe because it is marked so; even if it happens not to use anything unsafe at the moment, the unsafe marking is part of its contract, and the callers will treat it as unsafe anyway. And if it does contain unsafe constructs inside, well, no surprise either.
In other words: you just "declare" unsafety (there's no need to prove it), and you only need to prove/verify/audit safety (when you are building it out of unsafe constructs). The safety of functions that use only safe constructs is automatic.
+1 on the general idea, and @tera’s arguments have convinced me that unsafe { } would be a distracting spelling for the escape hatch. I’m not totally convinced by safe { }, since that spelling seems to me like it could easily be misunderstood by some to mean that it somehow ensures safety. It’s more like ostensiblySafe { }…
Is it though? Was this implemented recently? Where can I see it in the codebase? Based on the text of SE-0327 I was under the impression that this is not the case yet. And in [Pitch #2] SE-0371 Isolated & async deinit I'm suggesting this as a follow-up. So if it has already been updated, I should update the proposal text.
That does sound more Swifty than ostensiblySafe. I just wonder whether it aligns well or poorly with the current usages of "checked" (e.g., withCheckedContinuation). I worry it might align poorly, because the "checking" that goes on in withCheckedContinuation happens at runtime, whereas here the lack of checking being communicated is a lack of compile-time checking.
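For reference, a minimal sketch of how the runtime nature of that check shows up (the misuse compiles fine and is only reported when the code runs):

func answer() async -> Int {
    await withCheckedContinuation { continuation in
        continuation.resume(returning: 42)
        // Resuming a second time here would still compile;
        // CheckedContinuation reports the misuse only at runtime.
    }
}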
In my experience, both cases require at least equal attention. Even if a function is outwardly unsafe, it almost always has a contract specified in documentation or somewhere else that, if you follow the right preconditions, you get predictable behavior coming out of the function. That still can't generally be checked by the compiler (because if it could, then we could write it as a safe function) and needs human auditing.
The fact that C#, D, and Rust all use unsafe [function declarator] and unsafe {} blocks also seems to me like it sets a pretty strong prevailing wind in similar languages' design, and if we don't have a really good reason to go our own way, it's nice to follow what similar languages do.
While I am more than happy to agree to disagree, I believe the two usages here are opposites of each other, and we could do better than the precedents in those other languages. The real-life equivalent to me looks like:
unsafe func foo() { ... }
Judge. "How do you plead?"
Defendant. "Guilty"
Judge. "Here's your term ...."
Job done.
unchecked_safe { ... }
Judge. "How do you plead"?
Defendant. "Not guilty"
Judge. "We'll enter the trials 10:00 two weeks from today at which time..."
By way of another, much closer analogy: suppose we had @takesLocks instead of the current @noLocks (with the opposite meaning, obviously, so the absence of the marker would mean "doesn't take locks").
@takesLocks func bar() { ... }

func foo() {
    @takesLocks { // to mean: "I know this definitely doesn't take locks"
        switch UInt8.random(in: 0...1) {
        case 0: return
        case 1: return
        default:
            bar() // this takes locks
                  // but it won't happen
        }
    }
}
Would you really want to mark with "takesLocks" a block of code that you are certain definitely doesn't take locks?
In D this works with three function attributes: @safe, @trusted, and @system. @safe and @trusted functions have a safe interface, while @trusted and @system functions can call unsafe @system functions. There is no "unsafe" block in D: you either make the whole function @trusted, or split it into smaller functions (or closures) that can be marked @trusted.
I think it's important to have a different spelling for "this API is safe/unsafe" and "this code uses unsafe APIs in a safe way". I find "trusted" is a nice word to describe the latter.
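To make the relationships concrete, here is a rough sketch in Swift-flavoured pseudocode; the @safe/@trusted/@system attributes are hypothetical here, only mirroring D's, and none of this is proposed Swift syntax:

@system func poke(_ p: UnsafeMutableRawPointer) { ... }   // unsafe interface

@trusted func zeroFirstByte(_ p: UnsafeMutableRawPointer) {
    // safe interface, unsafe insides: the author vouches for this call
    poke(p)
}

@safe func caller(_ p: UnsafeMutableRawPointer) {
    zeroFirstByte(p)   // fine: @trusted looks just like @safe from the outside
    // poke(p)         // error: @safe code cannot call @system functions
}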
Thanks, I was mistaken (or D has changed up their approach since last I looked). Having an outward distinction between "safe" and "trusted" doesn't strike me as a very useful outward distinction, since from a client's perspective it doesn't affect them. Allowing code to declare itself "trusted" also seems the wrong way around—it's the client that should decide whether a dependency is trusted to use unsafe code, not the other way around. Having the distinction at the function level also feels like an anti-feature, since it seems like it'd encourage either overly-broad "trusted" scopes where an entire function is marked with the attribute to do one unsafe thing, or over-factoring into small separate unsafe functions, which makes auditing harder by defeating local reasoning if you take the unsafe code out of its context. My initial reaction is that the C#/Rust style is simpler and more effective, whether or not the keywords are the same.
I deliberately put "WHATEVER_THE_NAME_IS" as a placeholder for the "safe code done by unsafe means" brackets. The other difference is that we could make the code safe by default, so the explicit "safe" could be left out.
I agree. From a client perspective @trusted and @safe are the same thing in D. I don't really have an opinion about which is better for trusted code (function-level or block-level), but the distinction is not useful outwardly.
But I think there's a case to be made against being too narrow with the trusted annotation: calls to unsafe functions are often safe within a specific context. Consider malloc and free: it's safe to call malloc, but not to call free (the pointer is safe to use if you never deallocate). But if you decide to call free, the real thing you need to audit starts at malloc and ends at the disappearance of the last pointer value. That is going to be a whole function, or maybe all the functions of a type, so that none of them escapes the pointer. The surface area you need to trust is much bigger than the actual unsafe call.
Maybe function-level trusted isn't a good approximation though. I suppose a better approach could be to label specific variables as unsafe, forcing a trusted annotation on every use of the variable. This way you can write a struct wrapping a pointer, make the pointer property unsafe, and force yourself to write trusted everywhere you use that pointer property (and hopefully consider whether it escapes at every corner). Illustration:
struct WrappedPointer: ~Copyable {
    unsafe var pointer = malloc(10)

    func printMe() {
        // even if print is safe, we must trust `print` will not escape pointer
        trusted { print(pointer) }
    }

    deinit {
        // both free and pointer are unsafe, so trust is needed
        trusted { free(pointer) }
    }
}
Bonus points if the call to free in deinit can somehow warn when var pointer is not labeled unsafe; that would cause the programmer to add unsafe to pointer's declaration, which in turn would force every use of this variable to become trusted and in need of being audited, as in the above.
There have been times that I’ve needed to mark an entire struct unsafe in C#:
unsafe struct PointerHolder
{
    void* ptr;
}
Without the word unsafe there, the compiler doesn't even let you have the pointer-typed field. I believe you can mark individual properties unsafe instead, but then you can't access them even from the struct's own methods (unless they're also unsafe), which kind of defeats the point.
It's an interesting question whether types need to be marked essentially "unsafe" or not. One could argue that simply holding onto a pointer value isn't unsafe in and of itself, but if all of the possible operations for constructing, loading from, and storing to the referenced memory are marked unsafe, then you can only practically use the pointer value from code that's allowed to do unsafe things, without needing the presence of the type itself to be contagiously unsafe.
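A rough sketch of that idea, reusing the thread's hypothetical "unsafe func" spelling (none of this is real syntax): the pointer-holding type carries no marker of its own, and only the operations that touch the referenced memory are marked unsafe.

struct Cursor {
    // holding the pointer value requires no marker by itself
    var base: UnsafeMutableRawPointer

    // the operations that actually load from or store to the referenced
    // memory are the unsafe ones, so only code that is allowed to do
    // unsafe things can practically use a Cursor
    unsafe func readByte(at offset: Int) -> UInt8 {
        base.load(fromByteOffset: offset, as: UInt8.self)
    }

    unsafe func writeByte(_ value: UInt8, at offset: Int) {
        base.storeBytes(of: value, toByteOffset: offset, as: UInt8.self)
    }
}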