well.. thank you! that's a rare thing to hear these days
sure, sure. i just used a collective term for non-ascii characters )
do not take that too religiously. there are legitimate uses for macros, and the particular example of "let" vs "const auto" is a change that makes a real difference: the chance your fellow C++ developer uses the latter is slim, for the very reason of the ugliness of "const auto". for "Int" i do not even use a macro, it's a typedef.
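for concreteness, something along these lines (a sketch; the exact definitions here are an assumption, not necessarily the ones i use):
#define let const auto
typedef long long Int; // or whichever width you standardise on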
I programmed in C and C++ for many years. C(++) programmers rarely write “auto” or “const auto” because auto is the default storage class for variables.
Jeremy
That's C. In C++ auto makes the compiler deduce the type for you.
same as myself (20+ years in my case). but sorry, the excuse you use is... lame. you are obviously missing something here by not using auto / const auto in C++:
first, you are repeating yourself:
SomeType* x = expressionThatReturns_SomeType();
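versus the auto form, where the type is spelled out only once:
auto x = expressionThatReturns_SomeType(); // type of x deduced as SomeType*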
second, you are not using the benefit of making things constant (believe it or not, 90% or so of things are constant, really). compare this:
int x = 123; // constant in practice, but not declared const
.... // using x here
... // x might be mistakenly changed
... // the compiler has to treat "x" as a variable that can change, even though it doesn't
to this:
const auto x = 123; // 1. const, 2. type inferred
.... // using x here
... // x cannot accidentally change
... // the compiler can optimise further, knowing that x is constant
i have a sneaking suspicion that swift's:
if let x = expression {
...
}
has its deep roots in C++'s:
if (const auto x = expression) {
...
}
with let being effectively a define for "const auto", for brevity.
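(side note: C++17's init-statement form gets even closer to swift's optional unwrapping; a sketch, assuming expression yields something comparable to nullptr:)
if (const auto x = expression; x != nullptr) {
...
}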
second, you are not using the benefit of making things constant (believe it or not, 90% or so of
You misunderstand.
C(++) programmers use const all the time, of course.
I was commenting that auto is unnecessary, and rarely used.
But that’s apparently wrong for C++11 (which I haven’t used).
Jeremy
sort of. in my experience working with new C++ people of different levels and backgrounds, const is mostly used by them to denote constant class/struct variables (as opposed to constant pointers to variables) and non-mutating methods.
you could see them using this:
const SomeType* p = ...;
but rarely would you encounter them writing:
void foo() {
SomeType* const p = pointerToSomeTypeExpression();
....
// use p here. p can not be reassigned.
}
or even this:
void foo() {
const int p = 123;
....
// use p here. p can not be reassigned.
}
they really just don't bother. the typical response to my question "do you really need 'p' here to be a variable?" - first they do not understand what i mean, and when they finally do they ask back "so you want me to put const there? but why?". in part that's understandable, given that const was a relatively late bolt-on addition to the language and was not in its core from the very beginning. besides, it is so ugly, so verbose, and so easy to mistakenly put const on the wrong side of the *, making a pointer to a const object instead of a const pointer:
const SomeType* p; // pointer to const SomeType; p itself can be reassigned
SomeType* const p; // const pointer to mutable SomeType
const SomeType* const p; // const pointer to const SomeType
SomeType const* const p; // same as the previous line, "east const" spelling
even more rarely do they use type inference:
void foo() {
const auto p = someTypeExpression();
....
}
hardly any of them will use it with references:
auto& p = someExpression();
have you ever seen C++ developers making function arguments const?
void foo(const int x) {
...
}
i have yet to see one. those are few and far between.
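(an aside, for context rather than as an excuse: top-level const on a value parameter does not change the function's type, it only protects the body, so skipping it is invisible to callers. a sketch:)
void foo(int x); // declaration
void foo(const int x) { // same function; the const only matters inside the body
...
}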
in my team we quickly adopted the "swifty" version:
void foo() {
let p = pointerToSomeTypeExpression(); // p cannot be reassigned
var q = pointerToSomeTypeExpression(); // use var if you want to reassign q later
let& cr = someExpression(); // if you need a reference to a constant
var& mr = someExpression(); // if you need a reference to a variable
....
}
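(with let and var being defines for "const auto" and "auto" respectively - a guess at the exact mechanism - the block above is plain C++ underneath:)
void foo() {
const auto p = pointerToSomeTypeExpression();
auto q = pointerToSomeTypeExpression();
const auto& cr = someExpression();
auto& mr = someExpression();
....
}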
now, we are officially miles away from the topic; as long as it's fine with the OP, it's fine with me.
did you ever see C++ developers making function arguments const?
We used to avoid passing non-const reference parameters as a matter of style. It’s hard to know whether a value is changed by a function if it’s passed as a non-const reference (since it’s not clear at the calling site that the parameter is being passed by reference).
As a general rule, we used const reference parameters and non-const pointer parameters. We generally didn’t use const for value parameters.
Jeremy
agree. that you cannot see on the caller side that something passed by reference can change is a big drawback, and in these cases we pass by pointer instead. swift did it right.
my comment was mainly related to this:
foo(const int a, SomeType * const b);
rather than this:
foo(const SomeType& a, const SomeType* b);
the latter is of course widely used in C++. the former i have never seen.
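to illustrate the caller-side visibility point from above (names made up):
void mutateByRef(SomeType& s);
void mutateByPtr(SomeType* s);
mutateByRef(x); // nothing at the call site hints that x may change
mutateByPtr(&x); // the explicit & makes the possible mutation visible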
I really like && and ||. I have been using them in probably a dozen languages for about 30 years and have never had any trouble remembering which was which or what they did. What I really hate is new syntax for the same old operations in new languages just for the sake of being identifiably different to some other language with which the inventor had some grievance. When you have to work in many languages each day what can end up being sometimes minor differences in syntax result in mostly constant annoyance but occasionally subtle bugs that are later hard to trace. It might also be interesting to compare the performance of operator evaluation versus the method invocation that the OP found less objectionable than having to cope with all those horrible boolean logical operators.
thanks for sharing your views.
myself - i have no massive problem with && and || (maybe that has to do with the fact i have 20++ years of C++ experience). a few things i can live with:
II is very similar to || which in turn is very similar to ll (while all three are different).
had i a time machine, i'd go back and convince Kernighan/Ritchie to choose it the other way around (& for bools and && for bits).
the fact they've chosen two different sets of operators for bools and bits is because they had to: there was no bool type initially, everything was ints. had there been a proper bool type from the start, we'd probably have the same & / | used for bools and for ints, the same way we have the same + / - operators working for ints and floats. today we do have a proper bool type, and given that swift started almost from scratch, they could have chosen the same set of operators for bools and bits.
(i also think that having those "pointwise" .&= .|= &>>= etc is very ugly. but maybe that's just me.)
As I wrote above, I disagree with this position. Addition is the same operation, no matter the numeric type. A logical and and a bitwise and are not the same operation, no matter the types.
That you can, in some languages (most notably C), perform a bitwise and of two values and use the result as a boolean expression is a side-effect of said languages treating non-zero values as logically true. Languages, such as Swift, with a dedicated boolean type, do not necessarily allow this.
Sure? Bool is a number with a single bit (at least in theory), isn't it?
I am speaking semantically. As I wrote, in C, the semantic is that any non-zero value is considered true, so any non-void expression can be treated as a boolean condition. That does not make bitwise and special, nor does it make bitwise and the same as boolean and.
How a boolean value is represented at the machine level is purely a matter of implementation.
sorry, i fail to see why you say that. to me bool & bit operations are exactly the same, just a different number of bits used. if we had UInt1 - that would give exactly the same result at the bit level. whether it is 64 bits or 8 bits or 1 bit - it doesn't change the nature of the operation.
the fact that C freely converts between ints and bools is the only reason to choose different sets of operators. had this been disallowed:
if (1 & 0) { // error, wrong type: the condition is an int, not a bool
bool x = 1 & 0; // error, wrong type: int does not implicitly convert to bool
we would have the same & / | used for bools and bits.
You are thinking purely in terms of implementation, and not in terms of semantics. If C had chosen the value 156 to represent true and 211 for false, we'd not be having this discussion.
Note also that & and && have different precedence.
But that's not how it's defined, and I firmly believe that choice was motivated semantically.
Try thinking in terms of vectors here:
A Bool has a dimension of 1, a Char has 8 dimensions, and that's the only difference (regarding bitwise operations).
What C added on top of that is to automatically collapse a vector of bits to a single scalar.
plus && short circuits and & doesn't.
in swift the latter is a feature of the concrete func rather than of the operator itself, so you could have:
func & (a: Bool, b: @autoclosure () -> Bool) -> Bool { ... }
while still having:
func & (a: Int, b: Int) -> Int { ... }
and one would not conflict with another.
the difference in precedence is not fundamental; C (or swift) would not be a completely different language if the precedence of && and & were the same -- you'd just add or change a few parens as needed.
remove int-to-bool promotion from C/C++ and show us what harm it would cause to have & and && share the same name, whether you use 156/211 or a different pair of numbers for true/false.
Eh? The point of precedence is to avoid parentheses in most common scenarios.
There’s an element of design when Integer Operator > Comparison Operator > Boolean Operator.
It is important to the point that we (Swift) introduced .& because & has the wrong precedence (and && short-circuits).
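(to make the precedence point concrete, the classic C/C++ pitfall - an illustration, not from the thread: == binds tighter than &, so the first line below does not do what it looks like)
if (x & mask == 1) { ... } // parses as x & (mask == 1)
if ((x & mask) == 1) { ... } // what was actually intended
if (x == 1 && y == 2) { ... } // && sits below ==, so no parens needed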