Proposal: Better addition semantics #7416
Comments
@SpexGuy Are you suggesting we allow implicit casting? I don't understand the advantage over being forced to explicitly annotate type conversions. You can also do type conversion with an overflow check.
I think you have misunderstood my examples. The compiler is not allowed to propagate literals through the …
@SpexGuy sorry, my bad. I was not reading the examples correctly. Rereading... And I think this is an interesting idea. The non-biased versions make the rules somewhat complex and opaque, though. I would offer a somewhat more arbitrary solution to this: make the biased version the leftmost type.
The goal of the biases is to catch cases where you are mixing unsigned and signed inputs and expecting the compiler to infer a result type. I think that should remain a compile error, and require you to explicitly specify the result type you want. The simple way to explain them is this: if all inputs to a tree of … Using the leftmost type is nice in some ways because it simplifies the definition of …
An even simpler idea: make conversion implicit but UB on lossy conversion, and default to the signed version. Basically …
A variant of this proposal is – I think – simple implicit bit widening and narrowing. I have described this elsewhere. For example:

```zig
u16var = some_i16 + some_u16;
// =>
u16var = @intCast(u16, @intCast(i32, some_i16) + @intCast(i32, some_u16));
```

One can also choose to simply cast to i17. That might be more appropriate for Zig. The implicit … Having a small hack that only works in certain cases just adds corners to the language and makes the cost much higher.

There is precedent for promoting to a common, wider type: C# uses this strategy... i32 + u32 results in an i64. However, this does not extend to i64 + u64... there is no corresponding i128 promotion. In Zig this is not a limiting factor, though. However, it has some other consequences, for example if we now introduce a special implicit (trapped) narrowing from … For example, if this works:

```zig
u16var = some_i16 + some_u16;
```

– by inserting an explicit trap on the result being negative – then why couldn't this work:

```zig
u16var = some_i16 + 0;
```

The traps that are built into Zig for addition and subtraction mean that something like this:

```zig
// u16var = some_u16 + some_u16 behaves as:
u16var = @intCast(u16, @intCast(u17, some_u16) + @intCast(u17, some_u16));
```

In the above code the trap does not occur in the addition but in the final truncation instead. Still, one could argue that the Zig model of trapping addition could also be seen as stepwise implicit trapping truncation after addition using infinitely ranged integers. Viewed from that angle, I am not advocating one method or the other; I just want to highlight this. I also want to note that my proposal #7967 touches on the subject.
I would like to suggest that, at minimum, Zig disallows implicit widening of complex expressions, where "complex" is any expression that has more than one potential way of doing the widening. In practice that's most binary expressions, but not unary ones. To illustrate the problem: …

As we see, the first will not overflow on any sum that fits in a u64, but the latter will. It also turns out that the main reason you want implicit widening is in situations with simple expressions, e.g. … The check for valid widening is fairly straightforward to implement: recursively search the expression to be implicitly widened, and if it isn't a "simple" expression type, disallow it. This avoids a whole class of bugs due to unexpected trapping/UB that could be found at compile time.
As I understand it, the binary operators `+` and `-` currently behave as follows: …

Peer type resolution only supports certain combinations though, so there are valid computations that are not expressible with these rules. For example: say I have a u32 with value `0xFFFFFFFF`, and I want to add to that the i32 `-4`, and store the result in a u32. This addition is valid, but there is currently no way to do this without using `+%` and completely giving up overflow checks. To make this compile with overflow checks, you either need to cast the u32 to an i32 (trips the overflow check), cast the i32 to a u32 (trips the overflow check), or use i33 (an unnecessary perf hit in release). This is a problem that needs to be fixed.

Examples of things that I think should and shouldn't work: …
To solve this, I'll introduce the concept of a "hybrid integer", which is simultaneously signed and unsigned. Before I do though, I want to make it clear that this type is not part of the type system. It's an internal type used only by the compiler. Hybrid integers have a bit width and a "bias", which can be signed, unsigned, or indeterminate. If the type of a hybrid integer is observed through the type system, it is an implicit cast to the bias type. If this cast overflows, it causes checked UB. If the bias is indeterminate, it causes a compile error indicating that peer type resolution cannot decide between unsigned and signed.
Without loss of generality, I'll use 16 bit integers throughout this proposal. Replace 16 with any other integer size and it will still work.
The range of a hybrid integer h16 extends from the minimum value of i16 to the maximum value of u16. Attempting to encode a value outside this range is checked UB. h16s are the result of peer type resolution between integers, as follows: …
The `+` and `-` operators are the only operators that are aware of hybrid integers. They do the following type coercions: …

When the two operands have different bit widths, the smaller one is promoted to the larger bit width, preserving its original signedness. If the smaller operand is a hybrid integer, it is first cast to its bias type, then extended.
Hybrid integers may coerce to signed or unsigned types of the same or larger bit width, regardless of bias. If the result does not fit, this is checked UB. If the type of a hybrid int is inferred (for example as an input to multiplication, or with `var x =`), it yields the bias type. If the bias type is indeterminate, this causes a compile error stating that peer type resolution does not work between signed and unsigned integers.

The use of UB in this proposal has been carefully designed so that hybrid integers do not need extra bits in release-fast to distinguish between values > max(i16) or < 0. So these semantics give all the safety of using `i17` to do your math, without the runtime cost of that extra bit in release-fast.