For a project I am currently using the Swift compiler front end to generate LLVM IR. I need to analyze the IR to find run-time dependencies between variable reads/writes, with the end goal of finding parallelism.
For this it is important that I can generate unoptimized LLVM IR in which all load and store instructions are present. This works well with clang using the -O0 flag.
However, with swiftc I run into the following issue on this simple example:
func returnInputSum(test: Int) -> Int {
    var a = 5
    var b = 10
    var c = a + b
    return c
}
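For reference, I generate the IR with a command along these lines (the file name main.swift is just my choice and not significant):

```shell
# Emit textual LLVM IR without optimization
swiftc -Onone -emit-ir main.swift -o main.ll
```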
With swiftc -Onone this becomes (debug instructions etc. removed):
entry:
%a = alloca i64, align 8
%b = alloca i64, align 8
%c = alloca i64, align 8
store i64 5, i64* %a, align 8
store i64 10, i64* %b, align 8
store i64 15, i64* %c, align 8
ret i64 15
So instead of loading a and b, inserting an "add" instruction, and storing the result in c, the compiler seems to evaluate 5 + 10 directly and store the 15 in c. The return then does not even return c but an immediate value. This is a problem for me because I want to see all load/store instructions.
When I use clang with C/C++ code it works just fine, and there I get what I expect:
%a = alloca i32, align 4
%b = alloca i32, align 4
%c = alloca i32, align 4
store i32 5, i32* %a, align 4
store i32 10, i32* %b, align 4
%0 = load i32, i32* %b, align 4
%1 = load i32, i32* %a, align 4
%add = add nsw i32 %0, %1
store i32 %add, i32* %c, align 4
%2 = load i32, i32* %c, align 4
ret i32 %2
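For completeness, this is the kind of command I use on the C side (test.c is a placeholder name):

```shell
# At -O0, clang keeps all loads and stores; -S -emit-llvm gives textual IR
clang -O0 -S -emit-llvm test.c -o test.ll
```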
Is there any way to make the compiler perform no optimization at all on the LLVM IR? Or am I misunderstanding something important here? I know that before generating LLVM IR there is a SIL stage with some optimizations, but I was under the assumption that with the -Onone flag these would also be turned off.
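In case it helps narrow things down, my understanding (which may be off) is that the intermediate SIL stages can be dumped like this, to see at which stage the folding happens:

```shell
# Raw SIL, before the mandatory passes run
swiftc -Onone -emit-silgen main.swift -o main.rawsil
# Canonical SIL, after the mandatory passes
swiftc -Onone -emit-sil main.swift -o main.sil
```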
Thank you!