Our data model is getting larger and larger, and now our app is not able to run on device anymore.
After doing a lot of debugging, we think the reason is that the stack memory footprint of our MainViewState struct is getting too high.
(lldb) p MemoryLayout<MainViewState>.size
(Int) $R0 = 38704
So the first question is: has anyone seen the same issue, and how did you solve the increasing size of the state structs?
Ok, so I started testing: what if I move some of the data in the structs into classes? The memory footprint of the struct decreases, but I am not sure about the consequences. I understand that the struct will then only copy the pointer to the data object, not the object itself.
The second question is: is it OK to point to objects inside the state structs, and what are the consequences?
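To make the consequence concrete, here is a minimal sketch (the type and property names are just illustrative, not from our actual app). A class reference inside a struct is shared across copies of the struct, so mutating through one copy is visible through the other, which breaks the value semantics TCA relies on:

final class DashboardData {
    var items: [Int] = []
}

struct MainViewState {
    // Only the 8-byte reference lives in the struct's inline storage.
    var dashboard = DashboardData()
}

var a = MainViewState()
var b = a                // copies the pointer, not the object
b.dashboard.items.append(1)
print(a.dashboard.items) // [1] — both copies see the mutation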
We have tested the app with the latest TCA version, 0.14.0, and it performed better than with 0.8.0. The newer version works with larger MainViewState than 0.8.0. Would love to hear if someone has experience with this issue, or tips on how to proceed.
Latest development from us; we found a way to decrease the size by adopting copy-on-write by using arrays.
In MainViewState, we can map i.e. DashboardState this way:
var dashboardArr: [DashboardState]

var dashboardState: DashboardState {
    get { dashboardArr[0] }
    set { dashboardArr[0] = newValue }
}
This reduces the dashboard's impact on the struct from its actual size to 8 bytes (the array's buffer pointer). So it seems we can apply this strategy wherever a large struct inflates the state too much. I am really interested to hear other solutions and experiences.
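You can verify the effect with MemoryLayout. Here is a small sketch under assumed sizes (LargeState is a stand-in for something like DashboardState; an 8-element Int tuple occupies 64 bytes inline):

// Hypothetical large state, stored inline vs. behind an Array's heap buffer.
struct LargeState {
    var values = (0, 0, 0, 0, 0, 0, 0, 0) // 64 bytes inline
}

struct Inline {
    var state = LargeState()
}

struct Boxed {
    private var storage = [LargeState()]
    var state: LargeState {
        get { storage[0] }
        set { storage[0] = newValue }
    }
}

print(MemoryLayout<Inline>.size) // 64
print(MemoryLayout<Boxed>.size)  // 8 — just the array's buffer pointer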
We've encountered similar problems in a project we are working on, but honestly they seemed more like Swift bugs than Swift limitations. We used to have the following data types to represent a type-safe 3 element array:
struct Three<A> { var first, second, third: A }
typealias Puzzle<A> = Three<Three<Three<A>>>
Usage of this type when built for release would crash. The types we plugged in for the generic parameter A were not very big either, consisting of only a few fields of simple data types: ints, bools, enums without associated values, etc.
We ended up refactoring in a way similar to what you have:
public struct Three<Element> {
    public var first: Element {
        get { self.rawValue[0] }
        set { self.rawValue[0] = newValue }
    }

    public var second: Element {
        get { self.rawValue[1] }
        set { self.rawValue[1] = newValue }
    }

    public var third: Element {
        get { self.rawValue[2] }
        set { self.rawValue[2] = newValue }
    }

    private var rawValue: [Element]
}
That fixed the crash.
So I don't think it's necessarily the size of the data type (though that could be a factor); it also has something to do with how Swift compiles certain data types.
Instead of using arrays, you could implement your own copy-on-write (CoW) type, as the standard library does for Array, Dictionary, Set, etc.
A good introduction to this topic is in this video:
At 9:30 the presenter starts implementing a copy-on-write type. The talk is actually about performance, but the technique can be applied to improve memory usage as well.
Short summary:
Move all stored properties of your struct to a new class called Storage.
Your struct now only stores an instance of this new class.
Add computed properties to your struct for each property, which get/set the value on the class instance.
Before setting the value in your setter, check whether the class instance has a reference count of 1 using isKnownUniquelyReferenced(_:).
If it is not uniquely referenced, you need to copy your storage before setting the value.
That’s it.
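The steps above can be sketched like this (a minimal sketch; the type and property names are illustrative, not from the talk):

struct DashboardState {
    // Step 1: all stored properties live in a private class.
    private final class Storage {
        var title: String
        var counters: [Int]
        init(title: String, counters: [Int]) {
            self.title = title
            self.counters = counters
        }
        func copy() -> Storage {
            Storage(title: title, counters: counters)
        }
    }

    // Step 2: the struct only stores a reference (8 bytes inline).
    private var storage = Storage(title: "", counters: [])

    // Steps 4–5: copy the storage before mutating if it is shared.
    private mutating func ensureUnique() {
        if !isKnownUniquelyReferenced(&storage) {
            storage = storage.copy()
        }
    }

    // Step 3: computed properties forward to the storage.
    var title: String {
        get { storage.title }
        set { ensureUnique(); storage.title = newValue }
    }

    var counters: [Int] {
        get { storage.counters }
        set { ensureUnique(); storage.counters = newValue }
    }
}

Copies of DashboardState now share one heap allocation until one of them is mutated, at which point only the mutated copy pays for a fresh Storage instance.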
Another solution would be indirect enums, but that is only useful if you model your state with enums with associated values rather than structs.
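For completeness, a tiny sketch of that option: marking an enum (or a case) indirect makes the compiler box the associated values on the heap, so the enum's inline size stays a single pointer regardless of the payload:

// Without `indirect` this recursive definition would not even compile,
// and a large payload would otherwise inflate the enum's inline size.
indirect enum Tree {
    case leaf(Int)
    case node(Tree, Tree)
}

let t = Tree.node(.leaf(1), .leaf(2))
print(MemoryLayout<Tree>.size) // 8 — just the boxed payload's pointer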
Making the enum indirect still results in an EXC_BAD_ACCESS, but the stack trace is different; it might be related to CasePaths instead (it complains inside of withProjectedPayload).
Also, FWIW, we rely heavily on the copy-on-write property wrapper above to work around the struct issues.
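For readers who haven't seen the pattern packaged that way, here is one way to write such a wrapper (a sketch, not the exact wrapper used above):

// Boxes its value on the heap and copies the box only when a shared
// instance is mutated, so structs using it stay 8 bytes per property.
@propertyWrapper
struct CopyOnWrite<Value> {
    private final class Box {
        var value: Value
        init(_ value: Value) { self.value = value }
    }

    private var box: Box

    init(wrappedValue: Value) { box = Box(wrappedValue) }

    var wrappedValue: Value {
        get { box.value }
        set {
            if isKnownUniquelyReferenced(&box) {
                box.value = newValue
            } else {
                box = Box(newValue) // shared: allocate a fresh box
            }
        }
    }
}

Usage would look like `@CopyOnWrite var dashboard: DashboardState`. One caveat: conformances such as Equatable on the enclosing struct need to be written against wrappedValue, since the synthesized conformance would compare the wrapper's stored reference instead.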
Technically, the main stack size could be very large, say, half of installed RAM. It wouldn't be a waste, since only a fraction of it is wired upfront (e.g. 100 KB), with more stack memory wired in when and if needed. I don't know why it isn't done this way; then again, deep recursion on the main stack is probably not the best idea either. These days most code runs on background threads/queues anyway, so the ability to control the main thread's stack size would hardly achieve anything material.