One issue I’ve noticed is that each bazel build invocation (e.g., when responding to buildTarget/prepare among other requests) can take on average ~30 seconds to 1 minute for large iOS projects with millions of lines of Swift code.
This means that when the buildTarget/prepare request (or any other request) is sent to the BSP, jump to definition and autocomplete stop working completely until the BSP responds a minute later.
AFAIK, SourceKit-LSP assumes that these requests to the BSP will complete within a couple hundred milliseconds rather than in the 30-second-to-1-minute range.
I’m wondering what’s the best way to overcome this bottleneck?
My thought process was that it would be ideal if SemanticIndexManager could continue to serve requests from the VSCode client to the LSP (e.g., jump to definition, autocomplete, etc.) while waiting for the BSP in parallel. Yes, this would mean the LSP might serve stale diagnostics for up to ~30 seconds to 1 minute, but I feel like in my case this would be a much better user experience.
Does anyone have any advice or ideas to explore here? Would the only way to overcome this be to fork SourceKit-LSP? It would be ideal if there is a way to overcome this without forking the repo.
Thanks for raising this, but I don’t think your analysis is quite correct. SemanticIndexManager is an actor, and while actors don’t allow concurrent execution of two code blocks, an actor is able to handle new function invocations while it awaits the result of a call, which is the case when it waits for preparation – and target preparation is very much expected to take seconds or even minutes.
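To illustrate actor reentrancy, here is a minimal, self-contained sketch (the `IndexManager` type and its methods are hypothetical stand-ins, not SourceKit-LSP's actual API): while the slow `prepare()` call is suspended at an `await`, the same actor can still answer a cheap query.

```swift
// Hypothetical stand-in for an actor that does slow target preparation.
actor IndexManager {
    private var prepared = false

    // Simulates a long-running preparation (e.g. a bazel build).
    func prepare() async {
        // While suspended here, the actor is free to run other methods.
        try? await Task.sleep(nanoseconds: 200_000_000)
        prepared = true
    }

    // A cheap query that keeps working during preparation.
    func isPrepared() -> Bool { prepared }
}

let manager = IndexManager()

// Kick off the slow preparation in the background…
let prep = Task { await manager.prepare() }

// …and immediately issue a query; it is answered while prepare()
// is still suspended at its await, so it reports `false`.
print(await manager.isPrepared())

await prep.value
print(await manager.isPrepared())
```

The key point is that an actor serializes *execution*, not *requests*: each `await` inside an actor method is a suspension point at which other calls may interleave, which is why a minutes-long preparation does not, by itself, block unrelated queries on the same actor.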
You are right; I apologize for being mistaken in my post.
I just submitted an issue here and attached the logs reproducing the issue. Unfortunately, I wasn’t able to include the extended logging for privacy reasons.
Hopefully the logs can help us figure out the right direction to look in. It seems likely to be an issue with the BSP implementation or the assumptions being made?