[CodeCompletion] Return fewer CompletionItems to improve perf of Web IDE?

Hi everyone,

First, many thanks for the great tooling around Swift!

We're working on an online Swift IDE (https://bananaide.com/) built with SourceKit-LSP + Monaco Editor + monaco-languageclient. One issue we currently face is that SourceKit-LSP returns too many CompletionItems in some cases; as a result, the JSON response to a single "textDocument/completion" request can be up to 10MB, which makes the web-based IDE extremely slow. Could someone suggest directions for addressing this performance issue in the short/mid term, to make auto-complete more lightweight for a web IDE?
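As a short-term, client-side stopgap, one option is to truncate the completion list before it reaches Monaco: filter by the typed prefix and cap the count. The sketch below is illustrative only — the types and function names are mine, not part of monaco-languageclient's API:

```typescript
// Minimal sketch of a client-side mitigation: filter completion items by the
// typed prefix, rank them, and cap the count before handing them to the
// editor. All names here are illustrative, not a real monaco-languageclient API.

interface CompletionItem {
  label: string;
  sortText?: string; // optional server-provided sort key, per the LSP spec
}

// Keep only the items matching the typed prefix, capped at `limit`.
function capCompletions(
  items: CompletionItem[],
  prefix: string,
  limit: number
): CompletionItem[] {
  const lower = prefix.toLowerCase();
  return items
    .filter((item) => item.label.toLowerCase().startsWith(lower))
    .sort((a, b) =>
      (a.sortText ?? a.label).localeCompare(b.sortText ?? b.label)
    )
    .slice(0, limit);
}

// Example: a huge list shrinks to just the items matching "prin".
const huge: CompletionItem[] = [
  { label: "print(_:separator:terminator:)" },
  { label: "printf" },
  { label: "precondition(_:_:file:line:)" },
];
const capped = capCompletions(huge, "prin", 200);
```

Note this only helps rendering; the multi-megabyte response still has to be serialized and transferred, so it doesn't replace a server-side fix.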

Note that we already noticed the Apple team is aware of the perf issue around global completions (as mentioned in Massive CPU usage and can't even do basic code completion), but we wonder whether the fix will focus only on the local experience (such as using SourceKit-LSP with VSCode), while the CompletionItems payload could still be too large for web apps.

In case it helps, I read the Swift source code a bit and wonder whether the issue is related to this FIXME: https://github.com/apple/swift/blob/5be1585d371c377a0c0fb761dfbd969257759842/lib/ClangImporter/ClangImporter.cpp#L2792

Suggestions on this issue, or any general advice on building a Swift online playground, would be really appreciated.


I think improvements here should help both local and remote use cases. My intention is to have filtering happen server-side, along with capping the number of results, to avoid sending (or even serializing!) megabytes of JSON data. We can leave the exact number of results configurable in case you want to tune it specifically, but I imagine we would pick a default somewhere between 100 and 1000. When I experimented with serialization performance a while ago, I found 200-300 to be a good choice.
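The filter-then-cap idea described above can be sketched roughly as follows. The real work would live inside SourceKit-LSP, not TypeScript; the scoring scheme and names here are hypothetical, just to show the shape of it:

```typescript
// Hypothetical sketch of server-side filtering + capping: score each
// candidate against the user's filter text, drop non-matches, and keep
// only the best `maxResults`. A case-sensitive prefix match outranks a
// case-insensitive one; real ranking would be far more sophisticated.

function scoreCandidate(label: string, filterText: string): number {
  if (label.startsWith(filterText)) return 2; // exact-case prefix match
  if (label.toLowerCase().startsWith(filterText.toLowerCase())) return 1;
  return 0; // no match: filtered out entirely, never serialized
}

function filterAndCap(
  labels: string[],
  filterText: string,
  maxResults: number // e.g. 200-300, per the serialization experiments above
): string[] {
  return labels
    .map((label) => ({ label, score: scoreCandidate(label, filterText) }))
    .filter((c) => c.score > 0)
    .sort((a, b) => b.score - a.score || a.label.localeCompare(b.label))
    .slice(0, maxResults)
    .map((c) => c.label);
}
```

The key point is that non-matching items are discarded before serialization, so the response size scales with `maxResults` rather than with everything the module imports make visible.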

Is code completion actually returning results that are not visible, or is this just reflecting the reality that importing a module in Swift makes more things visible than it does in Clang? If it's the latter, this is not something we can fix in code completion.

Thanks Ben!

I think it's likely the latter. For example, once "import Foundation" is added at the top, auto-completion for "prin" (I was just expecting "print()") in VSCode takes about 10 seconds on my Mac, and the completion list eventually shown becomes very long.

That would be great! I think today many code-completion configuration options are parsed here, but the parsed results are largely ignored. It would be great if code-completion behavior gradually became configurable.
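One way such configurability could surface is via LSP `initializationOptions` sent by the client at startup. The option names below are my guess at what a cap setting might look like, not a documented sourcekit-lsp API — check the sourcekit-lsp docs for whatever actually lands:

```json
{
  "initializationOptions": {
    "completion": {
      "serverSideFiltering": true,
      "maxResults": 200
    }
  }
}
```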

FYI: Code-completion performance improvement via server-side filtering
