Hello Swift community,
I am applying for Google Summer of Code 2026 with the Swift project and would really appreciate feedback on my proposal: SwiftGestureKit — Dual-Hand Vision Recognition & Swift Developer Experience Improvements.
While building HastaAkshar, a real-time Bharatanatyam mudra recognition app using the Vision framework, I ran into an underdocumented behavior of VNDetectHumanHandPoseRequest: with the default maximumHandCount, only one observation was processed reliably in my pipeline, which caused false positives for dual-hand gestures. Debugging this took significant time, and I could not find a canonical Swift resource explaining the issue.
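For context, here is a minimal sketch of the workaround I ended up with (simplified from my app; the confidence threshold of 0.3 is an arbitrary value I chose, not an Apple recommendation):

```swift
import Vision

// Explicitly request two hands; relying on the default led to only
// one observation being processed reliably in my pipeline.
let request = VNDetectHumanHandPoseRequest()
request.maximumHandCount = 2

func detectHands(in pixelBuffer: CVPixelBuffer) throws -> [VNHumanHandPoseObservation] {
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try handler.perform([request])
    // Filter out low-confidence observations before gesture scoring,
    // otherwise a spurious second "hand" can trigger dual-hand gestures.
    return (request.results ?? []).filter { $0.confidence > 0.3 }
}
```

SwiftGestureKit would wrap this kind of setup behind a structured API so developers don't have to rediscover it.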
I also ran into UIKit migration confusion around UIButton.Configuration in iOS 15+, where direct color assignments are silently overridden once a configuration is applied.
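As a small illustration of the pitfall (a sketch, not a full migration guide):

```swift
import UIKit

let button = UIButton(type: .system)
var config = UIButton.Configuration.filled()

// Pitfall: a direct assignment like the following is silently
// overwritten the next time the configuration updates the button.
// button.backgroundColor = .systemRed

// Supported approach on iOS 15+: set colors on the configuration itself.
config.baseBackgroundColor = .systemRed
config.baseForegroundColor = .white
button.configuration = config
```

Documenting traps like this with DocC examples is part of what I'd like the project to cover.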
Based on these experiences, my proposal focuses on:
• SwiftGestureKit — A Swift package providing a structured dual-hand Vision pipeline, confidence-weighted gesture scoring, and Swift Concurrency support
• DocC-based documentation and tutorials demonstrating correct multi-hand gesture handling
• Example-driven developer guidance addressing real-world pitfalls discovered during app development
I would really appreciate feedback on:
• Whether this scope is appropriate for Swift GSoC
• Possible alignment with Swift documentation tooling (DocC, examples, tutorials)
• Suggested repositories where parts of this work could integrate with existing Swift efforts
• Any recommendations on narrowing or refining the proposal
Thank you for your time and guidance.