Watching Kubernetes resources with SwiftkubeClient

Hello everyone,

I am just starting out with Swift, though I do have experience with other programming languages. The issue I am running into seems to be concurrency-related.

I created a package like this: `swift package init --name MyCLI --type executable`

Added the SwiftkubeClient dependency in the Package.swift file:

// swift-tools-version: 6.2
// The swift-tools-version declares the minimum version of Swift required to build this package.

import PackageDescription

let package = Package(
    name: "MyCLI",
    platforms: [
        .macOS(.v13),
    ],
    dependencies: [
        .package(url: "https://github.com/swiftkube/client.git", from: "0.23.0"),
    ],
    targets: [
        .executableTarget(
            name: "MyCLI",
            dependencies: [
                .product(name: "SwiftkubeClient", package: "client"),
            ],
        ),
    ]
)

Starting simple, in the MyCLI.swift file:

import SwiftkubeClient

enum ApplicationError: Error {
    case configError(String)
}

@main
struct MyCLI {
    static func main() async throws {
        guard let client = KubernetesClient() else {
            throw ApplicationError.configError("Cannot create k8s client")
        }
        defer {
            print("closing client")
            try? client.syncShutdown()
        }

        let nodeList = try await client.nodes.list()
        print(nodeList.items.count)
    }
}

This works fine, and I get the expected output:

$ swift run
Building for debugging...
[7/7] Applying MyCLI
Build of product 'MyCLI' complete! (2.24s)
33
closing client

Changing things up:

import SwiftkubeClient

enum ApplicationError: Error {
    case configError(String)
}

@main
struct MyCLI {
    static func main() async throws {
        guard let client = KubernetesClient() else {
            throw ApplicationError.configError("Cannot create k8s client")
        }
        defer {
            print("closing client")
            try? client.syncShutdown()
        }

        let task: SwiftkubeClientTask = try await client.pods.watch(
            in: .allNamespaces,
            retryStrategy: RetryStrategy(
                policy: .maxAttempts(20),
                backoff: .exponential(maximumDelay: 60, multiplier: 2.0),
                initialDelay: 5.0,
                jitter: 0.2
            )
        )
        defer {
            print("stopping task")
            Task { await task.cancel() }
        }
        
        let stream = await task.start()
        for try await event in stream {
            print(event.type)
        }
    }
}

The console output shows that nothing is happening:

$ swift run
Building for debugging...
[7/7] Applying MyCLI
Build of product 'MyCLI' complete! (2.25s)

I thought this might be expected, since nothing was actually changing in the Kubernetes cluster. So while the program was running, I started creating and modifying pods, but I still saw no output from the program.

The docs for Swiftkube Client are [here](https://github.com/swiftkube/client).

Any help and guidance is appreciated.
Thank you

pinging @iabudiab

@mkyrilov Hey there, your code looks fine, so I can't say what's wrong without actually trying it. I'll take a look at this once I'm back from work.

You can create an issue in GitHub if you like. Just copy and paste your post there.

@t089 thanks for the ping :+1:


@mkyrilov Hey :waving_hand:

I’ve just tested your code on a fresh cluster and it works as intended.

PS: I've changed the print statement to `print("\(event.type): \(String(describing: event.resource.metadata?.name))")` in order to log the pod names.

You should be seeing logs similar to these:

$ swift run
Building for debugging...
[7/7] Applying MyCLI
Build of product 'MyCLI' complete! (5.56s)
added: Optional("patch-demo-5ff5686457-5bnjz")
added: Optional("patch-demo-5ff5686457-klbxr")
added: Optional("swiftkube-dash-845dfdd9c4-b79xr")
added: Optional("coredns-668d6bf9bc-jbn8v")
added: Optional("etcd-minikube")
added: Optional("kube-apiserver-minikube")
added: Optional("kube-controller-manager-minikube")
added: Optional("kube-proxy-mvdr9")
added: Optional("kube-scheduler-minikube")
added: Optional("storage-provisioner")
added: Optional("nginx-5869d7778c-9d9j7")
added: Optional("swiftkube-dash-845dfdd9c4-v5rhz")
modified: Optional("swiftkube-dash-845dfdd9c4-v5rhz")
modified: Optional("swiftkube-dash-845dfdd9c4-v5rhz")
modified: Optional("swiftkube-dash-845dfdd9c4-v5rhz")
modified: Optional("swiftkube-dash-845dfdd9c4-v5rhz")
modified: Optional("swiftkube-dash-845dfdd9c4-v5rhz")
modified: Optional("swiftkube-dash-845dfdd9c4-v5rhz")
deleted: Optional("swiftkube-dash-845dfdd9c4-v5rhz")

Could it be that the client is connecting to a different cluster than the one you've used to modify the pods?

If you don't specify the config explicitly, then `KubernetesClient()` uses the first context it finds, as described in the SwiftkubeClient docs under "Configuring a Client".
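
For example, the context can be pinned explicitly by loading the kubeconfig and selecting it by name. A minimal sketch (the context name is a placeholder):

import Logging
import SwiftkubeClient

let logger = Logger(label: "com.example.MyCLI")

// Load the default kubeconfig and select a context by name instead of
// relying on the first one found. "cluster-B" is a placeholder.
guard let kubeConfig = try KubeConfig.fromDefaultLocalConfig(),
      let config = try KubernetesClientConfig.from(
          kubeConfig: kubeConfig,
          contextName: "cluster-B",
          logger: logger
      )
else {
    fatalError("Cannot build a client config for the given context")
}

let client = KubernetesClient(config: config)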

Hello @iabudiab,
Thank you for your response and for taking the time to run the test. I ran my code again and it just worked as expected.

I do have 2 clusters in my kubeconfig file; it looks something like this (some fields are omitted):

$ kubectl config view                                                                               
apiVersion: v1
clusters:
- cluster:
  name: cluster-A
- cluster:
  name: cluster-B
contexts:
- context:
    cluster: cluster-A
    user: cluster-A
  name: cluster-A
- context:
    cluster: cluster-B
    user: cluster-B
  name: cluster-B
current-context: cluster-B
kind: Config
users:
- name: cluster-A
- name: cluster-B
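
One could also sanity-check which contexts the client parses by dumping the loaded kubeconfig. A sketch, assuming the KubeConfig model mirrors the YAML's current-context and contexts fields (I haven't verified the exact property names):

import SwiftkubeClient

// Sketch: print what gets parsed from the default kubeconfig. Assumes
// KubeConfig exposes `currentContext` and `contexts` properties that
// mirror the YAML fields (unverified).
if let kubeConfig = try KubeConfig.fromDefaultLocalConfig() {
    print("current-context: \(kubeConfig.currentContext ?? "<none>")")
    for namedContext in kubeConfig.contexts ?? [] {
        print("context: \(namedContext.name)")
    }
}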

I changed the code a little bit: I now also print the number of namespaces in the cluster. I did this to see which cluster the program is connecting to (my cluster-A has 15 namespaces and cluster-B has 22):

import SwiftkubeClient

enum ApplicationError: Error {
    case configError(String)
}

@main
struct MyCLI {
    static func main() async throws {
        guard let client = KubernetesClient() else {
            throw ApplicationError.configError("Cannot create k8s client")
        }
        defer {
            print("closing client")
            try? client.syncShutdown()
        }

        let ns = try await client.namespaces.list()
        print(ns.items.count)

        let task: SwiftkubeClientTask = try await client.pods.watch(
            in: .allNamespaces,
            retryStrategy: RetryStrategy(
                policy: .maxAttempts(20),
                backoff: .exponential(maximumDelay: 60, multiplier: 2.0),
                initialDelay: 5.0,
                jitter: 0.2
            )
        )
        defer {
            print("stopping task")
            Task { await task.cancel() }
        }
        
        let stream = await task.start()
        for try await event in stream {
            let eventType = event.type
            let name = event.resource.metadata?.name ?? "<unknown>"
            print("\(eventType) - \(name)")
        }
    }
}

If current-context is set to cluster-B, the program works as expected - I see the expected number of namespaces and output in the event stream. (The cluster-B context is listed second in the kubeconfig file.)

However, if I change contexts with kubectl config use-context cluster-A, the code doesn't work fully: I do see the correct number of namespaces printed, but no events from the stream, even if I modify pods in that cluster.

I also tried to explicitly specify the config, like this:

import SwiftkubeClient
import Logging

enum ApplicationError: Error {
    case configError(String)
}

@main
struct MyCLI {
    static let logger = Logger(label: String(describing: Self.self))

    static func main() async throws {
        guard let kubeConfig = try KubeConfig.fromDefaultLocalConfig() else {
            throw ApplicationError.configError("Cannot load kubeconfig")
        }

        guard let config = try KubernetesClientConfig.from(
            kubeConfig: kubeConfig,
            contextName: "cluster-A",
            logger: logger
        ) else {
            throw ApplicationError.configError("Cannot create config")
        }
        let client = KubernetesClient(config: config)

        defer {
            print("closing client")
            try? client.syncShutdown()
        }

        let ns = try await client.namespaces.list()
        print(ns.items.count)

        let task: SwiftkubeClientTask = try await client.pods.watch(
            in: .allNamespaces,
            retryStrategy: RetryStrategy(
                policy: .maxAttempts(20),
                backoff: .exponential(maximumDelay: 60, multiplier: 2.0),
                initialDelay: 5.0,
                jitter: 0.2
            )
        )
        defer {
            print("stopping task")
            Task { await task.cancel() }
        }
        
        let stream = await task.start()
        for try await event in stream {
            let eventType = event.type
            let name = event.resource.metadata?.name ?? "<unknown>"
            print("\(eventType) - \(name)")
        }
    }
}

If I set contextName: "cluster-A" the result is the same - correct number of namespaces printed, but nothing in the event stream.
If I use contextName: "cluster-B" then the code works as expected.

The last piece of context that might be useful is that both of these clusters are GKE clusters.

@mkyrilov Hey there :waving_hand: Happy to help.

Could you please add a logger to the client, so we can see whether any network-related issues are happening:

// Package.swift
...
    dependencies: [
        .package(url: "https://github.com/swiftkube/client.git", from: "0.23.0"),
        .package(url: "https://github.com/apple/swift-log", from: "1.6.0")
    ]
    // Also add the Logging product to the executable target's dependencies
    // so that `import Logging` resolves:
    //     .product(name: "Logging", package: "swift-log")
...
// main.swift
var logger = Logger(label: "com.example.MyCLI")
logger.logLevel = .debug
guard let client = KubernetesClient(logger: logger) else { ... }
...

Hello @iabudiab. Thank you for your help.

I've added the logger; the code now looks like this:

import SwiftkubeClient
import Logging

enum ApplicationError: Error {
    case configError(String)
}

@main
struct MyCLI {
    static func main() async throws {
        var logger = Logger(label: String(describing: Self.self))
        logger.logLevel = .debug

        guard let client = KubernetesClient(logger: logger) else {
            throw ApplicationError.configError("Cannot create k8s client")
        }
        defer {
            print("closing client")
            try? client.syncShutdown()
        }

        let ns = try await client.namespaces.list()
        print("namespace count: \(ns.items.count)")

        let task: SwiftkubeClientTask = try await client.pods.watch(
            in: .allNamespaces,
            retryStrategy: RetryStrategy(
                policy: .maxAttempts(20),
                backoff: .exponential(maximumDelay: 60, multiplier: 2.0),
                initialDelay: 5.0,
                jitter: 0.2
            )
        )
        defer {
            print("stopping task")
            Task { await task.cancel() }
        }
        
        let stream = await task.start()
        for try await event in stream {
            let eventType = event.type
            let name = event.resource.metadata?.name ?? "<unknown>"
            print("\(eventType) - \(name)")
        }
    }
}

With kubectl config use-context cluster-B the output is below (I’ve redacted the IP address and the bearer token):

$ swift run
Building for debugging...
[7/7] Applying MyCLI
Build of product 'MyCLI' complete! (2.18s)
namespace count: 22
2025-10-31T09:54:56-0700 debug MyCLI: [SwiftkubeClient] Staring task for request: KubernetesRequest(url: https://35.205.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0A..., body: nil, deleteOptions: nil)
added - cert-manager-cainjector-5d6d6796fd-mcqdw
added - cert-manager-cf5b598f7-rm7tj
added - cert-manager-webhook-bf48cb58f-n2zd2
added - dragonfly-0
added - dragonfly-1
added - dragonfly-operator-controller-manager-54f8895cd-gh6l5
added - api-5d94b8b6bb-q4g2b
added - api-5d94b8b6bb-q5vcw
added - ingress-nginx-controller-65c58f694f-cj4kq
added - ingress-nginx-controller-65c58f694f-l6pqb
added - ingress-nginx-controller-65c58f694f-llfl5
added - ingress-nginx-controller-65c58f694f-qms5s
added - ingress-nginx-controller-65c58f694f-rbf6t
added - ingress-nginx-controller-65c58f694f-zjt4n
added - istio-ingressgateway-596b5d6874-8pm8r
added - istio-ingressgateway-596b5d6874-ghd4z
added - istiod-9c55c84b6-7j2q6
added - istiod-9c55c84b6-scqdn
2025-10-31T09:54:56-0700 debug MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://35.205.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0A...; resourceVersion: 1761670452064351017, body: nil, deleteOptions: nil) in 5.0 seconds
added - cert-manager-cainjector-5d6d6796fd-mcqdw
added - cert-manager-cf5b598f7-rm7tj
added - cert-manager-webhook-bf48cb58f-n2zd2
added - dragonfly-0
added - dragonfly-1
added - dragonfly-operator-controller-manager-54f8895cd-gh6l5
added - api-5d94b8b6bb-q4g2b
added - api-5d94b8b6bb-q5vcw
added - ingress-nginx-controller-65c58f694f-cj4kq
added - ingress-nginx-controller-65c58f694f-l6pqb
added - ingress-nginx-controller-65c58f694f-llfl5
added - ingress-nginx-controller-65c58f694f-qms5s
added - ingress-nginx-controller-65c58f694f-rbf6t
added - ingress-nginx-controller-65c58f694f-zjt4n
added - istio-ingressgateway-596b5d6874-8pm8r
added - istio-ingressgateway-596b5d6874-ghd4z
added - istiod-9c55c84b6-7j2q6
added - istiod-9c55c84b6-scqdn
2025-10-31T09:55:02-0700 debug MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://35.205.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0A...; resourceVersion: 1761670452064351017, body: nil, deleteOptions: nil) in 8.759008070837114 seconds
added - cert-manager-cainjector-5d6d6796fd-mcqdw
added - cert-manager-cf5b598f7-rm7tj
added - cert-manager-webhook-bf48cb58f-n2zd2
added - dragonfly-0
added - dragonfly-1
added - dragonfly-operator-controller-manager-54f8895cd-gh6l5
added - api-5d94b8b6bb-q4g2b
added - api-5d94b8b6bb-q5vcw
added - ingress-nginx-controller-65c58f694f-cj4kq
added - ingress-nginx-controller-65c58f694f-l6pqb
added - ingress-nginx-controller-65c58f694f-llfl5
added - ingress-nginx-controller-65c58f694f-qms5s
added - ingress-nginx-controller-65c58f694f-rbf6t
added - ingress-nginx-controller-65c58f694f-zjt4n
added - istio-ingressgateway-596b5d6874-8pm8r
added - istio-ingressgateway-596b5d6874-ghd4z
added - istiod-9c55c84b6-7j2q6
added - istiod-9c55c84b6-scqdn
2025-10-31T09:55:11-0700 debug MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://35.205.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0A...; resourceVersion: 1761670452064351017, body: nil, deleteOptions: nil) in 16.096752070899832 seconds
added - cert-manager-cainjector-5d6d6796fd-mcqdw
added - cert-manager-cf5b598f7-rm7tj
added - cert-manager-webhook-bf48cb58f-n2zd2
added - dragonfly-0
added - dragonfly-1
added - dragonfly-operator-controller-manager-54f8895cd-gh6l5
added - api-5d94b8b6bb-q4g2b
added - api-5d94b8b6bb-q5vcw
added - ingress-nginx-controller-65c58f694f-cj4kq
added - ingress-nginx-controller-65c58f694f-l6pqb
added - ingress-nginx-controller-65c58f694f-llfl5
added - ingress-nginx-controller-65c58f694f-qms5s
added - ingress-nginx-controller-65c58f694f-rbf6t
added - ingress-nginx-controller-65c58f694f-zjt4n
added - istio-ingressgateway-596b5d6874-8pm8r
added - istio-ingressgateway-596b5d6874-ghd4z
added - istiod-9c55c84b6-7j2q6
added - istiod-9c55c84b6-scqdn
2025-10-31T09:55:28-0700 debug MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://35.205.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0A...; resourceVersion: 1761670452064351017, body: nil, deleteOptions: nil) in 39.58572010879237 seconds

I am also noticing now that the code asks to watch all pods across all namespaces. There are over 100 pods running in the cluster, but the output above shows only 18.
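
To compare, a plain list across all namespaces should report the full pod count. A short sketch reusing the same client (assuming pods.list(in:) accepts .allNamespaces just like the watch call does):

// Sketch: list pods across all namespaces and compare the count with
// the number of `added` events the watch delivers.
let allPods = try await client.pods.list(in: .allNamespaces)
print("pods visible to the client: \(allPods.items.count)")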

With kubectl config use-context cluster-A:

$ swift run
Building for debugging...
[1/1] Write swift-version--58304C5D6DBC2206.txt
Build of product 'MyCLI' complete! (0.21s)
namespace count: 15
2025-10-31T10:01:54-0700 debug MyCLI: [SwiftkubeClient] Staring task for request: KubernetesRequest(url: https://34.82.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0AT..., body: nil, deleteOptions: nil)
2025-10-31T10:01:54-0700 debug MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://34.82.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0AT..., body: nil, deleteOptions: nil) in 5.0 seconds
2025-10-31T10:02:00-0700 debug MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://34.82.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0AT..., body: nil, deleteOptions: nil) in 10.78624777718268 seconds
2025-10-31T10:02:10-0700 debug MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://34.82.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0AT..., body: nil, deleteOptions: nil) in 17.290774437504712 seconds
2025-10-31T10:02:29-0700 debug MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://34.82.xxx.xxx/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: Authorization: Bearer ya29.a0AT..., body: nil, deleteOptions: nil) in 43.502495158084876 seconds

Hey @mkyrilov, sorry for the delay. Been busy with the weekend and Halloween :sweat_smile:

Looking at the logs you've provided, it seems that the client is not connecting to the API server at all. The connection is dropped instantly and then retried again and again. The weird thing is that there are no errors logged (see the try-catch in SwiftkubeClientTask/makeTask).

The closest I could get to your behaviour was to set the client's connect timeout to 1 ms, but even then the error from the underlying HTTP client was still logged, for example:

2025-11-03T18:50:38+0100 debug com.example.MyCLI: [SwiftkubeClient] Staring task for request: KubernetesRequest(url: https://127.0.0.1:58414/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: , body: nil, deleteOptions: nil)
2025-11-03T18:50:38+0100 debug com.example.MyCLI: [SwiftkubeClient] Error occurred while streaming data: The operation couldn’t be completed. (AsyncHTTPClient.HTTPClientError error 1.)
2025-11-03T18:50:38+0100 debug com.example.MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://127.0.0.1:58414/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: , body: nil, deleteOptions: nil) in 5.0 seconds
2025-11-03T18:50:44+0100 debug com.example.MyCLI: [SwiftkubeClient] Error occurred while streaming data: The operation couldn’t be completed. (AsyncHTTPClient.HTTPClientError error 1.)
2025-11-03T18:50:44+0100 debug com.example.MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://127.0.0.1:58414/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: , body: nil, deleteOptions: nil) in 11.661621648634746 seconds
2025-11-03T18:50:55+0100 debug com.example.MyCLI: [SwiftkubeClient] Error occurred while streaming data: The operation couldn’t be completed. (AsyncHTTPClient.HTTPClientError error 1.)
2025-11-03T18:50:55+0100 debug com.example.MyCLI: [SwiftkubeClient] Will retry request: KubernetesRequest(url: https://127.0.0.1:58414/api/v1/pods?watch=true, method: NIOHTTP1.HTTPMethod.GET, headers: , body: nil, deleteOptions: nil) in 22.966125772133278 seconds
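
Roughly, forcing that behaviour looked like this. A sketch only: I'm writing the KubernetesClientConfig parameters from memory, so double-check the initializer against the current API (the relevant part is the timeout field, which takes AsyncHTTPClient's HTTPClient.Configuration.Timeout):

import AsyncHTTPClient
import Foundation
import SwiftkubeClient

// Sketch only: a config with an absurdly short connect timeout to force
// the instant-retry behaviour. Parameter names are from memory and
// should be verified; the point is the `timeout` field.
let config = KubernetesClientConfig(
    masterURL: URL(string: "https://127.0.0.1:58414")!,
    namespace: "default",
    authentication: .basicAuth(username: "admin", password: "admin"),
    trustRoots: nil,
    insecureSkipTLSVerify: true,
    timeout: HTTPClient.Configuration.Timeout(connect: .milliseconds(1)),
    redirectConfiguration: nil
)
let client = KubernetesClient(config: config)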

  • What else can you tell me about your cluster or network setup?
  • What happens if you try it with curl?
curl -vvvv 'https://34.82.xxx.xxx/api/v1/pods?watch=true' -H "Authorization: Bearer ..."