GCP Stackdriver LogHandler (StackdriverLogging 0.1.0)

I made an apple/swift-log LogHandler for GCP Stackdriver (currently it only supports logging to a file monitored by the logging-agent, configured via a JSON config).

Feedback welcome!


I cannot comment on GCP Stackdriver in general, but I think you are using a clever way to push JSON values through the logger API. I'm wondering when we will have a Swift-native JSON type to avoid the NSString, NSNull, NSArray, ... conversion...

One thing you might want to consider: in its current design every call to the logger involves blocking filesystem IO. This might become a bottleneck for a high-concurrency server that serves a lot of requests on a single event loop and does a lot of logging. Maybe only do the blocking IO for errors and the rest can be dispatched to a background queue/thread and written async?
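To make the suggestion concrete, here is a minimal sketch of funneling log writes through a serial background queue so the caller returns immediately. All names here (`AsyncFileWriter` and its methods) are illustrative stand-ins, not the actual library's API:

```swift
import Foundation

// Sketch: funnel log writes through a serial background queue so the
// calling thread never blocks on filesystem IO.
final class AsyncFileWriter {
    private let queue = DispatchQueue(label: "log.file-writer") // serial
    private let handle: FileHandle

    init(path: String) throws {
        FileManager.default.createFile(atPath: path, contents: nil)
        self.handle = try FileHandle(forWritingTo: URL(fileURLWithPath: path))
    }

    /// Returns immediately; the blocking write happens on the background queue.
    func write(_ line: String) {
        queue.async { [handle] in
            handle.write(Data((line + "\n").utf8))
        }
    }

    /// Blocks until all queued writes have been flushed (e.g. on shutdown).
    func flush() {
        queue.sync { self.handle.synchronizeFile() }
    }
}
```

The trade-off is that a crash can lose queued-but-unwritten lines, which is why writing errors synchronously (as suggested above) can still make sense.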

Hi @t089,

Thanks for the great feedback.

It's unfortunate that JSONSerialization.isValidJSONObject requires the top-level object to be an array (or dictionary), since the attempted array cast happens almost last, after many other comparisons, which slows down my metadata parsing. I could probably do the validation that the function performs internally myself though, since in practice it essentially boils down to:

func isValidJSONScalar(_ obj: Any) -> Bool {
    // Note: NSNumber-backed floating-point values would also need a
    // finiteness check, since isValidJSONObject rejects NaN and infinity.
    if obj is String || obj is NSNull || obj is Int || obj is Bool || obj is UInt ||
        obj is Int8 || obj is Int16 || obj is Int32 || obj is Int64 ||
        obj is UInt8 || obj is UInt16 || obj is UInt32 || obj is UInt64 {
        return true
    }
    return false
}

I like your idea of "selective dispatching" based on the log's level. One thing though: it might be misguided, but it kind of doesn't "feel right" to use DispatchQueue in a SwiftNIO-backed environment, so I might just add that dependency (little to no client impact in practice).
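The "selective dispatching" idea could be sketched roughly as follows: errors pay the blocking IO cost up front so they are on disk before the call returns, while lower levels go to a background queue. The `Level` enum and the in-memory sink here are illustrative stand-ins (a real handler would do a blocking file write instead):

```swift
import Foundation

// Sketch of level-based selective dispatching. Everything here is a
// simplified stand-in, not the library's actual types.
enum Level: Int, Comparable {
    case trace, debug, info, notice, warning, error, critical
    static func < (lhs: Level, rhs: Level) -> Bool { lhs.rawValue < rhs.rawValue }
}

final class SelectiveLogger {
    private let background = DispatchQueue(label: "log.background")
    private var lines: [String] = []   // stand-in for a real file sink
    private let lock = NSLock()

    private func blockingWrite(_ line: String) {
        lock.lock(); defer { lock.unlock() }
        lines.append(line)             // a real handler would write(2) here
    }

    func log(_ level: Level, _ message: String) {
        if level >= .error {
            blockingWrite(message)     // pay the IO cost now; errors must not be lost
        } else {
            background.async { self.blockingWrite(message) }
        }
    }

    /// Drains pending background writes and returns everything written.
    func flush() -> [String] {
        background.sync { }
        lock.lock(); defer { lock.unlock() }
        return lines
    }
}
```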

Ah very interesting that you benchmarked the perf aspects.

I'm very new to cloud dev, but my understanding is that once all your threads are swamped, they are swamped, and extra dispatching becomes counterproductive.

Hm, the thing with NIO is that threads should only ever be busy with actual work and never block just waiting for IO.

Are even the OS level "Posix.write/sysWrite" calls non blocking?

It depends on what kind of file you're writing to and if it was opened in non-blocking mode (with the O_NONBLOCK or O_NDELAY flags).

Regular files and block devices will always block, however briefly, but other file types may potentially be truly non-blocking.
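For illustration, here is how a file descriptor can be opened in non-blocking mode from Swift via the POSIX open(2) call (the path is just an example). As noted above, for regular files the flag is effectively a no-op and writes may still block briefly; it only changes behavior for FIFOs, sockets, and some character devices:

```swift
#if canImport(Glibc)
import Glibc
#else
import Darwin
#endif

// Open (creating if needed) with O_NONBLOCK. For pipes/sockets this makes
// read(2)/write(2) return EAGAIN instead of blocking; for regular files
// and block devices the kernel ignores it and the call may still block.
let fd = open("/tmp/gr_nonblock_example.log",
              O_WRONLY | O_CREAT | O_NONBLOCK, 0o644)
precondition(fd >= 0, "open failed: \(String(cString: strerror(errno)))")

let message = "hello\n"
let written = message.withCString { cstr in
    write(fd, cstr, strlen(cstr))
}
close(fd)
```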

@t089 After further testing with NIO.NonBlockingFileIO, I am seeing good performance gains after all by tweaking the number of EventLoops used (which are then passed as a parameter to NBFileIO).

Need to do a PR on SwiftNIO to allow the write API of NBFileIO to write at an arbitrary position though; currently it always writes from the beginning.

It's expected that using NonBlockingFileIO gives you slightly worse performance than "just doing it" on the EventLoop because, as you point out, there is dispatching overhead.
If your IO path to the disk is completely free, doing it on the EventLoop would also be fine; however, if your disk IO is totally busy, these calls will block the EventLoop and that will make your server stall, i.e. your network connections won't be serviced, which is bad.

Therefore I would suggest to dispatch it off the event loop. NIO.NonBlockingFileIO also uses a fairly bad (I wrote it so I'm allowed to say that :stuck_out_tongue_closed_eyes:) implementation of a thread pool that's certainly not as fast as it could be. There's definitely optimisation potential in that corner.

This was an ad-hoc thing we needed to do in NIO but we don't see file IO as NIO's core mission so we don't really invest much there, we did the bare minimum to allow you to do file IO in a non-blocking environment on UNIX.

As you describe, those do not work for files unfortunately. kevent and epoll_wait will both always return writable/readable for files and the "non-blocking" read/write operations will then still block until the kernel has read in the data from disk. You say they'll block "briefly", how briefly that is unfortunately depends on many factors such as how busy your IO system is and god forbid you're logging to NFS or some other networked file system...

For everybody else's benefit:

  • poll: "Regular files shall always poll TRUE for reading and writing."
  • epoll: "epoll is simply a faster poll(2), and can be used wherever the latter is used since it shares the same semantics."
  • kqueue: "Returns when the file pointer is not at the end of file."

As far as I know, the only truly working, supported, and portable enough way to do async file IO is to spawn a number of threads that just use blocking IO, which is exactly what NIO.NonBlockingFileIO does. That way all the blocking IO happens on threads that are spawned specifically for this purpose and they no longer block the EventLoops.
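The pattern described above can be sketched with a deliberately simple worker pool: a fixed set of dedicated threads pulls jobs off a shared queue and performs the blocking IO there, leaving the event loops free. This is only an illustration of the idea, not NIO's actual thread-pool implementation:

```swift
import Foundation

// Sketch of the NonBlockingFileIO pattern: dedicated threads do the
// blocking work so event loops never block. Simplified stand-in only.
final class BlockingIOPool {
    private let condition = NSCondition()
    private var jobs: [() -> Void] = []
    private var running = true

    init(threads: Int = 2) {
        for i in 0..<threads {
            let t = Thread { [weak self] in self?.workerLoop() }
            t.name = "blocking-io-\(i)"
            t.start()
        }
    }

    /// Enqueue a job that is allowed to block (e.g. a write(2) to a file).
    func submit(_ job: @escaping () -> Void) {
        condition.lock()
        jobs.append(job)
        condition.signal()
        condition.unlock()
    }

    private func workerLoop() {
        while true {
            condition.lock()
            while jobs.isEmpty && running {
                condition.wait()
            }
            if jobs.isEmpty {          // shut down and nothing left to do
                condition.unlock()
                return
            }
            let job = jobs.removeFirst()
            condition.unlock()
            job()                      // the blocking IO happens here
        }
    }

    func shutdown() {
        condition.lock()
        running = false
        condition.broadcast()
        condition.unlock()
    }
}
```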

Btw, Linux aio's (not to be confused with glibc's aio, that's different and worse) io_submit also still blocks sometimes. There's a new thing called io_uring which probably (once that's available in the kernels people run -- read a number of years) will solve the problem once and for all. This writeup seems to describe the dire situation quite well.

@johannesweiss That is what the library is now doing internally (using NonBlockingFileIO.defaultThreadPoolSize threads, i.e. 2). I have actually noticed a very good performance improvement across the board, both in requests/sec and in the average duration of a log call within an HTTP request, which takes about half the time of a blocking one even on a non-busy system. So the dispatching overhead is actually not larger (contrary to my initial tests, which had some issues).

These new tests were done locally on a machine with a top-of-the-line M.2 SSD, so it is very likely that the performance gains on a live system will be even more significant, as the disk on a VM is usually much slower than that.

Thanks a lot for the help the other day!


This is unrelated to the GCP Stackdriver LogHandler, but more about NIO in general. I checked out the LogHandler from this thread and then checked out the NIOFileIO stuff a bit and noticed that everything is tied to POSIX APIs using file descriptors. With Swift on Windows support underway, how is NIO going to support the Windows architecture? It seems pretty tightly coupled to POSIX APIs that don't exist on Windows.

Will a transition be made to use the C89/C99 fopen/fread/fwrite APIs instead of the POSIX open/read/write ones? Or will NIO be modified to use Windows-specific APIs for the relevant code? I'm also not sure whether epoll/kqueue exist on Windows either.

Perhaps I should just create a new thread...


The reason I'm curious is because I've been working on a FileLogHandler (still very much a work in progress) using my own library based around C/POSIX APIs (because I hate FileManager and how un-swifty it feels) and I use the C89/C99 fopen/fread/fwrite APIs instead of the NIOFileIO stuff. In theory, it should be relatively easy for me to support windows as well once swift supports it because I'm using many APIs available through the C standards instead of purely POSIX standard APIs.
