How to create a `PostgresData` for one of Postgres' `point` types?

Currently there are convenience initializers for creating PostgresDatas for commonly used types. That's not the case for Postgres' point type. The initializer for a PostgresData looks like this:

public init(
    type: PostgresDataType, 
    typeModifier: Int32? = nil, 
    formatCode: PostgresFormat = .binary, 
    value: ByteBuffer? = nil
)
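For comparison, a commonly used type like String already has one of those convenience initializers, so it never touches the raw init (a quick sketch):

```swift
import PostgresNIO

// Existing convenience path: the library picks type, format, and bytes.
let greeting = PostgresData(string: "hello")

// For point there is no equivalent, so the raw initializer above would
// need the right PostgresDataType, format code, and ByteBuffer contents.
```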

I'm trying to figure out what value should be provided for each parameter in order to get a PostgresData that represents a point.


For type, there already is a PostgresDataType for a point, namely PostgresDataType.point, so that's what I'm using.

For typeModifier, I'm confused as to what it's asking for. It is optional so may not be necessary in this case, but the documentation of the property hints otherwise. It says:

/// The type modifier (see pg_attribute.atttypmod). The meaning of the modifier is type-specific.
public var typeModifier: Int32?

The PostgreSQL documentation for `atttypmod` states this:

atttypmod records type-specific data supplied at table creation time (for example, the maximum length of a varchar column). It is passed to type-specific input functions and length coercion functions. The value will generally be -1 for types that do not need atttypmod.

It seems like .none can be passed here, since there doesn't appear to be any type-specific data that would apply to a point column the way it does to a varchar. Still, I expected the PostgreSQL documentation to contain a list of types with their possible modifier values: the PostgresNIO documentation says the meaning of PostgresData.typeModifier is "type-specific", which suggests there should be a list of what the different values mean for each type. I haven't been able to find that list.
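If it helps, the modifier for an existing column can be read straight out of the system catalog. A sketch using the newer query API (the table and column names are made up; `connection` and `logger` are assumed to exist):

```swift
// pg_attribute.atttypmod stores the modifier per column.
// For a point column this should come back as -1 (no modifier needed).
let rows = try await connection.query(
    """
    SELECT atttypmod FROM pg_attribute
    WHERE attrelid = 'my_table'::regclass AND attname = 'location'
    """,
    logger: logger
)
```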

For formatCode it's either .text or .binary. I looked over the wire-protocol documentation for details on how different types are formatted, but found no such list. I noticed that PostgresDatas for Strings use .binary for this parameter. On the other hand, when using the psql client, point values appear in the VALUES section of a statement as a tuple of two 64-bit floating-point numbers, written as a quoted string. But Strings are quoted too, so I'm not quite sure what should be passed for this value either.

Then there's value, which for a String is just the backing UTF-8 bytes. For a point, the most efficient approach would seem to be encoding two 64-bit floating-point values back to back, but trying to construct the correct format by trial and error doesn't seem particularly wise.
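For what it's worth, "two 64-bit floats back to back" would look like the sketch below using NIO's ByteBuffer, which writes integers big-endian (network byte order) by default. This is just the hypothesis spelled out, not a confirmed wire format:

```swift
import NIOCore

var buffer = ByteBuffer()
buffer.writeInteger((-87.6).bitPattern) // x as raw IEEE-754 bits (UInt64)
buffer.writeInteger(41.9.bitPattern)    // y
// buffer.readableBytes is now 16: two 8-byte doubles back to back
```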

Hi @chrisbia,

I also tried to search online for some documentation, but sadly I was not able to find anything either! Nonetheless, I was able to reverse engineer the interface. Let me tell you what I did:

  1. I created a test case in PostgresNIO that tries to fetch a point from the database:

    func testSelectPoint() async throws {
        let eventLoopGroup = MultiThreadedEventLoopGroup(numberOfThreads: 1)
        defer { XCTAssertNoThrow(try eventLoopGroup.syncShutdownGracefully()) }
        let eventLoop = eventLoopGroup.next()
        try await withTestConnection(on: eventLoop) { connection in
            let stream = try await connection.query("SELECT point(-87.6, 41.9);", logger: .psqlTest)
            let rows = try await stream.collect()
            let cell = rows.first!.first!
        }
    }
  2. Using the code above, I was able to get a Postgres binary representation from the database.

    po rows
        ▿ columns : 1 element
          ▿ 0 : Column
            - name : "point"
            - tableOID : 0
            - columnAttributeNumber : 0
            ▿ dataType : POINT
              - rawValue : 600
            - dataTypeSize : 16
            - dataTypeModifier : -1
            - format : binary
    po cell
    ▿ PostgresCell
      ▿ bytes : Optional<ByteBuffer>
        ▿ some : ByteBuffer { readerIndex: 0, writerIndex: 16, readableBytes: 16, capacity: 16, storageCapacity: 1024, slice: _ByteBufferSlice { 59..<75 }, storage: 0x000000010783e400 (1024 bytes) }
          ▿ _storage : <_Storage: 0x10130df30>
          - _readerIndex : 0
          - _writerIndex : 16
          ▿ _slice : _ByteBufferSlice { 59..<75 }
            - upperBound : 75
            ▿ _begin : 59
              ▿ _backing : 2 elements
                - .0 : 0
                - .1 : 59
      ▿ dataType : POINT
        - rawValue : 600
      - format : binary
      - columnName : "point"
      - columnIndex : 0

So we can see a couple of things here. First of all, the point database type is sent using the binary wire format. It is 16 bytes long, which is exactly two doubles. The dataTypeModifier is -1.

In the end this code should work for you:

struct MyPoint: PostgresDataConvertible {
    var x: Double
    var y: Double
}

extension MyPoint {
    var postgresData: PostgresData? {
        var byteBuffer = ByteBuffer()
        // ByteBuffer writes integers big-endian, which matches the wire format.
        byteBuffer.writeMultipleIntegers(self.x.bitPattern, self.y.bitPattern)
        return PostgresData(type: .point, typeModifier: -1, formatCode: .binary, value: byteBuffer)
    }

    static var postgresDataType: PostgresDataType {
        .point
    }

    init?(postgresData: PostgresData) {
        guard var buffer = postgresData.value,
              let (x, y) = buffer.readMultipleIntegers(as: (UInt64, UInt64).self),
              postgresData.type == .point
        else {
            return nil
        }
        self.x = Double(bitPattern: x)
        self.y = Double(bitPattern: y)
    }
}

However, I would like to emphasize that PostgresData is a type that we want to remove in the future. The new and better way is to use PostgresQuery for your queries. To use PostgresQuery, your MyPoint has to conform to PostgresEncodable and PostgresDecodable. If you want to learn more about these new types, please have a look at PostgresNIO's README.

extension MyPoint: PostgresEncodable {
    static var psqlFormat: PostgresFormat {
        .binary
    }

    static var psqlType: PostgresDataType {
        .point
    }

    func encode<JSONEncoder>(into byteBuffer: inout ByteBuffer, context: PostgresEncodingContext<JSONEncoder>) throws where JSONEncoder: PostgresJSONEncoder {
        byteBuffer.writeMultipleIntegers(self.x.bitPattern, self.y.bitPattern)
    }
}

// Placeholder error type thrown when decoding fails.
struct MyError: Error {}

extension MyPoint: PostgresDecodable {
    init<JSONDecoder>(from byteBuffer: inout ByteBuffer, type: PostgresDataType, format: PostgresFormat, context: PostgresDecodingContext<JSONDecoder>) throws where JSONDecoder: PostgresJSONDecoder {
        guard type == .point, let (x, y) = byteBuffer.readMultipleIntegers(as: (UInt64, UInt64).self) else {
            throw MyError()
        }
        self.x = Double(bitPattern: x)
        self.y = Double(bitPattern: y)
    }
}
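With both conformances in place, MyPoint can be used directly in PostgresQuery's string interpolation. A sketch, assuming an existing `connection` and `logger` and a made-up `places` table:

```swift
let point = MyPoint(x: -87.6, y: 41.9)

// Encoding: the interpolated value is bound as a binary point parameter.
try await connection.query(
    "INSERT INTO places (location) VALUES (\(point))",
    logger: logger
)

// Decoding: map each returned cell back into a MyPoint.
let rows = try await connection.query("SELECT location FROM places", logger: logger)
for try await location in rows.decode(MyPoint.self, context: .default) {
    print(location)
}
```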

I hope that helps. If you have any further questions, please reach out.

I created some tickets based on the discussion here:

If you are interested, we welcome contributions!

Thanks! It certainly does work!

Unfortunately, the particular project I'm working on is an AWS Lambda, which doesn't play nicely with new Swift APIs (async/await), so I've been apprehensive about incorporating any libraries that use them as dependencies.

Thanks for detailing the process you used to discover the format! Had I thought to do the same, I would have been hesitant to rely on the results: absent explicit documentation of the behavior, I would have assumed whatever I found could change at any time without notice. But perhaps that's the state of things: sometimes leaning on behavior discovered by poking and prodding around is A-okay.

The FluentPostGIS library uses the WKCodable package to encode the data for Postgres and works well enough: GitHub - brokenhandsio/fluent-postgis: Swift PostGIS support for FluentPostgreSQL and Vapor