I'm trying to write a basic ray tracer in Swift. My function produces raw RGB data, from which a CGImage is created, which is then rendered in a SwiftUI view using the Image(CGImage, ...) constructor. However, the image renders as 4 separate side-by-side grayscale images.
I'm using an RGBX format (ignored alpha channel); when I tried switching to plain RGB (3 channels), the image rendered as 3 grayscale images instead, so I believe the problem is somehow related to the channel count.
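For reference, the SwiftUI side looks roughly like this (a simplified sketch; RenderView and the label text are placeholder names, not my exact code):

import SwiftUI
import CoreGraphics

struct RenderView: View {
    let cgImage: CGImage

    var body: some View {
        // Display the CGImage produced by the ray tracer
        Image(cgImage, scale: 1.0, label: Text("Render"))
    }
}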
I'm using CGContext.makeImage to create the CGImage, and my CGContext is created like this:
let colorSpace = CGColorSpaceCreateDeviceRGB()
CGContext(
    data: self.imageBuffer,
    width: self.width,
    height: self.height,
    bitsPerComponent: 8,
    bytesPerRow: BYTES_PER_PIXEL * width,
    space: colorSpace,
    bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue
)
where self.imageBuffer is an UnsafeMutablePointer<UInt8>, allocated with width * height * BYTES_PER_PIXEL capacity, and BYTES_PER_PIXEL is 4.
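In code, that allocation amounts to this (sketch of the setup described above):

self.imageBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: width * height * BYTES_PER_PIXEL)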
I am wrapping the imageBuffer in self.imageData: Data using:
self.imageData = Data(bytesNoCopy: self.imageBuffer, count: width * height * BYTES_PER_PIXEL, deallocator: .free)
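The CGImage handed to the SwiftUI view is then produced from that context, roughly like this (self.context here just stands for the CGContext created above):

if let cgImage = self.context.makeImage() {
    // hand cgImage to the SwiftUI view
}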
And I update pixels in the image with this:
let pixelBytes = [
    UInt8(min(Float(1), pixel.x) * 255),
    UInt8(min(Float(1), pixel.y) * 255),
    UInt8(min(Float(1), pixel.z) * 255),
    UInt8(0) // Alpha (unused, for padding)
]
let pixelDataRange = (x + width * y)..<(x + width * y + BYTES_PER_PIXEL)
self.imageData.replaceSubrange(pixelDataRange, with: pixelBytes)
Is there something I'm doing wrong? I've been working on this bug for a couple of days now and cannot get the image to render properly.