I am trying to take a user-input drawing and convert it into a float array of pixel values. The background of the drawing is white. I am retrieving the UIImage from the PKDrawing by calling the drawing's image(from:scale:) method.
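Roughly, the retrieval looks like this (a minimal sketch; canvasView stands in for my PKCanvasView, and the rect and scale are just example values):

import PencilKit

// Sketch only: canvasView is a PKCanvasView; the rect and scale are example values.
let drawing: PKDrawing = canvasView.drawing
let uiImage = drawing.image(from: drawing.bounds, scale: 1.0)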
Afterwards, I use the following code to convert it to a float array:
import UIKit

extension UIImage {
    /// Renders the image into an RGBA bitmap and returns the R, G, and B channels
    /// as a planar [Float32] buffer with values normalised to 0...1 (alpha is dropped).
    func normalized() -> [Float32]? {
        guard let cgImage = self.cgImage else {
            return nil
        }

        let w = cgImage.width
        let h = cgImage.height
        let bytesPerPixel = 4
        let bytesPerRow = bytesPerPixel * w
        let bitsPerComponent = 8

        // Draw the image into a raw RGBA8888 byte buffer.
        var rawBytes = [UInt8](repeating: 0, count: w * h * bytesPerPixel)
        rawBytes.withUnsafeMutableBytes { ptr in
            if let context = CGContext(data: ptr.baseAddress,
                                       width: w,
                                       height: h,
                                       bitsPerComponent: bitsPerComponent,
                                       bytesPerRow: bytesPerRow,
                                       space: CGColorSpaceCreateDeviceRGB(),
                                       bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) {
                let rect = CGRect(x: 0, y: 0, width: w, height: h)
                context.draw(cgImage, in: rect)
            }
        }

        // Convert the interleaved RGBA bytes into planar RRR...GGG...BBB floats in [0, 1].
        var normalizedBuffer = [Float32](repeating: 0, count: w * h * 3)
        for i in 0 ..< w * h {
            normalizedBuffer[i]             = Float32(rawBytes[i * 4 + 0]) / 255.0 // red
            normalizedBuffer[w * h + i]     = Float32(rawBytes[i * 4 + 1]) / 255.0 // green
            normalizedBuffer[w * h * 2 + i] = Float32(rawBytes[i * 4 + 2]) / 255.0 // blue
        }
        return normalizedBuffer
    }
}
However, running this code on the UIImage rendered from the PKDrawing gives me an array of all zero floats, whereas when I load an image from the asset catalog with UIImage(named:) and run the same code, I get the expected normalised intensity values. How should I modify the code to get the correct normalised intensity values for the drawing?
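For reference, this is roughly how I call it in both cases (the asset name is just a placeholder):

import UIKit
import PencilKit

// Placeholder asset name; this path returns sensible non-zero values.
let assetValues = UIImage(named: "exampleDigit")?.normalized()

// The image rendered from the PKDrawing; this path returns all zeros.
let drawingValues = drawing.image(from: drawing.bounds, scale: 1.0).normalized()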