
Creating a CVPixelBuffer from raw pixel data produces a distorted image

Here is what I'm doing: I obtain pixel data in some way (OpenGL ES's glReadPixels, or another method), create a CVPixelBuffer from it, and write it into a video. After testing on an iPhone 5c, 5s, and 6, the image in the video comes out distorted on the 6, while the other two devices are fine…

Does anyone know how to fix this?

As shown in the screenshot:

The code for one of the conversion paths is as follows:

        CGSize viewSize = self.glView.bounds.size;
        NSInteger myDataLength = viewSize.width * viewSize.height * 4;
        
        // allocate array and read pixels into it.
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glReadPixels(0, 0, viewSize.width, viewSize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        
        // gl renders "upside down" so swap top to bottom into new array.
        // there's gotta be a better way, but this works.
        GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
        for(int y = 0; y < viewSize.height; y++)
        {
            for(int x = 0; x < viewSize.width* 4; x++)
            {
                buffer2[(int)((viewSize.height-1 - y) * viewSize.width * 4 + x)] = buffer[(int)(y * 4 * viewSize.width + x)];
            }
        }
        
        free(buffer);
        
        // make data provider with data.
        CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);
        
        // prep the ingredients
        int bitsPerComponent = 8;
        int bitsPerPixel = 32;
        int bytesPerRow = 4 * viewSize.width;
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
        CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
        
        // make the cgimage
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
        CGImageRef imageRef = CGImageCreate(viewSize.width, viewSize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        CGDataProviderRelease(provider); // the image retains the provider; release our reference
        
//        UIImage *photo = [UIImage imageWithCGImage:imageRef];
        
        size_t width = CGImageGetWidth(imageRef);
        size_t height = CGImageGetHeight(imageRef);
        
        CVPixelBufferRef pixelBuffer = NULL;
        CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, _recorder.pixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
        
        NSAssert((status == kCVReturnSuccess && pixelBuffer != NULL), @"create pixel buffer failed.");
        
        CVPixelBufferLockBaseAddress(pixelBuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
        NSParameterAssert(pxdata != NULL);
        CGContextRef context = CGBitmapContextCreate(pxdata,
                                                     width,
                                                     height,
                                                     CGImageGetBitsPerComponent(imageRef),
                                                     CGImageGetBytesPerRow(imageRef),
                                                     colorSpaceRef,
                                                     kCGImageAlphaPremultipliedLast);
        NSParameterAssert(context);
        
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
        
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
        
        CGColorSpaceRelease(colorSpaceRef);
        CGContextRelease(context);
        CGImageRelease(imageRef);
        
        free(buffer2);
        
//        CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        
        // record
        // ......
        
        CVPixelBufferRelease(pixelBuffer);
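(As an aside, the byte-by-byte flip loop above can be replaced with one memcpy per row, which is both simpler and much faster. A minimal C sketch, with the function name `flip_rows` chosen here for illustration:)

```c
#include <string.h>
#include <stddef.h>

/* Flip a tightly packed RGBA image vertically: copy each source row
 * into its mirrored position in the destination, one memcpy per row. */
static void flip_rows(const unsigned char *src, unsigned char *dst,
                      size_t width, size_t height)
{
    size_t rowBytes = width * 4; /* 4 bytes per RGBA pixel */
    for (size_t y = 0; y < height; y++) {
        memcpy(dst + (height - 1 - y) * rowBytes,
               src + y * rowBytes,
               rowBytes);
    }
}
```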

Try subtracting 1 from both width and height. I ran into a similar problem before, but I never figured out the underlying cause.
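That workaround hints at the likely cause: for certain dimensions (apparently hit on the iPhone 6), CVPixelBufferGetBytesPerRow(pixelBuffer) is larger than width * 4 because rows are padded for alignment, while the code above creates the bitmap context with CGImageGetBytesPerRow(imageRef). Drawing tightly packed rows into a padded buffer produces exactly this diagonal shear; passing CVPixelBufferGetBytesPerRow(pixelBuffer) as the bytesPerRow argument to CGBitmapContextCreate should fix it. A minimal C sketch of the underlying stride-aware row copy (standalone, with hypothetical sizes):

```c
#include <string.h>
#include <stddef.h>

/* Copy tightly packed RGBA rows into a destination whose rows are padded
 * to dstBytesPerRow (as CVPixelBuffer rows often are). Copying with the
 * wrong stride is what produces the diagonal shear distortion. */
static void copy_with_stride(const unsigned char *src, size_t width, size_t height,
                             unsigned char *dst, size_t dstBytesPerRow)
{
    size_t srcBytesPerRow = width * 4; /* tightly packed source */
    for (size_t y = 0; y < height; y++) {
        memcpy(dst + y * dstBytesPerRow,
               src + y * srcBytesPerRow,
               srcBytesPerRow);
    }
}
```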
