We often need to extend the methods of the system's built-in classes, which can be done either by subclassing or by adding a category.
A project I worked on recently needed a set of such extension methods for image processing.
Methods covered:
The most commonly used ones, such as stretching an image, appending a suffix to an image name, and generating an image from a color.
Appending a suffix to an image name:
+ (UIImage *)imageMatchSizeWithName:(NSString *)imageName
{
    if (__Device_Iphone_5__) // iPhone 5 / 5s
    {
        NSString *ext = [imageName pathExtension];
        imageName = [imageName stringByDeletingPathExtension];
        imageName = [imageName stringByAppendingString:@"-568h@2x"];
        imageName = [imageName stringByAppendingPathExtension:ext];
    }
    return [UIImage imageNamed:imageName];
}
A macro is used to append the 4-inch ("-568h@2x") marker to the image name.
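The __Device_Iphone_5__ macro itself is not shown in this post. A minimal sketch of what it might look like, assuming it simply checks for the 4-inch (568-point) screen height of the iPhone 5/5s:

// Hypothetical definition -- the real macro lives elsewhere in the project.
// It presumably tests for the 4-inch (568-point) screen of the iPhone 5/5s.
#define __Device_Iphone_5__ \
    (CGRectGetHeight([UIScreen mainScreen].bounds) == 568.0)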
Stretching an image to a specified size:
+ (UIImage *)compressImage:(UIImage *)imgSrc toSize:(CGSize)size
{
    UIGraphicsBeginImageContext(size);
    CGRect rect = {{0, 0}, size};
    [imgSrc drawInRect:rect];
    UIImage *compressedImg = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return compressedImg;
}
The image is drawn into a graphics context of the target size, which stretches it to the specified dimensions.
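A quick usage sketch (the image name "photo" is just a placeholder):

// Scale a bundled image down to a 100x100 thumbnail.
UIImage *original = [UIImage imageNamed:@"photo"];
UIImage *thumbnail = [UIImage compressImage:original toSize:CGSizeMake(100, 100)];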
Stretching an image by tiling from a specified pixel position:
+ (UIImage *)strechImageWithName:(NSString *)imageName
{
    UIImage *image = [UIImage imageNamed:imageName];
    return [image stretchableImageWithLeftCapWidth:image.size.width * 0.5
                                      topCapHeight:image.size.height * 0.5];
}

+ (UIImage *)strechImageWithName:(NSString *)imageName posX:(CGFloat)x posY:(CGFloat)y
{
    UIImage *image = [UIImage imageNamed:imageName];
    return [image stretchableImageWithLeftCapWidth:image.size.width * x
                                      topCapHeight:image.size.height * y];
}
This involves the notion of a "cap": the region before the cap stays fixed, and the pixel just past the cap position is tiled to fill the extra space. If that is unfamiliar, it is enough to know that the image is stretched by tiling from the specified pixel position. A usage example follows below.
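For example, a small rounded-corner background can be stretched to any size without distorting its corners (the image name "bubble" is hypothetical):

// Stretch from the center pixel, so the corners of "bubble" stay sharp
// while the middle row/column is tiled to fill the target frame.
UIImage *bubble = [UIImage strechImageWithName:@"bubble"];
UIImageView *bubbleView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 200, 80)];
bubbleView.image = bubble;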
Taking a screenshot:
+ (UIImage *)screenshot
{
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    } else {
        UIGraphicsBeginImageContext(imageSize);
    }

    CGContextRef context = UIGraphicsGetCurrentContext();
    for (UIWindow *window in [[UIApplication sharedApplication] windows]) {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen]) {
            CGContextSaveGState(context);
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            CGContextConcatCTM(context, [window transform]);
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);
            [[window layer] renderInContext:context];
            CGContextRestoreGState(context);
        }
    }

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
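A quick usage sketch, just to show the call site (saving to the photo album is optional):

// Capture every window on the main screen into a single image.
UIImage *snapshot = [UIImage screenshot];
// Optionally save it to the user's photo album.
UIImageWriteToSavedPhotosAlbum(snapshot, nil, NULL, NULL);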
Adding a watermark:
+ (UIImage *)addImage:(UIImage *)image addMsakImage:(UIImage *)maskImage maskFrame:(CGRect)pos
{
    UIGraphicsBeginImageContext(image.size);
    [image drawInRect:CGRectMake(0, 0, image.size.width, image.size.height)];
    // Position of the watermark image
    [maskImage drawInRect:pos];
    UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return resultingImage;
}
Generating an image from a specified color:
+ (UIImage *)imageWithColor:(UIColor *)color
{
    CGRect rect = CGRectMake(0, 0, 1, 1);
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [color CGColor]);
    CGContextFillRect(context, rect);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
The three methods above are quite similar: each one draws into an image graphics context and reads the result back out.
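As a usage sketch, the color-generated image works well as a button background (the button here is only illustrative):

// A 1x1 solid-color image used as the highlighted-state background.
UIButton *button = [UIButton buttonWithType:UIButtonTypeCustom];
[button setBackgroundImage:[UIImage imageWithColor:[UIColor lightGrayColor]]
                  forState:UIControlStateHighlighted];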
Compared with the methods above, blurring an image is a bit more involved. The Gaussian blur filter in Core Image is an alternative; a simple introduction to Core Image can be found in my earlier post.
+ (UIImage *)blurImage:(UIImage *)src amount:(CGFloat)amount
{
    if (amount < 0.0 || amount > 1.0) {
        amount = 0.5;
    }

    int boxSize = (int)(amount * 40);
    boxSize = boxSize - (boxSize % 2) + 1;

    CGImageRef img = src.CGImage;
    vImage_Buffer inBuffer, outBuffer;
    vImage_Error error;
    void *pixelBuffer;

    CGDataProviderRef inProvider = CGImageGetDataProvider(img);
    CFDataRef inBitmapData = CGDataProviderCopyData(inProvider);

    inBuffer.width = CGImageGetWidth(img);
    inBuffer.height = CGImageGetHeight(img);
    inBuffer.rowBytes = CGImageGetBytesPerRow(img);
    inBuffer.data = (void *)CFDataGetBytePtr(inBitmapData);

    pixelBuffer = malloc(CGImageGetBytesPerRow(img) * CGImageGetHeight(img));
    outBuffer.data = pixelBuffer;
    outBuffer.width = CGImageGetWidth(img);
    outBuffer.height = CGImageGetHeight(img);
    outBuffer.rowBytes = CGImageGetBytesPerRow(img);

    error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
    if (!error) {
        error = vImageBoxConvolve_ARGB8888(&outBuffer, &inBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
        if (!error) {
            error = vImageBoxConvolve_ARGB8888(&inBuffer, &outBuffer, NULL, 0, 0, boxSize, boxSize, NULL, kvImageEdgeExtend);
        }
    }

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(outBuffer.data,
                                             outBuffer.width,
                                             outBuffer.height,
                                             8,
                                             outBuffer.rowBytes,
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaNoneSkipLast);
    CGImageRef imageRef = CGBitmapContextCreateImage(ctx);
    UIImage *returnImage = [UIImage imageWithCGImage:imageRef];

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);
    free(pixelBuffer);
    CFRelease(inBitmapData);
    CGImageRelease(imageRef);

    return returnImage;
}

Note that this method requires importing the system framework header <Accelerate/Accelerate.h>.
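As mentioned above, Core Image's Gaussian blur filter is another option. A minimal sketch (not part of the category; the function name and radius value are my own, arbitrary choices):

#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Blur an image with CIGaussianBlur; a larger inputRadius gives a stronger blur.
UIImage *blurWithCoreImage(UIImage *src, CGFloat radius)
{
    CIImage *input = [CIImage imageWithCGImage:src.CGImage];
    CIFilter *filter = [CIFilter filterWithName:@"CIGaussianBlur"];
    [filter setValue:input forKey:kCIInputImageKey];
    [filter setValue:@(radius) forKey:kCIInputRadiusKey];

    CIContext *context = [CIContext contextWithOptions:nil];
    // Crop back to the original extent, since the blur expands the image edges.
    CGImageRef cgImage = [context createCGImage:filter.outputImage fromRect:input.extent];
    UIImage *result = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return result;
}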
Even if the implementation is not entirely clear, the method can simply be used as-is.
Resources
GitHub:UIImage-HR
CSDN:iOS图片分类
That is all for this post. Corrections and discussion are welcome. Please credit the source when reposting.