layer.renderInContext doesn't take layer.mask into account?

I'm trying to render some UIImages into a single image that I can save to my photo album, but it seems that layer.renderInContext doesn't take a layer mask into account?

Current behavior: the photo saves, and I see mosaicLayer without the masking effect of maskLayer.

Expected behavior: the photo saves, and I see the image as it appears in my view, with the mosaic layer masked.

I use the following code to mask the image:

    UIImage *maskImg = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle]
                            pathForResource:@"mask" ofType:@"png"]];

    // maskLayer is an ivar; it is reused later when saving.
    maskLayer = [[UIImageView alloc] initWithImage:maskImg];
    maskLayer.multipleTouchEnabled = YES;
    maskLayer.userInteractionEnabled = YES;

    UIImageView *mosaicLayer = [[UIImageView alloc] initWithImage:img];
    mosaicLayer.contentMode = UIViewContentModeScaleAspectFill;
    mosaicLayer.frame = [imageView bounds];
    mosaicLayer.layer.mask = maskLayer.layer;

    [imageView addSubview:mosaicLayer];
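
A side note: if the mask view doesn't need to receive touches itself, the same on-screen masking can be set up with a plain CALayer instead of a UIImageView. A minimal sketch, reusing maskImg from above:

    // Equivalent mask using a bare CALayer instead of a UIImageView.
    CALayer *mask = [CALayer layer];
    mask.contents = (id)maskImg.CGImage;  // (__bridge id) under ARC
    mask.frame = mosaicLayer.bounds;
    mosaicLayer.layer.mask = mask;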

And then I use this code to save my composited image:

    UIGraphicsBeginImageContext(imageView.bounds.size);
    [imageView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *saver = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    UIImageWriteToSavedPhotosAlbum(saver, self,
        @selector(image:didFinishSavingWithError:contextInfo:), nil);
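
This turns out to be documented behavior: the renderInContext: documentation notes that layers which specify a mask are not rendered. On iOS 7 and later, snapshotting through the view hierarchy instead should honor the mask; a minimal sketch, assuming imageView is laid out on screen:

    // Snapshot via the view hierarchy (iOS 7+), which composites the
    // view the same way the screen does, mask included.
    UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, NO, 0.0);
    [imageView drawViewHierarchyInRect:imageView.bounds afterScreenUpdates:YES];
    UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();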

EDIT: The following applies the mask correctly:

    - (IBAction)saveImage {
        UIImage *saver = nil;
        CGImageRef image = imageView.image.CGImage;

        size_t cWidth = CGImageGetWidth(image);
        size_t cHeight = CGImageGetHeight(image);
        size_t bitsPerComponent = 8;
        size_t bytesPerRow = 4 * cWidth;

        // Now we build a context with those dimensions.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(nil, cWidth, cHeight,
            bitsPerComponent, bytesPerRow, colorSpace,
            CGImageGetBitmapInfo(image));
        CGColorSpaceRelease(colorSpace); // the context retains it; releasing avoids a leak

        CGContextDrawImage(context, CGRectMake(0, 0, cWidth, cHeight), image);

        // The location where you draw on the context is not always the same
        // location you have in your UIView: positions must be scaled by the
        // ratio between the image's real size and the size it is shown at in
        // the UIView. Hence the mod floats...
        float mod = cWidth / (imageView.frame.size.width);
        float modTwo = cHeight / (imageView.frame.size.height);

        // Flip the coordinate system so the mask draws right side up.
        CGContextTranslateCTM(context, 0, cHeight);
        CGContextScaleCTM(context, 1.0, -1.0);

        CGContextClipToMask(context,
            CGRectMake(maskLayer.frame.origin.x * mod,
                       maskLayer.frame.origin.y * modTwo,
                       maskLayer.frame.size.width * mod,
                       maskLayer.frame.size.height * modTwo),
            maskLayer.image.CGImage);

        // Reverse the coordinate flip.
        CGAffineTransform ctm = CGContextGetCTM(context);
        ctm = CGAffineTransformInvert(ctm);
        CGContextConcatCTM(context, ctm);

        CGContextDrawImage(context, CGRectMake(0, 0, cWidth, cHeight),
                           mosaicLayer.image.CGImage);

        CGImageRef mergeResult = CGBitmapContextCreateImage(context);
        saver = [[UIImage alloc] initWithCGImage:mergeResult];
        CGContextRelease(context);
        CGImageRelease(mergeResult);

        UIImageWriteToSavedPhotosAlbum(saver, self,
            @selector(image:didFinishSavingWithError:contextInfo:), nil);
    }
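
For completeness, UIImageWriteToSavedPhotosAlbum reports its result through the selector passed above. A minimal sketch of that callback, assuming failures only need to be logged:

    // Called once the save to the photo album finishes.
    - (void)image:(UIImage *)image
        didFinishSavingWithError:(NSError *)error
                     contextInfo:(void *)contextInfo {
        if (error) {
            NSLog(@"Saving photo failed: %@", error);
        }
    }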
