High-level summary: I would like to replace the rather low-resolution ALAssetsGroup poster images ([group posterImage]) with higher-quality versions so that they can be shown larger on screen. Normally I would load them as needed by the interface, but [ALAssetsGroup enumerateAssetsAtIndexes:options:usingBlock:] is very slow. (I COULD preload some amount wider than the visible area, and may still do that, but it seemed like more hassle than it was worth and it still suffers from the slow response, especially on iOS 5.)
What I figured I could do was request the first asset in each group, scale it down, and store the result. However, even accounting for the larger size of the images, I am surprised by the memory allocations taking place. In VM Tracker I see a LARGE number of CGImage allocations in addition to the 'mapped file' thumbnails I am creating. I am using ARC, so I expected the original large images to be released, but my VM Tracker results don't bear that out.
If I use the default posterImage implementation, my Resident memory is ~30 MB, Dirty memory ~80 MB, and Virtual tops out around 240 MB (large in themselves). 'Live' stays under 10 MB per the Allocations instrument.
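For reference, those baseline numbers come from simply using the stock poster image, along these lines (a minimal sketch; the surrounding table-view plumbing is omitted):

    // Baseline: the stock, low-resolution poster image for an ALAssetsGroup.
    // 'group' is assumed to be an ALAssetsGroup obtained from ALAssetsLibrary.
    UIImage *posterImage = [UIImage imageWithCGImage:[group posterImage]];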
If I use the following code instead, I crash while loading roughly the 80th image out of 150. At that point my Resident memory is over 480 MB, Dirty over 420 MB, and Virtual a whopping 750 MB. Clearly this is untenable.
Here is the code I run inside an NSOperationQueue to grab the first image of each group for use as a high-resolution poster image (how it is queued is sketched after the block):
NSIndexSet *indexSet = [NSIndexSet indexSetWithIndex:0];
// Note: the block's index parameter is named 'index' so it does not shadow the index set above.
ALAssetsGroupEnumerationResultsBlock assetsEnumerationBlock = ^(ALAsset *result, NSUInteger index, BOOL *stop) {
    if (result) {
        // pull the full resolution image and then scale it to fit our desired area
        ALAssetRepresentation *assetRepresentation = [result defaultRepresentation];
        CGImageRef ref = [assetRepresentation fullScreenImage];
        CGFloat imgWidth = CGImageGetWidth(ref);
        CGFloat imgHeight = CGImageGetHeight(ref);
        CGFloat minDimension = MIN(imgWidth, imgHeight);

        // grab a square subset of the image, centered, to use
        CGRect subRect = CGRectMake(0, 0, minDimension, minDimension);
        subRect.origin = CGPointMake(imgWidth / 2 - minDimension / 2,
                                     imgHeight / 2 - minDimension / 2);
        CGImageRef squareRef = CGImageCreateWithImageInRect(ref, subRect);

        // now scale it down to fit; 'dimension' (the target size in points) is
        // captured from the enclosing scope
        CGFloat heightScale = dimension / minDimension;
        UIImage *coverImage = [UIImage imageWithCGImage:squareRef
                                                  scale:1 / heightScale
                                            orientation:UIImageOrientationUp];
        if (coverImage) {
            // 'photoIndex' (also captured) identifies which group this poster is for
            [mainViewController performSelectorOnMainThread:@selector(imageDidLoad:)
                                                 withObject:[NSArray arrayWithObjects:coverImage, [NSNumber numberWithInt:photoIndex], nil]
                                              waitUntilDone:NO];
        }
        CGImageRelease(squareRef);
        // DO NOT release 'ref': -fullScreenImage returns a CGImageRef owned by the
        // representation, so releasing it here over-releases and crashes later
        //CGImageRelease(ref);
        *stop = YES;
    }
    else {
        // default image grab....
    }
};
[group enumerateAssetsAtIndexes:indexSet options:NSEnumerationConcurrent usingBlock:assetsEnumerationBlock];
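For completeness, the enumeration above is dispatched roughly like this (a sketch of my local setup only; 'posterQueue' and the concurrency limit are my own choices, not anything prescribed by the API):

    // Sketch of how each group's poster fetch is queued. 'posterQueue' is a
    // hypothetical NSOperationQueue owned by my view controller.
    NSOperationQueue *posterQueue = [[NSOperationQueue alloc] init];
    posterQueue.maxConcurrentOperationCount = 2; // limit how many full-screen images are in flight
    [posterQueue addOperationWithBlock:^{
        [group enumerateAssetsAtIndexes:indexSet
                                options:NSEnumerationConcurrent
                             usingBlock:assetsEnumerationBlock];
    }];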
Am I doing something wrong with the above, or am I just not being smart by loading all of the images up front? The more I think about it, the more I think loading a window of visible images plus a buffer around it is the way to go (roughly sketched below), but I would also like to understand what I may have done wrong in the code above.
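For concreteness, the windowed approach I have in mind would look something like this (purely hypothetical: kPosterBuffer, self.groups, self.coverImages, and enqueuePosterLoadForGroupAtIndex: are placeholder names, not existing code):

    static const NSInteger kPosterBuffer = 5; // hypothetical: extra groups to preload on each side

    - (void)loadPostersAroundVisibleRange:(NSRange)visibleRange {
        // Clamp the window to the valid group indexes. 'self.groups' is a
        // hypothetical NSArray of ALAssetsGroup objects.
        NSInteger start = MAX((NSInteger)visibleRange.location - kPosterBuffer, 0);
        NSInteger end = MIN((NSInteger)NSMaxRange(visibleRange) + kPosterBuffer,
                            (NSInteger)[self.groups count]);
        for (NSInteger photoIndex = start; photoIndex < end; photoIndex++) {
            // 'self.coverImages' is a hypothetical NSMutableDictionary cache keyed by index.
            if ([self.coverImages objectForKey:[NSNumber numberWithInteger:photoIndex]] == nil) {
                // hypothetical helper that queues the enumeration shown above for one group
                [self enqueuePosterLoadForGroupAtIndex:photoIndex];
            }
        }
    }

Thanks!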