8

I've got an NSImageView that takes up the full extent of a window. The image view has no border, and it's set to align its image to the lower left. This means the origin of the view matches the origin of the actual image, no matter how the window is resized.

Also, the image is much larger than I can reasonably fit at full scale on the screen, so I have the image view set to proportionally scale the image down. However, I can't seem to find this scale factor anywhere.

My ultimate goal is to map a mouse-down event into actual image coordinates. To do this, I think I need one more piece of information: how big the displayed NSImage actually is.

If I look at the [imageView bounds], I get the bounding rectangle of the image view, which generally will be larger than the image.
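
In other words, if I knew the rect the image view actually draws the image into, the rest would be straightforward. Roughly what I'm after, as a sketch inside a mouseDown: handler (displayedRect is the value I can't find an API for):

// Sketch only: displayedRect is the rect the NSImageView actually drew the
// image into -- the piece of information I'm missing. Since the scaling is
// proportional, a single scale factor covers both axes.
NSPoint viewPoint = [imageView convertPoint:[event locationInWindow] fromView:nil];
CGFloat scale = imageView.image.size.width / displayedRect.size.width;
NSPoint imagePoint = NSMakePoint((viewPoint.x - displayedRect.origin.x) * scale,
                                 (viewPoint.y - displayedRect.origin.y) * scale);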

wrjohns
  • I've tried that. The frames and bounds and so on are all relative to the NSView/NSWindow type objects. So I could get the bounding frame of the view or window itself...the problem is that the NSImageView doesn't draw on all of its bounds. Part of the view is blank. What I need is either the amount by which the NSImageView scaled down its image, or the bounding rectangle of where the NSImageView actually drew (as opposed to what it owns) – wrjohns Jul 29 '12 at 19:42
  • For now, I'm going with a workaround: disallowing window resizing and removing all the other UI elements from the window (like the title bar), so that I know the superview bounds = the NSImageView bounds = the size of the actual image. – wrjohns Jul 29 '12 at 19:44
  • NSImages don't have frames. They **do** have sizes, but the size is of the full-scale image, not the size they are actually drawn at. – wrjohns Jul 29 '12 at 23:09

4 Answers

4

I think that this gives you what you need:

NSRect imageRect = [imageView.cell drawingRectForBounds: imageView.bounds];

which returns the origin offset of the image within the view, and its size.

And for your end goal of remapping the mouse coordinates, something like this in your custom view class should work:

- (void)mouseUp:(NSEvent *)event
{
    NSPoint eventLocation = [event locationInWindow];    
    NSPoint location = [self convertPoint: eventLocation fromView: nil];

    NSRect drawingRect = [self.cell drawingRectForBounds:self.bounds];

    location.x -= drawingRect.origin.x;
    location.y -= drawingRect.origin.y;

    NSSize frameSize = drawingRect.size;
    float frameAspect = frameSize.width/frameSize.height;

    NSSize imageSize = self.image.size;
    float imageAspect = imageSize.width/imageSize.height;

    float scaleFactor = 1.0f;

    if(imageAspect > frameAspect) {

        // in this case the scaled image width fills the frame width
        scaleFactor = imageSize.width / frameSize.width;

        float imageHeightinFrame = imageSize.height / scaleFactor;

        float imageOffsetInFrame = (frameSize.height - imageHeightinFrame)/2;

        location.y -= imageOffsetInFrame;

    } else {
        // in this case the scaled image height fills the frame height
        scaleFactor = imageSize.height / frameSize.height;

        float imageWidthinFrame = imageSize.width / scaleFactor;

        float imageOffsetInFrame = (frameSize.width - imageWidthinFrame)/2;

        location.x -= imageOffsetInFrame;
    }

    location.x *= scaleFactor;
    location.y *= scaleFactor;

    // do something with your newly calculated mouse location
}
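
Note that with a non-flipped view (the NSView default) the resulting coordinates have their origin at the image's lower left, which matches NSImage's own coordinate space; if you need top-left-origin pixel coordinates, flip with imageSize.height - location.y after the final scaling.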
combinatorial
  • Sorry...the first line ended up not working. I NSLog'ed the computed rect, and as I resized the window such that the aspect ratio changed (but the image cell is set to draw proportionally), the bounds of the image cell matched the bounds of the window, not the displayed image. I'm beginning to think there is no API call for this. What I could do is compute the aspect ratio of the original, unscaled image and use that to compute the image extent in the resized window – wrjohns Aug 06 '12 at 23:10
  • Given your description of how you have things set up, that might be correct. The rest of the code above then calculates the offset into the image itself. Did you try the remainder of the code? I am doing a very similar thing to you, except my view and window bounds do not match as I have a border. You may not need the final *= scaleFactor, as that is there because I want the coordinates in image space (i.e. relative to the actual size of the image). – combinatorial Aug 07 '12 at 02:30
  • I did try the remainder of the code. But with the drawingRect being off, the final calculated coordinate was also off. I've decided on a two-pronged approach to what I want: implement a few NSWindowDelegate methods to force any manual resizing to preserve the aspect ratio, and then in my mouseUp: method I can assume that image size = window size – wrjohns Aug 07 '12 at 03:14
  • This doesn't work because you're making the assumption that there are no margins or other decorations, and that the NSImageView is doing the exact same thing. You don't know exactly where the image is drawn, nor the scale factor used. This almost works, but if you look closely (I modified the function so that it returns the supposed location of the drawn image), there is a slight difference between the actual drawn image frame and the one computed with this algorithm. – alecail Oct 10 '13 at 08:28
  • Still a good approach to calculate the difference between on-screen rects and in-image rects to apply scale and translation transformations. Works just as well (that is, not 100% exact :)) with `self.bounds` instead of the cell's method. – ctietze May 17 '17 at 12:17
2

Since I haven't found any solution to get the real image frame inside the NSImageView yet, I did the calculation manually, respecting all of its properties (scaling, alignment and border). This might not be the most efficient code, and there may be minor deviations of 0.5-1 pixels from the real image, but it comes pretty close to the original (I know this question is quite old, but the solution might help others):

@implementation NSImageView (ImageFrame)

// -------------------------------------------------------------------------
// -imageFrame
// -------------------------------------------------------------------------
- (NSRect)imageFrame
{
    // Find the content frame of the image without any borders first
    NSRect contentFrame = self.bounds;
    NSSize imageSize = self.image.size;
    NSImageFrameStyle imageFrameStyle = self.imageFrameStyle;

    if (imageFrameStyle == NSImageFrameButton ||
        imageFrameStyle == NSImageFrameGroove)
    {
        contentFrame = NSInsetRect(self.bounds, 2, 2);
    }

    else if (imageFrameStyle == NSImageFramePhoto)
    {
        contentFrame = NSMakeRect(contentFrame.origin.x + 1,
                                  contentFrame.origin.y + 2,
                                  contentFrame.size.width - 3,
                                  contentFrame.size.height - 3);
    }

    else if (imageFrameStyle == NSImageFrameGrayBezel)
    {
        contentFrame = NSInsetRect(self.bounds, 8, 8);
    }


    // Now find the right image size for the current imageScaling
    NSImageScaling imageScaling = self.imageScaling;
    NSSize drawingSize = imageSize;

    // Proportionally scaling
    if (imageScaling == NSImageScaleProportionallyDown ||
        imageScaling == NSImageScaleProportionallyUpOrDown)
    {
        NSSize targetScaleSize = contentFrame.size;
        if (imageScaling == NSImageScaleProportionallyDown)
        {
            if (targetScaleSize.width > imageSize.width) targetScaleSize.width = imageSize.width;
            if (targetScaleSize.height > imageSize.height) targetScaleSize.height = imageSize.height;
        }

        NSSize scaledSize = [self sizeByScalingProportionallyToSize:targetScaleSize fromSize:imageSize];
        drawingSize = NSMakeSize(scaledSize.width, scaledSize.height);
    }

    // Axes independent scaling
    else if (imageScaling == NSImageScaleAxesIndependently)
        drawingSize = contentFrame.size;


    // Now get the image position inside the content frame (center is default) from the current imageAlignment
    NSImageAlignment imageAlignment = self.imageAlignment;
    NSPoint drawingPosition = NSMakePoint(contentFrame.origin.x + contentFrame.size.width / 2.0 - drawingSize.width / 2.0,
                                          contentFrame.origin.y + contentFrame.size.height / 2.0 - drawingSize.height / 2.0);

    // NSImageAlignTop / NSImageAlignTopLeft / NSImageAlignTopRight
    if (imageAlignment == NSImageAlignTop ||
        imageAlignment == NSImageAlignTopLeft ||
        imageAlignment == NSImageAlignTopRight)
    {
        drawingPosition.y = contentFrame.origin.y+contentFrame.size.height - drawingSize.height;

        if (imageAlignment == NSImageAlignTopLeft)
            drawingPosition.x = contentFrame.origin.x;
        else if (imageAlignment == NSImageAlignTopRight)
            drawingPosition.x = contentFrame.origin.x + contentFrame.size.width - drawingSize.width;
    }

    // NSImageAlignBottom / NSImageAlignBottomLeft / NSImageAlignBottomRight
    else if (imageAlignment == NSImageAlignBottom ||
             imageAlignment == NSImageAlignBottomLeft ||
             imageAlignment == NSImageAlignBottomRight)
    {
        drawingPosition.y = contentFrame.origin.y;

        if (imageAlignment == NSImageAlignBottomLeft)
            drawingPosition.x = contentFrame.origin.x;
        else if (imageAlignment == NSImageAlignBottomRight)
            drawingPosition.x = contentFrame.origin.x + contentFrame.size.width - drawingSize.width;
    }

    // NSImageAlignLeft / NSImageAlignRight
    else if (imageAlignment == NSImageAlignLeft)
        drawingPosition.x = contentFrame.origin.x;

    // NSImageAlignRight
    else if (imageAlignment == NSImageAlignRight)
        drawingPosition.x = contentFrame.origin.x + contentFrame.size.width - drawingSize.width;


    return NSMakeRect(round(drawingPosition.x),
                      round(drawingPosition.y),
                      ceil(drawingSize.width),
                      ceil(drawingSize.height));
}

// -------------------------------------------------------------------------
// -sizeByScalingProportionallyToSize:fromSize:
// -------------------------------------------------------------------------
- (NSSize)sizeByScalingProportionallyToSize:(NSSize)newSize fromSize:(NSSize)oldSize
{
    CGFloat widthHeightDivision = oldSize.width / oldSize.height;
    CGFloat heightWidthDivision = oldSize.height / oldSize.width;

    NSSize scaledSize = NSZeroSize;
    if (oldSize.width > oldSize.height)
    {
        if ((widthHeightDivision * newSize.height) >= newSize.width)
        {
            scaledSize = NSMakeSize(newSize.width, heightWidthDivision * newSize.width);
        }  else {
            scaledSize = NSMakeSize(widthHeightDivision * newSize.height, newSize.height);
        }

    } else {

        if ((heightWidthDivision * newSize.width) >= newSize.height)
        {
            scaledSize = NSMakeSize(widthHeightDivision * newSize.height, newSize.height);
        } else {
            scaledSize = NSMakeSize(newSize.width, heightWidthDivision * newSize.width);
        }
    }

    return scaledSize;
}

@end
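
A sketch of how this category might be used to map a click into image coordinates (assuming a hypothetical NSImageView subclass that handles its own mouse events; the scaling math mirrors the answer above):

// Hypothetical NSImageView subclass using the -imageFrame category above.
- (void)mouseDown:(NSEvent *)event
{
    NSPoint viewPoint = [self convertPoint:event.locationInWindow fromView:nil];
    NSRect imageFrame = [self imageFrame];

    // Ignore clicks on the empty part of the view.
    if (!NSPointInRect(viewPoint, imageFrame))
        return;

    // Scale from the drawn size back up to the full image size.
    CGFloat scaleX = self.image.size.width  / imageFrame.size.width;
    CGFloat scaleY = self.image.size.height / imageFrame.size.height;

    NSPoint imagePoint = NSMakePoint((viewPoint.x - imageFrame.origin.x) * scaleX,
                                     (viewPoint.y - imageFrame.origin.y) * scaleY);
    NSLog(@"Clicked image point: %@", NSStringFromPoint(imagePoint));
}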
Toby
0

As I indicated in a comment above, here's the approach I took:

// The view that mouseUp: is part of doesn't draw anything. I'm layering it
// in the window hierarchy to intercept mouse events. I suppose I could have
// subclassed NSImageView instead, but I went this route. isDragging is
// an ivar...it's cleared in mouseDown: and set in mouseDragged:.
// This view has no idea what the original unscaled image size is, so
// rescaling is done by the caller.
- (void)mouseUp:(NSEvent *)theEvent
{

    if (!isDragging)
    {
        NSPoint rawPoint = [theEvent locationInWindow];
        NSImageView *view = self.subviews.lastObject;

        NSPoint point = [self convertPoint:rawPoint fromView:nil]; // locationInWindow is in window coordinates
        point.x /= view.bounds.size.width;
        point.y /= view.bounds.size.height;

        [owner mouseClick:point];

    }
}

And in my NSWindowController, which is my window delegate for the mouse view, I have:

static int resizeMode=-1;

- (void)windowDidEndLiveResize:(NSNotification *)notification
{
    if ([notification object]==frameWindow)
        self.resizeFrameSelection=0;
    resizeMode = -1;
}

- (NSSize)windowWillResize:(NSWindow *)sender toSize:(NSSize)frameSize
{

    if (sender==frameWindow)
    {
        float imageAspectRatio = (float)movie.movieSize.width / (float)movie.movieSize.height;
        float newH = frameSize.height;
        float newW = frameSize.width;
        float currH = sender.frame.size.height;
        float currW = sender.frame.size.width;
        float deltaH = fabsf(newH - currH);
        float deltaW = fabsf(newW - currW);

        // lock onto one dimension to key off of, per drag.
        if (resizeMode == 1 || (resizeMode == -1 && deltaW < deltaH))
        {
            // adjust width to match aspect ratio
            frameSize.width = frameSize.height * imageAspectRatio;
            resizeMode=1;
        }
        else
        {
            // adjust height to match aspect ratio
            frameSize.height = frameSize.width / imageAspectRatio;
            resizeMode=2;
        }
    }

    return frameSize;
}
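
Note that windowWillResize:toSize: is only sent for user-driven resizes, not programmatic ones, which is fine for my purposes. Setting the window's contentAspectRatio would probably be a simpler way to get the same constraint, at the cost of some control over which dimension gets adjusted.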
wrjohns
0

Swift 5 implementation of @Toby's solution

extension NSImageView {

    /** Returns an `NSRect` of the drawn image in the view. */
    func imageRect() -> NSRect {
        // Find the content frame of the image without any borders first
        var contentFrame = self.bounds
        guard let imageSize = image?.size else { return .zero }
        let imageFrameStyle = self.imageFrameStyle

        if imageFrameStyle == .button || imageFrameStyle == .groove {
            contentFrame = NSInsetRect(self.bounds, 2, 2)
        } else if imageFrameStyle == .photo {
            contentFrame = NSRect(x: contentFrame.origin.x + 1,
                                  y: contentFrame.origin.y + 2,
                                  width: contentFrame.size.width - 3,
                                  height: contentFrame.size.height - 3)
        } else if imageFrameStyle == .grayBezel {
            contentFrame = NSInsetRect(self.bounds, 8, 8)
        }


        // Now find the right image size for the current imageScaling
        let imageScaling = self.imageScaling
        var drawingSize = imageSize

        // Proportionally scaling
        if imageScaling == .scaleProportionallyDown || imageScaling == .scaleProportionallyUpOrDown {
            var targetScaleSize = contentFrame.size
            if imageScaling == .scaleProportionallyDown {
                if targetScaleSize.width > imageSize.width { targetScaleSize.width = imageSize.width }
                if targetScaleSize.height > imageSize.height { targetScaleSize.height = imageSize.height }
            }

            let scaledSize = self.sizeByScalingProportionally(toSize: targetScaleSize, fromSize: imageSize)
            drawingSize = NSSize(width: scaledSize.width, height: scaledSize.height)
        }

        // Axes independent scaling
        else if imageScaling == .scaleAxesIndependently {
            drawingSize = contentFrame.size
        }


        // Now get the image position inside the content frame (center is default) from the current imageAlignment
        let imageAlignment = self.imageAlignment
        var drawingPosition = NSPoint(x: contentFrame.origin.x + contentFrame.size.width / 2 - drawingSize.width / 2,
                                      y: contentFrame.origin.y + contentFrame.size.height / 2 - drawingSize.height / 2)

        // Top Alignments
        if imageAlignment == .alignTop || imageAlignment == .alignTopLeft || imageAlignment == .alignTopRight {
            drawingPosition.y = contentFrame.origin.y + contentFrame.size.height - drawingSize.height

            if imageAlignment == .alignTopLeft {
                drawingPosition.x = contentFrame.origin.x
            } else if imageAlignment == .alignTopRight {
                drawingPosition.x = contentFrame.origin.x + contentFrame.size.width - drawingSize.width
            }
        }

        // Bottom Alignments
        else if imageAlignment == .alignBottom || imageAlignment == .alignBottomLeft || imageAlignment == .alignBottomRight {
            drawingPosition.y = contentFrame.origin.y

            if imageAlignment == .alignBottomLeft {
                drawingPosition.x = contentFrame.origin.x
            } else if imageAlignment == .alignBottomRight {
                drawingPosition.x = contentFrame.origin.x + contentFrame.size.width - drawingSize.width
            }
        }

        // Left Alignment
        else if imageAlignment == .alignLeft {
            drawingPosition.x = contentFrame.origin.x
        }

        // Right Alignment
        else if imageAlignment == .alignRight {
            drawingPosition.x = contentFrame.origin.x + contentFrame.size.width - drawingSize.width
        }

        return NSRect(x: round(drawingPosition.x), y: round(drawingPosition.y), width: ceil(drawingSize.width), height: ceil(drawingSize.height))
    }


    func sizeByScalingProportionally(toSize newSize: NSSize, fromSize oldSize: NSSize) -> NSSize {
        let widthHeightDivision = oldSize.width / oldSize.height
        let heightWidthDivision = oldSize.height / oldSize.width

        var scaledSize = NSSize.zero

        if oldSize.width > oldSize.height {
            if (widthHeightDivision * newSize.height) >= newSize.width {
                scaledSize = NSSize(width: newSize.width, height: heightWidthDivision * newSize.width)
            } else {
                scaledSize = NSSize(width: widthHeightDivision * newSize.height, height: newSize.height)
            }
        } else {
            if (heightWidthDivision * newSize.width) >= newSize.height {
                scaledSize = NSSize(width: widthHeightDivision * newSize.height, height: newSize.height)
            } else {
                scaledSize = NSSize(width: newSize.width, height: heightWidthDivision * newSize.width)
            }
        }

        return scaledSize
    }
}
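
Usage mirrors the Objective-C category above: convert the event location into the view's coordinate space, test it against imageRect(), and scale the offset from imageRect().origin by image.size over imageRect().size to get image coordinates.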
Codey