
I have been trying to use UIPanGestureRecognizer to move an SCNNode attached to an ARSCNView in iOS ARKit. It moves fine in the x dimension, but I cannot figure out how to move it in the y direction. I have followed two different approaches, trying each without success:

  1. How do I find my mouse point in a scene using SceneKit?

  2. https://github.com/rajubd49/ARKit-Sample-ObjC

Any insight into why the y dimension does not behave the same way as the x would be greatly appreciated. Could this have something to do with the hit test constraining motion to the XZ plane? Here is the code, following the GitHub sample cited in #2 above:

==============

//Move SCNNode
- (void)handlePanGesture:(UIPanGestureRecognizer *)pagr
{

switch (pagr.state)
{
    case UIGestureRecognizerStateBegan:
    {
        CGPoint tapPoint    =   [pagr locationInView:sceneView];
        NSLog(@"%s tapPoint  %@",__FUNCTION__,NSStringFromCGPoint(tapPoint));

        NSArray *hitResults = [sceneView hitTest:tapPoint types:ARHitTestResultTypeFeaturePoint | ARHitTestResultTypeEstimatedHorizontalPlane];
        lastHitTestResult   =   [hitResults firstObject];
    }
        break;
    case UIGestureRecognizerStateChanged:
    {
        CGPoint tapPoint            =   [pagr locationInView:sceneView];
        if (windScreenView.buttonState == GypsyTargetAdded)
        {
            NSArray *hitResults         =   [sceneView hitTest:tapPoint types:ARHitTestResultTypeFeaturePoint | ARHitTestResultTypeEstimatedHorizontalPlane];
            ARHitTestResult *result     =   [hitResults lastObject];

            [SCNTransaction begin];

            SCNMatrix4 lastMatrix   =   SCNMatrix4FromMat4(lastHitTestResult.worldTransform);
            SCNVector3 lastVector   =   SCNVector3Make(lastMatrix.m41, lastMatrix.m42, lastMatrix.m43);

            SCNMatrix4 newMatrix    =   SCNMatrix4FromMat4(result.worldTransform);
            SCNVector3 newVector        =   SCNVector3Make(newMatrix.m41, newMatrix.m42, newMatrix.m43);
            CGFloat dx              =   newVector.x-lastVector.x;
            CGFloat dy              =   newVector.y-lastVector.y;
            SCNVector3 adjVector        =   SCNVector3Make(gypsyTargetNode.position.x + dx, gypsyTargetNode.position.y + dy, gypsyTargetNode.position.z);
            gypsyTargetNode.position    =   adjVector;

            [SCNTransaction commit];

            NSLog(@"%s lastVector: x = %f, y = %f, z = %f",__FUNCTION__,lastVector.x,lastVector.y,lastVector.z);
            NSLog(@"%s newVector: x = %f, y = %f, z = %f",__FUNCTION__,newVector.x,newVector.y,newVector.z);
            NSLog(@"%s dx = %f, dy = %f",__FUNCTION__,dx,dy);
            NSLog(@"%s gypsyTargetNode.position: x = %f, y = %f, z = %f",__FUNCTION__,gypsyTargetNode.position.x,gypsyTargetNode.position.y,gypsyTargetNode.position.z);
            NSLog(@"%s hitResults.count: %li",__FUNCTION__,hitResults.count);
            lastHitTestResult       =   result;
        }
    }
        break;
    case UIGestureRecognizerStateEnded:
    {
        lastHitTestResult   =   nil;
    }
        break;
    default:
        break;
}

}

and here are snippets of the console output. Note that after the first pan event, the calculated "dy" (marked with asterisks below) is always 0.000000, while "dx" continues to change.

[CalibrationController handlePanGesture:] tapPoint  {207.33332824707031, 507}

[CalibrationController handlePanGesture:] lastVector: x = -0.016018, y = -0.083625, z = 0.006696

[CalibrationController handlePanGesture:] newVector: x = -0.015562, y = -0.083862, z = 0.006658

[CalibrationController handlePanGesture:] dx = 0.000456, dy = **-0.000237**

[CalibrationController handlePanGesture:] gypsyTargetNode.position: x = -0.018142, y = -0.087894, z = 0.008041

[CalibrationController handlePanGesture:] hitResults.count: 2

[CalibrationController handlePanGesture:] lastVector: x = -0.015562, y = -0.083862, z = 0.006658

[CalibrationController handlePanGesture:] newVector: x = -0.015562, y = -0.083862, z = 0.006658

[CalibrationController handlePanGesture:] dx = 0.000000, dy = **0.000000**

[CalibrationController handlePanGesture:] gypsyTargetNode.position: x = -0.018142, y = -0.087894, z = 0.008041

[CalibrationController handlePanGesture:] hitResults.count: 2

[CalibrationController handlePanGesture:] lastVector: x = -0.015562, y = -0.083862, z = 0.006658

[CalibrationController handlePanGesture:] newVector: x = -0.015150, y = -0.083862, z = 0.006565

[CalibrationController handlePanGesture:] dx = 0.000412, dy = **0.000000**

[CalibrationController handlePanGesture:] gypsyTargetNode.position: x = -0.017730, y = -0.087894, z = 0.008041

[CalibrationController handlePanGesture:] hitResults.count: 2

[CalibrationController handlePanGesture:] lastVector: x = -0.015150, y = -0.083862, z = 0.006565

[CalibrationController handlePanGesture:] newVector: x = -0.014686, y = -0.083862, z = 0.006361

[CalibrationController handlePanGesture:] dx = 0.000463, dy = **0.000000**

1 Answer


For the panning gesture, implement the following approach:

let myScene = SCNScene(named: "scene.scn")!    // scene containing the model
let modelNode: SCNNode = myScene.rootNode      // excluded from hit tests below
var selectedNode: SCNNode? = nil               // node currently being dragged


override func viewDidLoad() {
    super.viewDidLoad()

    let moveGesture = UIPanGestureRecognizer(target: self, 
                                             action: #selector(moveModel))

    self.sceneView.addGestureRecognizer(moveGesture)
}
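One caveat: the .existingPlane hit test used in the action method further below only returns results once ARKit has detected planes. The answer doesn't show the session setup, so here is a minimal sketch of one, assuming a standard world-tracking session:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Plane detection must be enabled, or .existingPlane hit tests
    // will never return a result.
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical]
    self.sceneView.session.run(configuration)
}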

Add a helper that hit-tests the scene and returns the first node under the touch, excluding modelNode:

private func myMethod(at position: CGPoint) -> SCNNode? {

    // SceneKit hit test: return the first hit whose node is not the
    // scene's root, i.e. the piece of geometry under the touch.
    return self.sceneView.hitTest(position, options: nil)
        .first(where: { $0.node !== modelNode })?
        .node
}

Then implement the gesture recognizer's @objc action method:

@objc func moveModel(_ gesture: UIPanGestureRecognizer) {

    let location = gesture.location(in: self.sceneView)

    switch gesture.state {
        case .began:
            // Remember which node was touched when the pan begins.
            selectedNode = myMethod(at: location)
        case .changed:
            // Hit-test against planes ARKit has already detected; if the
            // finger is not over a detected plane, do nothing.
            guard let result = self.sceneView.hitTest(location, 
                                               types: .existingPlane).first 
            else { 
                return 
            }
            // Move the node to the intersection point on the plane.
            let transform = result.worldTransform
            let newPosition = SIMD3<Float>(transform.columns.3.x, 
                                           transform.columns.3.y, 
                                           transform.columns.3.z)
            selectedNode?.simdPosition = newPosition
        default:
            selectedNode = nil
    }
}
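Note that this keeps the node glued to detected planes, which is also why the original dy read as zero: a hit test against a horizontal plane always returns the plane's own y. If the node should move vertically as well, one alternative (a sketch, not part of the answer above; dragInViewPlane is an illustrative name, and it assumes the node is a direct child of the root node so that its position is in world coordinates) is to drag it in the camera's view plane using projectPoint/unprojectPoint:

@objc func dragInViewPlane(_ gesture: UIPanGestureRecognizer) {

    let location = gesture.location(in: self.sceneView)

    switch gesture.state {
        case .began:
            selectedNode = myMethod(at: location)
        case .changed:
            guard let node = selectedNode else { return }
            // Find the node's current screen-space depth...
            let projected = self.sceneView.projectPoint(node.position)
            // ...then unproject the touch at that same depth, so x and y
            // follow the finger while distance from the camera is unchanged.
            node.position = self.sceneView.unprojectPoint(
                SCNVector3(Float(location.x), Float(location.y), projected.z))
        default:
            selectedNode = nil
    }
}

(On newer systems the hitTest(_:types:) API shown above is deprecated in favor of raycast queries, but the geometry of the problem is the same.)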
  • demarcate, this helped solve the core problem. I am working to see if it is possible to use the pan gesture to move the node without "touching" it directly, since the finger hides the node during the pan; I am hoping I can move the node while touching elsewhere. I also have a reliability issue: occasionally during a pan, the node "jumps off" the surface to which it was attached and appears suspended above the real object. I will post any progress on either of these issues. Do you think that associating the node with an ARAnchor would help the second problem? – Eric Jun 15 '20 at 07:35
  • https://stackoverflow.com/users/13699247/demarcate, I experimented with implementing this using an ARAnchor with an SCNNode, thinking that the solution would be more stable, but I found that it made little difference. If I point the camera away from ARTargetHitPoint to other parts of the environment, then return to pointing where the virtual object used to be, the object sometimes becomes "unattached" and re-attaches in space somewhere between ARTargetHitPoint and the physical camera. So I will not pursue that anymore, as it adds unnecessary complexity. – Eric Jun 16 '20 at 05:56
  • Unstable anchors that tether models are due to poor tracking (an insufficient number of feature points, awful lighting conditions, a room too small to accommodate detected planes, no robust 6DoF tracking, etc.). That's why you get "floating" models... – Andy Fedoroff Jun 16 '20 at 06:05
  • demarcate, yes, I understand these issues. I cannot rely on plane detection, as the flat surface I am measuring is no larger than 25 cm on a side. When I turn on debugOptions with showFeaturePoints, it is very helpful. Is there a delegate or something that fires when one or more feature points are detected? I would not want to leave the debug option on when the app is released for general use. In your experience, does implementing an ARAnchor result in more reliable feature tracking (vs. using SCNNodes without an ARAnchor)? – Eric Jun 16 '20 at 12:41
  • Yes, an ARAnchor results in much more reliable tracking vs. placing geometry without ARAnchors. But feature points live their own life; they don't depend on anchors. https://stackoverflow.com/questions/52893075/what-is-aranchor-exactly/52899502#52899502 – Andy Fedoroff Jun 16 '20 at 16:22
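To make that last point concrete, here is a minimal sketch of anchor-based placement, assuming self.sceneView is an ARSCNView whose delegate is set to self; placeAnchor and the sphere geometry are illustrative, not from the thread:

// Register an anchor at a hit location; ARKit keeps refining its
// world transform as tracking improves.
func placeAnchor(at result: ARHitTestResult) {
    let anchor = ARAnchor(transform: result.worldTransform)
    self.sceneView.session.add(anchor: anchor)
}

// ARSCNViewDelegate: attach geometry to the node ARKit creates for the anchor.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    let sphere = SCNNode(geometry: SCNSphere(radius: 0.01))
    node.addChildNode(sphere)   // the node now follows the anchor's updates
}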