
Problem situation: creating AR visualizations always at the same place (on a table) in a convenient way. We don't want the customer to place the objects themselves, as in countless ARCore/ARKit examples.

I'm wondering if there is a way to implement those steps:

  1. Detect a marker on the table
  2. Use the position of the marker as the initial position of the AR visualization and continue with SLAM tracking

I know there is something like a marker-detection API included in the latest build of the Tango SDK. But this technology is limited to a small number of devices (two, to be exact...).

Best regards, and thanks in advance for any ideas.

user2463728
  • You can use ArUco for marker detection for free. You don't need marker tracking, since you only want to use it during the initialization phase. – ibrahim Dec 14 '17 at 15:16
  • One thing you can do is use a three-point marker. Tap on the points and calculate the position and orientation at which to place the model. It works if the marker is placed on a horizontal plane. – Alok Subedi Dec 15 '17 at 10:19
  • The question is how to transfer the orientation/position of the detected marker into the workflow of placing objects in ARKit/ARCore. I think those solutions expect an event like "detect a tap on a detected plane and attach the object there". In my case it would be "take my marker position and apply it to a detected plane". But this step does not seem to be trivial in these frameworks (?) – user2463728 Dec 15 '17 at 11:43

4 Answers


I am also interested in this topic. I think the true power of AR can only be unleashed when it is paired with environment understanding.

I think you have two options:

  1. Wait for the new Vuforia 7 to be released; it is supposedly going to support visual markers on top of ARCore and ARKit.
  2. Use Core ML / computer vision – in theory it is possible, but I haven't seen many examples. I think it might be a bit difficult to start with (e.g. building and calibrating the model); a rough sketch of this approach follows below the list.
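
A rough sketch of option 2, assuming a rectangular printed marker and an ARSCNView called sceneView; the class name MarkerDetector is just a placeholder, and the latest ARFrame is expected to be forwarded from session(_:didUpdate:):

import ARKit
import Vision

final class MarkerDetector {

    // The view that renders the AR scene and performs the hit test.
    weak var sceneView: ARSCNView?

    /// Call with the latest ARFrame, e.g. from session(_:didUpdate:).
    func detectMarker(in frame: ARFrame) {
        let request = VNDetectRectanglesRequest { [weak self] request, _ in
            guard let sceneView = self?.sceneView,
                  let rectangle = (request.results as? [VNRectangleObservation])?.first else { return }

            DispatchQueue.main.async {
                // Vision returns normalised coordinates with the origin at the bottom-left;
                // for brevity this ignores the rotation between the captured image and the
                // view (a real implementation would map through frame.displayTransform).
                let screenPoint = CGPoint(x: rectangle.boundingBox.midX * sceneView.bounds.width,
                                          y: (1 - rectangle.boundingBox.midY) * sceneView.bounds.height)

                if let hit = sceneView.hitTest(screenPoint, types: .featurePoint).first {
                    let column = hit.worldTransform.columns.3
                    let markerPosition = SCNVector3(column.x, column.y, column.z)
                    // Use markerPosition as the initial position of the visualization;
                    // ARKit's world tracking (SLAM) keeps it anchored afterwards.
                    print("Marker found at \(markerPosition)")
                }
            }
        }

        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
        try? handler.perform([request])
    }
}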

However, Apple have got it sorted: https://youtu.be/E2fd8igVQcU?t=2m58s

Saico

If you are using Google Tango, you can implement this with the built-in Area Description File (ADF) system. The system shows a holding screen and you are told to "walk around". Within a few seconds, the device can relocalise to an area it has previously been in (or pull that information from a server, etc.).

Google's VPS (Visual Positioning Service) is a similar idea (still in closed beta) that should come to ARCore. As far as I understand, it will allow you to localise against a specific location using the camera feed and a globally shared map of all scanned locations. I think that, when released, it will try to fill the gap of an AR-cloud-type system, which would solve these problems for regular developers.

See https://developers.google.com/tango/overview/concepts#visual_positioning_service_overview

The general problem of relocalising to a space using only prior knowledge of the space and the camera feed is solved in academia and in other AR offerings (HoloLens etc.); markers/tags aren't required. I'm unsure, however, which other commercial systems provide this feature.

Jethro

This is what I have got so far for ARKit.

// `first`, `second` and `third` are optional SCNVector3 properties on the view
// controller; `yourNode` is the SCNNode to place and `sceneView` is the ARSCNView.
@objc func tap(_ sender: UITapGestureRecognizer){
    let touchLocation = sender.location(in: sceneView)
    let hitTestResult = sceneView.hitTest(touchLocation, types: .featurePoint)

    guard let hitResult = hitTestResult.first else { return }

    // World-space position of the tapped feature point.
    let position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                  hitResult.worldTransform.columns.3.y,
                                  hitResult.worldTransform.columns.3.z)

    if first == nil {
        first = position            // point A of the marker
    } else if second == nil {
        second = position           // point B of the marker
    } else {
        third = position            // point C of the marker

        let x2 = first!.x
        let z2 = -first!.z
        let x1 = second!.x
        let z1 = -second!.z
        let z3 = -third!.z

        // The slope of line AB projected onto the x-z plane gives the yaw angle.
        let m = (z1 - z2) / (x1 - x2)
        var a = atan(m)

        // Correct the angle for the quadrant that point B falls into.
        if x1 < 0 && z1 < 0 {
            a = a + (Float.pi * 2)
        } else if x1 > 0 && z1 < 0 {
            a = a - (Float.pi * 2)
        }

        // Place the node at point A and rotate it around the y-axis by the angle of AB.
        sceneView.scene.rootNode.addChildNode(yourNode)
        let rotate = SCNAction.rotateBy(x: 0, y: CGFloat(a), z: 0, duration: 0.1)
        yourNode.runAction(rotate)
        yourNode.position = first!

        // Use point C to decide whether the node has to be flipped by 180°.
        if z3 - z1 < 0 {
            let flip = SCNAction.rotateBy(x: 0, y: CGFloat.pi, z: 0, duration: 0.1)
            yourNode.runAction(flip)
        }
    }
}

The theory is:
Make three dots A, B, C on the marker such that AB is perpendicular to AC. Tap the dots in the order A-B-C.
The angle of AB relative to the x-axis of the ARSCNView gives the required rotation for the node.
Any one of the points can be referenced to calculate the position at which to place the node.
From C, determine whether the node needs to be flipped.

I am still working on some edge cases that need to be handled.

Alok Subedi

At the moment, both ARKit 3.0 and ARCore 1.12 have all the necessary API tools to fulfil almost any marker-based task for precise positioning of a 3D model.

ARKit

Right out of the box, ARKit can detect 3D objects and place ARObjectAnchors in a scene, as well as detect images and use ARImageAnchors for accurate positioning. The main ARWorldTrackingConfiguration class includes both instance properties – detectionImages and detectionObjects. It's worth adding that ARKit builds on indispensable features from several underlying frameworks.
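
For instance, a minimal sketch of the image-detection route, assuming the marker photo sits in an asset-catalog reference-image group named "TableMarkers" (a name chosen here for illustration):

import UIKit
import ARKit

final class MarkerViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self

        // Run a world-tracking session that also looks for the reference images.
        let configuration = ARWorldTrackingConfiguration()
        configuration.detectionImages = ARReferenceImage.referenceImages(inGroupNamed: "TableMarkers",
                                                                         bundle: nil)
        sceneView.session.run(configuration)
    }

    // When the marker is recognised, ARKit adds an ARImageAnchor whose transform is
    // the marker's pose in world space – attach the visualization to its node and
    // world tracking (SLAM) keeps it in place from then on.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARImageAnchor else { return }
        let visualization = SCNNode()   // replace with your own model node
        node.addChildNode(visualization)
    }
}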

In addition to the above, ARKit 3.0 is tightly integrated with the brand-new RealityKit framework, which helps implement multiuser connectivity, lists of ARAnchors and shared sessions.
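
For example, a minimal sketch of enabling a collaborative (shared) session, assuming a RealityKit ARView and a hypothetical sendToPeers(_:) helper that you would back with MultipeerConnectivity:

import ARKit
import RealityKit

final class SharedSessionController: NSObject, ARSessionDelegate {

    let arView: ARView

    init(arView: ARView) {
        self.arView = arView
        super.init()
        arView.session.delegate = self

        // Enable collaboration so ARAnchors are shared between participants.
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true
        arView.session.run(configuration)
    }

    // ARKit periodically emits collaboration data that must be forwarded to the
    // other participants over your own network layer.
    func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
        guard let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                              requiringSecureCoding: true) else { return }
        sendToPeers(encoded)   // hypothetical helper – not part of ARKit
    }

    private func sendToPeers(_ data: Data) {
        // Forward `data` with MCSession.send(_:toPeers:with:) in a real app.
    }
}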

ARCore

Although ARCore has a feature called Augmented Images, the framework has no built-in machine learning algorithms to help detect real-world 3D objects – but the Google ML Kit framework does. So, as an Android developer, you can use both frameworks at the same time to precisely auto-composite a 3D model over a real object in the AR scene.

It is worth recognizing that ARKit 3.0 has a more robust and advanced toolkit than ARCore 1.12.

Andy Fedoroff