Lining up AR features while weaving through traffic with the Vision SDK

Guest

Jul 26, 2019

There’s an awful lot going on in this five-second video clip from yesterday’s test drive of the Vision SDK, where we’re enhancing the depth and feel of our augmented reality navigation. Whether you are building solutions to run on a smartphone, a connected dashcam, or an automotive heads-up display, lining up projected 3D features while weaving through traffic is a tough challenge. Getting AR navigation to feel like reality requires harmoniously fusing our map data from the cloud with the live sensor data on the client: GPS, the inertial measurement unit (IMU), and the camera.

Customization is crucial to designing a novel navigation experience. The Vision SDK invites developers to get creative with turn-by-turn directions in 3D. The opacity, thickness, and color of the AR driving line are all adjustable today. In this example, we’re also changing from a solid line to a tread-mark pattern. By estimating the vehicle’s velocity from the device’s GPS and IMU, we can animate the chevrons of the tread mark so that they stay in place as we drive over them.
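To make the idea concrete, here’s a minimal sketch of that animation in Swift. The types and names are hypothetical, not the Vision SDK’s API: the trick is to scroll the tread-mark texture against the vehicle’s motion, so each chevron stays pinned to a fixed spot on the road.

```swift
struct TreadMarkAnimator {
    /// Distance between chevrons along the path, in meters.
    let chevronSpacing: Double
    /// Distance traveled so far, integrated from the fused GPS/IMU speed.
    var distanceTraveled: Double = 0

    /// Advance one frame using the current speed estimate (m/s).
    mutating func update(speed: Double, deltaTime: Double) {
        distanceTraveled += speed * deltaTime
    }

    /// Texture offset in [0, 1): shifting the texture backward by this
    /// fraction of one chevron period cancels the forward motion, so the
    /// chevrons appear fixed to the roadway as we drive over them.
    var textureOffset: Double {
        distanceTraveled.truncatingRemainder(dividingBy: chevronSpacing) / chevronSpacing
    }
}

// Example: at 15 m/s (~34 mph), one 60 fps frame advances the phase by
// 0.25 m along a line with a chevron every 2 m.
var animator = TreadMarkAnimator(chevronSpacing: 2.0)
animator.update(speed: 15.0, deltaTime: 1.0 / 60.0)
print(animator.textureOffset)  // 0.125
```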

Lane markings and a curb are used to find the horizon and vanishing point within the camera frame.

To create 3D features that look as realistic as possible, we need to draw on the camera frame with a perspective that closely matches the horizon and vanishing point of the roadway as viewed from your car. No one wants to go through a calibration procedure every time they bring up their navigation screen, so the Vision SDK uses scene segmentation to calibrate automatically. We use segmentation of key features in the environment, such as curbs and lane markings, to find the horizon and vanishing point. This allows us to project features precisely without relying on any feedback from the user.
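For a sense of the geometry involved, here’s a rough sketch using Apple’s simd module (the Vision SDK handles this internally; these types and names are illustrative only). Each segmented lane marking or curb yields an image-space line, and the vanishing point falls out as their least-squares intersection:

```swift
import simd

/// An image-space line fitted to a segmented feature (lane marking, curb).
struct ImageLine {
    var point: simd_double2      // any pixel on the line
    var direction: simd_double2  // direction along the line
}

/// Least-squares intersection: minimize the summed squared perpendicular
/// distance to every line by solving (Σ Pᵢ) v = Σ Pᵢ pᵢ, where
/// Pᵢ = I - dᵢdᵢᵀ projects onto the normal of line i.
func vanishingPoint(of lines: [ImageLine]) -> simd_double2? {
    var A = simd_double2x2()    // accumulates Σ Pᵢ (starts at zero)
    var b = simd_double2(0, 0)  // accumulates Σ Pᵢ pᵢ
    for line in lines {
        let d = simd_normalize(line.direction)
        let outer = simd_double2x2(columns: (d * d.x, d * d.y))  // dᵢdᵢᵀ
        let P = matrix_identity_double2x2 - outer
        A = A + P
        b += P * line.point
    }
    // A is singular when the lines are all parallel (no finite intersection).
    guard abs(simd_determinant(A)) > 1e-9 else { return nil }
    return simd_inverse(A) * b
}

// Two lane boundaries in a 1280×720 frame converging toward (640, 360):
let left  = ImageLine(point: simd_double2(200, 720), direction: simd_double2(440, -360))
let right = ImageLine(point: simd_double2(1080, 720), direction: simd_double2(-440, -360))
if let vp = vanishingPoint(of: [left, right]) {
    print(vp)  // SIMD2<Double>(640.0, 360.0)
}
```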

Red outlines show where the Vision SDK dynamically occludes the AR tread marks.

We also use segmentation to intelligently alter the appearance of the AR line. In this example, you can see two ways we’re dynamically occluding portions of the tread marks. First, the interior dashboard and the hood of the car have been masked out. Second, we’ve clipped the end of the path where it intersects the car in front of us. Look closely and you’ll see the shadows in the scene fall on top of the tread marks as well; AR features that dynamically interact with the real world enhance the feel of driving over an illuminated carpet.
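Conceptually, this occlusion step reduces to a per-pixel alpha mask derived from the segmentation output. Here’s a simplified sketch, with an illustrative class taxonomy that isn’t the SDK’s actual one; in a real renderer the masking would happen per-fragment on the GPU:

```swift
/// Per-pixel segmentation classes (illustrative; "ego" = own hood/dashboard).
enum SegClass: UInt8 {
    case road, laneMarking, vehicle, ego
}

/// Alpha multiplier for the AR line at a single pixel.
func lineAlpha(at segClass: SegClass, baseAlpha: Double) -> Double {
    switch segClass {
    case .ego, .vehicle:
        // Hide the line under our own hood and dashboard, and clip it
        // where it would otherwise draw over the car ahead of us.
        return 0
    case .road, .laneMarking:
        return baseAlpha
    }
}

/// Build the mask for one frame; multiply the rendered AR line's alpha by
/// this mask before compositing it onto the camera image.
func buildAlphaMask(segmentation: [SegClass], baseAlpha: Double) -> [Double] {
    segmentation.map { lineAlpha(at: $0, baseAlpha: baseAlpha) }
}
```

One plausible way to get the shadow effect (we can’t speak to the SDK’s exact approach) is to modulate the line’s brightness by the luminance of the camera image underneath it rather than cutting its alpha.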

Finally, we’ve improved the fit of the AR path to the road geometry using lane detection; we can use our understanding of where the left and right lane boundaries are to more precisely center the AR line as we drive. This is particularly helpful in situations with poor GPS signal, like tunnels and urban canyons.
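One simple way to picture that correction (a sketch under assumed conventions, not the SDK’s implementation) is a confidence-weighted blend between the GPS-derived lateral position and the center of the detected lane:

```swift
/// Detected lane boundaries, as signed lateral offsets in meters from the
/// camera's optical axis (negative = left, positive = right).
struct LaneBoundaries {
    var leftOffset: Double   // e.g. -1.8
    var rightOffset: Double  // e.g. +1.7
}

/// Blend the GPS-derived lateral position toward the detected lane center.
/// `confidence` in [0, 1] weights the detection; in tunnels and urban
/// canyons, where GPS degrades, the detection can dominate.
func correctedLateralOffset(gpsOffset: Double,
                            lanes: LaneBoundaries?,
                            confidence: Double) -> Double {
    guard let lanes = lanes else { return gpsOffset }
    let laneCenter = (lanes.leftOffset + lanes.rightOffset) / 2
    return gpsOffset + confidence * (laneCenter - gpsOffset)
}

// A strong detection pulls a noisy 0.9 m GPS error back toward the lane
// center at -0.05 m.
let offset = correctedLateralOffset(
    gpsOffset: 0.9,
    lanes: LaneBoundaries(leftOffset: -1.8, rightOffset: 1.7),
    confidence: 0.8)
print(offset)  // ≈ 0.14
```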

Ready to put your imagination behind the wheel? Download the Vision SDK and start building today.
