What are the main changes since Private Beta?
Mapbox Vision now has a more modular architecture consisting of multiple “building blocks.” Developers can import only the pieces they need, enabling a smaller application size and better performance. A single “VisionCore” block contains all of the neural networks and directly ingests sensor data. The output of “VisionCore” is passed to a common C++ API layer called “Vision,” from which developers can access all Vision data types. Vision is wrapped with native bindings for use with the mobile platform SDKs.
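The layering described above can be sketched roughly as follows. This is an illustrative assumption of how the pieces fit together, not the actual Mapbox Vision API; every name here (the namespaces, `Detection`, `runInference`, `detectionsForFrame`) is hypothetical.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of the layering: VisionCore runs inference on raw
// sensor data; the Vision C++ API layer wraps its output; platform SDKs
// (iOS, Android) would then bind to the Vision layer.
namespace core {
// "VisionCore": ingests raw sensor data and runs the neural networks.
struct Detection { std::string label; float confidence; };

std::vector<Detection> runInference(const std::vector<unsigned char>& frame) {
    // Placeholder for neural-network inference on a camera frame.
    return { {"car", 0.92f}, {"speed_limit_sign", 0.88f} };
}
}  // namespace core

namespace vision {
// "Vision": the common C++ API layer that exposes VisionCore's output.
std::vector<core::Detection> detectionsForFrame(const std::vector<unsigned char>& frame) {
    return core::runInference(frame);
}
}  // namespace vision
```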
What development platforms are supported by this new architecture?
One of the benefits of this new architecture is that every new iteration of our modules and core algorithms needs to be built only once, in C++, and is then deployed simultaneously to the platform-language SDKs for iOS and Android. Additionally, beginning with this launch, we will be supporting embedded Linux platforms.
What are the components of the Vision SDK?
For each platform supported by Mapbox Vision, there are four modules: Vision, VisionAR, VisionSafety, and VisionCore:
Vision is the primary SDK, needed for any application of Mapbox Vision. Its components enable camera configuration, display of classification, detection, and segmentation layers, lane feature extraction, and other interfaces. Vision accesses real-time inference running in VisionCore.
VisionAR is an add-on module for Vision used to create customizable augmented reality experiences. It allows configuration of the user’s route visualization: lane material (shaders, textures), lane geometry, occlusion, custom objects, and more.
VisionSafety is an add-on module for Vision used to create customizable alerts for speeding, nearby vehicles, cyclists, pedestrians, lane departures, and more.
VisionCore is the core logic of the system, including all machine learning models. Importing Vision into your project automatically brings VisionCore along.
How do I set up AR Navigation?
Creating an augmented reality navigation experience requires the modules Vision, VisionAR, and Mapbox’s Navigation SDK.
What is the purpose of the VisionSafety Module?
Developers can create features that notify and alert drivers about road conditions and potential hazards with the VisionSafety SDK, an add-on module that uses segmentation, detection, and classification information passed from the Vision SDK. For example, developers can monitor speed limits and other critical signage using sign classification, tracking the most recently observed speed limit. When the detected speed of the vehicle exceeds the last observed speed limit, programmable alerts can be triggered. VisionSafety can also send programmable alerts when pedestrians or cyclists are in the vehicle’s path, or when a driver is closing too quickly on a lead vehicle.
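The speed-limit alert logic above can be sketched as a small state machine: remember the last classified speed-limit sign and compare it against the measured vehicle speed. The class and method names here are illustrative assumptions, not the actual VisionSafety API.

```cpp
#include <optional>

// Illustrative sketch of speed-limit tracking and over-speed alerting;
// not the shipped VisionSafety interface.
class SpeedMonitor {
public:
    // Record the most recently classified speed-limit sign (km/h).
    void onSpeedLimitSign(double limitKmh) { lastLimitKmh_ = limitKmh; }

    // True when the measured vehicle speed exceeds the last observed
    // speed limit, signalling that a programmable alert should fire.
    bool shouldAlert(double vehicleSpeedKmh) const {
        return lastLimitKmh_ && vehicleSpeedKmh > *lastLimitKmh_;
    }

private:
    std::optional<double> lastLimitKmh_;  // empty until a sign is seen
};
```

Keeping the limit in a `std::optional` means no alert can fire before any sign has been observed.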
How can Mapbox Vision enable triggered actions?
Programmable alerts are available in the VisionSafety module that allow extra actions to be taken based on events recognized by the Vision SDK. For example, developers can automatically capture a video clip or an image frame when a collision or hard braking event is detected. The Vision SDK exposes various events from the driving scene to enable this functionality.
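One way to wire an action (such as capturing a frame) to a recognized driving event is a simple event-to-handler registry, sketched below. The event names and the registration interface are hypothetical; the Vision SDK's actual event surface is not shown here.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal sketch of hooking extra actions onto driving-scene events;
// event names ("hard_braking", etc.) are illustrative assumptions.
class EventBus {
public:
    using Handler = std::function<void()>;

    // Register an action to run when the named event is recognized.
    void on(const std::string& event, Handler h) {
        handlers_[event].push_back(std::move(h));
    }

    // Fire all handlers registered for the event.
    void emit(const std::string& event) {
        for (auto& h : handlers_[event]) h();
    }

private:
    std::map<std::string, std::vector<Handler>> handlers_;
};
```

A developer could, for example, register a handler on a hard-braking event that saves the current camera frame to disk.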
What kind of lane information is exposed by Vision?
The Vision SDK's segmentation provides developers with the following pieces of lane information: number of lanes, lane widths, lane edge types, and directions of travel for each lane. A set of points describing each lane edge is also available.
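The lane information listed above might be represented by data structures along these lines. The type and field names are illustrative assumptions, not the Vision SDK's actual types.

```cpp
#include <vector>

// Hypothetical sketch of lane data: lane count, widths, edge types,
// directions of travel, and the points describing each lane edge.
struct Point { double x, y; };

enum class EdgeType { Solid, Dashed, Curb, Unknown };
enum class Direction { Forward, Backward, Unknown };

struct LaneEdge {
    EdgeType type;
    std::vector<Point> points;  // polyline describing the edge
};

struct Lane {
    double widthMeters;
    Direction direction;        // direction of travel in this lane
    LaneEdge leftEdge;
    LaneEdge rightEdge;
};

struct RoadDescription {
    std::vector<Lane> lanes;    // lanes.size() == number of lanes
};
```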
Is there a way to use Vision with exogenous inputs (e.g. vehicle signals)?
Yes. We now support interfaces to arbitrary exogenous sensors (e.g. differential GPS, vehicle speed, vehicle IMU). This will be especially helpful for developers using embedded platforms.
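An interface for feeding such exogenous sensor data into the pipeline might look like the sketch below. The struct and method names are assumptions for illustration; the actual interfaces are not shown in this document.

```cpp
#include <deque>

// Hypothetical sketch of an exogenous-sensor input interface for
// differential GPS fixes and vehicle speed samples.
struct GpsFix      { double lat, lon;        double timestampSec; };
struct SpeedSample { double metersPerSecond; double timestampSec; };

class SensorInput {
public:
    void pushGps(const GpsFix& fix)        { gps_.push_back(fix); }
    void pushSpeed(const SpeedSample& s)   { speed_.push_back(s); }

    // Most recent vehicle speed, or 0 when nothing has been received yet.
    double latestSpeed() const {
        return speed_.empty() ? 0.0 : speed_.back().metersPerSecond;
    }

private:
    std::deque<GpsFix> gps_;        // buffered fixes for the pipeline
    std::deque<SpeedSample> speed_; // buffered speed samples
};
```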
What is the minimum configuration for Mapbox Vision?
To work with only the foundational components of Vision (the segmentation, detection, and classification layers), developers need only import the Vision SDK for their desired platform (iOS, Android, or embedded Linux). Importing the Vision SDK automatically brings in the requisite VisionCore.