Why BlueSpace? Why Now?

AV Journey

Since the original DARPA Grand Challenge in 2004, millions of miles and billions of dollars have been funneled into self-driving systems, yet none have managed to reach scale. Robotaxi services are confined to small subsections of cities, and autonomous trucks still rely on safety drivers on some sections of highway. Even in off-road environments such as mining, where autonomous equipment has been deployed for years, it is estimated that under 5% of vehicles operate autonomously.

What is Taking Autonomy so Long?  

Several factors contribute, but we approach the problem from the perspective of the software “brain”: the AV stack.

  • Too many dependencies

    • Autonomy at scale needs better perception with fewer dependencies, such as training datasets and HD maps.

  • Different approach needed: start with motion

    • We start with the dynamic motion of objects rather than imposing assumptions on them, because those assumptions and priors will inevitably be broken (e.g., a car driving on the wrong side of the street or ignoring traffic rules).

  • Make it affordable

    • Less capital-intensive solutions are needed for mass deployment.

BlueSpace's 4D Predictive Perception presents a breakthrough, scalable solution that can accelerate the deployment of autonomy. At its core, autonomous driving depends on perception and prediction - how well the system understands what other objects are doing - in order to plan safely. The industry’s approach to date has focused on classification. A simplified version of the modern perception stack looks like this: detect objects based on prior training data -> track objects -> predict each object’s motion based on prior training and/or map information.
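To make that flow concrete, here is a deliberately simplified sketch of the classification-first pipeline (the class names, data structures, and two-frame velocity estimate are our own illustration, not any particular vendor's stack):

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative sketch of a classification-first pipeline: detect with a trained
# classifier, track over time, then predict motion by differencing positions.

@dataclass
class Track:
    class_label: str                      # label assigned from prior training data
    positions: List[Tuple[float, float]]  # (x, y) history in metres, one per frame

def predict_next_position(track: Track, dt: float = 0.1) -> Tuple[float, float]:
    """Extrapolate a track one frame ahead.

    Velocity must be inferred by differencing past positions, so at least two
    frames are needed before any motion estimate exists at all.
    """
    if len(track.positions) < 2:
        return track.positions[-1]                     # no velocity estimate yet
    (x0, y0), (x1, y1) = track.positions[-2:]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * dt, y1 + vy * dt)

# Detect -> Track -> Predict
car = Track("car", positions=[(0.0, 0.0), (1.0, 0.0)])
print(predict_next_position(car))                      # (2.0, 0.0)
```

Note the two hidden dependencies: the detector only finds classes it was trained on, and the prediction has nothing to work with beyond past positions plus whatever class or map prior is bolted on.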

HD mapping, annotating hours of training data, and driving millions of miles (real and simulated) have helped make self-driving systems better at perception, but they still are not tractable or robust enough for autonomy at global scale. Deep-learning-based perception systems will continue to struggle to cover endless edge cases. Training a system to classify objects in every possible scenario and configuration - interactions with other objects, new environments, lighting, compound objects, etc. - quickly becomes intractable (a cute example, but one that is neither scalable nor tractable). There is a combinatorial explosion of possible object configurations, and solving it is a huge challenge. A fundamental change in approach is required to build a safe, scalable solution for autonomy.

Source: Tesla talking about challenges of the long tail

Our Solution: Motion-first Approach to Autonomy

We took a first-principles approach to perception. We define an object as anything with consistent motion - a single motion state defined by its linear and rotational velocity. This definition applies to all objects, so we never need to make a classification choice. It skips the intermediary step of classification models and gets straight to the answer we really care about for safe motion planning: how objects are moving and, therefore, what they will do.
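One way to make this definition concrete (the notation below is our own illustration, not BlueSpace's published formulation): a set of sensed points belongs to the same rigid object if a single motion state, linear velocity $\mathbf{v}$ and angular velocity $\boldsymbol{\omega}$, explains the motion measured at every point. A Doppler-capable sensor observes the radial component of each point's velocity, giving one linear constraint per return:

\[
d_i \;=\; \hat{\mathbf{p}}_i \cdot \bigl(\mathbf{v} + \boldsymbol{\omega} \times (\mathbf{p}_i - \mathbf{c})\bigr), \qquad i = 1, \dots, N,
\]

where $\mathbf{p}_i$ is the position of the $i$-th return, $\hat{\mathbf{p}}_i$ its line-of-sight direction, $d_i$ its measured Doppler velocity, and $\mathbf{c}$ a reference point on the object. Returns that cannot be explained by any single $(\mathbf{v}, \boldsymbol{\omega})$ pair belong to a different object - no class label is needed to make that distinction.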

Our team at BlueSpace has developed a perception stack from the ground up that jointly optimizes for motion consistency and object motion state. We leverage 4D data (x, y, z, and Doppler) to solve for the full motion state of all objects - both linear and rotational. The result: a 4D Predictive Perception solution that can handle all object classes, in all environments.
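As a minimal sketch of how such a motion state could be recovered from 4D returns (our own illustration with toy numbers, not BlueSpace's algorithm): because each Doppler reading is linear in the unknowns (v, ω), a handful of returns on one object is enough for a least-squares solve.

```python
import numpy as np

# Illustrative only: recover a rigid object's linear velocity v and angular
# velocity w from 4D returns (x, y, z, doppler). Each Doppler reading is the
# radial component of the point velocity v + w x (p - c), linear in (v, w).

def estimate_motion_state(points, dopplers, center):
    """points: (N, 3) positions, dopplers: (N,) radial velocities,
    center: (3,) reference point on the object. Returns (v, w)."""
    rows = []
    for p in points:
        los = p / np.linalg.norm(p)            # line-of-sight unit vector
        r = p - center
        # d = los . v + los . (w x r) = los . v + (r x los) . w
        rows.append(np.concatenate([los, np.cross(r, los)]))
    A = np.vstack(rows)                         # (N, 6) linear system in (v, w)
    sol, *_ = np.linalg.lstsq(A, dopplers, rcond=None)
    return sol[:3], sol[3:]

# Toy example: six returns on an object ~10 m ahead, translating at 10 m/s in x.
pts = np.array([[10.0, 2.0, 0.0], [12.0, -2.0, 1.0], [11.0, 0.0, -1.0],
                [9.0, 1.0, 0.5], [10.5, -1.5, 0.3], [11.5, 1.0, -0.8]])
true_v, true_w = np.array([10.0, 0.0, 0.0]), np.zeros(3)
c = pts.mean(axis=0)
dop = np.array([(p / np.linalg.norm(p)) @ (true_v + np.cross(true_w, p - c))
                for p in pts])
v, w = estimate_motion_state(pts, dop, c)
print(np.round(v, 2), np.round(w, 2))           # approx. [10. 0. 0.] and [0. 0. 0.]
```

The key property is that a single frame already yields a velocity estimate, because velocity is measured directly rather than inferred from position over time.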

Our approach has numerous implications for the autonomy industry:

  • No dependency on prior training data - We do not need to classify objects. The optimization takes care of finding all objects, including those never seen before. This approach also naturally avoids the implicit biases that training data can introduce.

  • No dependency on HD map data - Many perception stacks use HD map data to impose motion priors on objects, which can lead to errors when an object does not conform to the prior. BlueSpace removes the dependency on HD map data and the associated dependency on high-accuracy, fail-safe localization.

  • Faster reaction time - BlueSpace measures the motion state of an object directly, without needing several frames to infer motion from its position (a back-of-the-envelope comparison follows this list). This allows autonomous vehicles to react faster to object acceleration and, most importantly, to function in highly dynamic, safety-critical scenarios.

  • Lower computing requirements - BlueSpace's implementation doesn't require specialized AI hardware or power-hungry GPUs; it can be deployed on low-cost, general-purpose embedded CPUs.
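To put a rough number on the reaction-time point above (the frame rate, frame count, and speed are illustrative assumptions, not measured figures), consider how far a vehicle travels while a position-only pipeline is still accumulating the frames it needs to difference positions into a velocity:

```python
# Back-of-the-envelope comparison (illustrative assumptions, not measurements).
# A Doppler-capable sensor reports an object's velocity in the first frame it
# is detected; a position-only pipeline needs additional frames to difference.

FRAME_RATE_HZ = 10            # assumed sensor frame rate
EXTRA_FRAMES_NEEDED = 3       # assumed frames of position history to difference
EGO_SPEED_MPS = 25.0          # roughly 90 km/h highway speed

def distance_travelled(extra_frames: int) -> float:
    """Metres travelled while waiting for a velocity estimate to become available."""
    return EGO_SPEED_MPS * extra_frames / FRAME_RATE_HZ

print(distance_travelled(0))                    # Doppler measurement: 0.0 m
print(distance_travelled(EXTRA_FRAMES_NEEDED))  # position differencing: 7.5 m
```

Several metres of travel before the planner even has a velocity estimate is exactly the kind of margin that matters in highly dynamic scenarios.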

BlueSpace 4D Predictive Perception answers the core question every autonomous perception system must answer: how are objects moving, so that we can plan safely? By answering this question, we can serve as the baseline for safe navigation, letting our partners focus on building features for their use case without reinventing the wheel for motion perception. Whether you are starting from scratch or want to enhance an existing stack, join us to build the next generation of perception for all autonomy.

4D sensors are becoming ready for wide adoption, with the cost of FMCW lidar falling thanks to silicon photonics and the resolution of imaging radar increasing. While CES 2022 presents no shortage of 4D sensor providers, there is no robust software to make them useful out of the box. Our 4D Predictive Perception can be deployed across a wide variety of these sensors and provides 10-100x better motion accuracy than existing methods. BlueSpace is here to accelerate the adoption and mass deployment of autonomous technology across all verticals, including trucking, robotaxis, logistics, mining, and consumer ADAS.

contact@bluespace.ai
