Advanced positioning, image stitching and point cloud matching.
Industries as diverse as robotics, artificial intelligence, and autonomous vehicles face a common challenge: helping machines interpret a changing, 3-D environment in real time. Whether the goal is to build a robot that can easily interact with everyday objects or a car that can safely navigate a city street, computer algorithms must be able to accurately identify the environment and the objects within it, and respond to both appropriately.
While data science has come a long way on this front, progress has been impeded by several analytical difficulties.
Lack of spatial awareness.
Spatial perception is something most humans take for granted, but teaching a machine to recognize its position in space relative to everything in its environment, and to make responsible decisions about how to move or react within that space, is much more complex. Because interpreting position and orientation traditionally depends on external datum, this dependency can skew decision-making and limit reliability in operation.
Coping with a changing environment.
Locating and identifying objects is difficult enough in a static environment. But in real-life applications, autonomous systems need to accurately interpret their surroundings to make well-informed decisions about objects that are constantly changing.
Image stitching technology relies on identifying and matching corresponding features in different images. This datum-dependent approach has pitfalls that can compound error across each subsequent image scan.
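To illustrate, the datum-dependent step at the heart of traditional feature matching can be sketched as a nearest-neighbour descriptor search with a ratio test to discard ambiguous pairings. This is a minimal NumPy illustration of the conventional approach the paragraph describes; the function name and ratio threshold are illustrative, not part of EA's technology:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match feature descriptors between two images using nearest-neighbour
    search with a ratio test to reject ambiguous correspondences.
    Illustrative sketch of conventional datum-dependent matching."""
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from this descriptor to every descriptor in image B
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest, second = np.argsort(dists)[:2]
        # Keep the match only if it is clearly better than the runner-up
        if dists[nearest] < ratio * dists[second]:
            matches.append((i, nearest))
    return matches
```

An ambiguous descriptor, roughly equidistant from several candidates, produces no match at all, which is exactly the kind of unreliable datum that degrades a stitch.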
Challenges of object identification.
Complex objects with distinctive shapes can be relatively easy to identify in a point cloud scan if their orientation is aligned with previous reference images. But identifying and differentiating objects in various poses can create significant challenges in real-world situations where scanner angle or object orientation is not ideal.
Spatial Orientation and Point Cloud Capabilities
EA’s proprietary spatial orientation capabilities resolve these complications by moving past datum-dependent algorithms and focusing on the unique “inner reference” within the data itself. This breakthrough allows us to advance beyond other spatial orientation technology in the field, because we understand the exact spatial orientation of each point cloud and of every object within the entire frame of reference.
Point Cloud Matching
Traditional point cloud matching technology relies on datum that can be overlapped and matched together. However, this approach fails when the datum does not provide sufficient reliability. By relying on Ellipsoid Analytics’ proprietary “inner reference,” we can perform point cloud matching in a datum-independent way, eliminating errors from mismatched datum and delivering results at real-time speeds. The numerical accuracy within the matching process must also be managed; EA’s ability to perform reliable affine transformations in any number of dimensions creates a further set of capabilities here.
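For context, the correspondence-based matching that this paragraph contrasts against can be sketched as one iteration of the classical ICP (iterative closest point) loop: pair each source point with its nearest neighbour in the target, then solve for the best-fit rigid transform via the Kabsch algorithm. This is a minimal sketch of the conventional datum-dependent approach, not EA's datum-independent method:

```python
import numpy as np

def icp_step(src, dst):
    """One iteration of classical, correspondence-based point cloud matching:
    nearest-neighbour pairing followed by a best-fit rigid transform
    (Kabsch algorithm via SVD). Sketch of the traditional approach only."""
    # Nearest-neighbour correspondences (brute force, for clarity)
    dists = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[np.argmin(dists, axis=1)]
    # Best-fit rotation and translation between the paired sets
    src_c, dst_c = src.mean(axis=0), matched.mean(axis=0)
    H = (src - src_c).T @ (matched - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # correct an accidental reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```

The failure mode the paragraph describes is visible in the first step: when the clouds overlap poorly, the nearest-neighbour "datum" is simply wrong, and every subsequent iteration inherits that error.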
Image Stitching
Like point cloud matching, fitting images together despite varying conditions and unreliable scan quality often relies on identifying points of overlapping datum and pairing them with each other. Our image stitching technology instead relies on our unique “inner reference” capabilities to seamlessly combine images and reduce alignment and blending errors. Both datum-invariant capability and affine transformation capability are at its core.
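One conventional building block for such alignment is a least-squares affine fit between matched point pairs, which works in any number of dimensions. The sketch below, in NumPy, shows that standard technique for context; it is illustrative and does not reflect EA's proprietary implementation:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares estimate of the affine map dst ~ src @ A.T + t,
    for paired point sets of shape (n_points, n_dims) in any dimension.
    Conventional-technique sketch, not EA's implementation."""
    n, d = src.shape
    # Homogeneous coordinates: append a column of ones so the translation
    # is estimated jointly with the linear part.
    src_h = np.hstack([src, np.ones((n, 1))])
    # Solve src_h @ M ~ dst for the (d+1, d) parameter matrix M.
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    A, t = M[:d].T, M[d]
    return A, t
```

Because the fit is driven entirely by the paired points, a handful of mismatched pairs skews the recovered transform, which is why the quality of the underlying datum matters so much in the traditional pipeline.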
Object Discrimination
Most image discrimination technology is limited to data libraries of previously scanned objects in specific orientations, compared against newly scanned information. This method begins to break down when objects are misidentified or when the database cannot produce reasonable matches for objects that are not in a similar orientation.
We improve object discrimination by relying on the “inner reference” provided by the object itself, and the relationship of the object to the “inner reference” of the larger frame of reference it sits within, which leads to faster, more reliable feature identification.
EA’s technology uses the “inner reference” to distinguish objects from each other, judge their orientation and position within a space, and understand their movement through that space. By relying on the “inner reference” for these calculations, we create a lighter program that can deliver more accurate results while requiring less processing power — often allowing for real-time positioning but always providing 100% numerical accuracy and reliability.