NebulaRecognition® + Planner
The technology behind our point cloud recognition engine.
The core of NebulaRecognition® + Planner is a point cloud recognition engine that learns from a single point cloud sample of an object and locates it anywhere in the scene, regardless of its position (X, Y, Z) and orientation (Rx, Ry, Rz). The learning and recognition engine runs fast on a low-power CPU. This goes against the grain of academic and open-source libraries, which require a powerful GPU and CPU.
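For context, a recognition result like the one above (X, Y, Z, Rx, Ry, Rz) is commonly packed into a 4x4 homogeneous transform. The sketch below is a generic illustration, not the engine's API; the Rz·Ry·Rx rotation order is an assumption, since the engine's convention is not stated here.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def pose_matrix(x, y, z, rx, ry, rz):
    """4x4 homogeneous transform from a 6-DOF pose.

    The rotation order Rz @ Ry @ Rx is assumed for illustration only.
    """
    T = np.eye(4)
    T[:3, :3] = rot_z(rz) @ rot_y(ry) @ rot_x(rx)
    T[:3, 3] = [x, y, z]
    return T
```

Applying this matrix to a homogeneous point moves it from the object's taught frame to its located pose in the scene.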
Conventional point cloud libraries first apply a segmentation technique (a feature-engineered front end or a deep neural network, DNN), then deploy a variant of iterative closest point (ICP). Success hinges on the segmentation step, which requires "hand crafting": every class of objects needs its own segmentation algorithm, turning each deployment into a one-off engineered solution. Scalability goes out the window.
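The point-to-point ICP refinement mentioned above can be sketched in plain NumPy. This is a textbook illustration of the generic technique, not any vendor's implementation: it alternates brute-force nearest-neighbor matching with a closed-form (Kabsch/SVD) rigid alignment.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform (Kabsch) mapping src -> dst."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, target, iters=50, tol=1e-8):
    """Basic point-to-point ICP: returns (R, t) aligning source to target."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    prev_err = np.inf
    for _ in range(iters):
        # Brute-force nearest-neighbor correspondences (for clarity, not speed).
        d = np.linalg.norm(src[:, None] - target[None], axis=2)
        nn = target[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, nn)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.mean(np.linalg.norm(src - nn, axis=1))
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total
```

Note the weakness the text describes: ICP only refines a pose that is already roughly correct, which is why these pipelines depend so heavily on a successful segmentation step to provide the initial candidate.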
Unlike the competition, there is no bag of tricks. With RRI technology, all a user needs to do is push the teach button, erase the unwanted background, and recognize the object.
Planner is the starting point of building an application with the Recognition Robotics unified experience.
It is a full-fledged 3D robotic cell simulator that communicates with robots through the unified robot protocol.
Many building blocks are available for assembling virtual cells that simulate real-world scenarios.
Planner's role is to combine one or more of the unified platform's technologies (NebulaRecognition, Cortex, Lucana, pro::fit) to deliver a given application's functionality.
Apps can be operated at two experience levels:
- Advanced: the user has full, detailed control over every node of the simulation
- Simplified: the user sees only a subset of common controls that cover the most frequent use cases
Some of Planner's built-in capabilities are:
- Solving inverse kinematics for some of the supported robot brands
- Tracking the TCP position in real time during operation for some of the supported robot brands
- Detecting collisions between objects in a scene
- Displaying all measured data (point clouds) directly in the simulation scene
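As a toy illustration of the inverse kinematics problem listed above (not Planner's solver, which handles full 6-axis robots), here is the closed-form IK for a two-link planar arm. The link lengths `l1`, `l2` and the `elbow_up` branch choice are illustrative parameters.

```python
import math

def two_link_ik(x, y, l1, l2, elbow_up=True):
    """Closed-form inverse kinematics for a 2-link planar arm.

    Returns joint angles (theta1, theta2) placing the end effector at
    (x, y), or raises ValueError if the target is out of reach.
    """
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    if elbow_up:
        theta2 = -theta2          # pick one of the two mirror solutions
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2

def two_link_fk(theta1, theta2, l1, l2):
    """Forward kinematics, useful to verify an IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Even this simple case shows why IK support is listed per robot brand: each kinematic structure has its own solution branches and reachability limits.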
Planner - Bin Picking
The bin picking app models a pick-and-place cycle from a bin with an arbitrary list of static and dynamic colliders in the scene.
This app allows the specification of multiple picking points per taught object, as well as multiple grippers.
The app cycles through every gripper and pick-point combination to find the best feasible solution.
Each candidate pick is checked for collisions against both the statically defined colliders and any objects within range of the 3D point cloud sensor.
The decision engine then scores the remaining candidates and selects the one with the shortest robot path (for improved cycle time) and minimal cable stress.
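A gripper/pick-point selection loop of this kind might be sketched as follows. The `Candidate` fields, the weights `w_path` and `w_cable`, and the cost function are illustrative assumptions, not Planner's actual decision engine.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    gripper: str
    pick_point: str
    path_length: float   # robot path length for this pick (illustrative units)
    cable_stress: float  # normalized 0..1 cable-stress estimate (assumed metric)
    collides: bool       # result of the collision check

def best_pick(candidates, w_path=1.0, w_cable=0.5):
    """Score every collision-free gripper/pick-point combination and
    return the lowest-cost one, or None if no feasible pick exists."""
    feasible = [c for c in candidates if not c.collides]
    if not feasible:
        return None
    return min(feasible,
               key=lambda c: w_path * c.path_length + w_cable * c.cable_stress)

candidates = [
    Candidate("vacuum", "top", 1.2, 0.1, False),
    Candidate("vacuum", "side", 0.9, 0.8, False),
    Candidate("two-finger", "top", 1.0, 0.2, True),  # collides, so skipped
]
chosen = best_pick(candidates)
```

Weighting path length against cable stress in a single cost keeps the choice deterministic while still letting an integrator bias the engine toward cycle time or cable life.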