
Assembly

Assembly has long been a staple of robotics in mass production, where everything is controlled: the parts, the speed at which they arrive on the conveyor belt, their angle and position on the belt, the placement of the vision system that recognises them, and so forth.

At SE4, we are changing that.

Our platform allows humans and robots to work in unison, leveraging one another’s strengths to complete tasks. We leave complex problem-solving related to environmental changes to the human operators (the best AI around), and assign tedious or repetitive tasks to our robot friends.

We utilise the human operator’s ability to recognise objects, how they relate to one another and what their function may be. As the operator controls the robot, the robot is taught what is going on: the labels for things and their interactions. We use the word ‘teach’ because our system bottles that knowledge for future use; operator-instructed information is never lost, but continuously built upon, generalised and redeployed.
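
To make the idea concrete, here is a minimal sketch of how taught knowledge might be stored: object labels plus the interactions demonstrated between them. Every name and field below is an illustrative assumption for this sketch, not our actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the kind of record an operator-taught system
# might keep: object labels plus the interactions demonstrated between
# them. Names and fields are illustrative assumptions only.

@dataclass
class TaughtObject:
    label: str     # e.g. "hex bolt", as named by the operator
    function: str  # what the operator indicated the object is for

@dataclass
class TaughtInteraction:
    subject: str   # label of the acting object
    verb: str      # e.g. "screws_into"
    target: str    # label of the object acted upon

@dataclass
class KnowledgeBase:
    objects: dict = field(default_factory=dict)
    interactions: list = field(default_factory=list)

    def teach_object(self, label, function):
        # Once taught, a label is never discarded; later lessons refine it.
        self.objects[label] = TaughtObject(label, function)

    def teach_interaction(self, subject, verb, target):
        self.interactions.append(TaughtInteraction(subject, verb, target))

# Usage: the operator names two parts and demonstrates how they interact.
kb = KnowledgeBase()
kb.teach_object("hex bolt", "fastener")
kb.teach_object("bracket", "mounting plate")
kb.teach_interaction("hex bolt", "screws_into", "bracket")
```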

Latency was the Achilles’ heel of robotic teleoperation, but because our system learns and is semi-autonomous, we are able to decouple the operator’s instructions from the actions executed by the robot. In short, the operator interacts with a simulation of the world, performs the desired actions, queues them up and sends them to a robot for execution. Should something go wrong, the robot is smart enough to realise it and call out to its operator(s) for help.
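
The sketch below illustrates that decoupling under stated assumptions: actions are rehearsed against a simulation, queued, and executed asynchronously, with a callback when the robot needs help. It is a hypothetical illustration; none of these names come from our production API.

```python
import queue

# Hypothetical sketch of the decoupling described above: the operator
# rehearses actions against a simulation, queues the ones that succeed,
# and the robot executes them asynchronously, calling back only when it
# needs help. All names are illustrative assumptions, not our API.

class OperatorSession:
    def __init__(self):
        self.pending = queue.Queue()

    def perform_in_simulation(self, action):
        # The action is rehearsed against a world model first; once it
        # succeeds in simulation, it is queued for the real robot.
        self.pending.put(action)

    def dispatch(self, robot):
        # Instructions and execution are decoupled: the queue can be
        # sent over any link, however long the latency.
        while not self.pending.empty():
            robot.execute(self.pending.get())

class Robot:
    def __init__(self, on_help_needed):
        self.on_help_needed = on_help_needed  # callback to operator(s)

    def execute(self, action):
        try:
            action.run()
        except RuntimeError as err:
            # Semi-autonomy: the robot notices the failure itself and
            # calls out to its operator(s) rather than pressing on.
            self.on_help_needed(action, err)

# Usage: queue a pick action in simulation, then dispatch to the robot.
class PickAction:
    def run(self):
        print("picking part")

session = OperatorSession()
session.perform_in_simulation(PickAction())
session.dispatch(Robot(on_help_needed=lambda a, e: print("help:", e)))
```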

We can effectively pilot robots from anywhere, at any distance, free of the round-trip latency that has traditionally limited teleoperation.

Our computer vision stack’s inference and ML training are powered by NVIDIA GPUs.

