Technologies

Discover, Connect & Collaborate at TECHINNOVATION 2021

3D Vision for Autonomous Robots & Industry 4.0

Technology Overview

Building on a breakthrough in 3D vision-based localisation for GPS-denied environments and expertise in sensor fusion, this technology offer presents a promising sensing solution for advanced robotics and automation in Industry 4.0. The solution is built upon a proprietary vision-based sensor with edge computing capabilities, providing calibration algorithms and controls for multi-sensor deployments.

The sensing technology enables robots to:

  1. Navigate in GPS-denied environments, across indoor and outdoor spaces
  2. Self-localise and track moving objects using vision
  3. Identify objects and take volumetric measurements in 3D

For Industry 4.0, the following areas can be supported:

  1. 3D visual data acquisition and simple onboard processing, transmitting only the required information to the main processing board
  2. Smart CCTV monitoring & image / object identification
  3. Depth perception and measurements for logistics
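As an illustration of the edge-processing idea above, the sketch below reduces a dense depth frame to a few summary values so that only compact data leaves the sensor. This is a minimal sketch, not the product's actual pipeline: the function name, the flat-floor/straight-down geometry, and all numbers are illustrative assumptions.

```python
import numpy as np

def summarize_depth_frame(depth_m, pixel_area_m2, max_range_m=5.0):
    """Edge-side reduction of an HxW depth map (metres) to summary values.

    pixel_area_m2 is the floor footprint covered by one pixel, a
    simplifying assumption for a sensor looking straight down.
    """
    # Keep only plausible returns: positive and closer than the floor plane.
    valid = (depth_m > 0) & (depth_m < max_range_m)
    if not valid.any():
        return {"occupied": False}
    # Height of material above the floor plane, per valid pixel.
    heights = max_range_m - depth_m[valid]
    # Coarse volumetric estimate: sum of per-pixel height columns.
    volume_m3 = float(heights.sum() * pixel_area_m2)
    return {
        "occupied": True,
        "volume_m3": round(volume_m3, 4),
        "nearest_m": float(depth_m[valid].min()),
    }

# Synthetic 4x4 frame: empty floor at 5 m, with a 1 m tall box under
# four of the sixteen pixels (each pixel covering 0.25 m^2 of floor).
frame = np.full((4, 4), 5.0)
frame[1:3, 1:3] = 4.0
print(summarize_depth_frame(frame, pixel_area_m2=0.25))
```

Instead of streaming the full frame, only a few bytes of summary (occupancy, volume, nearest obstacle) are sent to the main processing board.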

The technology owner is currently looking for operators that would like to transform their business operations through the use of autonomous robots, and to deploy a complete visual monitoring system to optimise operations on the ground. The technology owner is also looking to work with system integrators (e.g. robot makers) to adopt and integrate the technology. The system is fully customisable and configurable based on user requirements.

Technology Features, Specifications and Advantages

The solution can provide:

  • 360-degree omnidirectional coverage
  • Wide-angle view with 178 degrees of perception coverage per sensor
  • Flexibility in sensor placement and connectivity options
  • Faster response due to low-latency onboard processing
  • Embedded Visual Simultaneous Localisation and Mapping (VSLAM), enabling autonomous applications in indoor spaces without the need for the Global Positioning System (GPS)
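To make the GPS-free localisation idea concrete, the sketch below shows the basic geometry: given ranges to a few landmarks at known map positions, the robot's pose follows from a small linear system. This is only an illustrative toy, not the embedded VSLAM algorithm itself (real VSLAM also builds the landmark map from camera imagery); all coordinates and values here are made up.

```python
import numpy as np

# Known map landmarks (metres) and the robot's true position,
# which the solver must recover from range measurements alone.
landmarks = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(landmarks - true_pos, axis=1)  # simulated sensor readings

# Subtracting landmark 0's range equation from the others cancels the
# quadratic terms, leaving a linear system A @ p = b in the position p.
x0, y0 = landmarks[0]
r0 = ranges[0]
A, b = [], []
for (xi, yi), ri in zip(landmarks[1:], ranges[1:]):
    A.append([2 * (xi - x0), 2 * (yi - y0)])
    b.append(r0**2 - ri**2 + xi**2 + yi**2 - x0**2 - y0**2)

estimate, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(estimate)  # → [3. 4.]
```

With more than the minimum number of landmarks, the same least-squares formulation averages out measurement noise, which is why adding sensors or landmarks improves localisation accuracy.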

Potential Applications

Owing to the versatility of the core technology, the potential applications span several industries.

The primary areas of focus are the following:

  • Autonomous Mobile Robots (AMRs)
    • Warehousing
    • Delivery Robots
    • Service Robots
    • Collaborative Robots
  • Static Infrastructures
    • Inventory warehouse tracking
    • Anomaly Detection
    • Volumetric and object identification analysis
    • Consumer behavioral tracking in retail stores

Another key market would be AI companies conducting analytics using acquired visual data for activities such as manufacturing fault detection and inspection.

Other markets that have been considered include defence science systems, supply chain & logistics, and general security systems, all of which require a blend of surveillance and information-processing capabilities. The aim is to create an entire vision-based data monitoring and acquisition ecosystem on the ground, with autonomous robots using the monitoring visuals acquired from our sensors to assist with repetitive manual work. The market's CAGR is expected to be at least 6.1%, with the potential to grow beyond USD 13 billion in size by 2025 as more companies adopt the technology.

Source: https://www.marketsandmarkets.com/Market-Reports/industrial-machine-vision-market-234246734.html

Customer Benefit

  1. Save the time and cost required for R&D, as we provide ready-to-use hardware and a middleware API. New APIs and hardware can be further customised and developed based on user requirements.
  2. Vision-based sensing for robots
    a. No pre-mapping required, reducing deployment time
    b. Able to operate indoors & outdoors
  3. Industry 4.0
    a. Data acquisition & processing at the edge, optimising data flow
    b. Smart, collaborative communication between infrastructure & moving parts (e.g. mobile robots), building a smart factory with an autonomous ecosystem
  4. Improve overall operational efficiency with a range of capabilities enabled by vision
  5. Enhanced safety and return on investment in machinery, as robots are truly autonomous with minimal to zero human intervention required

OVERVIEW
Contact Person

Yin Yi Low

Organisation

NUS Graduate Research Innovation Programme (NUS GRIP)

Technology Category

  • Infocomm
  • Artificial Intelligence, Enterprise & Productivity, Robotics & Automation, Smart Cities, Video/Image Processing

Technology Readiness Level

Keywords

vision-based sensor, 3D vision, VSLAM, sensor fusion, computer vision, machine vision, edge computing, Industry 4.0, manufacturing, autonomous mobile robots (AMRs), delivery robots, warehouse, logistics, supply chain, inspection, AI companies, data analytics