1 Projects

  • Canada
  • CHIST-ERA
  • 2022

  • Funder: CHIST-ERA; Project Code: CHIST-ERA-17-ORMR-007

    In this project, the team of researchers will address the problem of autonomous robotic grasping of objects in challenging scenes. We consider two industrially and economically important open challenges that require advanced vision-guided grasping:

    1) “Bin-picking” for manufacturing, where components must be grasped from a random, self-occluding heap inside a bin or box. Parts may have known models, but will only be partially visible in the heap and may have complex shapes. Shiny/reflective metal parts make 3D vision difficult, and the bin walls impose difficult reach-to-grasp and visibility constraints.

    2) Waste materials handling, which may involve hazardous (e.g. nuclear) waste or materials for recycling in the circular economy. Here the robot has no prior models of object shapes, and grasped materials may also be deformable (e.g. contaminated gloves, hoses).

    The proposed project comprises two parallel thrusts: perception (visual and tactile) and action (planning and control for grasping/manipulation). However, perception and action are tightly coupled, and this project will build on recent advances in “active perception” and “simultaneous perception and manipulation” (SPAM).

    In the first thrust, we will exploit recent advances in 3D sensor technology and develop perception algorithms that are robust in challenging environments, e.g. handling shiny (metallic) or transparent (glass/perspex) objects, self-occluding heaps, known objects that may be deformable or fragmented, and unknown objects that lack any pre-existing models.

    In the second thrust, autonomous grasp planners will be developed with respect to the visual features perceived by the algorithms from the first thrust. Grasps must be planned to be secure, but must also provide affordances that facilitate post-grasp manipulative actions and afford collision-free reach-to-grasp trajectories (a simplified sketch follows this abstract). Perceptual noise and uncertainty will be overcome in two ways: computationally adaptive algorithms and mechanically adaptive underactuated hands. An object initially grasped by an accessible feature may need to be re-grasped (for example, a tool that is not initially graspable by its handle). We will develop re-grasping strategies that exploit object properties learned during the initial grasp or manipulative actions.

    Overarching themes of the project are methods that generalise across platforms, reproducibility of results, and the transfer of data. The methods proposed in the two thrusts will therefore be tested for reproducibility by implementing them in the different partners' laboratories, using both similar and different hardware. Large amounts of data will be collected throughout these tests and published online as a set of international benchmark vision and robotics challenges, curated by Université Laval once the project is completed.
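    To make the second thrust's candidate-selection idea concrete, below is a minimal Python sketch of ranking grasp candidates by a weighted combination of predicted grasp security and post-grasp affordance, keeping only candidates whose reach-to-grasp path clears the bin walls. Everything in it (the GraspCandidate fields, reach_is_clear, wall_blocks, and the weights) is a hypothetical illustration under assumed simplifications, not the project's actual planner.

    from dataclasses import dataclass

    @dataclass
    class GraspCandidate:
        pose: tuple          # hypothetical (x, y, z, roll, pitch, yaw) gripper pose
        security: float      # predicted grasp stability in [0, 1]
        affordance: float    # suitability for the intended post-grasp action, in [0, 1]

    def wall_blocks(wall, point):
        """Toy clearance test: wall = (axis index, position, minimum clearance)."""
        axis, position, clearance = wall
        return abs(point[axis] - position) < clearance

    def reach_is_clear(candidate, bin_walls):
        """Stand-in for a reach-to-grasp collision check against the bin walls."""
        x, y, z = candidate.pose[:3]
        return not any(wall_blocks(wall, (x, y, z)) for wall in bin_walls)

    def select_grasp(candidates, bin_walls, w_security=0.7, w_affordance=0.3):
        """Rank collision-free candidates by weighted security + affordance."""
        feasible = [c for c in candidates if reach_is_clear(c, bin_walls)]
        if not feasible:
            return None  # no safe grasp: re-sense the scene or plan a re-grasp
        return max(feasible,
                   key=lambda c: w_security * c.security + w_affordance * c.affordance)

    # Usage: two bin walls along the x axis; the first candidate sits too close to one.
    walls = [(0, 0.0, 0.05), (0, 0.4, 0.05)]
    candidates = [
        GraspCandidate((0.03, 0.2, 0.1, 0, 0, 0), security=0.9, affordance=0.8),
        GraspCandidate((0.20, 0.2, 0.1, 0, 0, 0), security=0.7, affordance=0.9),
    ]
    print(select_grasp(candidates, walls))  # picks the second, collision-free candidate

    A real planner would score candidates from perceived 3D features and check full arm trajectories against a collision model; the weighted-sum objective here only illustrates the trade-off the abstract describes between grasp security and post-grasp affordances.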

Powered by OpenAIRE graph