Visual Perception for Humanoid Robots


Book Description

This book provides an overview of model-based environmental visual perception for humanoid robots. The visual perception of a humanoid robot forms a bidirectional bridge connecting sensor signals with internal representations of environmental objects. The objective of such perception systems is to answer two fundamental questions: what is it, and where is it? To answer these questions across the sensor-to-representation bridge, coordinated processes extract and exploit cues that match the robot's internal representations to physical entities. These include sensor and actuator modeling, calibration, filtering, and feature extraction for state estimation. This book discusses the following topics in depth:

• Active Sensing: Robust probabilistic methods for optimal, high-dynamic-range image acquisition that are suitable for use with inexpensive cameras. This enables dependable sensing in the arbitrary environmental conditions encountered in human-centric spaces; the book quantitatively shows the importance of equipping robots with dependable visual sensing.

• Feature Extraction & Recognition: Parameter-free edge extraction methods based on structural graphs that represent geometric primitives effectively and efficiently via eccentricity segmentation, providing excellent recognition even on noisy, low-resolution images. Stereoscopic vision, Euclidean metrics, and graph-shape descriptors are shown to be powerful mechanisms for difficult recognition tasks.

• Global Self-Localization & Depth Uncertainty Learning: Simultaneous feature matching for global localization and 6D self-pose estimation, addressed by a novel geometric and probabilistic concept based on the intersection of Gaussian spheres (an illustrative sketch of the underlying geometry follows this description). The path from intuition to the closed-form optimal solution determining the robot location is described, including a supervised learning method for depth uncertainty modeling based on extensive ground-truth training data from a motion capture system.

The methods and experiments are presented in self-contained chapters with comparisons against the state of the art. The algorithms were implemented and empirically evaluated on two humanoid robots, ARMAR III-A and ARMAR III-B. The robustness and performance of these methods received an award at the IEEE Conference on Humanoid Robots, and the contributions have been used in numerous visual manipulation tasks demonstrated at venues such as ICRA, CeBIT, IAS, and Automatica.
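The book's probabilistic Gaussian-sphere formulation is not reproduced here, but the geometric core of the idea, recovering a position from distances to known landmarks, can be illustrated with a plain least-squares sphere intersection. The function name, landmark coordinates, and ranges below are hypothetical; this is a minimal deterministic sketch, not the closed-form solution derived in the book.

```python
import numpy as np

def intersect_spheres(centers, distances):
    """Least-squares intersection of spheres |x - c_i| = d_i.

    Subtracting the first sphere equation from the others yields a
    linear system in the unknown position x, solved by least squares.
    """
    c = np.asarray(centers, dtype=float)    # (n, 3) landmark positions
    d = np.asarray(distances, dtype=float)  # (n,)  measured ranges
    A = 2.0 * (c[1:] - c[0])
    b = (d[0] ** 2 - d[1:] ** 2) + np.sum(c[1:] ** 2, axis=1) - np.sum(c[0] ** 2)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Hypothetical landmarks and ranges, consistent with the position [1, 2, 0.5].
landmarks = [[0, 0, 0], [4, 0, 0], [0, 4, 0], [0, 0, 3]]
truth = np.array([1.0, 2.0, 0.5])
ranges = [np.linalg.norm(truth - np.array(p)) for p in landmarks]
print(intersect_spheres(landmarks, ranges))  # approximately [1. 2. 0.5]
```

With four or more non-degenerate landmarks the linear system is overdetermined, so noisy range measurements are averaged out in the least-squares sense; the book's contribution lies in treating those ranges probabilistically rather than deterministically as done here.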




Humanoid Robotics and Neuroscience


Book Description

Humanoid robots are highly sophisticated machines equipped with human-like sensory and motor capabilities. Today we are on the verge of a new era of rapid transformations in both science and engineering, one that brings together technological advancements in a way that will accelerate both neuroscience and robotics. Humanoid Robotics and Neuroscience




Deep Learning for Robot Perception and Cognition


Book Description

Deep Learning for Robot Perception and Cognition introduces a broad range of topics and methods in deep learning for robot perception and cognition, together with end-to-end methodologies. The book provides the conceptual and mathematical background needed to approach a large number of robot perception and cognition tasks from an end-to-end learning point of view. It is suitable for students, university and industry researchers, and practitioners in Robotic Vision, Intelligent Control, Mechatronics, Deep Learning, and Robotic Perception and Cognition tasks.

- Presents deep learning principles and methodologies
- Explains the principles of applying end-to-end learning in robotics applications
- Presents how to design and train deep learning models
- Shows how to apply deep learning in robot vision tasks such as object recognition, image classification, video analysis, and more
- Uses robotic simulation environments for training deep learning models
- Applies deep learning methods to different tasks ranging from planning and navigation to biosignal analysis




Robot Hands and the Mechanics of Manipulation


Book Description

Robot Hands and the Mechanics of Manipulation explores several aspects of the basic mechanics of grasping, pushing, and, in general, manipulating objects. It makes a significant contribution to the understanding of the motion of objects in the presence of friction, and to the development of fine position- and force-controlled articulated hands capable of doing useful work. In the book's first section, kinematic and force analysis is applied to the problem of designing and controlling articulated hands for manipulation. The analysis of the interface between fingertip and grasped object then becomes the basis for the specification of acceptable hand kinematics. A practical result of this work has been the development of the Stanford/JPL robot hand, a tendon-actuated, 9-degree-of-freedom hand that is being used at various laboratories around the country to study the associated control and programming problems aimed at improving robot dexterity. Chapters in the second section study the characteristics of object motion in the presence of friction. Systematic exploration of the mechanics of pushing leads to a model of how an object moves under the combined influence of the manipulator and the forces of sliding friction. The results of these analyses are then used to demonstrate verification and automatic planning of some simple manipulator operations. Matthew T. Mason is Assistant Professor of Computer Science at Carnegie-Mellon University and coeditor of Robot Motion (MIT Press, 1983). J. Kenneth Salisbury, Jr. is a Research Scientist at MIT's Artificial Intelligence Laboratory and president of Salisbury Robotics, Inc. Robot Hands and the Mechanics of Manipulation is the 14th title in the Artificial Intelligence Series, edited by Patrick Henry Winston and Michael Brady.




Aerial Manipulation


Book Description

This text is a thorough treatment of the rapidly growing area of aerial manipulation. It details all the design steps required for the modeling and control of unmanned aerial vehicles (UAVs) equipped with robotic manipulators. Starting with the physical basics of rigid-body kinematics, the book gives an in-depth presentation of local and global coordinates, together with the representation of orientation and motion in fixed- and moving-coordinate systems. Coverage of the kinematics and dynamics of unmanned aerial vehicles is developed through a succession of popular multirotor UAV configurations. Such an arrangement, supported by frequent examples and end-of-chapter exercises, leads the reader from simple to more complex UAV configurations. Propulsion-system aerodynamics, essential in UAV design, is analyzed through blade-element and momentum theories, an analysis followed by a description of drag and ground-aerodynamic effects. The central part of the book is dedicated to aerial-manipulator kinematics, dynamics, and control. Based on foundations laid in the opening chapters, this portion of the book is a structured presentation of Newton–Euler dynamic modeling that results in forward and backward equations in both fixed- and moving-coordinate systems. The Lagrange–Euler approach is applied to expand the model further, providing formalisms to model the variable moment of inertia later used to analyze the dynamics of aerial manipulators in contact with the environment. Using knowledge from sensor data, insights are presented into the ways in which linear, robust, and adaptive control techniques can be applied in aerial manipulation to tackle the real-world problems faced by scholars and engineers in the design and implementation of aerial robotic systems. The book concludes with path and trajectory planning, with vision-based examples for tracking and manipulation.
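As a point of reference for the modeling formalism mentioned above (and not a reproduction of the book's derivation), the textbook Newton–Euler relations for a single rigid body connect the applied force and torque to the linear and angular accelerations. The sketch below evaluates that relation with NumPy; the mass and inertia values are hypothetical, loosely quadrotor-like numbers.

```python
import numpy as np

def newton_euler_wrench(m, I, a_lin, omega, omega_dot):
    """Force and torque on a single rigid body (textbook Newton-Euler form).

    F   = m * a_lin                              (Newton: linear acceleration of the CoM)
    tau = I @ omega_dot + omega x (I @ omega)    (Euler: body-fixed inertia tensor I)
    """
    I = np.asarray(I, dtype=float)
    omega = np.asarray(omega, dtype=float)
    force = m * np.asarray(a_lin, dtype=float)
    torque = I @ np.asarray(omega_dot, dtype=float) + np.cross(omega, I @ omega)
    return force, torque

# Hypothetical values: 1.2 kg vehicle with a diagonal inertia tensor.
m = 1.2
I = np.diag([0.011, 0.011, 0.021])
print(newton_euler_wrench(m, I,
                          a_lin=[0.0, 0.0, 9.81],
                          omega=[0.1, 0.0, 0.2],
                          omega_dot=[0.0, 0.5, 0.0]))
```

The gyroscopic term omega x (I @ omega) is what makes the rotational dynamics nonlinear even for a single body; attaching a manipulator adds configuration-dependent inertia on top of this baseline.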




Visual Perception for Manipulation and Imitation in Humanoid Robots


Book Description

Dealing with visual perception in robots and its applications to manipulation and imitation, this monograph focuses on stereo-based methods and systems for object recognition and 6-DoF pose estimation, as well as for markerless human motion capture.







A Visual Servoing Approach to Human-Robot Interactive Object Transfer


Book Description

Taking human factors into account, this visual servoing approach aims to provide robots with real-time situational information so that they can accomplish tasks in direct and proximate collaboration with people. A hybrid visual servoing algorithm, a combination of classical position-based and image-based visual servoing, is applied over the whole task space. A model-based tracker monitors human activity by matching a human skeleton representation against the image of the person. Grasping algorithms compute grasp points based on a geometric model of the robot gripper. Since the major challenges of human-robot interactive object transfer are visual occlusion and grasp planning, this work proposes a new method for visually guiding a robot in the presence of partial visual occlusion and elaborates a solution for adaptive robotic grasping.
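The hybrid scheme itself is not detailed in this description, but its image-based component is conventionally expressed by the classical visual servoing control law for point features, camera twist v = -lambda * pinv(L) * e, where L is the interaction matrix and e the feature error. The sketch below is a minimal illustration of that classical law, assuming normalized image coordinates and known point depths; all feature values and depths are hypothetical.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0,       x / Z, x * y,        -(1.0 + x * x),  y],
        [0.0,      -1.0 / Z,  y / Z, 1.0 + y * y,  -x * y,         -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical image-based law: camera twist v = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features, dtype=float) - np.asarray(desired, dtype=float)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical normalized coordinates of four tracked points and their depths (m).
s      = [(0.12, 0.05), (-0.10, 0.06), (-0.11, -0.08), (0.09, -0.07)]
s_star = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
Z      = [0.8, 0.8, 0.8, 0.8]
print(ibvs_velocity(s, s_star, Z))  # 6-vector camera twist: (vx, vy, vz, wx, wy, wz)
```

With four well-spread points the stacked interaction matrix has full column rank, so the pseudo-inverse yields a unique 6-DoF camera twist driving the features toward their desired locations; occluded features, the problem highlighted above, remove rows from this stack and motivate the hybrid treatment.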




Visual Perception and Robotic Manipulation


Book Description

This book moves toward the realization of domestic robots by presenting an integrated view of computer vision and robotics, covering fundamental topics including optimal sensor design, visual servoing, 3D object modelling and recognition, and multi-cue tracking, with an emphasis on robustness throughout. Covering theory and implementation, experimental results, and comprehensive multimedia support (including video clips, VRML data, C++ code, and lecture slides), this book is a practical reference for roboticists and a valuable teaching resource.




A Roadmap for Cognitive Development in Humanoid Robots


Book Description

This book addresses the central role played by development in cognition. The focus is on applying our knowledge of development in natural cognitive systems, specifically human infants, to the problem of creating artificial cognitive systems in the guise of humanoid robots. The approach is founded on the three-fold premise that (a) cognition is the process by which an autonomous self-governing agent acts effectively in the world in which it is embedded, (b) the dual purpose of cognition is to increase the agent's repertoire of effective actions and its power to anticipate the need for future actions and their outcomes, and (c) development plays an essential role in the realization of these cognitive capabilities. Our goal in this book is to identify the key design principles for cognitive development. We do this by bringing together insights from four areas: enactive cognitive science, developmental psychology, neurophysiology, and computational modelling. This results in a roadmap comprising a set of forty-three guidelines for the design of a cognitive architecture and its deployment in a humanoid robot. The book includes a case study based on the iCub, an open-systems humanoid robot that has been designed specifically as a common platform for research on embodied cognitive systems.