Publisher: IEEE
Languages: English
Types: Conference object
Subjects: Grasping, Robot sensing systems, Joints, Robot kinematics, Force, Humans

Classified by OpenAIRE into: mesheuropmc: body regions
Due to copyright restrictions, access to the full text of this article is available only via subscription.

Parental scaffolding is an important mechanism utilized by infants during their development. Infants, for example, pay stronger attention to the features of objects highlighted by parents and learn how to manipulate an object while being supported by parents. In this paper, a robot with the basic abilities of reaching for an object, closing its fingers, and lifting its hand lacks knowledge of which parts of the object afford grasping and of the hand orientation in which the object should be grasped. During reach-and-grasp attempts, the movement of the robot hand is modified by the human caregiver's physical interaction to enable successful grasping. The object regions that the robot fingers contact first are detected and stored as potential graspable object regions, along with the trajectory of the hand. In the experiments, we showed that although the human caregiver did not directly indicate the graspable regions, the robot was able to find regions such as the handles of mugs after its action execution was partially guided by the human. This experience was later used to find graspable regions of never-before-seen objects. In the end, the robot was able to grasp objects based on the position of the graspable part and the stored action execution trajectories.

Funding: Ministry of Education, Culture, Sports, Science and Technology, Japan; European Commission; TÜBİTAK
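The storage-and-reuse idea in the abstract — record the first-contact region of each guided attempt together with the hand trajectory, then retrieve the closest stored experience for a new object — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and method names (`GraspMemory`, `record`, `recall`) and the use of a plain Euclidean distance over a contact-region feature vector are assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class GraspExperience:
    """One scaffolded reach-and-grasp attempt: where the fingers first
    contacted the object, and the (human-corrected) hand trajectory."""
    contact_region: tuple   # hypothetical feature vector of the first-contact part
    trajectory: list        # sequence of hand poses recorded during the attempt


class GraspMemory:
    """Stores first-contact regions as candidate graspable parts and
    reuses the nearest stored experience for a new object."""

    def __init__(self):
        self.experiences = []

    def record(self, contact_region, trajectory):
        # Store the region the fingers touched first, with the guided trajectory.
        self.experiences.append(GraspExperience(tuple(contact_region), list(trajectory)))

    def recall(self, region_feature):
        """Return the stored trajectory whose contact-region feature is
        closest (squared Euclidean distance) to the query region."""
        if not self.experiences:
            return None

        def dist(exp):
            return sum((a - b) ** 2 for a, b in zip(exp.contact_region, region_feature))

        return min(self.experiences, key=dist).trajectory
```

Usage: after a few guided attempts, a detected candidate region on a never-before-seen object is matched against memory, and the retrieved trajectory seeds the new grasp attempt.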
