
1 Mobile robotic system for search mission

Mobile robotic system for search mission

Robots can perform with ease tasks that seem impossible for a human, and they become even more helpful when they can be controlled wirelessly. Nowadays robots are versatile and offer many features: they can be controlled from a smartphone, avoid obstacles automatically, sense the environment and send alerts, and can even defuse bombs and perform many other critical tasks. The feature discussed in this paper is their use in search and rescue missions. The robot can be controlled wirelessly using RF technology, has an ultrasonic sensor for obstacle detection, and is equipped with a smartphone camera that provides an omnidirectional view and streams video wirelessly to a remote device, making the robot easier to control. The robot can explore places that humans cannot reach easily, such as areas struck by natural disasters like earthquakes, tsunamis and hurricanes.
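The ultrasonic obstacle detection mentioned above reduces to a time-of-flight computation. A minimal sketch follows; the function names, speed-of-sound constant, and threshold are illustrative assumptions, not details from the paper.

```python
# Sketch: convert an ultrasonic sensor's echo round-trip time to a
# distance, the basic computation behind obstacle detection.
# Names and thresholds are illustrative, not from the paper.

SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def echo_to_distance_cm(echo_time_us: float) -> float:
    """Distance = (round-trip time * speed of sound) / 2."""
    return echo_time_us * SPEED_OF_SOUND_CM_PER_US / 2.0

def is_obstacle(echo_time_us: float, threshold_cm: float = 30.0) -> bool:
    """Flag an obstacle when the measured distance falls below a threshold."""
    return echo_to_distance_cm(echo_time_us) < threshold_cm
```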

2 Cooperative capture by multi-agent using reinforcement learning application for security patrol

Cooperative capture by multi-agent using reinforcement learning application for security patrol

The aim of this study is to create a security patrol system in which multiple agents cooperatively capture an intruder inside a building. A host computer deploys autonomous robots as agents and finds the best strategy to enclose an intruder. Viewed as a multi-agent pursuit problem, reinforcement learning theory is one way to learn how to enclose an intruder. In order to apply reinforcement learning theory to security patrol systems, this study introduces a method for discretizing patrol areas. RFID tags are embedded in the floor, and each autonomous robot determines its location by sensing the RFID tags, then sends that locational information to the host computer. The host computer calculates the positioning of the autonomous robots based on the locational data received over the wireless network. We build a prototype of the patrol system and show how it works in this paper.
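The combination of a discretized patrol area (one cell per RFID tag) and reinforcement learning can be sketched as tabular Q-learning on a grid. The grid size, reward scheme, and learning parameters below are illustrative assumptions, not values from the study.

```python
# Sketch: tabular Q-learning over a patrol area discretized into grid
# cells (one cell per RFID tag). Parameters are illustrative assumptions.

ACTIONS = ["up", "down", "left", "right", "stay"]

def step(cell, action, width=5, height=5):
    """Deterministic grid transition; walls keep the agent in place."""
    x, y = cell
    dx, dy = {"up": (0, -1), "down": (0, 1),
              "left": (-1, 0), "right": (1, 0), "stay": (0, 0)}[action]
    nx, ny = x + dx, y + dy
    if 0 <= nx < width and 0 <= ny < height:
        return (nx, ny)
    return cell

def q_update(q, cell, action, reward, next_cell, alpha=0.1, gamma=0.9):
    """Q-learning backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q.get((next_cell, a), 0.0) for a in ACTIONS)
    key = (cell, action)
    old = q.get(key, 0.0)
    q[key] = old + alpha * (reward + gamma * best_next - old)
```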

3 Development and implementation of a natural interface to control an industrial hydraulic robot arm

Development and implementation of a natural interface to control an industrial hydraulic robot arm

This paper discusses the architecture, development and implementation of a natural interface, based on a vision system, to control an industrial hydraulic robot arm with three Degrees Of Freedom (3 DOF). The project uses the natural movements of a human operator to capture commands that move the robot arm. By moving his upper limbs, hands and arms, the human operator generates control signals through an architecture based on a Microsoft Kinect for Windows and an electronic system, built around a microcontroller, that triggers the proportional control valves of the hydraulic robot arm. We show that this architecture can control industrial robotic systems using only the natural visual movement of the human body, without any other sensing device. We implemented a prototype and gathered experimental results that helped us validate our approach and architecture. The system was built to meet industrial accuracy requirements while respecting safety standards.

4 RoboSantral: An autonomous mobile guide robot

RoboSantral: An autonomous mobile guide robot

RoboSantral, an autonomous mobile robot designed and built to guide visitors through a university campus, is presented in this paper. The robot accompanies guests through the campus and gives presentations at predefined locations. Location data is obtained from GPS sensors. Targets such as faculty buildings and museums are recognized by image processing of predefined tags. A Raspberry Pi and an Arduino are used as the microprocessor and microcontroller, respectively.

5 Robot Farmers: Autonomous Orchard Vehicles Help Tree Fruit Production

Robot Farmers: Autonomous Orchard Vehicles Help Tree Fruit Production

This article presents perception and navigation systems for a family of autonomous orchard vehicles. The systems are customized to enable safe and reliable driving in modern planting environments. The perception system is based on a global positioning system (GPS)-free sensor suite composed of a two-dimensional (2-D) laser scanner, wheel and steering encoders, and algorithms that process the sensor data and output the vehicle's location in the orchard and guidance commands for row following and turning. Localization is based on range data to premapped landmarks, currently one at the beginning and one at the end of each tree row. The navigation system takes as inputs the vehicle's current location and guidance commands, plans trajectories for row following and turning, and drives the motors to achieve fully autonomous block coverage. The navigation system also includes an obstacle detection subsystem that prevents the vehicle from colliding with people, trees, and bins. To date, the vehicles sporting the perception and navigation infrastructure have traversed over 350 km in research and commercial orchards and nurseries in several U.S. states. Time trials showed that the autonomous orchard vehicles enable efficiency gains of up to 58% for fruit production tasks conducted on the top part of trees when compared with the same task performed on ladders. Anecdotal evidence collected from growers and workers indicates that replacing ladders with autonomous vehicles will make orchard work safer and more comfortable.

6 Vibration and voice operated navigation system for visually impaired person

Vibration and voice operated navigation system for visually impaired person

Blind people usually use white canes, which are very limited in their ability to provide navigation assistance and cannot easily detect obstacles. The mobility of visually impaired people is limited by their inability to perceive their surroundings. The purpose of this project is therefore to build a navigation system that can guide a visually impaired person safely and with ease in indoor and outdoor environments. This goal has been realized through the use of an ultrasonic device to determine the range of obstacles and a microcontroller to act accordingly. The system includes a warning system based on voice rendering and on the generation of vibration.

7 Intelligent Object Sorting Insolent System


Intelligent Object Sorting Insolent System

In today's world, in most industrial units we find dedicated robots built for one specified task that cannot be reconfigured to exhibit robust behavior, largely because of their design. In demanding working areas, human effort often leads to failure, and industrial environments may include hazardous operations. To avoid such ill effects we came up with the idea of the "Intelligent Object Sorting Insolent System (IOSiS)". In this paper we describe the use of image processing, combined with robotics, to demonstrate an application. The main objective of IOSiS is to highlight the use of robotics to replicate human functionality and ease tedious operations. The system comprises a mechanical arm and, as a visual sensor, an embedded camera. To demonstrate the application's capabilities, a scenario is created in which the robot captures an image of an object, detects it according to its configuration, and uses the maneuverable arm to place the object at the desired location. The paper tackles multiple problems to make the system robust, autonomous and accurate. Using image processing algorithms, the data is turned into a fast embedded application that produces output in real time. To further support the autonomous operation of the system in the field, data logging and monitoring from a remote location via the internet are also shown, demonstrating the efficiency of this application. Internet logging shows the system's potential as part of a centralized operation in any industry.

8 An RF relay based control & communication system for Unmanned Ground Vehicle

An RF relay based control and communication system for Unmanned Ground Vehicle and Micro Air Vehicle


This paper presents a control and communication system for Unmanned Ground Vehicle (UGV) and Micro Air Vehicle (MAV) based surveillance. A Radio Frequency (RF) relay communication system is used to communicate between the ground station and the vehicles. These vehicle systems address a large number of civilian and military applications, including intelligence, surveillance and reconnaissance, through a collaborative communication network. A First Person View (FPV) approach is used to pilot the vehicles wirelessly. This paper discusses the on-board computation methodology of the vehicles and their communication with the ground station through a custom interface. The integrated system presents a technique to control and coordinate a robotic surveillance system.

9 A robot control system for video streaming services by using dynamic encoded QR codes

A robot control system for video streaming services by using dynamic encoded QR codes

We propose a novel robot control system that transmits robot control information over existing video streaming services as dynamically encoded two-dimensional visual codes. We implemented a sensor data transmission system using dynamically encoded two-dimensional visual codes, called SENSeTREAM [1], and built the robot control system on the SENSeTREAM architecture. This paper shows the architecture of the robot control system and a future vision of telepresence and human-robot interaction.

10 Towards Development of Reliable Mobile Robot Navigation System

Towards Development of Reliable Mobile Robot Navigation System

This paper presents a combined control system for a mobile robot. The developed system allows an operator to teleoperate the mobile robot and switches to autonomous movement when the connection with the robot is lost. To create this control system, a comparative analysis of existing sensors was performed and, based on this analysis, the sensors for robot localization were chosen; a virtual model of the 4-wheeled mobile robot and a controller were then developed. Finally, the combined control system was created on top of ROS (Robot Operating System).
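The teleoperation-to-autonomy fallback described above amounts to a watchdog on the command stream. A minimal sketch follows; the class name, mode labels, and timeout are illustrative assumptions, not from the paper.

```python
# Sketch: a connection watchdog that switches the robot from teleoperation
# to autonomous mode when no command has arrived recently.
# The timeout value is an illustrative assumption.

class ControlModeSwitch:
    def __init__(self, timeout_s: float = 2.0):
        self.timeout_s = timeout_s
        self.last_command_time = None

    def on_command(self, now_s: float):
        """Call whenever a teleoperation command is received."""
        self.last_command_time = now_s

    def mode(self, now_s: float) -> str:
        """'teleop' while commands are fresh, 'autonomous' after a timeout."""
        if self.last_command_time is None:
            return "autonomous"
        if now_s - self.last_command_time > self.timeout_s:
            return "autonomous"
        return "teleop"
```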

11 Autonomous three-wheeled robot with computer vision system

Autonomous three-wheeled robot with computer vision system

This paper presents a three-wheeled robot with a computer vision system. The robot was developed at the Saint-Petersburg Electrotechnical University in the context of computer vision algorithms research. The main research steps were the following: implementing the low-level control system, laying out the PCB for brushless DC motor wheel control, writing a program for remote access to the robot, and pattern recognition and obstacle avoidance using computer vision.

12 Humanoid robots rescuing humans and extinguishing fires using HARMS

Humanoid robots rescuing humans and extinguishing fires for Cooperative Fire Security System using HARMS

Fires cause billions of dollars in damage and thousands of deaths each year. Firefighting robots are being deployed around the world to reduce the loss of human life and the amount of property damage. High-rise buildings are used for both business and family homes. Buildings with dozens of floors present a great challenge to firefighters. Firefighter ladders cannot reach high enough to fight fires at the top of the building. Going into the building itself, in order to extinguish the blazing fire, is typically too dangerous and puts firefighters at risk. Monitoring, locating, and extinguishing the fire in the smallest amount of time is crucial to controlling fires in high-rise buildings. This paper introduces humanoid robots capable of moving towards and extinguishing a fire and of locating and rescuing any humans unlucky enough to be trapped in the inferno. This paper is one part of a Cooperative Fire Security System using HARMS (CFS2H) that detects, locates, and extinguishes a fire and rescues human beings using the Human, Agent, Robot, Machine, Sensor (HARMS) protocol.

13 BRACON: Control system for a robotic arm with 6 degrees of freedom for education systems

BRACON: Control system for a robotic arm with 6 degrees of freedom for education systems

This article focuses on the design and development of a control system for a robotic arm designed at the Universidad de las Fuerzas Armadas, Latacunga extension, using Dynamixel servomotors. The use of Python, with the advantages and features of a free programming language, gives the project reliability and easy communication between the arm and a computer. These techniques allow solutions much cheaper than the current ones to be obtained by using open source software.

14 A 9-dof robotic hand Teleoperation system using haptic technology

A 9-dof robotic hand Teleoperation system using haptic technology

In order to bring robotic technology into the field of human-machine interaction and wireless communication, allowing real-time interactivity with virtual objects, it is necessary to develop technology that makes the maximum use of robots to help people do their everyday work efficiently. In this teleoperation system using haptic technology, an operator controls the movements of a robot located at some distance. Force sensors, angle sensors and gyro sensors capture the input, which is fed to a microcontroller. The motors in the robot arm respond to control signals from the controller board. This robotic arm/hand has a wide range of applications, such as pick-and-place in industry and surgery in medicine. It is useful where high precision and accuracy are required, and it can replace the human hand in situations the human hand cannot penetrate.

15 Telepresence Robots for Medical and Homecare Applications

Telepresence Robots for Medical and Homecare Applications

This chapter explores up-to-date research findings and industry practices in telepresence robots for medical and homecare applications, including rehabilitation and therapy, monitoring and assistance, and communication. The key contributing factors to the success of telepresence robots are also discussed to address future trends and opportunities. Robots have the advantages of high precision, strong consistency, and stability; thus, in medical applications, the use of robots helps to overcome the technical limitations of conventional surgery performed by physicians. Telepresence extends not only human vision and hearing but also the sense of touch, which is important for physical rehabilitation. From the user's perspective, human-centered design is critical to the success of the telepresence robot. Once the real demands can be explored and realized, telepresence robots will eventually enter our lives in new roles for modern medicine and homecare.

16 AGROBOT — a robot for early crop disease detection using image processing

AGROBOT — a robot for early crop disease detection using image processing

Management of crops from the early stage to the mature harvest stage involves identification and monitoring of plant diseases, nutrient deficiency, controlled irrigation, and controlled use of fertilizers and pesticides. Although the number of remote sensing solutions is increasing, availability and ground visibility during critical growth stages of crops continue to be major concerns. eAGROBOT (a prototype) is a ground-based agricultural robot that overcomes challenges in large and complex satellite-based solutions and in the helpdesk-style solutions available as m-Services. It provides a small, portable and reliable platform to automatically survey farmland, detect diseases and spray pesticide. In the future, the farmer will be able to obtain a consolidated view of the farm along with decision support statistics for planning purposes. The development of eAGROBOT, real-time testing results obtained from cotton and groundnut plantations, and future directions are detailed in this paper.

17 Designing a spatially aware, autonomous quad copter using the android control sensor system

Designing a spatially aware, autonomous quad copter using the android control sensor system


Gathering information for intelligence, surveillance, and reconnaissance (ISR) poses a risk to human operators, notably in the United States military and intelligence sectors. An autonomous drone that can perform advanced ISR of enclosed spaces will significantly impact a variety of safety-critical applications, including search and rescue. Current systems are limited to outdoor environments with access to global positioning systems (GPS) and are typically expensive, with custom engineering and proprietary interfaces. Our aim is to create indoor capability and to use commercial off-the-shelf (COTS) subsystems to reduce cost and improve flexibility for diverse applications. The goal of this project is to develop a proof-of-concept design for a quadcopter that creates a map of an unknown indoor space. The problem of simultaneously building a map of an unknown space and localizing within that space is termed simultaneous localization and mapping (SLAM); it is frequently referred to as a chicken-and-egg problem, since accurate mapping requires knowledge of location, and vice versa. A SLAM algorithm must probabilistically relate environmental sensor readings and use a probabilistic motion model to converge to a most likely map of the environment and position of the robot, and we must develop such an algorithm for the quadcopter to create the map autonomously. This project has four major parts: hardware, which includes integration of the sensors, quadcopter, and Android phone; command and control; the SLAM algorithm, which runs without GPS; and a mobile application for viewing usable maps. We found that both the localization and mapping algorithms are adept at operating separately in a GPS-obscured environment. Future steps include combining them into an optimized SLAM algorithm that runs efficiently on the Android phone.
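The probabilistic localization that SLAM builds on can be illustrated with a one-dimensional histogram (Bayes) filter. The world map, sensor model, and probabilities below are illustrative assumptions, not details of the project.

```python
# Sketch: a one-dimensional histogram (Bayes) filter, the probabilistic
# building block behind localization in SLAM. The map and sensor model
# are illustrative assumptions.

def normalize(p):
    s = sum(p)
    return [x / s for x in p]

def sense(p, world, measurement, p_hit=0.8, p_miss=0.2):
    """Weight each cell's belief by how well the measurement matches the map."""
    return normalize([pr * (p_hit if world[i] == measurement else p_miss)
                      for i, pr in enumerate(p)])

def move(p, shift):
    """Cyclic exact motion: shift the belief by `shift` cells."""
    n = len(p)
    return [p[(i - shift) % n] for i in range(n)]
```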

19 A Sensor-Based Dual-Arm Tele-Robotic System

A Sensor-Based Dual-Arm Tele-Robotic System

We present a novel system to achieve coordinated task-based control on a dual-arm industrial robot for the general tasks of visual servoing and bimanual hybrid motion/force control. The industrial robot, consisting of a rotating torso and two seven degree-of-freedom arms, performs autonomous vision-based target alignment of both arms with the aid of fiducial markers, two-handed grasping and force control, and robust object manipulation in a tele-robotic framework. The operator uses hand motions to command the desired position for the object via Microsoft Kinect while the autonomous force controller maintains a stable grasp. Gestures detected by the Kinect are also used to dictate different operation modes. We demonstrate the effectiveness of our approach using a variety of common objects with different sizes, shapes, weights, and surface compliances.

20 Real-Time Multisensory Data Retrieval for Cloud Robotic Systems

Real-Time Multisensory Data Retrieval for Cloud Robotic Systems


Cloud Robotics is currently drawing interest in both academia and industry. It allows different types of robots to share information and develop new skills even without specific sensors. They can also perform intensive tasks by combining multiple robots in a cooperative manner. Multi-sensor data retrieval is one of the fundamental tasks for the resource sharing demanded by Cloud Robotic systems. However, many technical challenges persist; for example, Multi-Sensor Data Retrieval (MSDR) is particularly difficult when cloud cluster hosts accommodate unpredictable data requests from multiple robots in parallel. Moreover, the synchronization of multi-sensor data mostly requires near real-time responses for different message types. In this paper, we describe an MSDR framework comprising a priority scheduling method and a buffer management scheme. It is validated by assessing a quality of service (QoS) model for facilitating data retrieval management. Experiments show that the proposed framework achieves better performance in typical Cloud Robotics scenarios.
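The priority scheduling idea can be sketched as a priority-ordered retrieval queue for sensor-data requests. The class name, priority levels, and request labels are illustrative assumptions, not the paper's design.

```python
# Sketch: a priority-based retrieval queue for multi-sensor data requests,
# illustrating priority scheduling. Fields are illustrative assumptions.
import heapq
import itertools

class RetrievalQueue:
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # FIFO tie-break within a priority

    def submit(self, priority: int, request: str):
        """Lower number = more urgent (e.g. real-time pose before bulk logs)."""
        heapq.heappush(self._heap, (priority, next(self._counter), request))

    def next_request(self) -> str:
        """Pop the most urgent pending request."""
        return heapq.heappop(self._heap)[2]
```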

21 Smart-M3-based robots self-organization in pick-and-place system

Smart-M3-based robots self-organization in pick-and-place system

This paper presents an approach to robot self-organization for a pick-and-place scenario based on the Smart-M3 information sharing platform, which provides the possibility of sharing information between different services in a smart space. Within the scope of the approach, a reference model for robot self-organization has been developed. To provide semantic interoperability, ontologies for the robots participating in the scenario have been built. The scenario implementation is based on the Lego® Mindstorms EV3 set for robot construction, which is one of the most popular sets for education at the moment.

22 A cloud robotics system for telepresence enabling mobility impaired people to enjoy

A cloud robotics system for telepresence enabling mobility impaired people to enjoy the whole museum experience

We present a novel robotic telepresence platform composed of a semi-autonomous mobile robot based on a cloud robotics framework, developed with the aim of enabling mobility impaired people to enjoy museums and archaeological sites that would otherwise be inaccessible. Such places, in fact, very often are not equipped to provide access for mobility impaired people, in particular because such aids require dedicated infrastructure that may not fit within the environment and would require large investments. For this reason, people affected by mobility impairments are often unable to enjoy part or even the whole of the museum experience. Existing solutions for mobility impaired people are often based on recorded tours and thus do not allow active participation by the user. In contrast, the presented platform is intended to let users enjoy the complete museum tour. A robot equipped with a camera is placed within the museum, and users can control it to follow predefined tours or freely explore the museum. Our solution ensures that users see exactly what the robot is seeing in real time. The cloud robotics platform controls both navigation capabilities and teleoperation. Navigation tasks let the robot reliably follow predefined tours, while the main concern of teleoperation tasks is to ensure robot safety (e.g., by means of dynamic obstacle detection and avoidance software). The proposed platform has been optimized to maximize the user experience.

23 Human hand tracking using MATLAB to control Arduino based robotic arm

Human hand tracking using MATLAB to control Arduino based robotic arm

Hand tracking systems have attracted much attention in recent times because of their many applications, and they can be implemented with many techniques. Here we show a straightforward technique for tracking the human hand to control a robotic arm. This paper shows the interfacing of the human hand with a robot arm, so that the robotic hand can be controlled by the human hand. The demonstration uses an image processing technique to detect different colors at different axes of the human hand. This technique is useful because it takes real-time video of the hand and tracks it to interface with the robotic arm. A laptop camera captures the video and tracks RGB (red, green and blue) color markers at different axes of the hand, denoting the X and Y axes; tracking the hand in this way connects the controller with the robotic arm. The main aim of this approach is to program a robotic arm so that it is controlled by the human hand and can reach locations humans cannot, performing the given task under direct human-hand control. The real-time movement of the robotic arm can be observed.
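The color-detection step can be sketched as thresholding one RGB channel and taking the centroid of the matching pixels. The function name and threshold are illustrative assumptions; the paper's MATLAB pipeline is not reproduced here.

```python
# Sketch: locating a colored marker in an RGB frame by thresholding one
# channel and taking the centroid of matching pixels, the basic idea
# behind color-based hand tracking. Thresholds are illustrative.
import numpy as np

def track_color(frame: np.ndarray, channel: int, threshold: int = 200):
    """Return the (x, y) centroid of pixels bright in `channel`, or None."""
    mask = frame[:, :, channel] > threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```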

24 Flying robot - A drone for urban warfare

Flying robot - A drone for urban warfare

This paper presents a flying robot for data collection and patrolling purposes. Data collection is done using a camera whose output is fed to a monitor. The flying mechanism consists of brushless DC motors and propellers, which determine the motion and direction of the robot. Power is supplied by a nickel metal hydride battery. The control mechanism uses radio frequency links and a PIC microcontroller. Navigation is provided by GPS, which is used to determine the exact location of the robot. If the robot is equipped with combat equipment, it can counter enemy threats while being controlled from a safe distance, like a UCAV (unmanned combat air vehicle).

25 Towards a Reliable Monitoring Robot for Mountain Vineyards

Towards a Reliable Monitoring Robot for Mountain Vineyards

Crop monitoring and harvesting by ground robots in mountain vineyards is an intrinsically complex challenge, for two main reasons: the harsh terrain, and the reduced availability and unstable accuracy of GPS localization. This paper presents a cost-effective robot that can be used in these mountain vineyards for crop monitoring tasks. It also explores a natural vineyard feature as the input to a standard 2D simultaneous localization and mapping (SLAM) approach for feature-based map extraction. To make it possible to evaluate these natural features for mapping and localization purposes, a virtual scenario under ROS/Gazebo has been built and is described. A low-cost artificial landmark and a hybrid SLAM are proposed to increase localization accuracy, robustness and redundancy in these mountain vineyards. The results obtained in the simulation framework validate the use of a localization system based on natural mountain vineyard features.

26 Designing an autonomous soil monitoring robot

Designing an autonomous soil monitoring robot

Through the monitoring of soil conditions land managers can respond rapidly to mitigate adverse events, such as extreme weather or ongoing drought. However, without an extensive system of sensors, gathering information over a large field takes an exorbitant amount of time. This mass collection of soil data would allow farm managers to study time-lapsed trends and variables within a particular region to provide quick assessment of land conditions. Currently, the client uses a bulky handheld wireless soil sensor to measure moisture content and temperature. To take measurements, the client must walk to the coordinates of interest, clear the ground of vegetation, manually insert the probe into the ground, and log the reading. The team is designing an autonomous soil monitoring rover to expedite data collection and reduce labor. The rover will be able to autonomously navigate through a field several acres in size and avoid obstacles. It will gather data on soil moisture and temperature at a set of given waypoints and relay the information back to the farm manager. Constructed with a custom welded steel frame, the first rover prototype will be a four-wheeled vehicle with front wheel drive. The vehicle will be equipped with a Stevens Hydra Probe II mounted to a linear actuator. Navigation will be handled using a GPS and wheel encoders. When completed, the rover will allow the land manager to analyze trends between soil data and pasture health, providing an accurate snapshot of a field.
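The GPS waypoint navigation the rover relies on starts with computing the distance and heading from the current fix to the next sampling point. A minimal sketch follows; the function name and the flat-earth (equirectangular) approximation are illustrative assumptions suitable only for small fields.

```python
# Sketch: distance and heading from the rover's GPS fix to the next
# waypoint, using an equirectangular approximation (an assumption
# reasonable over a field a few acres in size).
import math

EARTH_RADIUS_M = 6371000.0

def waypoint_vector(lat, lon, wp_lat, wp_lon):
    """Return (distance_m, bearing_deg) from (lat, lon) to the waypoint."""
    dlat = math.radians(wp_lat - lat)
    dlon = math.radians(wp_lon - lon) * math.cos(math.radians(lat))
    north = dlat * EARTH_RADIUS_M
    east = dlon * EARTH_RADIUS_M
    distance = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    return distance, bearing
```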


26 AndroRC: An Android remote control car unit for search missions

AndroRC: An Android remote control car unit for search missions

The AndroRC is a remote control (RC) car unit controlled by a smartphone running an Android application. The car is meant to be used in search missions after natural disasters. It is developed to autonomously avoid obstacles that are not visible to the driver. The RC unit is built on a Tamiya 70112 Buggy car chassis set with an extra servo motor added to provide left and right steering. The RC is equipped with an ultrasonic distance sensor, a camera, a Bluetooth receiver, a Wi-Fi transmitter, two 9-V batteries and two Arduino microcontroller boards (UNO and MEGA). The Arduino MEGA controls propulsion and steering, while the UNO processes the information received from the distance sensor to stop the RC at a predefined distance. The Android application uses the smartphone's embedded orientation sensor to determine the four directions (forward, backward, left and right) intended by the user; hence, rotating the smartphone in different directions results in the corresponding propulsion of the RC unit. The control commands are transmitted to the RC unit via Bluetooth. The Android application also receives (via Wi-Fi) and displays the camera feed in real time. The AndroRC was characterized and examined in bench-top settings.
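The orientation-to-command mapping can be sketched as choosing the dominant tilt axis of the phone. The function name, angle convention, and dead-zone value are illustrative assumptions, not the application's actual logic.

```python
# Sketch: mapping smartphone orientation (pitch/roll in degrees) to the
# four drive commands, mirroring tilt-based RC control.
# The dead-zone angle is an illustrative assumption.

def tilt_to_command(pitch_deg: float, roll_deg: float, dead_zone: float = 15.0) -> str:
    """Dominant tilt axis wins; small tilts inside the dead zone mean stop."""
    if max(abs(pitch_deg), abs(roll_deg)) < dead_zone:
        return "stop"
    if abs(pitch_deg) >= abs(roll_deg):
        return "forward" if pitch_deg < 0 else "backward"
    return "right" if roll_deg > 0 else "left"
```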

27 Signage System for the Navigation of Autonomous Robots in Indoor Environments

Signage System for the Navigation of Autonomous Robots in Indoor Environments

On many occasions people need to go to certain places without any prior knowledge of the environment. This situation may occur when a place is visited for the first time, or when no map is available to orient us. In those cases, the signs in the environment are essential for reaching the goal. The same situation may happen to an autonomous robot. Such robots must be capable of solving this problem in a natural way, and to do so they must use the resources present in their environment. This paper presents an RFID-based signage system developed to guide an autonomous robot and give it important information. The system has been implemented in a real indoor environment and successfully tested on the autonomous social robot Maggie. At the end of the paper, experimental results carried out inside our university building are presented.

28 Trajectory planning and tracking control for LED placement spray wax robot

Trajectory planning and tracking control for LED placement spray wax robot

For the motion system of an LED placement spray wax robot, this paper uses the shortest-path planning method of optimization theory to plan the robot's motion trajectory, and a high-order polynomial to plan the spray wax robot's joint movement. Guided by deterministic learning theory, the paper uses radial basis functions (RBF) to design an adaptive neural network controller (ANNC), which tracks and controls the robot's motion along the planned trajectory. This ensures the stability and rapidity of the robot's movement, saving time and energy. An experiment is conducted using the algorithm designed in this paper, together with a comparison experiment against a PID algorithm. The experimental results illustrate the effectiveness of the proposed method.
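One common instance of high-order polynomial joint planning is a quintic (5th-order) rest-to-rest trajectory with zero velocity and acceleration at both ends; the sketch below uses the standard minimum-jerk blend and is an illustrative assumption, not the paper's exact polynomial.

```python
# Sketch: a quintic joint trajectory with zero velocity and acceleration
# at both endpoints, one standard form of high-order polynomial planning.

def quintic_position(q0: float, qf: float, t: float, tf: float) -> float:
    """Joint angle at time t for a rest-to-rest move from q0 to qf over tf."""
    s = t / tf
    # 10s^3 - 15s^4 + 6s^5 rises smoothly from 0 to 1 on s in [0, 1]
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5
    return q0 + (qf - q0) * blend
```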

29 Real-time gesture recognition and robot control through blob tracking



Real-time gesture recognition and robot control through blob tracking


This paper presents the framework of a vision-based interface designed to instruct a humanoid robot through gestures using image processing. Image thresholding and blob detection techniques were used to obtain gestures. We then analyze the images to recognize the gesture given by the user in front of a web camera and take an appropriate action (such as taking a picture or moving the robot). The application is developed using the OpenCV (Open Computer Vision) libraries and Microsoft Visual C++. The gestures obtained by processing the live images are used to command a humanoid robot with simple capabilities. A commercial humanoid toy robot, Robosapien, was used as the output module of the system. The robot was interfaced to the computer by a USB-UIRT (Universal Infrared Receiver and Transmitter) module.


30 Wall following and human detection for mobile robot surveillance in indoor environment

Wall following and human detection for mobile robot surveillance in indoor environment



This paper describes the design of an indoor surveillance system capable of wall following and human detection based on intelligent mobile robot navigation. Wall following is performed via differential velocity control using type-2 fuzzy logic on a wheeled mobile robot equipped with IR and sonar sensors, while human detection is performed by a dedicated human detection sensor. We tested our design on a mobile robot in an indoor surveillance task over a polygonal terrain using right-wall following. The mobile robot surveillance system performed effectively in our tests.
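The paper's wall following uses type-2 fuzzy logic; as a much simpler stand-in, the sketch below shows only the underlying differential-velocity idea with a plain proportional rule for right-wall following. The function name, gain, base speed, and target distance are assumptions for illustration, not values from the paper.

```python
def wall_follow_velocities(side_dist, target=0.5, base=0.3, k=0.8):
    """Left/right wheel speeds (m/s) for right-wall following: steer
    toward the wall when too far from it, away when too close."""
    error = side_dist - target                  # + means too far from wall
    turn = max(-base, min(base, k * error))     # clamp the steering term
    return base + turn, base - turn             # (left, right) wheel speeds
```

A faster left wheel turns the robot right (toward the wall on its right), and vice versa; the fuzzy controller in the paper replaces this single proportional rule with a rule base that handles sensor uncertainty.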




31 Global path planning with obstacle avoidance for omnidirectional mobile robot using overhead camera

Global path planning with obstacle avoidance for omnidirectional mobile robot using overhead camera


Path planning is one of the indispensable modules of autonomous mobile robots: it delineates a collision-free path between two desired positions in an obstacle-cluttered workspace. In this paper, we propose a global path planning method in the image plane using a single overhead camera, based on the principle of artificial potential fields. Our algorithm optimally fuses an image-based technique for obstacle avoidance with path planning in the image space and integrates a CAD-based recognition method. The proposed method is suitable for planning the desired path for omnidirectional mobile robots, and its output is used as input to our previously developed path-following controller. Experimental results show the efficiency of the generated path using an overhead camera for the omnidirectional robot iMoro, a four-wheeled, independently steered mobile robot.
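The artificial-potential-field principle the method builds on can be sketched as a gradient step combining an attractive pull toward the goal and a repulsive push away from nearby obstacles. This is the generic textbook form, not the authors' image-plane formulation; all parameter values are illustrative.

```python
import math

def potential_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                   d0=1.0, step=0.05):
    """One gradient-descent step on the combined potential: attractive
    toward the goal, repulsive within distance d0 of each obstacle."""
    fx = k_att * (goal[0] - pos[0])             # attractive force
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:                        # repulsion only inside d0
            mag = k_rep * (1.0 / d - 1.0 / d0) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy)
    if norm < 1e-9:                             # at the goal (or a minimum)
        return pos
    return (pos[0] + step * fx / norm, pos[1] + step * fy / norm)
```

Iterating this step traces a path; the well-known caveat is that the robot can get stuck in local minima, which global planners like the paper's must handle.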





32 A wearable device for controlling a robot gripper with fingertip contact, pressure, vibrotactile

A wearable device for controlling a robot gripper with fingertip contact, pressure, vibrotactile, and grip force feedback


Inspired by the potential of robotic teleoperation platforms, which extend the capabilities of both the human operator and the controlled robot, we have created a wearable haptic device that gives an operator bilateral control over the gripper of a remote robot. We believe this device is the first to provide kinesthetic grip force feedback along with independently controllable fingertip contact, pressure, and vibrotactile feedback, all of which are known to be of vital importance to humans when directly manipulating objects. The device is worn on the user's index finger and thumb and allows him or her to control the grip aperture of the robot using a pinching motion. Simultaneously, the operator receives kinesthetic grip force feedback from a geared DC motor and fingertip contact, pressure, and vibrotactile feedback from a pair of linear voice-coil actuators. This paper first describes the design of the device and then proposes a controller that closely links the human's hand to the sensory signals measured by kinesthetic and tactile sensors on the robot's gripper. We demonstrate initial feasibility of the device by having a user teleoperate a PR2 humanoid robot to repeatedly pick up and set down five diverse objects.

33 Towards the design of a new humanoid robot for domestic applications

Towards the design of a new humanoid robot for domestic applications

Robots that possess the ability to undertake everyday tasks in domestic environments have the potential to provide unprecedented independence to disabled and elderly people who are currently reliant on other people to do these jobs for them. In addition to the ability to perform basic tasks, it is desirable that such robots possess some form of social interface such that users can interact with them in a natural manner. While many robot platforms have been developed to perform everyday tasks, few systems possess high levels of mechanical efficiency, system stability, practical functionality and a dynamic social interface. This work presents the novel design of a humanoid robot that uses wheels for locomotion and the combination of an actuated stabilizer and a self-balancing control algorithm to maintain stability. To validate some of the basic concepts in this design, a full scale working prototype was built and its performance was tested. It was found that despite being the first prototype of its type, it was capable of robust locomotion in indoor environments and was capable of traversing small bumps with relative ease. It was also very efficient at picking up small items from the ground.

34 System for automatic collisions prevention for a manipulator arm of a mobile robot

System for automatic collisions prevention for a manipulator arm of a mobile robot

Control systems of mobile robots that are remotely controlled by a human operator, with only limited visual feedback from a camera subsystem, can aid the operator in various ways. This paper deals with automatic collision detection for a manipulator arm mounted on a mobile robot; the goal of the system is to prevent any possible collisions between the arm and the robot, or within the arm itself. The implementation uses the separating axis algorithm on pairs of oriented bounding boxes enveloping the mechanical components of the robot. Practical testing of the whole control system was done on two existing mobile robots.
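The separating axis algorithm on oriented bounding boxes, which the system uses for collision detection, can be sketched in 2-D (the robot arm requires 3-D boxes; the 2-D version keeps the idea visible). The box representation `(cx, cy, half_w, half_h, angle)` is an assumption for illustration.

```python
import math

def obb_overlap(b1, b2):
    """Separating-axis test for two 2-D oriented boxes, each given as
    (cx, cy, half_w, half_h, angle). Returns True if they intersect."""
    def axes(b):
        a = b[4]
        return [(math.cos(a), math.sin(a)), (-math.sin(a), math.cos(a))]
    def corners(b):
        cx, cy, hw, hh, a = b
        ux, uy = math.cos(a), math.sin(a)       # box's local x axis
        vx, vy = -math.sin(a), math.cos(a)      # box's local y axis
        return [(cx + sx*hw*ux + sy*hh*vx, cy + sx*hw*uy + sy*hh*vy)
                for sx in (-1, 1) for sy in (-1, 1)]
    for ax, ay in axes(b1) + axes(b2):          # candidate separating axes
        p1 = [x*ax + y*ay for x, y in corners(b1)]
        p2 = [x*ax + y*ay for x, y in corners(b2)]
        if max(p1) < min(p2) or max(p2) < min(p1):
            return False                        # found a separating axis
    return True                                 # no separating axis: collision
```

In 3-D the same test needs up to 15 candidate axes (face normals plus edge cross products), but the projection-and-interval-overlap logic is identical.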

35 Design of a vision-based autonomous robot for street navigation

Design of a vision-based autonomous robot for street navigation

In this study we present the design of an autonomous mobile robot that navigates indoor and outdoor environments using computer vision. The system's hardware, software, and simulation infrastructures are explained, and the autonomous navigation algorithms used to reach a given target are described. Successful experimental results are obtained for computer vision algorithms such as sidewalk following, visual localization, sidewalk detection from satellite maps, crosswalk detection, and path planning. We envision that such a system can serve as a basis for designing advanced robot systems that help people in their daily lives.

36 Study on navigating path recognition for the greenhouse mobile robot based on K-means algorithm


Study on navigating path recognition for the greenhouse mobile robot based on K-means algorithm

To improve the robustness to nonuniform illumination and the real-time performance of a mobile robot navigation path recognition system in a greenhouse, the three components H, S, and I are first separated from the HSI color space, and the H component, which is independent of light intensity and effectively suppresses noise, is extracted for subsequent image processing. Given the color characteristics of the greenhouse environment, clustering segmentation of the image is performed with the K-means algorithm to separately cluster the path and green crop information. The redundant and interfering information remaining in the clustered image is then eliminated by morphological erosion to obtain complete and clear path information. Compared with conventional threshold segmentation methods, the proposed method avoids the excessive memory usage and computation time that unclear segmentation causes in the subsequent Hough transform, enhancing the speed of greenhouse path recognition and meeting the real-time requirements of automatic navigation and operation of the greenhouse robot. The experimental results show that for a greenhouse robot working in an environment with a complex background and variable light, the proposed method significantly reduces the effect of nonuniform illumination on navigation; that is, it is robust to nonuniform illumination.
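The K-means clustering step on the H channel can be illustrated with a one-dimensional k-means over scalar hue values. This is a generic sketch (assuming k >= 2 and simple spread-out seeding), not the paper's implementation; production code would typically use `cv2.kmeans` or `sklearn.cluster.KMeans`.

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain k-means on scalar values (e.g. H-channel pixels).
    Returns the sorted cluster centers. Assumes k >= 2."""
    svals = sorted(values)
    # Seed centers spread across the sorted value range.
    centers = [svals[(len(svals) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:                        # assign to nearest center
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

On a hue histogram with distinct path and crop colors, the two centers land on those two modes, and each pixel can then be labeled by its nearest center.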





37 Smart phone interface for robust control of mobile robots


Smart phone interface for robust control of mobile robots


Advancements in cell phone hardware and software are revolutionizing the smart phone sector. Everything from shopping to complex medical systems is becoming smartphone-based. This paper discusses the development of a generic system for controlling mobile robots using smart phones, and illustrates the development of applications for two iPhones that receive surveillance video from a mobile robot. The solution is designed to handle interruptions such as incoming calls and SMS during operation, making the system robust. The developed system is flexible enough to accommodate any special camera without changes to the system's hardware. Experimental results show that this generic solution is fast, flexible, portable, and secure enough to be used anywhere with any robot or vehicle.

39 Path planning algorithm development for autonomous vacuum cleaner robots

Path planning algorithm development for autonomous vacuum cleaner robots

A vacuum cleaner robot, generally called a robovac, is an autonomous robot controlled by an intelligent program. An autonomous vacuum cleaning robot performs tasks such as sweeping and vacuuming in a single pass. The DVR-1 vacuum cleaning robot consists of two DC-motor-operated wheels that allow 360-degree rotation, a castor wheel, side spinning brushes, a front bumper, and a miniature vacuum pump. Sensors in the bumper generate binary obstacle-detection information, which is processed by control algorithms used for path planning and navigation. The bumper keeps the robot from colliding with walls and furniture by triggering it to reverse or change path accordingly.
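A minimal sketch of the bumper-driven reactive logic described above, mapping the binary bumper bits to a motion command. The command names and the left/right bumper split are assumptions for illustration, not the DVR-1's actual command set.

```python
def bumper_react(left_bump, right_bump):
    """Map binary bumper readings to a motion command, mirroring the
    reverse-or-change-path behavior of a bump-and-turn robovac."""
    if left_bump and right_bump:
        return "reverse"                        # head-on contact: back up
    if left_bump:
        return "reverse_then_turn_right"        # hit on the left side
    if right_bump:
        return "reverse_then_turn_left"         # hit on the right side
    return "forward"                            # clear path: keep going
```

A full coverage planner layers a sweep pattern (e.g. boustrophedon rows or a spiral) on top of this reactive escape behavior.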

40 Ultrasonic-sensor deployment strategies and use of smartphone sensors for mobile robot navigation


Ultrasonic-sensor deployment strategies and use of smartphone sensors for mobile robot navigation in indoor environment



This paper presents deployment strategies for ultrasonic sensors and a way of using smartphone sensors to help mobile robot navigation in indoor environments. There is a critical need for cost-effective, reliable, and fairly accurate solutions to meet the demands of indoor robotic applications. Ultrasonic sensors have been popular for detecting simple objects due to their low cost and simplicity, despite their limitations. We propose an efficient deployment of ultrasonic sensors for low-cost mobile robots. A smartphone has many high-performance sensors that can be utilized to navigate and localize mobile robots, including a camera, a gyroscope, and an accelerometer. We analyzed the use of a smartphone's orientation sensor and compared its performance to a conventional approach; the comparison results were promising. The combination of the efficient sensor deployment and the use of smartphone sensors shows the possibility of developing a low-cost indoor mobile robotics platform for college education and robotics research laboratories.
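Smartphone orientation estimates are commonly derived by fusing the gyroscope and accelerometer; a complementary filter is one standard way to do this. The sketch below is a generic single-axis version, not the paper's method; the blend factor `alpha` is an assumed value.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One fusion update for a tilt angle: trust the integrated gyroscope
    rate short-term (low drift over one step) and the accelerometer tilt
    estimate long-term (no drift, but noisy)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

Integrated gyro readings drift over time while accelerometer tilt is noisy but unbiased; blending the two with a large `alpha` keeps the estimate both smooth and drift-free.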





42 Design of tracked robot with remote control for surveillance

Design of tracked robot with remote control for surveillance

For specific purposes, a tracked robot that can be controlled remotely and can acquire images from its environment is very important, for example in rescuing disaster victims. We propose an architecture for a Raspberry Pi and AVR-based mobile robot that can be controlled by low-cost remote controller integrated circuits (ICs) and can avoid obstacles using an ultrasonic distance sensor. This prototype can also be used for education and research in the university. We evaluate the performance of the robot in terms of range and its capability to deliver video streaming from the Raspberry Pi output and a 2.4 GHz video transmitter.

43 Customizing household mobile robot for remote laboratories

Customizing household mobile robot for remote laboratories

Recent technological inventions in electronics and communication technologies allow us to do a range of activities over the Internet, some of which we could not even imagine a few years ago. In this new scenario, performing experiments that involve real hardware remotely over the Internet is now a reality. However, considering the complexities of the technologies involved in remote laboratories, some experimental systems need to be modified or customized before they can be integrated into a remote laboratory system. One such system is a mobile robot that can be used for various student learning laboratory activities. This paper reports the process of customizing a household mobile robot so that it can be integrated into a remote laboratory facility.

44 Assisted robot navigation based on speech recognition and synthesis

Assisted robot navigation based on speech recognition and synthesis


Interactive robots can help people with or without disabilities. Research has been conducted to help children with motor disabilities explore the world around them, which is important for their cognitive development. However, most of these initiatives lack natural and intuitive interfaces, or are prohibitively expensive to adopt at a larger scale. This paper describes an experimental environment that uses speech recognition and synthesis to improve human-robot interaction (HRI) with children. The proposed system's main goal is to perform activities with physically disabled children, though it can be used with other children as well. Robots that are attractive, small-sized, and relatively low-cost are used to implement the environment. The system recognizes a set of simple speech commands, which allows human-assisted navigation.

45 Experimental application of an autonomous mobile robot for gas leak detection in indoor environments

Experimental application of an autonomous mobile robot for gas leak detection in indoor environments

This paper presents the experimental application of an autonomous mobile robot for gas leak detection in indoor environments. The application focuses on automating a human-risky operation in indoor areas. The goal of the autonomous mobile robot is to localize a toxic gas leak source: the robot explores the whole area and performs an auto-localization procedure based on a SLAM method and a LIDAR sensor, while measuring gas concentration with a photoionization detector. The experiments were conducted in a large indoor environment in a university facility with a simulated gas leak source. Combining the results of the auto-localization procedure with the sensor information allows the gas leak source location to be estimated.

46 Positioning and Navigation of Meal Delivery Robot Using Magnetic Sensors and RFID

Positioning and Navigation of Meal Delivery Robot Using Magnetic Sensors and RFID

The main purpose of this paper is to study the positioning and navigation of a mobile robot that integrates magnetic sensors and RFID. Using RFID and the proposed progressive polling algorithm, the system handles data access and identification for each tag, preventing read conflicts between tags and ID numbers and thereby improving efficiency. Actual experiments show that the mobile robot moves stably along a magnetic stripe; when its position drifts, it is corrected immediately. Meanwhile, the progressive polling algorithm excludes conflicts between tag reads so that the robot arrives accurately at the next station. The system thus provides a stable platform for following the route, flexibility through real-time correction, high positioning and navigation accuracy, and efficient tag recognition.
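The abstract does not detail the progressive polling algorithm; one plausible reading is that tag IDs are queried one at a time, starting from the expected next station, so that at most one tag answers per query slot. The sketch below encodes that reading only; `reader_response` is a hypothetical callback standing in for the RFID reader.

```python
def progressive_poll(reader_response, tag_ids, start_index=0):
    """Poll tag IDs one at a time, beginning at the expected next station,
    so at most one tag answers per query slot. `reader_response(tag_id)`
    returns True when that tag is currently in range."""
    n = len(tag_ids)
    for offset in range(n):
        tag = tag_ids[(start_index + offset) % n]   # wrap around the route
        if reader_response(tag):
            return tag                              # first tag that answers
    return None                                     # no tag in range
```

Starting the poll at the expected station makes the common case (the robot arrived where it was headed) a single query, while still covering every tag in the worst case.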

47 Safety Control of Industrial Robots Based on a Distributed Distance Sensor

Safety Control of Industrial Robots Based on a Distributed Distance Sensor

In the field of human-robot interaction in industrial environments, the active control of robot based on exteroceptive sensors' measurements is a viable approach to the issue of safety enhancement. Among all possible solutions, onboard sensors have several advantages, in terms of ease of deployment and calibration, and absence of occlusions. In this paper, we present a prototype of a distributed distance sensor that can be mounted on an industrial robot. The sensor's outputs have been used as part of a newly conceived control strategy, aimed at improving human safety by means of assessing the level of danger induced by the robot. Several experiments on an ABB IRB140 industrial robot have been carried out, demonstrating the feasibility of the proposed approach in a realistic scenario.

48 Virtual laboratory for a remotely operating robot arm

Virtual laboratory for a remotely operating robot arm
New techniques, such as virtual laboratories and remotely operated ones, improve teaching and learning in the academic community. They allow students to gain experience with real tools, in an intuitive way and at low cost. This paper presents an alternative approach to e-learning in robotics that allows students to simulate and remotely operate the RV-2AJ robot arm. The main characteristics of the application are realistic graphics and a wide variety of options that allow the user to easily learn advanced robot control concepts.

49 Mobile robot unknown indoor environment exploration using self-localization and grid map building

Mobile robot unknown indoor environment exploration using self-localization and grid map building

This paper presents an approach for exploring unknown indoor environments with simultaneous localization and mapping, based on mobile robot motion and sonar scanning. Measurements from the odometry system update the self-localization. The map-building process maintains two grid maps: (1) an occupancy map (OM) that models the occupancy of the environment, and (2) a trajectory map (TM) that memorizes the robot's trajectory. Using the two grid maps provides an effective description and use of the environment information over time. Results from simulations and real-robot experiments using random exploration show the effectiveness of our approach.
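The occupancy-map update from a sonar reading can be sketched with the usual ray-casting rule: cells along the beam are marked free and the endpoint cell occupied. The sketch uses Bresenham's line algorithm on an integer grid; the cell values (-1 unknown, 0 free, 1 occupied) are assumed conventions, not the paper's.

```python
def update_grid(grid, x0, y0, x1, y1):
    """Mark cells along a straight sonar beam from (x0, y0) as free (0)
    and the hit cell (x1, y1) as occupied (1), using Bresenham's line.
    `grid` is indexed as grid[y][x]."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx + dy
    x, y = x0, y0
    while (x, y) != (x1, y1):
        grid[y][x] = 0                          # free space along the beam
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x += sx
        if e2 <= dx:
            err += dx
            y += sy
    grid[y1][x1] = 1                            # obstacle at the endpoint
    return grid
```

Probabilistic occupancy grids replace the hard 0/1 writes with log-odds updates, but the free-along-the-beam, occupied-at-the-hit geometry is the same.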

50 Fuzzy logic based force-feedback for obstacle collision avoidance of robot manipulators

Fuzzy logic based force-feedback for obstacle collision avoidance of robot manipulators

Remote robot teleoperation enables users to perform complex tasks in hostile or inaccessible environments without being physically present. However, minimizing collisions with obstacles while maintaining task accuracy and speed is important. While visual and auditory inputs aid the user in accurate control, tactile and kinesthetic force-feedback information can be used to achieve the required speed and accuracy. This paper presents a dynamic, real-time, fuzzy-logic-based force-feedback control for obstacle avoidance in a remotely operated robot manipulator. The presented method uses the absolute position of the robot manipulator to calculate the distance vector to known obstacles. A fuzzy controller uses the distance vectors and the velocities of the manipulator's components to generate force feedback in each axis. Furthermore, the paper presents an interactive graphical user interface that enables users to add or remove obstacles in the environment dynamically. The presented method was implemented on a simple 3-DOF robot manipulator.
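A heavily simplified, single-axis version of the distance-to-force mapping can be sketched with one fuzzy-style ramp: full force inside a near threshold, zero beyond a far threshold, and a linear blend in between. The real controller is a multi-input fuzzy system over distance vectors and velocities; the function name and all thresholds here are illustrative assumptions.

```python
def feedback_force(distance, d_near=0.2, d_far=1.0, f_max=5.0):
    """One-axis distance-to-force rule: the nearer the obstacle, the
    stronger the opposing force felt by the operator. Membership in
    'near' ramps from 1 at d_near down to 0 at d_far."""
    if distance <= d_near:
        return f_max                            # fully 'near': maximum force
    if distance >= d_far:
        return 0.0                              # fully 'far': no force
    near = (d_far - distance) / (d_far - d_near)  # triangular ramp in (0, 1)
    return near * f_max
```

A fuzzy controller generalizes this to several overlapping membership functions per input (distance and velocity) with a rule base and defuzzification, rather than one hand-coded ramp.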

51 Development of a peristaltic crawling robot for long-distance inspection of sewer pipes

Development of a peristaltic crawling robot for long-distance inspection of sewer pipes


We have developed a robot that can inspect long distances in thin sewer pipes. Inspections of sewer pipes are necessary to prevent accidents such as road caving accidents. However, thin sewer pipes cannot be easily inspected by existing methods. To solve this problem, we developed a peristaltic crawling robot that mimics earthworm locomotion. This robot comprises six units: five joint parts and a head part with a camera that inspects inside the sewer pipes. As the actuator in the unit is a pneumatic artificial muscle, the robot is driven by air pressure. Air is supplied to each unit through an electric valve mounted directly on the unit, thereby preventing delay in air transfer. We confirmed that the robot can stably drive through horizontal and vertical sewer pipes. In driving tests through a bending pipe, we identified materials for the joint parts that enable the robot to smoothly travel through the pipe. During these tests, we further evaluated the robot's traveling performance by measuring its driving speed through the pipe.

52 A new vision and navigation research for a guide-dog robot system in urban system

A new vision and navigation research for a guide-dog robot system in urban system

This paper presents the development of a guide-dog robot system for the visually impaired. Based on a Hall-sensor joystick and ultrasonic sensors, a “smart rope” system is designed for human-robot interaction. The system provides multiple functions for self-walking in urban environments, such as following, navigation, and obstacle avoidance. To distinguish small involuntary forces from intended navigational movement, a fuzzy logic control method is applied to improve the accuracy of the “smart rope” manipulation. To compensate for the user's lack of visual sense, a smart phone with a camera is utilized as the robot's vision to detect traffic lights and zebra crossings. A fast vision recognition approach is provided based on a combined AdaBoost and template-matching algorithm. To evaluate the proposed method, an integrated system was implemented on the mobile robot platform. The performance of both the interactive system and the vision system was analyzed after experiments in the urban environment, and the system's accuracy, usefulness, and adaptability were verified. The experimental results showed that this newly designed guide-dog robot system is suitable and effective for assisting the visually impaired in self-walking.

Monday, December 3, 2012

An ongoing development at the University of Debrecen, Faculty of Informatics, is to convert a commercially available remote-controlled all-terrain robot into an autonomous one. In the present phase our goal is to prepare the vehicle for autonomous tasks. To move autonomously, the robot needs information about its surroundings, so several sensors had to be developed. So far, a GPS receiver, an accelerometer, a magnetic field sensor, and eight ultrasonic-based distance sensors have been developed and connected. As we want the robot to solve tasks, we need to place a processing unit on board; there are two options we are working with: Altera's Development and Education board 2 and National Instruments' Single-Board RIO 9632. Software development is in progress for both platforms simultaneously. The various sensors and the high-performance real-time control unit make the robot a capable test device for complex autonomous algorithms, such as route-following and path-finding algorithms. A state-of-the-art report is presented.
