Autonomous Simultaneous Localization and Mapping Based on Line Tracking in a Factory-like Environment

This study addresses SLAM (simultaneous localization and mapping), an indispensable capability for autonomous mobile robots. SLAM systems provide both a map of the environment and the agent's localization within it. While performing SLAM in an unknown environment, however, the robot is typically navigated in one of three ways: by user guidance, by random movements in an exploration mode, or by exploration algorithms. User guidance and random exploration have drawbacks: the user may not always be able to observe the agent, and a random process may take a long time. To address these problems, a new autonomous exploration algorithm for SLAM systems is sought. To this end, a new left-oriented autonomous exploration algorithm for SLAM systems has been developed. To show the algorithm's effectiveness, a factory-like environment is built on the ROS (Robot Operating System) platform and the navigation of the agent is observed. The results demonstrate that SLAM can be performed autonomously in any similar environment without the need for user intervention.
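As an illustration of the left-oriented exploration idea described above, the following is a minimal sketch of a left-hand-rule steering step, assuming a range sensor split into left and front sectors. The thresholds, velocities, and function names are placeholders, not the algorithm actually used in the study.

```python
# Minimal sketch of a left-oriented (left-hand-rule) exploration rule.
# Illustrative reconstruction only; thresholds and velocities are assumptions,
# not the parameters used in the paper.

def left_oriented_step(left_dist, front_dist,
                       wall_dist=0.5, front_clear=0.7,
                       v=0.2, w=0.5):
    """Return (linear, angular) velocity commands.

    left_dist  -- closest obstacle distance on the robot's left side [m]
    front_dist -- closest obstacle distance straight ahead [m]
    """
    if front_dist < front_clear:
        # Obstacle ahead: turn right, away from the wall being followed.
        return 0.0, -w
    if left_dist > wall_dist * 1.5:
        # Lost the wall on the left: turn left to reacquire it.
        return v, w
    if left_dist < wall_dist * 0.8:
        # Too close to the left wall: steer slightly right.
        return v, -0.5 * w
    # Wall at a comfortable distance: drive straight while mapping.
    return v, 0.0


if __name__ == "__main__":
    # Example: wall comfortably on the left, nothing ahead -> drive straight.
    print(left_oriented_step(left_dist=0.55, front_dist=2.0))  # (0.2, 0.0)
```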

Image Processing based Task Allocation for Autonomous Multi Rotor Unmanned Aerial Vehicles

Nowadays, studies based on unmanned aerial vehicles draw considerable attention, and image-processing-based tasks are especially important. In this study, several tasks were performed based on the autonomous flight, image processing, and load-drop capabilities of an Unmanned Aerial Vehicle (UAV). Two main tasks were tested with an autonomous UAV, and the performance of the whole system was measured according to the duration and the methods of the image processing. In the first mission, the UAV flew over a 4×4 color matrix. The 16 tiles of the matrix were colored in three main colors, and the pattern was changed three times. The UAV was sent to the matrix, recognized the 48 colors of the matrix, and returned to the launch position autonomously. The second mission tested the load-drop and image-processing abilities of the UAV. In this mission, the UAV flew over the matrix, read the pattern, went to the parachute drop area, dropped the load according to the recognized pattern, and then returned to the launch position.
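To illustrate the colour-matrix reading step, here is a hedged sketch that splits an overhead frame into 4×4 tiles and labels each tile with the nearest of three reference colours. The reference colours (red, green, blue) and the assumption that the matrix fills the frame are placeholders; the abstract does not specify the actual detection pipeline.

```python
# Illustrative sketch of reading a 4x4 colour matrix from an overhead frame.
# Reference colours and image layout are assumptions, not the study's setup.
import numpy as np

REFERENCE = {"red": (0, 0, 255), "green": (0, 255, 0), "blue": (255, 0, 0)}  # BGR

def read_colour_matrix(image, rows=4, cols=4):
    """Return a rows x cols list of colour labels, one per tile."""
    h, w = image.shape[:2]
    labels = []
    for r in range(rows):
        row = []
        for c in range(cols):
            tile = image[r * h // rows:(r + 1) * h // rows,
                         c * w // cols:(c + 1) * w // cols]
            mean_bgr = tile.reshape(-1, 3).mean(axis=0)
            # Assign the tile to the nearest reference colour.
            row.append(min(REFERENCE,
                           key=lambda k: np.linalg.norm(mean_bgr - np.array(REFERENCE[k]))))
        labels.append(row)
    return labels

if __name__ == "__main__":
    # Synthetic test: a 4x4 matrix of random reference colours.
    rng = np.random.default_rng(0)
    keys = list(REFERENCE)
    img = np.zeros((400, 400, 3), dtype=np.uint8)
    for r in range(4):
        for c in range(4):
            img[r*100:(r+1)*100, c*100:(c+1)*100] = REFERENCE[keys[rng.integers(3)]]
    print(read_colour_matrix(img))
```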

Prepared by Akif Durdu, M.Celalettin Ergene, Onur Demircan, Hasan Uguz, Mustafa Mahmutoglu, Ender Kurnaz

Performance Analysis of Image Mosaicing Methods for Unmanned Aerial Vehicles

Today, Unmanned Aerial Vehicles (UAVs) have gained considerable importance, especially in the defense industry. Thanks to the cameras placed on these vehicles, a certain area can be explored for safety reasons, and various image processing techniques are used for this purpose. Image mosaicing is one of these techniques. In this study, the effects of the SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), FAST (Features from Accelerated Segment Test), and Harris corner detector methods used for image mosaicing on images taken from unmanned aerial vehicles are tested and compared.
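The basic mosaicing pipeline compared in the study (feature detection, matching, homography estimation, warping) can be sketched as below, using SIFT in OpenCV as one representative detector. SURF requires the non-free opencv-contrib build and is therefore omitted here; the file names are placeholders.

```python
# Minimal sketch of a two-image mosaic using SIFT features in OpenCV.
import cv2
import numpy as np

def mosaic_pair(img1, img2, ratio=0.75):
    """Stitch img2 onto img1 using SIFT keypoints and a RANSAC homography."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Ratio-test matching (Lowe's criterion).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des2, des1, k=2)
            if m.distance < ratio * n.distance]

    # Homography mapping img2 coordinates into img1's frame.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the second image onto a wider canvas and paste img1 on top.
    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas[:h, :w] = img1
    return canvas

if __name__ == "__main__":
    a = cv2.imread("aerial_1.jpg")   # placeholder file names
    b = cv2.imread("aerial_2.jpg")
    cv2.imwrite("mosaic.jpg", mosaic_pair(a, b))
```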


Prepared by Aykut Tahtırvancı, Akif Durdu

A New Approach to Mobile Robot Navigation in Unknown Environments

Several algorithms have been developed to help guide mobile robots in unknown environments. Various kinds of Bug algorithms are available, and each of these algorithms has an advantage over the others under different circumstances. This paper introduces a new approach, the Diligent-Bug (D-Bug) algorithm, which is developed to enable collision-free navigation of robots in an unknown 2-dimensional environment. Static obstacles of arbitrary shapes have been considered to evaluate the developed algorithm. This algorithm also enables robots to avoid getting stuck in both local and global loops.
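The D-Bug algorithm itself is detailed in the paper; as background, the sketch below shows only the generic two-behaviour structure (motion-to-goal and boundary-following) shared by Bug-family planners. The switching and loop-detection rules specific to D-Bug are not reproduced here, and the function names are placeholders.

```python
# Generic skeleton of the two behaviours shared by Bug-family planners.
# This is a hedged illustration, not the proposed D-Bug algorithm.
import math

def motion_to_goal(pos, goal, step=0.1):
    """Move straight towards the goal by one step."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    if d <= step:
        return goal
    return (pos[0] + step * dx / d, pos[1] + step * dy / d)

def bug_navigate(pos, goal, hits_obstacle, follow_boundary, max_steps=10000):
    """Alternate between the two behaviours until the goal is reached.

    hits_obstacle(pos)         -- True if straight-line motion is blocked here
    follow_boundary(pos, goal) -- walk the obstacle boundary and return the
                                  leave point (e.g. where the start-goal line
                                  is re-encountered, as in Bug2)
    """
    for _ in range(max_steps):
        if pos == goal:
            return pos
        if hits_obstacle(pos):
            pos = follow_boundary(pos, goal)   # boundary-following behaviour
        else:
            pos = motion_to_goal(pos, goal)    # motion-to-goal behaviour
    return pos  # step budget exhausted

if __name__ == "__main__":
    # Obstacle-free example: the robot should reach the goal directly.
    print(bug_navigate((0.0, 0.0), (1.0, 1.0),
                       hits_obstacle=lambda p: False,
                       follow_boundary=lambda p, g: p))
```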


Prepared by Motuma Abafogi, Akif Durdu, Bayram Akdemir

 

Comparison of Optimal Path Planning Algorithms

This work is concerned with path-planning algorithms, which have an important place in robotic navigation. Mobile robots must move to the relevant task point in order to fulfill the tasks assigned to them. However, movements planned within a fixed frame or at random may increase the task time, and in some situations the task might even fail. Considering such problems, robots are expected to reach the task point and complete their tasks in the shortest time and along the most suitable path. This study aims to compare some well-known algorithms. With this in mind, a map of a real-time environment has been created, and the suitability of the algorithms is investigated with respect to the defined start and end points. According to the results, the shortest path is found by the A* algorithm; however, the time efficiency of this algorithm is very low. On the other hand, the PRM algorithm is the most suitable method in terms of elapsed time, and its path length is close to that of the A* algorithm. The results are analyzed and interpreted using statistical analysis methods.
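For reference, a minimal A* implementation on a 4-connected occupancy grid with a Manhattan-distance heuristic is sketched below; the toy map, start, and goal are placeholders, not the real-time map used in the comparison.

```python
# Minimal A* sketch on a 4-connected grid (0 = free, 1 = obstacle).
import heapq

def astar(grid, start, goal):
    """Return the shortest path from start to goal as a list of cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]   # (f, g, cell, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                          # already expanded
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:            # walk parents back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, nxt, cur))
    return None  # no path exists

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0],
            [0, 1, 1, 0]]
    print(astar(grid, (0, 0), (3, 3)))
```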

 


Prepared by Mehmet Korkmaz, Akif Durdu

Sensor Comparison for a Real-Time SLAM Application

Different types of sensors are used for Simultaneous Localization and Mapping (SLAM) applications, and each has its own advantages and disadvantages. Despite their high accuracy, laser sensors have drawbacks such as price, power requirements, and weight. As an alternative, it is possible to use inexpensive sensors such as the Kinect, which provides image and depth data, in SLAM systems. There have been many studies that benefit from such sensors with good results, and many of them are carried out in a simulation environment. However, there are few studies on whether similar outcomes hold for real-time applications. With this in mind, a real-time application has been performed to compare both sensors in SLAM systems. In the light of the obtained findings, this type of sensor is not a good alternative to laser sensors in terms of either map accuracy or time consumption.
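The abstract does not state how the Kinect data were fed into the SLAM system; one common approach is to convert a row of the depth image into a planar, laser-scan-like range array. The sketch below illustrates that conversion with typical (assumed) pinhole intrinsics, purely as background.

```python
# Hedged sketch: convert one row of a Kinect depth image into per-column
# bearings and ranges, the way a planar laser scan would report them.
# Intrinsics are typical values, not the calibration used in the study.
import numpy as np

def depth_row_to_scan(depth_row_m, fx=525.0, cx=319.5):
    """Convert a central depth-image row (metres) to (bearing, range) arrays.

    depth_row_m -- 1D array of depth values along the optical axis [m]
    fx, cx      -- assumed pinhole intrinsics (focal length, principal point)
    """
    cols = np.arange(depth_row_m.size)
    angles = np.arctan2(cols - cx, fx)          # bearing of each column
    ranges = depth_row_m / np.cos(angles)       # depth (z) -> Euclidean range
    return angles, ranges

if __name__ == "__main__":
    fake_row = np.full(640, 2.0)                # a flat wall 2 m ahead
    ang, rng = depth_row_to_scan(fake_row)
    print(ang[0], rng[0], rng[320])             # edge columns read farther than the centre
```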


Prepared by Mehmet Korkmaz, Akif Durdu, Yunus Emre Tusun

Robotic Hand Grasping of Objects Classified by Using Support Vector Machine and Bag of Visual Words

Recent statistics show that more than 10 million people in the world have suffered amputation. Many of these people also suffer from depression because of losing the use of their hands, arms, and legs. With current technology, it is possible to give these people prosthetic hands, arms, and legs, and our aim is to give them this chance. In this study, we have designed a robotic hand for grasping objects. Grid-based feature extraction and the bag-of-visual-words method are used to extract features from the images, and classification is performed with a support vector machine. Three classes are considered: cups, pens, and staplers. In this way, we can demonstrate an office environment in which a handicapped person can grasp everyday office items in a real-time application. We used a computer software toolbox for the image-processing steps and a microprocessor to control the robotic hand. This paper addresses only the classification and grasping of pens, cups, and staplers; however, with some improvements, we believe this kind of prosthesis can offer a future to handicapped people. We expect this study to be a step toward a new and more advanced kind of prosthesis than traditional ones.
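A hedged sketch of the pipeline described above (dense grid-sampled descriptors, a k-means visual vocabulary, bag-of-visual-words histograms, and an SVM classifier) is given below. The vocabulary size, grid step, descriptor type (SIFT), and folder layout are assumptions, not the settings or toolbox used in the study.

```python
# Hedged sketch: grid-sampled SIFT -> k-means vocabulary -> BoVW histograms -> SVM.
import glob
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

SIFT = cv2.SIFT_create()

def grid_descriptors(gray, step=16, size=16):
    """SIFT descriptors computed on a regular keypoint grid."""
    kps = [cv2.KeyPoint(float(x), float(y), size)
           for y in range(step, gray.shape[0] - step, step)
           for x in range(step, gray.shape[1] - step, step)]
    _, des = SIFT.compute(gray, kps)
    return des

def bovw_histogram(des, vocab):
    """Normalized histogram of nearest visual words for one image."""
    words = vocab.predict(des)
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / max(hist.sum(), 1)

def train(image_paths, labels, vocab_size=100):
    descs = [grid_descriptors(cv2.imread(p, cv2.IMREAD_GRAYSCALE)) for p in image_paths]
    vocab = KMeans(n_clusters=vocab_size, n_init=10).fit(np.vstack(descs))
    X = np.array([bovw_histogram(d, vocab) for d in descs])
    clf = SVC(kernel="rbf").fit(X, labels)
    return vocab, clf

if __name__ == "__main__":
    paths, labels = [], []
    for cls in ("cups", "pens", "staplers"):       # placeholder folder layout
        for p in glob.glob(f"{cls}/*.jpg"):
            paths.append(p)
            labels.append(cls)
    vocab, clf = train(paths, labels)
    test = grid_descriptors(cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE))
    print(clf.predict([bovw_histogram(test, vocab)]))
```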


Prepared by M. Celalettin Ergene, Akif Durdu