Mobile robots are high-performance robots used to perform specific tasks in environments such as land, air, and water; they move freely and are equipped with many sensors for different processing capabilities. Today, they are used in many tasks such as object detection, tracking, and mapping. Mobile robots used in mapping applications are usually guided by user inputs. In some cases, however, this guidance is performed autonomously through exploration algorithms studied under the keyword active Simultaneous Localization and Mapping (SLAM). These algorithms are usually based on Laser Imaging Detection and Ranging (LIDAR) sensors. Since such sensors are bulky and occupancy grid maps require heavy computing time, new kinds of algorithms are needed. In this study, we propose a novel Convolutional Neural Network (CNN) based algorithm that enables a mobile robot to create a map of an environment autonomously, independent of user inputs. In the first stage, the CNN is trained on a data set consisting of environment images and the wheel angles associated with those images, so that the CNN model learns how to guide the robot. In the second stage, the robot is navigated autonomously by the trained network in an environment different from the first one, and the map of the environment is acquired simultaneously. The training and testing processes have been realized in a real-time implementation, and the advantages of the developed method have been verified.
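As a toy illustration of the first stage (learning wheel angles from camera frames), the sketch below fits a linear least-squares regressor as a stand-in for the paper's CNN; the 8x8 synthetic frames, the angle encoding, and all data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(angle):
    """Synthetic 8x8 'camera frame' whose bright column encodes the way ahead."""
    img = np.zeros((8, 8))
    col = int(round((angle + 1.0) / 2.0 * 7))  # angle in [-1, 1] -> column 0..7
    img[:, col] = 1.0
    return img

# Hypothetical training set: frames paired with the wheel angle that produced them.
angles = rng.uniform(-1.0, 1.0, size=200)
X = np.stack([make_image(a).ravel() for a in angles])  # flattened frames
y = angles

# "Training": fit weights mapping pixels -> wheel angle (linear stand-in for the CNN).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_angle(img):
    """Steering command for a new frame."""
    return float(img.ravel() @ w)
```

Even this linear stand-in recovers the angle class encoded in the frame, which is the behaviour-cloning idea the abstract describes; the real system replaces the regressor with a CNN over raw images.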
Deep learning (DL) based localization and Simultaneous Localization and Mapping (SLAM) have recently gained considerable attention, demonstrating remarkable results. Instead of constructing hand-crafted algorithms through geometric theories, DL based approaches provide a data-driven solution to the problem. Taking advantage of large amounts of training data and computing capacity, these approaches are increasingly developing into a new field that offers accurate and robust localization systems. In this work, the problem of global localization for unmanned aerial vehicles (UAVs) is analyzed by proposing a sequential, end-to-end, and multimodal deep neural network based monocular visual-inertial localization framework. More specifically, the proposed neural network architecture is three-fold: a visual feature extractor ConvNet, a small IMU-integrator bi-directional long short-term memory (LSTM) network, and a global pose regressor bi-directional LSTM network for pose estimation. In addition, by fusing traditional IMU filtering methods with the ConvNet instead of the LSTM, a more time-efficient deep pose estimation framework is presented. It is worth pointing out that the focus of this study is to evaluate the precision and efficiency of visual-inertial (VI) based localization approaches in indoor scenarios. The proposed deep global localization is compared with various state-of-the-art algorithms on indoor UAV datasets, simulation environments, and real-world drone experiments in terms of accuracy and time efficiency. A detailed comparison of the IMU-LSTM and IMU-filter based pose estimators is also provided. Experimental results show that the proposed filter-based approach combined with a DL approach has promising performance in terms of accuracy and time efficiency in indoor localization of UAVs.
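The filter-style IMU path can be illustrated with plain dead-reckoning integration. This minimal sketch assumes gravity-compensated, world-frame accelerometer samples; a real pipeline would first rotate body-frame measurements and subtract gravity before integrating.

```python
import numpy as np

def integrate_imu(accels, dt, v0=None, p0=None):
    """Dead-reckon velocity and position from accelerometer samples.

    accels: iterable of 3-vectors (assumed world-frame, gravity removed).
    dt: sample period in seconds.
    """
    v = np.zeros(3) if v0 is None else np.asarray(v0, float)
    p = np.zeros(3) if p0 is None else np.asarray(p0, float)
    for a in accels:
        v = v + np.asarray(a, float) * dt  # v_{k+1} = v_k + a_k * dt
        p = p + v * dt                     # p_{k+1} = p_k + v_{k+1} * dt
    return v, p
```

Drift accumulates quadratically in position, which is why the abstract pairs this cheap integrator with visual features rather than using it alone.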
Coronavirus disease 2019 (COVID-19), which emerged in Wuhan, China in 2019 and has spread rapidly all over the world since the beginning of 2020, has infected millions of people and caused many deaths. For this pandemic, which is still in effect, mobilization has started all over the world, and various restrictions and precautions have been taken to prevent the spread of the disease. In addition, infected people must be identified in order to control the infection. However, due to the inadequate number of Reverse Transcription Polymerase Chain Reaction (RT-PCR) tests, chest computed tomography (CT) has become a popular tool to assist the diagnosis of COVID-19. In this study, two deep learning architectures are proposed that automatically detect positive COVID-19 cases using chest CT images. Lung segmentation (preprocessing) of the CT images, which are given as input to the proposed architectures, is performed automatically with Artificial Neural Networks (ANN). Since both architectures contain the AlexNet architecture, the proposed method is a transfer learning application. The second proposed architecture is, moreover, a hybrid structure, as it contains a Bidirectional Long Short-Term Memory (BiLSTM) layer that also takes temporal properties into account. While the COVID-19 classification accuracy of the first architecture is 98.14%, this value is 98.70% for the second, hybrid architecture. The results prove that the proposed architectures show outstanding success in infection detection; therefore, this study contributes to previous studies in terms of both deep architectural design and high classification success.
This study is concerned with Simultaneous Localization and Mapping (SLAM), a highly important and indispensable issue for autonomous mobile robots. SLAM systems provide both a map of the environment and the agent's localization within it. While performing SLAM in an unknown environment, however, the robot is navigated in one of three ways: user guidance, random movements in an exploration mode, or exploration algorithms. User guidance and random exploration have drawbacks: the user may not be able to observe the agent, and a random process may take a long time. To address these problems, a new, autonomous exploration algorithm for SLAM systems is sought. In this manner, a new kind of left-oriented autonomous exploration algorithm for SLAM systems has been developed. To show the algorithm's effectiveness, a factory-like environment was built on the ROS (Robot Operating System) platform and the navigation of the agent was observed. The results of the study demonstrate that it is possible to perform SLAM autonomously in any similar environment without the need for user interference.
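A left-oriented exploration rule of the general kind described above can be sketched as a classic left-hand preference on an occupancy grid; the grid, start pose, and step budget below are assumptions for illustration, not the paper's setup.

```python
# Occupancy grid: '#' occupied, '.' free.
GRID = [
    "#####",
    "#...#",
    "#.#.#",
    "#...#",
    "#####",
]
# Headings: 0=N, 1=E, 2=S, 3=W; (row, col) step per heading.
MOVES = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def free(r, c):
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def explore(start, heading, steps):
    """Left-oriented rule: prefer turning left, then straight, then right, then back."""
    r, c = start
    visited = {(r, c)}
    for _ in range(steps):
        for turn in (-1, 0, 1, 2):  # left, straight, right, U-turn
            h = (heading + turn) % 4
            dr, dc = MOVES[h]
            if free(r + dr, c + dc):
                r, c, heading = r + dr, c + dc, h
                visited.add((r, c))
                break
    return visited
```

On this closed map the left-preference rule traces the boundary and covers every free cell, which is the coverage property an autonomous SLAM exploration mode relies on.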
Nowadays, studies based on unmanned aerial vehicles draw attention; image-processing-based tasks in particular are quite important. In this study, several tasks were performed based on the autonomous flight, image processing, and load drop capabilities of an Unmanned Aerial Vehicle (UAV). Two main missions were tested with an autonomous UAV, and the performance of the whole system was measured according to the duration and the image processing methods used. In the first mission, the UAV flew over a 4×4 color matrix. The 16 tiles of the matrix were drawn from three main colors, and the pattern was changed three times. The UAV was sent to the matrix, recognized the 48 colors of the matrix, and returned to the launch position autonomously. The second mission tested the load drop and image processing abilities of the UAV. In this mission, the UAV flew over the matrix, read the pattern, and went to the parachute drop area. The UAV then dropped the load according to the recognized pattern and came back to the launch position.
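The tile-recognition step can be sketched as nearest-reference-colour matching in RGB; the reference values and the use of per-tile mean colours are assumptions, not details given in the abstract.

```python
import numpy as np

# Assumed reference colours for the three tile classes.
REFS = {"red": (255, 0, 0), "green": (0, 255, 0), "blue": (0, 0, 255)}

def classify_tile(rgb):
    """Return the reference-colour name closest (in RGB distance) to a tile's mean colour."""
    rgb = np.asarray(rgb, float)
    return min(REFS, key=lambda n: np.linalg.norm(rgb - np.asarray(REFS[n], float)))

def read_matrix(tiles):
    """tiles: 4x4 nested list of per-tile mean RGB triples -> 4x4 list of colour names."""
    return [[classify_tile(t) for t in row] for row in tiles]
```

Running this over the three changed patterns yields the 3 × 16 = 48 colour readings mentioned above.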
Prepared by Akif Durdu, M. Celalettin Ergene, Onur Demircan, Hasan Uguz, Mustafa Mahmutoglu, and Ender Kurnaz
Today, Unmanned Aerial Vehicles (UAVs) have gained considerable importance, especially in the defense industry. Thanks to the cameras mounted on these vehicles, a given area can be explored for safety reasons, and various image processing techniques are used for this purpose. Image mosaicing is one of these techniques. In this study, the effects of the SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), FAST (Features from Accelerated Segment Test), and Harris corner detector methods used for image mosaicing are tested and compared on images taken from unmanned aerial vehicles.
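Of the four detectors, the Harris response is simple enough to sketch without a library. The 3x3 box window and the k value below are common textbook defaults, not necessarily those used in the study (a Gaussian window is more usual in practice).

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    img = np.asarray(img, float)
    Ix = np.gradient(img, axis=1)  # horizontal intensity gradient
    Iy = np.gradient(img, axis=0)  # vertical intensity gradient

    def box(a):
        # 3x3 box sum (stand-in for the usual Gaussian window).
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    # Windowed structure tensor entries.
    Sxx, Syy, Sxy = box(Ix * Ix), box(Iy * Iy), box(Ix * Iy)
    det = Sxx * Syy - Sxy ** 2
    trace = Sxx + Syy
    return det - k * trace ** 2
```

On a synthetic bright square, the response peaks at the square's corner, where both gradient directions are strong; edges score near zero or negative, which is what makes Harris corners stable anchors for mosaicing.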
Several algorithms have been developed to help guide mobile robots in unknown environments. Various kinds of Bug algorithms are available, and each of these algorithms has an advantage over the others under different circumstances. This paper introduces a new approach, the Diligent-Bug (D-Bug) algorithm, developed to enable collision-free navigation of robots in an unknown 2-dimensional environment. Static obstacles of arbitrary shapes have been considered to evaluate the developed algorithm. This algorithm also enables robots to avoid getting stuck in both local and global loops.
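For context on the Bug family, a classic Bug2-style run (not the D-Bug rules, which are the paper's own contribution) can be sketched on a small grid: head along the start-goal line, and on hitting an obstacle follow its boundary until the line is met again closer to the goal. The map and the same-row start/goal assumption are illustrative only.

```python
GRID = [
    ".......",
    "..###..",
    ".......",
]
MOVES = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W

def free(r, c):
    return 0 <= r < len(GRID) and 0 <= c < len(GRID[0]) and GRID[r][c] == "."

def bug2(start, goal):
    """Start and goal are assumed to lie on the same row (horizontal m-line)."""
    r, c = start
    path = [start]
    following = False
    hit_dist = None
    heading = 1  # East, toward the goal
    while (r, c) != goal:
        if not following:
            if free(r, c + 1):
                c += 1  # advance along the m-line
            else:
                # Hit point: start boundary following (cell above is free in this sketch).
                following, hit_dist, heading = True, goal[1] - c, 0
                r -= 1
        else:
            # Right-hand boundary following: prefer right, straight, left, back.
            for turn in (1, 0, -1, 2):
                h = (heading + turn) % 4
                dr, dc = MOVES[h]
                if free(r + dr, c + dc):
                    r, c, heading = r + dr, c + dc, h
                    break
            if r == goal[0] and goal[1] - c < hit_dist:
                following = False  # back on the m-line and closer: leave the boundary
        path.append((r, c))
    return path
```

The leave condition (re-meeting the m-line closer to the goal) is what keeps Bug2 from looping on this obstacle; loop avoidance in general environments is exactly where variants such as D-Bug differ.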
This work is concerned with path planning algorithms, which have an important place in robotic navigation. Mobile robots must be moved to the relevant task point in order to fulfill the tasks assigned to them. However, poorly planned or random movements may extend the task duration, and in some situations the task may even fail. Considering such problems, robots are expected to reach the task point and complete their tasks in the shortest time and most suitable way. This study aims to compare some well-known algorithms. With this aim, a map of a real-time environment was created, and the suitability of the algorithms was investigated with respect to the described start and end points. According to the results, the shortest path is found by the A* algorithm; however, the time efficiency of this algorithm is very low. On the other hand, the PRM algorithm is the most suitable method in terms of elapsed time, and its path length is close to that of the A* algorithm. The results are analyzed and commented on using statistical analysis methods.
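A minimal A* on a 4-connected grid with a Manhattan heuristic looks as follows; the small map is illustrative and unrelated to the environment used in the study.

```python
import heapq

def astar(grid, start, goal):
    """Shortest path on a 4-connected grid; '.' is free, anything else blocked."""
    rows, cols = len(grid), len(grid[0])
    openq = [(0, start)]      # priority queue of (f = g + h, cell)
    g = {start: 0}            # cost-so-far per cell
    came = {}                 # parent pointers for path reconstruction
    while openq:
        _, cur = heapq.heappop(openq)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        r, c = cur
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == ".":
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came[(nr, nc)] = cur
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # admissible heuristic
                    heapq.heappush(openq, (ng + h, (nr, nc)))
    return None  # no path exists
```

Because the Manhattan heuristic never overestimates, A* returns an optimal path, which matches its shortest-path result above; its cost is the larger explored-node set, which is the time-efficiency drawback noted in the comparison.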
Different types of sensors are used for Simultaneous Localization and Mapping (SLAM) applications, each with its own advantages and disadvantages. Despite their high accuracy, laser sensors have some disadvantages such as price, power requirements, and weight. As an alternative, it is possible to use inexpensive sensors such as the Kinect, which provides image and depth data, in SLAM systems. Many studies have benefited from such sensors with good results, and most of them were carried out in simulation environments. However, there are few studies on whether similar outcomes hold for real-time applications. With this in mind, a real-time application has been performed to compare both sensors in SLAM systems. In the light of the obtained findings, this type of sensor is not a good alternative to laser sensors in terms of either map accuracy or time consumption.
Recent statistics show that more than 10 million people in the world have suffered amputation. Many of these people also suffer from depression because of losing hand, arm, or leg movement. With current technology it is possible to give these people functional hands, arms, and legs, and our aim is to give them a chance to regain such movement. In this study we have designed a robotic hand for grasping objects. Grid-based feature extraction and the bag-of-words method are used to extract features from the images, and classification is performed by a support vector machine. Three classes are considered: cups, pens, and staplers. In this way we can demonstrate an office environment in which a handicapped person working in an office can grasp daily office materials in a real-time application. We used a specific computer program toolbox for the software processes and a microprocessor to control the robotic hand. This paper aims only at classifying and grasping pens, cups, and staplers; however, with some improvements we believe this kind of prosthesis can offer a future to handicapped people. We expect this study to be a step toward a new and more advanced kind of prosthesis compared with traditional ones.
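The grid-based feature idea can be sketched with per-cell mean intensities and a nearest-class-mean classifier standing in for the bag-of-words + SVM pipeline; all images, labels, and dimensions below are synthetic.

```python
import numpy as np

def grid_features(img, cells=4):
    """Split an image into a cells x cells grid; feature = mean intensity per cell."""
    img = np.asarray(img, float)
    h, w = img.shape
    feats = [img[i * h // cells:(i + 1) * h // cells,
                 j * w // cells:(j + 1) * w // cells].mean()
             for i in range(cells) for j in range(cells)]
    return np.array(feats)

def fit_means(samples):
    """samples: {label: [image, ...]} -> per-class mean feature vector."""
    return {lbl: np.mean([grid_features(im) for im in ims], axis=0)
            for lbl, ims in samples.items()}

def classify(img, means):
    """Assign the label whose class-mean feature vector is nearest."""
    f = grid_features(img)
    return min(means, key=lambda lbl: np.linalg.norm(f - means[lbl]))
```

The coarse grid makes the descriptor tolerant of small shifts and noise, the same motivation behind the grid/bag-of-words features used ahead of the SVM in the study.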