Browsing by Author "Pilarski, Leonardo"
Now showing 1 - 3 of 3
- An AI-based object detection approach for robotic competitions
  Publication. Pilarski, Leonardo; Luiz, Luiz Eduardo; Braun, João; Nakano, Alberto Yoshiro; Pinto, Vítor H.; Costa, Paulo Gomes da; Lima, José
  Artificial Intelligence has been introduced in many applications, namely in artificial vision-based systems with object detection tasks. This paper presents an object localization system motivated by its use in autonomous mobile robots at robotics competitions. The system aims to allow robots to accomplish their tasks more efficiently. Object detection is performed using a camera and artificial intelligence based on the YOLOv4 Tiny detection model. An algorithm was developed that uses the data from the system to estimate the parameters of location, distance, and orientation based on the pinhole camera model and trigonometric modelling, and it can be used in smart identification procedures for objects. Practical tests and results are presented: the system consistently located the objects with errors between 0.16 cm and 3.8 cm, leading to the conclusion that the object localization system is adequate for autonomous mobile robots. (A minimal sketch of this kind of pinhole-model distance and orientation estimation appears after this list.)
- Object detection and localization: an application inspired by RobotAtFactory using machine learning
  Publication. Pilarski, Leonardo; Lima, José; Nakano, Alberto Yoshihiro; Braun, João
  The evolution of artificial intelligence and digital cameras has made the transformation of the real world into its digital image version more accessible and widespread, allowing the resulting information to be analyzed with algorithms. Object detection and localization are crucial tasks in several applications, such as surveillance, autonomous robotics, and intelligent transportation systems. Based on this, this work aims to implement a system that can find objects and estimate their location (distance and angle) through the acquisition and analysis of images. The motivation comes from problems that may be introduced in future versions of the RobotAtFactory Lite robotics competition, for example the obstruction of the printed-line path, requiring the robot to deviate, or boxes placed in different positions of the initial warehouses, so that the robot does not know their location beforehand and must find them. To this end, different machine learning methods were analyzed for object detection, using feature extraction and neural networks, and for object localization, based on the pinhole model and triangulation. By combining these techniques through Python programming on a module based on a Raspberry Pi Model B and a Raspi Cam Rev 1.3, the goal of the work was achieved. Thus, it was possible to find the objects and obtain an estimate of their relative position. In a future implementation together with a robot, these data can be used to find objects and perform tasks. (A hedged detection-plus-localization sketch appears after this list.)
- Robot at factory lite - a step-by-step educational approach to the robot assembly
  Publication. Luiz, E. Luiz; Pilarski, Leonardo; Baidi, Kaïs; Braun, João; Oliveira, Andre Schneider; Lima, José; Costa, Paulo Gomes da
  In robotics, an excellent way to test and improve knowledge is through competitions: results can be followed in practice, compared with the development of other teams, and the current solutions improved. The Robot At Factory Lite proposal simulates an Industry 4.0 warehouse scenario, applying education through the Science, Technology, Engineering, and Mathematics (STEM) methodology, where participants have to work on a solution to overcome its challenges. Thus, this article presents an initial electromechanical proposal, which is the basis for developing robots for this competition. The main concepts presented aim to show the possibilities of using the robot’s parts and components, so that an idea can be sketched in the participants’ minds, inspiring them to use their imagination and knowledge through the presentation of this model.
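
The first entry above estimates an object's distance and orientation from a camera detection using the pinhole camera model and trigonometric modelling. The snippet below is a minimal sketch of that general idea, not the paper's implementation: the focal length, principal point, bounding-box format, and object height are illustrative assumptions.

```python
import math

# Hypothetical camera parameters (illustrative only, not from the paper):
# focal length in pixels and the principal point x-coordinate, typically
# obtained from a prior camera calibration.
FOCAL_PX = 615.0
CX = 320.0

def estimate_distance_and_angle(bbox, real_height_m):
    """Estimate object distance and bearing from a detection bounding box.

    bbox: (x_min, y_min, x_max, y_max) in pixels, e.g. from a YOLO detector.
    real_height_m: known physical height of the object in metres.
    """
    x_min, y_min, x_max, y_max = bbox
    pixel_height = y_max - y_min
    # Pinhole model: distance = focal_length * real_height / projected_height
    distance_m = FOCAL_PX * real_height_m / pixel_height
    # Bearing angle from the horizontal offset of the box centre
    x_centre = (x_min + x_max) / 2.0
    angle_rad = math.atan2(x_centre - CX, FOCAL_PX)
    return distance_m, math.degrees(angle_rad)

# Example: a 5 cm tall box detected as a 40-pixel-high bounding box
print(estimate_distance_and_angle((300, 200, 340, 240), 0.05))
```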
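
The second entry combines machine-learning object detection with pinhole-model localization in Python on a Raspberry Pi. The sketch below shows one common way to run a YOLOv4 Tiny network with OpenCV's DNN module and hand the resulting boxes to a localization routine such as the one above; the file names, thresholds, and image source are assumptions, not artefacts released with the paper.

```python
import cv2

# Illustrative file names only; the trained network and test image are assumed
# to exist locally and are not provided by the paper.
CFG = "yolov4-tiny.cfg"
WEIGHTS = "yolov4-tiny.weights"

# OpenCV's DNN module can load a Darknet YOLOv4 Tiny network directly.
net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

# On a Raspberry Pi the frame would typically come from the camera module;
# here a still image stands in for one captured frame.
frame = cv2.imread("warehouse_scene.jpg")
class_ids, confidences, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)

for class_id, confidence, (x, y, w, h) in zip(class_ids, confidences, boxes):
    # Each bounding box could then be passed to a pinhole-model routine, such
    # as the distance/angle sketch above, to estimate the object's relative position.
    print(f"class {int(class_id)}: confidence {float(confidence):.2f}, box at ({x}, {y}, {w}, {h})")
```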
