Semester: 4
ECTS: 5
Lectures: 30
Practice sessions: 30
Independent work: 90
Module Code: 23-321-0166

Module title:


Perception and localization of robots

Lecturers and associates:



Module overview:


The goal of this module is to give students a thorough understanding of the different types of sensors essential to robotics and of their characteristics. The module is elective and is the second module in the robotics vertical. Students will learn the basic theoretical concepts related to sensors and, through interactive practical examples, how sensors are applied in real robotic systems: how they enable robots to perceive and understand their environment, and how they are used for localization and navigation. The main goals of the module are to:
Learn about the different sensors used in robotic applications.
Become familiar with the camera, the sensor that forms the basis of robotic vision.
Implement a camera-based robot control algorithm that solves a given problem.
Use artificial intelligence methods in computer vision applications.
Implement a localization algorithm using sensor fusion.
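As a small taste of the camera-related goals above, the pinhole model underlying camera calibration can be sketched in a few lines of Python. The function name, focal length, and principal point below are illustrative assumptions, not values prescribed by the module:

```python
# A minimal pinhole-camera projection sketch (illustrative only).
# The intrinsic parameters map a 3D point in the camera frame to pixels.

def project_point(point_3d, fx, fy, cx, cy):
    """Project a 3D point (camera frame, metres) to pixel coordinates
    using the pinhole model: u = fx*X/Z + cx, v = fy*Y/Z + cy."""
    x, y, z = point_3d
    if z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# Example: a point 2 m ahead and 0.5 m to the right, seen by a
# hypothetical 640x480 camera (focal length 500 px, principal point
# at the image centre).
u, v = project_point((0.5, 0.0, 2.0), fx=500, fy=500, cx=320, cy=240)
print(u, v)  # 445.0 240.0
```

Calibration, covered in the module, is the inverse task: estimating fx, fy, cx, cy (and the extrinsic pose) from observed projections of known points.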

The module builds on the knowledge gained in the preceding module, Introduction to Robotics. Module assessment is based on solving a series of practical tasks.
In this module students will learn to:
Recognize and understand the operation of different types of sensors used in robotic applications.
Acquire and process data from real sensors and prepare it appropriately for integration into algorithms for robotic applications.
Implement and use sensors within a chosen simulation environment or on a real robot.
Acquire images from a camera sensor and calculate the camera's intrinsic and extrinsic parameters.
Acquire images and point clouds from an RGB-D camera.
Implement a robot control algorithm that uses a camera to solve a given problem.
Apply artificial intelligence methods in robotic vision applications.
Implement a localization algorithm that fuses data from multiple sensors.
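The last outcome, sensor-fusion localization, can be illustrated with a minimal 1-D complementary filter that blends drifting odometry with a drift-free absolute sensor. The sensor names, gain, and numbers below are assumptions for illustration, not the module's prescribed algorithm:

```python
# A minimal 1-D complementary-filter sketch of sensor fusion for
# localization (illustrative; not the module's prescribed method).

def fuse_step(x_est, odom_delta, abs_meas, alpha=0.9):
    """One fusion step: predict the position with an odometry increment,
    then blend in an absolute measurement (e.g. a beacon reading).
    alpha weights the prediction; (1 - alpha) weights the measurement."""
    predicted = x_est + odom_delta              # dead-reckoning prediction
    return alpha * predicted + (1 - alpha) * abs_meas

# Example: the robot moves +1.0 m per step; odometry is slightly biased,
# while the absolute sensor reads the true position.
x = 0.0
odometry = [1.05, 1.04, 1.06]   # biased odometry increments
absolute = [1.0, 2.0, 3.0]      # drift-free absolute readings
for d, z in zip(odometry, absolute):
    x = fuse_step(x, d, z)
print(round(x, 3))  # close to the true position 3.0, drift partly corrected
```

A Kalman filter, the standard tool for this task, generalizes the fixed gain alpha to a gain computed from the sensors' uncertainties.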

Literature:


Required readings:
1. Silva, C. (2015) Sensors and Actuators. 2nd edn. Boca Raton: CRC Press.
2. Elgendy, M. (2020) Deep Learning for Vision Systems. Shelter Island: Manning Publications.

Supplementary readings:
1. Géron, A. (2019) Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. Sebastopol: O'Reilly Media.
2. Kovačić, Z., Bogdan, S., Krajči, V. (2002) Osnove robotike [Fundamentals of Robotics]. Zagreb: Graphis.