{"id":30,"date":"2022-06-02T18:31:27","date_gmt":"2022-06-02T22:31:27","guid":{"rendered":"https:\/\/carleton.ca\/rncsl\/?page_id=30"},"modified":"2025-10-08T08:48:57","modified_gmt":"2025-10-08T12:48:57","slug":"research","status":"publish","type":"page","link":"https:\/\/carleton.ca\/rncsl\/research\/","title":{"rendered":"RESEARCH"},"content":{"rendered":"<p style=\"padding-left: 35px;\"><strong>Check out our <\/strong> <a href=\"https:\/\/carleton.ca\/rncsl\/publications\/\"> <strong>Publications<\/strong><\/a><\/p>\n<p><div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a href=\"#slideme-control-design-of-unmanned-aerial-and-ground-vehicles\" aria-expanded=\"false\" aria-controls=\"slideme-control-design-of-unmanned-aerial-and-ground-vehicles\" class=\"slideme__heading slideme__trigger\">Control Design of Unmanned Aerial and Ground Vehicles<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-control-design-of-unmanned-aerial-and-ground-vehicles\" aria-hidden=\"true\"><p><br \/>\nDesigning controllers for Unmanned Aerial Vehicles (UAVs), including quadrotors, and Autonomous Ground Vehicles (AGVs) is challenging in terms of model representation, stability, and control design. 
Various control approaches have been employed to address the control system limitations of UAVs\/AGVs and to enhance stability, guidance, and navigation.<\/p>\n<p><b>Have a look at our contributions<\/b><br \/>\n[<b><a href=\"https:\/\/doi.org\/10.1016\/j.rineng.2024.102497\" target=\"_blank\" rel=\"noopener noreferrer\">Paper6<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1016\/j.asoc.2024.111843\" target=\"_blank\" rel=\"noopener noreferrer\">Paper5<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1080\/00207179.2022.2079004\" target=\"_blank\" rel=\"noopener noreferrer\">Paper4<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1016\/j.isatra.2022.12.014\" target=\"_blank\" rel=\"noopener noreferrer\">Paper3<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.23919\/ECC55457.2022.9838033\" target=\"_blank\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/ICSTCC52150.2021.9607198\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<br \/>\n<\/p><\/dd><\/dl><\/div><br \/>\n<div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a href=\"#slideme-reinforcement-learning-and-predictive-control\" aria-expanded=\"false\" aria-controls=\"slideme-reinforcement-learning-and-predictive-control\" class=\"slideme__heading slideme__trigger\">Reinforcement Learning and Predictive Control<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-reinforcement-learning-and-predictive-control\" aria-hidden=\"true\"><p><br \/>\nThe robotics community has 
extensively embraced Reinforcement Learning (RL) algorithms for controlling complex single-robot and multi-robot systems. Model predictive control (MPC), also known as receding horizon control, is an advanced control approach that is important for industrial process control and has gained popularity because it explicitly accounts for control input and state constraints.<\/p>\n<p><b>Have a look at our contributions<\/b><br \/>\n[<b><a href=\"https:\/\/doi.org\/10.1016\/j.asoc.2024.111843\" target=\"_blank\" rel=\"noopener noreferrer\">Paper3<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.23919\/ACC60939.2024.10644310\" target=\"_blank\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1049\/cth2.12709\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<br \/>\n<\/p><\/dd><\/dl><\/div><br \/>\n<div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a href=\"#slideme-observer-based-controllers-for-unmanned-vehicles\" aria-expanded=\"false\" aria-controls=\"slideme-observer-based-controllers-for-unmanned-vehicles\" class=\"slideme__heading slideme__trigger\">Observer-based Controllers for Unmanned Vehicles<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-observer-based-controllers-for-unmanned-vehicles\" aria-hidden=\"true\"><p><br \/>\nIn the field of autonomous navigation, comprehensive autonomous modules that can accurately estimate Unmanned Aerial Vehicle (UAV) motion components and provide control signals to successfully track the vehicle along the 
desired trajectory are in great demand. When using a Vision-Aided Inertial Navigation System (VA-INS) composed of a low-cost Inertial Measurement Unit (IMU) and a vision unit (monocular or stereo camera), the UAV motion components that require estimation include orientation (attitude), gyro bias (if angular velocity is unavailable), position, and linear velocity. Since the vehicle\u2019s attitude, position, and linear velocity are generally unknown, they must be reconstructed from sensor measurements. In our Lab, we develop observer-based controllers for Unmanned Aerial Vehicles (UAVs) and Autonomous Ground Vehicles (AGVs).<\/p>\n<p><b>Visit our research Talks\/Videos at international events<\/b><br \/>\n<a href=\"https:\/\/youtu.be\/iVbbZPs_3kc\" target=\"_blank\" rel=\"noopener noreferrer\"> Hashim \u2013 Observer-based Controller for Attitude &#8211; Romania<\/a><\/p>\n<p><b>Have a look at our contributions<\/b><br \/>\n[<b><a href=\"https:\/\/doi.org\/10.1080\/00207179.2022.2079004\" target=\"_blank\" rel=\"noopener noreferrer\">Paper3<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1016\/j.isatra.2022.12.014\" target=\"_blank\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/ICSTCC52150.2021.9607198\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<br \/>\n<\/p><\/dd><\/dl><\/div><br \/>\n<div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a href=\"#slideme-vision-based-aided-navigation-for-gps-denied-regions\" 
aria-expanded=\"false\" aria-controls=\"slideme-vision-based-aided-navigation-for-gps-denied-regions\" class=\"slideme__heading slideme__trigger\">Vision-based Aided Navigation for GPS-denied Regions<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-vision-based-aided-navigation-for-gps-denied-regions\" aria-hidden=\"true\"><p><br \/>\nRobust and accurate navigation solutions for autonomous vehicles are essential. Indoor and outdoor applications, such as household cleaning devices, pipeline inspection, terrain mapping, and reef monitoring, exemplify situations where GPS might be unreliable and only low-cost measurement units (e.g., an inertial measurement unit (IMU)) might be available. In such cases, GPS-independent navigation solutions are indispensable. A typical low-cost IMU module is composed of an accelerometer and a gyroscope, which provide measurements of the rigid-body\u2019s acceleration and angular velocity, respectively. In the absence of GPS, a cost-effective autonomous vehicle requires navigation solutions that rely on a low-cost IMU and feature measurements collected by a vision unit. Hence, autonomous navigation in space requires estimation of orientation (known as attitude), position, and linear velocity. An inertial vision unit composed of a stereo vision unit and an IMU can be employed to extract the rigid-body\u2019s pose &#8211; a combination of attitude and position. 
In our Lab, we develop stochastic and deterministic nonlinear estimation techniques for vehicles navigating with six degrees of freedom (6 DoF), applicable to Unmanned Aerial Vehicles (UAVs) and Autonomous Ground Vehicles (AGVs).<\/p>\n<p style=\"text-align: center;\"><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/ISUsnQbvz74\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><br \/>\n<\/iframe><\/p>\n<p style=\"text-align: center;\"><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/BWraOI0LAXo\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><br \/>\n<\/iframe><\/p>\n<p style=\"text-align: center;\"><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/CP3xiOcGrTc\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><br \/>\n<\/iframe><\/p>\n<p><b>Visit our research Talks\/Videos at international events<\/b><br \/>\n<a href=\"https:\/\/youtu.be\/tnvQL6gQZOc\" target=\"_blank\" rel=\"noopener noreferrer\"> Akos &#8211; Mobile Robots Navigation &#8211; Japan<\/a><br \/>\n<a href=\"https:\/\/youtu.be\/nqNroZQHOko\"> Ajay &#8211; Navigation Deterministic Filter &#8211; USA<\/a><br \/>\n<a href=\"https:\/\/youtu.be\/Jb4JlueFQ2c\"> Hashim &#8211; Navigation Stochastic Observer &#8211; USA<\/a><\/p>\n<p><b>Have a look at our contributions<\/b><br \/>\n[<b><a href=\"https:\/\/doi.org\/10.1016\/j.eswa.2025.126656\" target=\"_blank\" rel=\"noopener noreferrer\">Paper6<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/ITSC48978.2021.9565015\" target=\"_blank\" rel=\"noopener noreferrer\">Paper5<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/TIM.2024.3509582\" target=\"_blank\" rel=\"noopener noreferrer\">Paper4<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/IROS47612.2022.9981893\" 
target=\"_blank\" rel=\"noopener noreferrer\">Paper3<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1016\/j.conengprac.2021.104926\" target=\"_blank\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.23919\/ACC50511.2021.9482995\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<br \/>\n<\/p><\/dd><\/dl><\/div><br \/>\n<div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a href=\"#slideme-nonlinear-estimators-for-simultaneous-localization-and-mapping-slam\" aria-expanded=\"false\" aria-controls=\"slideme-nonlinear-estimators-for-simultaneous-localization-and-mapping-slam\" class=\"slideme__heading slideme__trigger\">Nonlinear Estimators for Simultaneous Localization and Mapping (SLAM)<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-nonlinear-estimators-for-simultaneous-localization-and-mapping-slam\" aria-hidden=\"true\"><p><br \/>\nRobotics applications are experiencing a surge in demand for navigation solutions that handle a partially or completely unknown robot pose (i.e., attitude and position) in three-dimensional (3D) space within an unknown environment. Simultaneous Localization and Mapping (SLAM) is one of the key robotics tasks, as it tackles simultaneous mapping of the unknown environment, defined by multiple landmark positions, and localization of the unknown robot pose in 3D space. 
In our Lab, we develop stochastic and deterministic nonlinear estimation techniques for SLAM applicable to Unmanned Aerial Vehicles (UAVs) and Autonomous Ground Vehicles (AGVs).<\/p>\n<p><b>Visit our research Talks\/Videos at international events<\/b><br \/>\n<a href=\"https:\/\/youtu.be\/k4GGCNtyS34\" target=\"_blank\" rel=\"noopener noreferrer\"> Marium &#8211; SLAM Stochastic Filter &#8211; USA<\/a><br \/>\n<a href=\"https:\/\/youtu.be\/8C49kVdewQM\" target=\"_blank\" rel=\"noopener noreferrer\"> Trevor &#8211; SLAM Observer &#8211; Romania<\/a><\/p>\n<p><b>Have a look at our contributions<\/b><br \/>\n[<b><a href=\"https:\/\/doi.org\/10.1109\/TITS.2020.3035550\" target=\"_blank\" rel=\"noopener noreferrer\">Paper6<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/TSMC.2020.3047338\" target=\"_blank\" rel=\"noopener noreferrer\">Paper5<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1016\/j.ast.2021.106569\" target=\"_blank\" rel=\"noopener noreferrer\">Paper4<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/LCSYS.2020.3000266\" target=\"_blank\" rel=\"noopener noreferrer\">Paper3<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1016\/j.ifacol.2021.11.263\" target=\"_blank\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/ICSTCC50638.2020.9259679\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<br \/>\n<\/p><\/dd><\/dl><\/div><br \/>\n<div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a 
href=\"#slideme-attitude-and-pose-estimation\" aria-expanded=\"false\" aria-controls=\"slideme-attitude-and-pose-estimation\" class=\"slideme__heading slideme__trigger\">Attitude and Pose Estimation<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-attitude-and-pose-estimation\" aria-hidden=\"true\"><p><br \/>\nAutomated and semi-automated robotic applications such as unmanned aerial vehicles (UAVs), autonomous underwater vehicles (AUVs), Autonomous Ground Vehicles (AGVs), satellites, radars, and others can be controlled to rotate successfully in three-dimensional (3D) space only if the orientation of the rigid-body is accurately known. However, the true attitude (orientation) or pose (orientation and position) of a rigid-body cannot be extracted directly. Instead, the attitude\/pose can be determined using a set of measurements available in the body-frame and observations in the inertial-frame. In general, measurement units are corrupted with unknown bias and noise components. 
In our Lab, we develop stochastic and deterministic nonlinear estimation techniques for attitude\/pose applicable for vehicles in 3D space.<\/p>\n<p><b>Visit our research Talks\/Videos at international events<\/b><br \/>\n<a href=\"https:\/\/youtu.be\/Gv8ykbikp-I\" target=\"_blank\" rel=\"noopener noreferrer\"> Moise \u2013 QUEST-based Kalman filter and LQR &#8211; Luxembourg<\/a><br \/>\n<a href=\"https:\/\/youtu.be\/oK22We1wlzs\" target=\"_blank\" rel=\"noopener noreferrer\"> Hashim \u2013 Neural Stochastic Attitude Filter &#8211; USA<\/a><br \/>\n<a href=\"https:\/\/youtu.be\/tnvQL6gQZOc\" target=\"_blank\" rel=\"noopener noreferrer\"> Akos \u2013 Pose Estimation Mobile Robots &#8211; Japan<\/a><br \/>\n<a href=\"https:\/\/youtu.be\/5cWtTUny8XY\" target=\"_blank\" rel=\"noopener noreferrer\"> Trenton \u2013 Pose Filter &#8211; Canada<\/a><br \/>\n<a href=\"https:\/\/youtu.be\/MEScsm9qNC4\" target=\"_blank\" rel=\"noopener noreferrer\"> Ajay \u2013 Attitude Estimation &#8211; Hong Kong<\/a><\/p>\n<p><b>Have a look at our contributions<\/b><br \/>\n[<b><a href=\"https:\/\/doi.org\/10.1109\/TSMC.2020.2980184\" target=\"_blank\" rel=\"noopener noreferrer\">Paper10<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1002\/RNC.4971\" target=\"_blank\" rel=\"noopener noreferrer\">Paper9<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/TSMC.2018.2870290\" target=\"_blank\" rel=\"noopener noreferrer\">Paper8<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/TSMC.2019.2920114\" target=\"_blank\" rel=\"noopener noreferrer\">Paper7<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/ACCESS.2018.2889612\" target=\"_blank\" rel=\"noopener noreferrer\">Paper6<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1016\/j.jfranklin.2018.12.025\" target=\"_blank\" rel=\"noopener noreferrer\">Paper5<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/CASE48305.2020.9217036\" target=\"_blank\" rel=\"noopener noreferrer\">Paper4<\/a><\/b>] , [<b><a 
href=\"https:\/\/doi.org\/10.1109\/CDC.2018.8619681\" target=\"_blank\" rel=\"noopener noreferrer\">Paper3<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.23919\/ACC.2019.8814878\" target=\"_blank\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/SMC42975.2020.9283387\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<br \/>\n<\/p><\/dd><\/dl><\/div><br \/>\n<div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a href=\"#slideme-multi-agent-consensus-formation-and-distributed-control\" aria-expanded=\"false\" aria-controls=\"slideme-multi-agent-consensus-formation-and-distributed-control\" class=\"slideme__heading slideme__trigger\">Multi-agent: consensus, formation, and distributed control<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-multi-agent-consensus-formation-and-distributed-control\" aria-hidden=\"true\"><p><br \/>\nThe use of collaborative autonomous robotic vehicles allows for greater flexibility and capacity as well as higher performance in areas such as surveillance, inspection, space exploration, communication, sensor deployment, and many others. Multi-agent systems (MAS) distribute work in a logical manner among agents that exchange information via a self-formed local network; hence, the agents are often called nodes. The network is modeled as a communication graph formed by a set of nodes, and the communication links between different nodes are called edges. The graph can be directed or undirected. 
An undirected graph allows information to flow in both directions, and the connected nodes of such a graph share similar characteristics. In a directed graph, or digraph, the direction of the information flow is fixed: each edge points from one node to another, indicating how information flows from a node to its neighbors. The control of such multi-agent systems faces several practical as well as theoretical challenges: the dynamics of a node can be nonlinear and unknown, the network bandwidth capacity is limited and may suffer from variable delays and packet loss, the operating environment is changing and complex with noise present, the embedded computational resources are limited, etc. In our Lab, we develop multi-agent consensus, formation, and distributed control techniques for homogeneous and heterogeneous systems, addressing unknown high-order nonlinear dynamics.<\/p>\n<p><b>Have a look at our contributions<\/b><br \/>\n[<b><a href=\"https:\/\/doi.org\/10.1109\/TSMC.2017.2702705\" target=\"_blank\" rel=\"noopener noreferrer\">Paper3<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1080\/00207179.2017.1359422\" target=\"_blank\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1080\/00207721.2016.1226984\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<br \/>\n<\/p><\/dd><\/dl><\/div><br \/>\n<div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a href=\"#slideme-wildfire-detection-using-modern-machine-learning-algorithms\" 
aria-expanded=\"false\" aria-controls=\"slideme-wildfire-detection-using-modern-machine-learning-algorithms\" class=\"slideme__heading slideme__trigger\">Wildfire Detection using Modern Machine Learning Algorithms<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-wildfire-detection-using-modern-machine-learning-algorithms\" aria-hidden=\"true\"><p><br \/>\nOur lives rely heavily on the resources that forests provide. They are regarded as the planet\u2019s lungs because they filter the air by adding oxygen (O2) and lowering carbon dioxide (CO2) levels. They serve as homes for a variety of animals and can be utilized to shield crops from the wind. Additionally, they clear the water of the majority of pollution-causing agents. The numerous jobs and revenues that forests create also improve countries\u2019 economies. When wildfires strike, smoke and air pollution pose serious health threats, especially to vulnerable populations. Evacuations disrupt livelihoods and cause psychological trauma. Economically, the costs are staggering: firefighting expenditures soar, and losses in the timber, agriculture, and tourism industries mount. Long term, diminished soil fertility hinders agriculture, and reduced water quality impacts communities downstream. Environmental repercussions extend globally. We address critical challenges in wildfire detection, focusing on enhancing time resolution and optimizing processing speeds while maintaining the high accuracy levels of state-of-the-art machine learning algorithms.<\/p>\n<p style=\"text-align: center;\"><iframe loading=\"lazy\" title=\"YouTube video player\" src=\"https:\/\/www.youtube.com\/embed\/xwMzFpZkC8M\" width=\"560\" height=\"315\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\" data-mce-fragment=\"1\"><br \/>\n<\/iframe><\/p>\n<p><b>Have a look at our contributions<\/b><\/p>\n<p style=\"padding-left: 15px;\">[J30]. A. V. Jonnalagadda and <b>H. A. 
Hashim<\/b>, &#8220;<a href=\"https:\/\/doi.org\/10.1016\/j.rsase.2024.101181\" target=\"_blank\" rel=\"noopener noreferrer\">SegNet: A Segmented Deep Learning based Convolutional Neural Network Approach for Drones Wildfire Detection<\/a>,&#8221; Remote Sensing Applications: Society and Environment (<b><span style=\"color: #0000ff;\">RSA-SE<\/span><\/b>), pp. 1-26, 2024.<\/p>\n<p style=\"padding-left: 15px;\">[C16]. A. V. Jonnalagadda, <b>H. A. Hashim<\/b>, and A. Harris, &#8220;<a href=\"http:\/\/dx.doi.org\/10.1109\/ICDS62089.2024.10756303\" target=\"_blank\" rel=\"noopener noreferrer\">Comprehensive and Comparative Analysis between Transfer Learning and Custom Built VGG and CNN-SVM Models for Wildfire Detection<\/a>,&#8221; In Proc. of the 2024 <b><span style=\"color: #0000ff;\">IEEE<\/span><\/b> International Conference On Intelligent Computing in Data Sciences (<b><span style=\"color: #0000ff;\">ICDS&#8217;24<\/span><\/b>), Marrakech, Morocco, pp. 1-7, <b><span style=\"color: #0000ff;\">2024<\/span><\/b>. 
[<b><a href=\"https:\/\/arxiv.org\/pdf\/2411.08171.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">pdf<\/a><\/b>] <a href=\"https:\/\/youtu.be\/XDc2WrCjXiY\" target=\"_blank\" rel=\"noopener noreferrer\"> Video<\/a><\/p>\n<\/dd><\/dl><\/div><br \/>\n<div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a href=\"#slideme-uav-avionics-systems-and-integration\" aria-expanded=\"false\" aria-controls=\"slideme-uav-avionics-systems-and-integration\" class=\"slideme__heading slideme__trigger\">UAV Avionics Systems and Integration<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-uav-avionics-systems-and-integration\" aria-hidden=\"true\"><p><br \/>\nAvionics systems of a UAV or drone are the critical electronic components found onboard that regulate, navigate, and control UAV travel while ensuring public safety. Contemporary UAV avionics work together to facilitate the success of UAV missions by enabling stable communication, secure identification protocols, novel energy solutions, accurate multi-sensor perception and autonomous navigation, precise path planning that guarantees collision avoidance, reliable trajectory control, and efficient data transfer within the UAV system. Moreover, special consideration must be given to the prevention, detection, and mitigation of electronic warfare threats, and to the regulatory framework associated with UAV operations. 
While some UAV missions are entirely dependent on a remote human operator, others are either partially or fully autonomous, independent of remote human control. The lack of an onboard human operator necessitates precise and reliable UAV avionics systems. Given that the number of UAVs carrying out missions in the civil airspace is increasing exponentially, safe navigation achieved through effective and standardized procedures is paramount. Thus, it is crucial to ensure seamless and harmonious operation of all UAV avionics systems, including the flight control surfaces, UAV sensors, navigation and planning systems, communication systems, and power systems. Furthermore, safe and responsible UAV use by the general public, government bodies, and professional operators is enforced by transportation agencies worldwide through rigorous certification processes and strict regulations. UAVs have powerful communication capabilities: not only can they be connected to cellular networks, but they can also enable terrestrial wireless communications by forming an assisted communication system and acting as aerial base stations or communication access points. Air-to-air communication between UAVs and air-to-ground communication between UAVs and ground stations are critical elements of UAV networks, since they provide the necessary means of identification and communication to achieve the required tasks. A UAV\u2019s compact size and payload constraints keep its onboard energy supply limited. Consequently, a key component of mission planning is the optimization of energy consumption and performance. 
UAVs can use a variety of energy systems, such as batteries, fuel, or renewable energy cells, and selecting the optimal energy system and focusing on energy management allow extended UAV flight duration and increased operational range.<\/p>\n<p>Optimal selection of the energy source and its efficient management minimize the landing frequency for refueling or battery replenishment, enhancing UAV versatility. Successful completion of a scheduled mission is highly dependent on the UAV\u2019s perceptual capabilities and the resulting awareness of its current navigation states (starting point) in three-dimensional (3D) space, including pose (position and orientation) in the six degrees of freedom, speed, heading direction, and target destination. Successful perception and navigation are built on four key pillars: sensor selection, multi-sensor fusion, navigation technique selection (map-based and mapless), and robust estimator design. Finding a collision-free path in a cluttered environment requires careful path planning from the initial location to the final destination in 3D space while tackling kinematic and dynamic constraints. Locating a suitable obstacle-free path in 3D space requires solving a multi-objective, Nondeterministic Polynomial-time hard (NP-hard) problem that has no single optimal solution. Once the collision-free path is identified, trajectory control techniques are applied to track the UAV along the desired route. 
Another critical component of UAV avionics is the set of defense systems used to confront electronic warfare threats, such as destructive and non-destructive cyberattacks, transponder attacks, and jamming threats, using state-of-the-art countermeasures and defensive aids.<\/p>\n<p><b>Have a look at our contributions<\/b><br \/>\n[<b><a href=\"https:\/\/doi.org\/10.1016\/j.rineng.2024.103786\" target=\"_blank\" rel=\"noopener noreferrer\">Paper3<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1109\/ACCESS.2025.3561068\" target=\"_blank\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1139\/dsa-2023-0091\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<\/p>\n<\/dd><\/dl><\/div><br \/>\n<div class=\"slideme\"><dl class=\"slideme__list\"><dt class=\"slideme__term\"><a href=\"#slideme-artificial-intelligence-implementation-on-different-engineering-applications\" aria-expanded=\"false\" aria-controls=\"slideme-artificial-intelligence-implementation-on-different-engineering-applications\" class=\"slideme__heading slideme__trigger\">Artificial Intelligence Implementation on Different Engineering Applications<\/a><\/dt><dd class=\"slideme__description\" id=\"slideme-artificial-intelligence-implementation-on-different-engineering-applications\" aria-hidden=\"true\"><p><br \/>\nWe have strong expertise in applying a variety of Artificial Intelligence (AI) techniques to address different engineering applications.<\/p>\n<p>Supervised and\/or Unsupervised Learning for Surface 
Electromyography (sEMG)-based hand gesture classification: [<b><a href=\"https:\/\/doi.org\/10.1016\/j.sasc.2024.200144\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<\/p>\n<p>Hybrid Integrated Pix2Pix and WGAN Model with Gradient Penalty for Binary Image Denoising: [<b><a href=\"https:\/\/doi.org\/10.1016\/j.sasc.2024.200122\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<\/p>\n<p>Supervised and\/or Unsupervised Learning for Cast Components: [<b><a href=\"https:\/\/doi.org\/10.1007\/s11668-023-01695-8\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<\/p>\n<p>Fuzzy Logic Control for Twin Rotor MIMO System: [<b><a href=\"https:\/\/doi.org\/10.1016\/j.eswa.2015.08.026\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1155\/2015\/704301\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>]<\/p>\n<p>Evolutionary Techniques for Communication Systems: [<b><a href=\"https:\/\/doi.org\/10.1007\/s10776-018-0388-1\" rel=\"noopener noreferrer\">Paper4<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1016\/j.jnca.2015.09.013\" rel=\"noopener noreferrer\">Paper3<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1002\/dac.4100\" rel=\"noopener noreferrer\">Paper2<\/a><\/b>] , [<b><a href=\"https:\/\/doi.org\/10.1016\/j.comnet.2019.04.009\" target=\"_blank\" rel=\"noopener noreferrer\">Paper1<\/a><\/b>]<\/p>\n<\/dd><\/dl><\/div><\/p>\n<h4><strong>Explore <\/strong><\/h4>\n<p><a href=\"https:\/\/carleton.ca\/rncsl\/\" 
target=\"_blank\" rel=\"noopener noreferrer\">HOME<\/a>  &#8211;  <a href=\"https:\/\/carleton.ca\/rncsl\/news\/\" target=\"_blank\" rel=\"noopener noreferrer\">NEWS<\/a>  &#8211;  <a href=\"https:\/\/carleton.ca\/rncsl\/media\/\" target=\"_blank\" rel=\"noopener noreferrer\">MEDIA<\/a>  &#8211;  <a href=\"https:\/\/carleton.ca\/rncsl\/research\/\" target=\"_blank\" rel=\"noopener noreferrer\">RESEARCH<\/a>  &#8211;  <a href=\"https:\/\/carleton.ca\/rncsl\/publications\/\" target=\"_blank\" rel=\"noopener noreferrer\">PUBLICATIONS<\/a>  &#8211;  <a href=\"https:\/\/carleton.ca\/rncsl\/research-group\/\" target=\"_blank\" rel=\"noopener noreferrer\">GROUP<\/a>  &#8211;  <a href=\"https:\/\/carleton.ca\/rncsl\/research-openings\/\" target=\"_blank\" rel=\"noopener noreferrer\">OPENINGS<\/a>  &#8211;  <a href=\"https:\/\/carleton.ca\/rncsl\/contact\/\" target=\"_blank\" rel=\"noopener noreferrer\">CONTACT<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Check out our Publications Explore HOME &#8211; NEWS &#8211; MEDIA &#8211; RESEARCH &#8211; PUBLICATIONS &#8211; GROUP &#8211; OPENINGS &#8211; 
CONTACT<\/p>\n","protected":false},"author":6,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"_relevanssi_hide_post":"","_relevanssi_hide_content":"","_relevanssi_pin_for_all":"","_relevanssi_pin_keywords":"","_relevanssi_unpin_keywords":"","_relevanssi_related_keywords":"","_relevanssi_related_include_ids":"","_relevanssi_related_exclude_ids":"","_relevanssi_related_no_append":"","_relevanssi_related_not_related":"","_relevanssi_related_posts":"","_relevanssi_noindex_reason":"","_mi_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":"","_links_to":"","_links_to_target":""},"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v21.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>RESEARCH - Robotics, Navigation, and Control Systems Laboratory<\/title>\n<meta name=\"description\" content=\"Check out our Publications \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff Explore HOME &nbsp;&nbsp; - &nbsp;&nbsp; NEWS &nbsp;&nbsp; - &nbsp;&nbsp; MEDIA\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/carleton.ca\/rncsl\/research\/\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/carleton.ca\/rncsl\/research\/\",\"url\":\"https:\/\/carleton.ca\/rncsl\/research\/\",\"name\":\"RESEARCH - Robotics, Navigation, and Control Systems Laboratory\",\"isPartOf\":{\"@id\":\"https:\/\/carleton.ca\/rncsl\/#website\"},\"datePublished\":\"2022-06-02T22:31:27+00:00\",\"dateModified\":\"2025-10-08T12:48:57+00:00\",\"description\":\"Check out our Publications \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff Explore HOME &nbsp;&nbsp; - &nbsp;&nbsp; NEWS &nbsp;&nbsp; - &nbsp;&nbsp; MEDIA\",\"breadcrumb\":{\"@id\":\"https:\/\/carleton.ca\/rncsl\/research\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/carleton.ca\/rncsl\/research\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/carleton.ca\/rncsl\/research\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/carleton.ca\/rncsl\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"RESEARCH\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/carleton.ca\/rncsl\/#website\",\"url\":\"https:\/\/carleton.ca\/rncsl\/\",\"name\":\"Robotics, Navigation, and Control Systems Laboratory\",\"description\":\"Carleton University\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/carleton.ca\/rncsl\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"RESEARCH - Robotics, Navigation, and Control Systems Laboratory","description":"Check out our Publications \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff Explore HOME &nbsp;&nbsp; - &nbsp;&nbsp; NEWS &nbsp;&nbsp; - &nbsp;&nbsp; MEDIA","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/carleton.ca\/rncsl\/research\/","twitter_misc":{"Est. reading time":"9 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/carleton.ca\/rncsl\/research\/","url":"https:\/\/carleton.ca\/rncsl\/research\/","name":"RESEARCH - Robotics, Navigation, and Control Systems Laboratory","isPartOf":{"@id":"https:\/\/carleton.ca\/rncsl\/#website"},"datePublished":"2022-06-02T22:31:27+00:00","dateModified":"2025-10-08T12:48:57+00:00","description":"Check out our Publications \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff \ufeff\ufeff\ufeff Explore HOME &nbsp;&nbsp; - &nbsp;&nbsp; NEWS &nbsp;&nbsp; - &nbsp;&nbsp; MEDIA","breadcrumb":{"@id":"https:\/\/carleton.ca\/rncsl\/research\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/carleton.ca\/rncsl\/research\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/carleton.ca\/rncsl\/research\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/carleton.ca\/rncsl\/"},{"@type":"ListItem","position":2,"name":"RESEARCH"}]},{"@type":"WebSite","@id":"https:\/\/carleton.ca\/rncsl\/#website","url":"https:\/\/carleton.ca\/rncsl\/","name":"Robotics, Navigation, and Control Systems 
Laboratory","description":"Carleton University","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/carleton.ca\/rncsl\/?s={search_term_string}"},"query-input":"required name=search_term_string"}],"inLanguage":"en-US"}]}},"acf":{"banner_image_type":"none","banner_button":"no"},"_links":{"self":[{"href":"https:\/\/carleton.ca\/rncsl\/wp-json\/wp\/v2\/pages\/30"}],"collection":[{"href":"https:\/\/carleton.ca\/rncsl\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/carleton.ca\/rncsl\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/carleton.ca\/rncsl\/wp-json\/wp\/v2\/users\/6"}],"replies":[{"embeddable":true,"href":"https:\/\/carleton.ca\/rncsl\/wp-json\/wp\/v2\/comments?post=30"}],"version-history":[{"count":4,"href":"https:\/\/carleton.ca\/rncsl\/wp-json\/wp\/v2\/pages\/30\/revisions"}],"predecessor-version":[{"id":688,"href":"https:\/\/carleton.ca\/rncsl\/wp-json\/wp\/v2\/pages\/30\/revisions\/688"}],"wp:attachment":[{"href":"https:\/\/carleton.ca\/rncsl\/wp-json\/wp\/v2\/media?parent=30"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}