The details and outlines of the previously delivered workshops are available on this page. To access the recorded videos and materials, please contact us at TrustCAV@carleton.ca.

Application of Machine Learning in Driver State and Behaviour Detection for Intelligent and Autonomous Vehicles

Date: 11 April 2025,   Presenters: Dr. Arash Abarghooei (Carleton University) and Dr. Abdullah Jirjees (National Research Council Canada)

Purpose: Familiarize students with machine learning techniques for driver monitoring systems (DMS), telematics-based risk prediction, and vision-based distraction detection in intelligent and autonomous vehicles

Introduction: This workshop introduced the fundamentals and current research on applying machine learning in intelligent vehicles, focusing on driver behaviour and state detection. The sessions included real case studies from Carleton University’s ABL lab and state-of-the-art research conducted at the NRC Department of Connected and Automated Vehicles, covering direct and indirect driver monitoring, such as vision systems and telematics data.

Covered material:

Part 1: Problem Statement and Introduction to Machine Learning Methods

  • Background & Motivation: Human error statistics in traffic collisions; importance of driver state detection in SAE Level 2–3 shared control systems; legal and insurance implications
  • State-of-the-Art DMS Technologies: Camera-based monitoring, alcohol detection systems, telematics platforms, wearables, and multimodal fusion systems (e.g., Bosch, Tesla, Smart Eye)
  • Driver Misbehavior Taxonomy: Impaired, fatigued, distracted, and aggressive driving; measurement modalities including gaze, steering input, biosignals, vehicle states, and ADAS sensors
  • Machine Learning Fundamentals: Classical vs deep learning; supervised, unsupervised, semi-supervised, and reinforcement learning paradigms; data, computational, and deployment challenges
  • Classical ML Algorithms: Naïve Bayes, Decision Trees, Random Forest, KNN, SVM; clustering (K-means, FCM); feature engineering and interpretability considerations
  • Neural Networks & Deep Learning: Shallow neural networks, backpropagation, CNNs, RNNs, LSTM; automatic feature learning for complex perception tasks
  • Performance Evaluation Metrics: Confusion matrix, accuracy, precision, recall, F1-score, ROC-AUC; MAE, MSE, RMSE, and R² for regression tasks
  • Training & Deployment Considerations: Class imbalance handling, normalization, cross-validation, train-test split, model drift, scalability, and real-time constraints
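The evaluation metrics listed above reduce to simple arithmetic on a confusion matrix; a minimal sketch in plain Python, using made-up counts for an illustrative binary "distracted vs. attentive" classifier:

```python
# Hypothetical binary confusion-matrix counts (illustrative numbers only)
tp, fp, fn, tn = 80, 10, 20, 90

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # fraction of correct predictions
precision = tp / (tp + fp)                    # of predicted positives, how many are real
recall    = tp / (tp + fn)                    # of real positives, how many were found
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```

The same quantities are what libraries such as scikit-learn report; computing them by hand once makes the trade-off between precision and recall concrete.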

Part 2: Case Studies and Methods in Carleton’s Research

  • Big Data Telematics Analysis (Study 1): 7.2 million Canadian trips; 12-month insurance dataset; 1 Hz inertial, GPS, and OBD-II data; statistical analysis of aging effects and 90° turning patterns
  • Older Driver Risk Prediction (Study 2): Candrive dataset (256 drivers, 7 years); accident-labelled data with cognitive assessments; risk stratification using XGBoost (~78% accuracy)
  • Telematics & Smart Steering Wheel (Study 3): Driving simulator setup; 10 Hz sampling; feature extraction with sliding windows; supervised (SVM, KNN, NN, DT) and unsupervised (K-means, FCM) classification
  • ADAS-Based Risk Quantification (Study 4): Sensor-level feature extraction; aggressive and distracted driving metrics (TEOR, LGOR); regression and ML models (GPR, SVM, RF, NN); time-window and feature selection studies
  • Vision-Based Distraction Detection (Study 5): Marker-based head tracking; markerless face tracking (MediaPipe Face Mesh); orientation estimation; ML-based distraction classification
  • Real-World Deployment: Integration of CAN, GPS, IMU, cabin and road cameras, steering wheel sensors; multimodal fusion for driver monitoring
  • Challenges & Future Directions: Privacy concerns, data labeling cost, lighting variability, real-time constraints; deep learning (LSTM), semi-supervised learning, domain adaptation, federated learning, adaptive driver models
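The sliding-window feature extraction mentioned in Study 3 can be sketched as follows; the window size, overlap, and signal values here are invented for illustration, not taken from the lab's actual pipeline:

```python
import math

def window_features(signal, window, step):
    """Slide a fixed-size window over a 1-D signal (e.g., a 10 Hz steering
    trace) and emit (mean, std) features for each window position."""
    feats = []
    for start in range(0, len(signal) - window + 1, step):
        w = signal[start:start + window]
        mean = sum(w) / window
        var = sum((x - mean) ** 2 for x in w) / window
        feats.append((mean, math.sqrt(var)))
    return feats

# 2 s of 10 Hz data, 1 s windows with 50% overlap (illustrative ramp signal)
feats = window_features([0.1 * i for i in range(20)], window=10, step=5)
print(feats)
```

Each feature tuple would then be fed to the supervised (SVM, KNN, NN, DT) or unsupervised (K-means, FCM) classifiers listed above.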

Part 3: Integrated Multimodal System for Real-Time Driver Fatigue Detection and Cognitive Load Assessment

  • Research Context & Motivation: Canadian collision statistics highlighting fatigue-related fatalities and injuries; growing need for robust driver monitoring systems alongside ADAS deployment
  • CAV Research Infrastructure: NRC vehicle fleet (ICE, hybrid, EV); modular rooftop sensor cube including stereo cameras, thermal camera, 128-channel Velodyne LiDAR, GNSS, IMU; in-cabin monitoring cameras and high-performance onboard computing
  • System Architecture Overview: Multimodal framework combining:
    – Driver Behavior Measurement (DBM)
    – Physiological Measurement (GPM)
    – Vehicle Dynamic Measurement (VDM)
    Data synchronization via ROS2 and feature engineering pipeline feeding a CNN–LSTM model
  • Driver Behavior Measurement (DBM): RealSense-based facial landmark tracking; extraction of eye aspect ratio, mouth aspect ratio, pupil circularity, mouth-over-eye ratio, gaze direction
  • Physiological Measurement (GPM): Heart rate (HR), heart rate variability (HRV); extension to EEG (brain signals) and EMG (muscle fatigue) for expanded multimodal assessment
  • Vehicle Dynamic Measurement (VDM): Acceleration, deceleration, roll, pitch, yaw; designed for both on-road and off-road vehicle studies
  • Deep Learning Framework:
    – CNN for spatial feature extraction from behavioral metrics
    – LSTM for temporal sequence modeling of multimodal data
    – Integrated multimodal fusion for fatigue and cognitive load classification
  • Experimental Setup: Two real vehicles; urban/rural/highway driving around London, Ontario; 10 sessions (30–60 min each); controlled stimulant restrictions; ROS2-based sensor integration
  • Strengths & Limitations: Accurate real-time detection and scalable sensor integration; current limitation of single-driver dataset and exclusion of vehicle dynamics in final model (planned integration in Phase II)
  • Future Directions: Multi-driver expansion, adverse weather testing, integration of additional physiological and vehicle dynamic metrics, enhanced robustness for ADAS deployment
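The eye aspect ratio (EAR) used in the DBM module is conventionally computed from six eye landmarks (the standard Soukupová–Čech formulation is shown here as a sketch; the NRC implementation may differ):

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2 - p6| + |p3 - p5|) / (2 |p1 - p4|).
    p1 and p4 are the horizontal eye corners; p2/p3 (upper lid) and
    p6/p5 (lower lid) are the vertical landmarks. EAR falls toward 0
    as the eye closes, which is why it flags drowsy blinking."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Open eye (illustrative coordinates): EAR well above a typical ~0.2 threshold
ear_open = eye_aspect_ratio((0, 0), (1, 1), (2, 1), (3, 0), (2, -1), (1, -1))
print(ear_open)
```

In a fatigue pipeline, a sustained drop of the EAR below a calibrated threshold over consecutive frames is what triggers a drowsiness event.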

 

Introduction to Electric Connected Automated Vehicle (CAV) Technology, Application Contexts, and Trust Factors

Date: 30 July 2025,    Presenter: Dr. Ata M. Khan (Professor, Carleton University)

Purpose: Familiarize students with electric and automated driving technologies, SAE automation levels, shared mobility applications, trust factors, and policy considerations shaping Connected and Automated Vehicle (CAV) deployment

Introduction: This workshop introduced the technological foundations of electric and automated vehicles, covering SAE automation levels, sensor architectures, shared mobility systems, and public trust considerations. It examined the evolution of active safety systems, policy frameworks, and the societal implications of CAV deployment. Discussions highlighted the importance of reliability, cybersecurity, and human factors in advancing automation.

Covered material:

  • Motivation for Automation: Safety, cost reduction, and efficiency improvements; human-factor collision statistics; examples of distracted driving impacts from simulator studies
  • SAE Automation Levels & DDT: Levels 0–5; Dynamic Driving Task (DDT); distinction between ADAS (Levels 1–2) and Automated Driving Systems (Levels 3–5)
  • Active Safety & System Evolution: Five eras of vehicle safety (NHTSA); progression from driver assistance to full automation
  • CAV Architecture & Sensors: Integrated in-vehicle unit design; sensor clusters (camera, radar, LiDAR); data fusion; V2X, DSRC, 4G/5G, satellite communications
  • Applications & Adoption Forecasts: Personal vehicles, robotaxis, microtransit shuttles, automated public transit buses, and freight automation timelines
  • Shared Electric & Automated Mobility: Fleet management, ride matching, charging coordination, and system-level design considerations
  • Trust Factors & Public Perception: Positive/negative trust drivers; survey findings (McKinsey, AAA); safety, cost, cybersecurity, and reliability concerns
  • Public Policy & Case Study: Canadian regulatory framework (Transport Canada principles); Trans-Canada automated truck initiative; Boston Waymo case study highlighting deployment and trust challenges
  • AI for Full Automation: Bayesian AI-based cognitive vehicle architecture; probabilistic reasoning under uncertainty; comparison with distracted human driver performance; research directions toward Level 5 automation
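The probabilistic reasoning under uncertainty mentioned above rests on Bayesian updating; a minimal sketch of how a cognitive vehicle might revise its belief that a pedestrian is present after a sensor alert (the prior and sensor rates are invented for illustration):

```python
# Illustrative numbers only: prior belief and sensor characteristics are assumptions
p_pedestrian = 0.01          # prior: pedestrian present in the region of interest
p_alert_given_ped = 0.95     # sensor true-positive rate
p_alert_given_none = 0.05    # sensor false-positive rate

# Bayes' rule: P(ped | alert) = P(alert | ped) P(ped) / P(alert)
p_alert = (p_alert_given_ped * p_pedestrian
           + p_alert_given_none * (1 - p_pedestrian))
posterior = p_alert_given_ped * p_pedestrian / p_alert
print(f"P(pedestrian | alert) = {posterior:.3f}")
```

Even a highly sensitive sensor yields a modest posterior when the prior is low, which is why such architectures fuse several evidence sources before committing to a manoeuvre.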

Unlocking the Promise of CAV Technology

Date: 30 July 2025,   Presenter: Omar Choudry (Senior Specialist, City of Ottawa)

Purpose: Provide a municipal-government perspective on the deployment of Connected and Automated Vehicle (CAV) technologies, highlighting real-world pilots, V2X integration, traffic optimization, and infrastructure challenges

Introduction: This workshop provided a municipal perspective on unlocking the potential of Connected and Automated Vehicle (CAV) technology, showcasing real-world deployments in the City of Ottawa. The session covered live V2X signal integration, cloud-based eco-driving systems, automated shuttle pilots, and large-scale connected vehicle analytics. Emphasis was placed on infrastructure constraints, adaptive traffic management, and the strategic role of municipalities in enabling scalable and sustainable CAV ecosystems.

Covered material:

  • Municipal CAV Context: City-scale traffic infrastructure, role of municipalities in CAV testing and deployment, and regulatory considerations for SAE Level 2–3 systems
  • V2X & Traffic Signal Integration: Canada’s first live V2I signal demonstration; SPaT broadcasting; DSRC-to-cloud transition; real-time signal state sharing
  • Cloud-Based Eco-Driving Systems: Speed advisory applications for green-wave alignment; measurable fuel and energy savings; implications for electrified fleets
  • Adaptive & Predictive Signal Control: Corridor coordination, left-turn pre-requesting, performance monitoring, and data-driven optimization
  • Automated Shuttle Deployments: Low- and medium-speed shuttle pilots; public-road demonstrations; operational lessons learned
  • Connected Vehicle Data Analytics: Large-scale intersection data analysis, harsh braking mapping, safety insights, and origin–destination studies
  • Deployment Challenges & Future Vision: Infrastructure cost, funding constraints, DSRC vs. C-V2X transition, platooning potential, and gradual path toward higher automation levels
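The green-wave speed advisory idea can be illustrated with a back-of-the-envelope calculation; the signal timings, distance, and speed band below are invented, whereas a deployed system would use live SPaT data:

```python
def advisory_speed_kmh(dist_m, time_to_green_s, green_end_s,
                       v_min_kmh=30.0, v_max_kmh=60.0):
    """Suggest a speed (km/h) that lets the vehicle reach the stop line
    while the signal is green; return None if no speed in the legal
    band works (the driver should simply coast to a stop)."""
    lo = dist_m / green_end_s * 3.6      # slowest: arrive just before green ends
    hi = dist_m / time_to_green_s * 3.6  # fastest: arrive just as green starts
    lo, hi = max(lo, v_min_kmh), min(hi, v_max_kmh)
    return None if lo > hi else (lo + hi) / 2.0

# 300 m from the stop line; green starts in 20 s and ends in 40 s (illustrative)
print(advisory_speed_kmh(300, 20, 40))
```

Holding a steady mid-band speed instead of accelerating to a red light is where the measurable fuel and energy savings come from.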

Intellectual Property Matters – IP, IPON and Supporting Innovators

Date: 30 July 2025,   Presenter: Roula Thomas (Vice-President, Intellectual Property Ontario)

Purpose: Introduce the role of intellectual property (IP) in supporting innovation, commercialization, and economic growth, and outline how IP Ontario (IPON) assists Ontario-based researchers and startups

Introduction: This workshop discussed the strategic importance of intellectual property (IP) for innovators and startups, emphasizing the need for structured IP planning to support commercialization and growth. Roula Thomas outlined Canada’s IP landscape, common challenges in university–industry collaboration, and global trends in automotive patenting. The session highlighted how IP Ontario provides education, advisory services, and funding support to help Ontario-based innovators build competitive and sustainable IP strategies.

Covered material:

  • Why IP Matters: Canada’s innovation–commercialization gap; low SME patent ownership; importance of IP strategy for growth, exports, and investment readiness
  • Forms of IP: Trademarks, copyrights, industrial designs, patents, and trade secrets; key differences in protection scope and duration
  • IP Strategy Basics: Aligning IP with business goals; assessing market, competitors, and technology positioning; using IP as a business and negotiation tool
  • Risk Management: Protecting novelty before disclosure; understanding patent requirements (novelty, utility, non-obviousness); avoiding freedom-to-operate conflicts
  • Automotive Sector Snapshot: Global patent trends in EVs and AVs; dominance of China and major OEMs; Canada’s limited share of filings
  • Commercialization Challenges: University technology transfer gaps, investor expectations, and alignment between research outputs and industry needs
  • IPON Support Programs: IP education, benchmarking, coaching, and funding to help SMEs build IP capacity and protect innovations

From Design to Dollars: Project Evaluation for New Product Development

Date: 30 July 2025,  Presenter: Dr. Yuriy Zabolotnyuk (Professor, Carleton University)

Purpose: Introduce financial evaluation methods for assessing the commercial viability of engineering projects and new product development initiatives.

Introduction: This workshop introduced the financial foundations required to evaluate engineering projects from a commercialization perspective. Using real-world examples of technically successful but financially failed products, the session emphasized the importance of integrating financial analysis into product development. Core valuation tools such as NPV, IRR, payback period, and sensitivity analysis were demonstrated through an applied autonomous vehicle sensor example, highlighting how engineering innovation must ultimately translate into sustainable economic value.

Covered material:

  • Engineering vs. Financial Success: Case studies of technically advanced but financially unsuccessful products (e.g., Concorde, DeLorean, early EVs); importance of financial feasibility alongside technical design
  • Core Financial Concepts: Cash inflows and outflows, revenues vs. costs, investment timing, and the importance of profit generation for project sustainability
  • Time Value of Money: Present value principles; discounting future cash flows; relationship between risk and required return
  • Project Evaluation Metrics:
    – Payback Period
    – Net Present Value (NPV)
    – Internal Rate of Return (IRR)
    – Profitability Index (PI)
  • Cost of Capital & Risk: Debt vs. equity financing; hurdle rate; relationship between risk level and expected return
  • Applied Example – Sensor Development: Financial modeling of a hypothetical autonomous vehicle sensor project including revenue projections, cost assumptions, NPV/IRR calculation, and break-even analysis
  • Sensitivity & Scenario Analysis: Evaluating optimistic, baseline, and pessimistic cases; impact of sales, cost, and market uncertainty on project viability
  • Finance as Science and Art: Mathematical valuation tools combined with subjective forecasting and market assumptions; importance of defensible financial modeling
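The valuation metrics above can be sketched in a few lines of Python; the cash flows are hypothetical and not the workshop's sensor example, and the IRR search assumes a single sign change in NPV:

```python
def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t
    (cashflows[0] is the upfront investment, typically negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return via bisection: the discount rate at which
    NPV crosses zero on the interval [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid          # sign change in [lo, mid]: root is there
        else:
            lo = mid          # otherwise the root lies in [mid, hi]
    return (lo + hi) / 2

flows = [-1000, 400, 400, 400, 400]   # illustrative 4-year project
print(round(npv(0.10, flows), 2))     # NPV at a 10% hurdle rate
print(round(irr(flows), 4))           # break-even discount rate
```

Accepting the project only when NPV is positive at the hurdle rate (equivalently, IRR exceeds the hurdle rate for conventional cash flows) is the decision rule the workshop's sensor example walks through.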

Vehicle Dynamics, Driving Scenarios, and Simulation – Developing a Research Driving Simulator

Date: 31 July 2025,   Presenter: Dr. Arash Abarghooei (Carleton University)

Purpose: Familiarize students with the design and implementation of a research-grade driving simulator, covering vehicle dynamics modeling, scenario creation, Simulink-based simulation, and hardware integration for autonomous driving research

Introduction: This workshop introduced the design and development of a research-grade driving simulator using MATLAB/Simulink and Unreal Engine integration. It covered vehicle dynamics modeling from kinematic to full 6-DoF systems, powertrain and actuation models, real-road scenario reconstruction from OpenStreetMap, and 3D visualization in Simulink. A live demonstration showcased a fully integrated hardware platform supporting autonomous driving research and human-in-the-loop studies.

Covered material:

  • Simulation Platforms Overview: Comparison of commercial (rFpro, VI-Grade, SCANeR, etc.) and open-source (AirSim, CARLA, MATLAB/Simulink) simulators; Unreal Engine integration and API capabilities
  • MATLAB/Simulink Ecosystem: Automated Driving Toolbox, Vehicle Dynamics Toolbox, RoadRunner/Prescan integration, hardware-in-the-loop (Speedgoat, Nvidia Jetson), joystick and sensor connectivity
  • Hardware Setup Design: Cost-effective simulator configuration including high-performance PC (i9 CPU, RTX 4080), Logitech G29 steering system, triple 50-inch displays, bass shakers, and motion-ready configuration
  • Vehicle Dynamics Modeling:
    – Kinematic bicycle model for low-speed path tracking
    – Single-track dynamic model (3-DoF planar) with tire slip
    – Dual-track (4-wheel independent) model
    – Full 6-DoF spatial model with suspension and tire models
  • Tire & Stability Modeling: Pacejka (Magic Formula), understeer/oversteer behavior, instability analysis
  • Powertrain & Actuation Models: Combustion engine and EV motor modeling, transmission dynamics, Ackermann steering geometry, steering wheel dynamics, hydraulic brake modeling
  • Scene & Scenario Design Workflow: OpenStreetMap export, RoadRunner scene refinement, .fbx/.xodr file generation, Scenario Designer integration, actor and sensor configuration
  • Real-World Reconstruction: RoadRunner recreations of the Jeanne d’Arc/St. Joseph roundabout, the Riverside & Hunt Club intersection, and Area X.O
  • Simulink-Based Execution: Scenario Reader, 3D Scene Configuration, Vehicle-to-World transforms, ego controller integration, visualization blocks, and multi-vehicle simulation
  • Demonstration Session: Live simulator operation and group-based interactive demo at ABL lab
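The kinematic bicycle model at the base of the modeling hierarchy can be simulated in a few lines; the wheelbase, speed, and steering angle are illustrative, and the workshop itself used MATLAB/Simulink rather than this Python sketch:

```python
import math

def bicycle_step(x, y, yaw, v, steer, wheelbase=2.7, dt=0.01):
    """One Euler step of the kinematic bicycle model (rear-axle reference):
    x' = v cos(yaw), y' = v sin(yaw), yaw' = v tan(steer) / L."""
    x   += v * math.cos(yaw) * dt
    y   += v * math.sin(yaw) * dt
    yaw += v * math.tan(steer) / wheelbase * dt
    return x, y, yaw

# Drive 5 s at 10 m/s with a constant 5-degree steering angle
state = (0.0, 0.0, 0.0)
for _ in range(500):
    state = bicycle_step(*state, v=10.0, steer=math.radians(5.0))
print(state)  # the vehicle traces an arc of radius L / tan(steer) ≈ 30.9 m
```

Because it ignores tire slip, this model is only trustworthy at low speeds, which is exactly why the workshop layers the 3-DoF single-track and full 6-DoF models on top of it.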

 

Getting Started with Control & Estimation for Autonomous Vehicles: The Basics

Date: 31 July 2025,   Presenter: Dr. Joshua Marshall (Professor, Queen’s University)

Purpose: Introduce the fundamental principles of control theory and state estimation for autonomous vehicles, focusing on linear state-space modeling, feedback control, and foundational estimation concepts as preparation for nonlinear and uncertain real-world systems.

Introduction: This workshop introduced the foundational concepts of control and estimation for autonomous vehicles, emphasizing a bottom-up approach starting from a simple 1D vehicle model. Dr. Marshall demonstrated how linear state-space modeling, feedback control, and basic simulation form the core building blocks for more advanced nonlinear and uncertain systems encountered in real-world autonomous vehicles. The session highlighted the practical challenges introduced by wheeled dynamics, slip, and uncertainty, preparing participants for deeper study of modern control and estimation techniques in subsequent lectures.

Covered material:

  • Autonomous Vehicle Big Picture: Position of control and estimation within the full autonomy stack (perception, localization, planning, control); distinction between passenger AVs, convoying trucks, and off-road autonomous systems
  • Control & Estimation in Context: Relevance of classical control concepts to autonomous vehicles; narrowing control theory to what is practically needed for AV systems
  • Toy Problem Approach – 1D Vehicle Model:
    – Simplified linear vehicle model (position and velocity states)
    – State-space representation (ẋ = Ax + Bu)
    – Importance of starting with minimal models before extending to full 4-wheel dynamics
  • Simulation & Sanity Checking: Time-domain simulation of input forces; interpreting velocity and position responses; importance of numerical integration and sampling rates (10–25 Hz baseline)
  • Modern (State-Space) Control:
    – Time-domain vs frequency-domain control
    – Feedback control fundamentals
    – Making the vehicle track a desired position instead of responding to arbitrary inputs
  • State Estimation Foundations:
    – Need for estimators in localization
    – Relationship between system models and measurement models
    – Preparing groundwork for observers and filters
  • Wheeled Vehicle Challenges:
    – Wheel slip and constraints
    – Implications for both control design and estimation accuracy
  • From Linear to Real-World Systems:
    – Introduction to nonlinearity and uncertainty
    – Motivation for advanced nonlinear control and estimation methods
    – Teaser for deeper exploration in subsequent lectures
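The 1D toy problem above can be simulated directly; a minimal sketch (mass, gains, and sample rate are invented) of the double-integrator model ẋ = Ax + Bu under proportional-derivative position feedback:

```python
# 1-D vehicle: states are position p and velocity v; the input u is a force.
# Continuous model: p' = v, v' = u/m  (i.e., x' = A x + B u).
m, dt = 1000.0, 0.04            # 1000 kg vehicle, 25 Hz update rate
kp, kd = 2000.0, 3000.0         # illustrative PD feedback gains
p, v, target = 0.0, 0.0, 10.0   # drive the vehicle to p = 10 m

for _ in range(2500):           # 100 s of forward-Euler simulation
    u = kp * (target - p) - kd * v   # feedback: track target, damp velocity
    p += v * dt
    v += u / m * dt

print(f"final position ≈ {p:.3f} m")
```

The same pattern (simulate, sanity-check the response, then close the loop) is the workflow the lecture recommends before moving on to nonlinear models and estimators.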

Computer Vision and Perception for Autonomous Driving

Date: 31 July 2025,   Presenter: Dr. Jonathan Wu (Professor, University of Windsor)

Purpose: Familiarize students with perception systems for autonomous vehicles, covering sensor technologies, computer vision, deep learning, multi-sensor fusion, connected vehicles, and emerging AI-driven trends

Introduction: This workshop introduced the perception systems that underpin autonomous driving, focusing on computer vision, deep learning, and multi-sensor fusion. It covered core sensing technologies (LiDAR, radar, cameras, GNSS/IMU), perception pipelines, and modern fusion architectures including transformer-based and BEV models. Emerging trends such as self-supervised learning and adverse weather perception were also discussed.

Covered material:

  • Historical Evolution of Autonomous Driving: From early radio-controlled vehicles and DARPA Grand Challenge to modern commercial systems (Tesla, Waymo) and safety-driven deployment
  • Core Sensors for Perception: Ultrasonic, radar (Doppler-based), LiDAR (3D mapping), cameras (stereo vision), and GNSS/IMU for localization; strengths and limitations of each modality
  • Computer Vision Tasks: Image classification, object detection, semantic and instance segmentation; perception challenges (occlusion, illumination, intra-class variation)
  • Deep Learning for Vision: CNNs and Mask R-CNN; feature extraction pipelines; stereo vision systems; embedded perception platforms
  • Multisensor Fusion: Data-, feature-, and decision-level fusion; accident case study (Tesla 2016); modern fusion architectures (GAN-based, transformer-based, BEVFusion, graph-based models)
  • Multi-Target Tracking & Scene Understanding: Continuous state estimation of dynamic objects using LiDAR, radar, and camera integration
  • Connected Vehicles (V2X): Vehicle-to-everything communication enabling cooperative perception and enhanced safety
  • Emerging Trends & Research Directions: Self-supervised learning, foundation models, adverse weather perception, neuromorphic sensors, open-world autonomy, and student-led research on domain adaptation and accident detection
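Decision-level fusion, the simplest of the three fusion levels listed above, can be illustrated as a confidence-weighted vote across detectors; the sensor labels and confidences here are invented:

```python
from collections import defaultdict

def fuse_decisions(detections):
    """Decision-level fusion: each sensor reports a (label, confidence)
    pair for the same region of interest; return the label with the
    highest summed confidence."""
    scores = defaultdict(float)
    for label, confidence in detections:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Camera and LiDAR agree, radar disagrees (illustrative confidences)
result = fuse_decisions([("pedestrian", 0.7),   # camera
                         ("pedestrian", 0.6),   # LiDAR
                         ("cyclist",    0.8)])  # radar
print(result)
```

Data- and feature-level fusion operate earlier in the pipeline, on raw measurements or learned features, which is what the transformer-based and BEVFusion architectures mentioned above do at scale.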

Non-Terrestrial Networks (NTN): Fundamental Dynamics and TestBed Demo

Date: 31 July 2025,   Presenters: Dr. Halim Yanikomeroglu (Professor, Carleton University) & Dr. Pablo Madoery

Purpose: Familiarize students with the evolution of Non-Terrestrial Networks (NTN) in the 6G era, covering LEO megaconstellations, HAPS networks, direct-to-device connectivity, and AI-enabled testbed demonstrations for future integrated terrestrial–space communication systems

Introduction: This workshop introduced the fundamental dynamics of Non-Terrestrial Networks (NTN) in the 6G era, focusing on LEO satellite megaconstellations, direct-to-smartphone connectivity, and high-altitude platform stations (HAPS) as a new stratospheric network layer. The sessions explored integrated terrestrial–HAPS–LEO architectures, business and scalability challenges, AI-enabled infrastructure, and sustainable wireless design principles. A live testbed demonstration illustrated how NTN can enable ubiquitous connectivity, edge computing, sensing, and intelligent transportation systems, highlighting its strategic and economic importance for Canada and beyond.

Covered material:

Part 1: Fundamentals

  • 6G and Beyond: Evolution from 1G–5G to 6G; ubiquitous hyper-connectivity; IMT-2030 capabilities; direct-to-device (D2D) connectivity and NTN integration in future wireless standards
  • NTN Architecture: Integrated terrestrial–LEO–HAPS ecosystem; radio-frequency and free-space optical links; direct-to-smartphone connectivity; vertical heterogeneous networks (VHetNet)
  • LEO Mega-Constellations: GEO/MEO/LEO comparisons; Starlink growth (thousands of satellites); evolving capacity (Gbps→Tbps per LEO); business case and scalability challenges
  • Direct-to-Smartphone Connectivity: Capacity and link budget limitations; antenna scaling; distributed (virtual) antenna arrays in space; commercial partnerships (Apple–Globalstar, T-Mobile–Starlink)
  • HAPS (High Altitude Platform Stations): Stratospheric super macro base stations (18–25 km); reduced latency and handover; scalable deployment; integration with terrestrial base stations
  • HAPS-Enabled Services: Edge computing, caching and computation offloading, UAV traffic management, intelligent transportation systems (ITS), localization and GNSS augmentation
  • Sensing & ISR Applications: High-resolution imaging potential (cm→mm ground sampling distance); AI-aided sensing from near space; integration with communication infrastructure
  • Sustainable & Green Architectures: Avoiding brute-force densification; dynamic beamforming hotspots (e.g., 0.2° beamwidth → ~70 m coverage); distributed edge/data center design; alignment with UN SDGs
  • Strategic & Economic Impact: Multi-trillion-dollar global opportunity; bridging rural/remote connectivity gaps; Canada’s NTN strategic importance; NTN-CAN infrastructure initiative
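The 0.2° beamwidth → ~70 m coverage figure above follows from simple geometry; a quick check, assuming a nadir-pointing conical beam from a HAPS at 20 km altitude:

```python
import math

def beam_footprint_m(altitude_m, beamwidth_deg):
    """Diameter of the ground spot of a nadir-pointing conical beam:
    d = 2 h tan(theta / 2)."""
    return 2 * altitude_m * math.tan(math.radians(beamwidth_deg) / 2)

print(f"{beam_footprint_m(20_000, 0.2):.1f} m")  # ~70 m, matching the talk's figure
```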

Part 2: An AI-Enabled Testbed for Satellite Mega-Constellations: From Simulator to Networks Innovation Accelerator

  • Motivation & Context: Evolution from RF bent-pipe satellites to RF + optical mesh mega-constellations; need for advanced networking techniques and validation testbeds
  • Testbed Architecture: Multi-layer architecture (physical, link, network, transport, application); integration of orbital dynamics, topology computation, packet-level simulation, and AI-enabled protocol design
  • Constellation Modeling: Walker Star and Walker Delta constellations; RAAN distribution (180°/360°); orbital planes, true anomaly, inter-satellite optical links
  • Simulation Framework: MATLAB-based orbital modeling; OMNeT++ packet-level simulation; GUI-based configuration; automated metrics extraction; SDN controller integration
  • Traffic & Network Emulation: CBR, exponential, burst traffic models; Wireshark monitoring; simulated-to-real network bridging (digital twin roadmap)
  • QoS-Aware Routing & Queue Management: EF, AF, BE traffic classes; latency vs. load analysis; route and weight selection trade-offs
  • Software-Defined Networking (SDN): Controller placement, dynamic activation, satellite–controller association, performance–overhead trade-offs
  • Weather-Aware & AI-Driven Routing: Backup route configuration, reinforcement learning for adaptive routing, weather-informed decision-making
  • Advanced Topics: Green satellite networks, disruption mitigation, risk-aware security allocation, ML integration (LSTM time-series prediction, RL in OMNeT++).
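Walker constellation geometry, mentioned in the constellation modeling bullet, can be generated from the standard i:T/P/F notation; a small Python sketch (not the testbed's MATLAB code) that spreads T satellites over P planes with phasing F:

```python
def walker_delta(total_sats, planes, phasing, raan_spread_deg=360.0):
    """Return (RAAN, true anomaly) in degrees for each satellite of a
    Walker constellation. Walker Delta spreads RAAN over 360 degrees;
    pass raan_spread_deg=180 for a Walker Star pattern."""
    per_plane = total_sats // planes
    sats = []
    for p in range(planes):
        raan = raan_spread_deg * p / planes
        for s in range(per_plane):
            # In-plane spacing plus the inter-plane phasing offset F
            anomaly = (360.0 * s / per_plane
                       + 360.0 * phasing * p / total_sats) % 360.0
            sats.append((raan, anomaly))
    return sats

# A tiny 12-satellite Walker Delta: 3 planes, 4 satellites each, F = 1
for raan, anomaly in walker_delta(12, 3, 1)[:4]:
    print(f"RAAN={raan:6.1f}°  anomaly={anomaly:6.1f}°")
```

Feeding these orbital elements into a propagator is the first step of the testbed's topology computation layer.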

Non-GNSS Positioning and Navigation

Date: 25 September 2025,   Presenter: Dr. Mohamed Atia (Professor, Carleton University)

Purpose: Familiarize students with alternative positioning and navigation technologies for GPS-denied environments, including sensor-based localization, SLAM, and multi-sensor fusion for autonomous systems.

Introduction: This workshop introduced alternative positioning and navigation technologies for GPS-denied environments, focusing on inertial navigation, vision-based localization, LiDAR/radar sensing, and local radio positioning. It explored SLAM methodologies, sensor fusion architectures, and real-world failure cases to highlight robustness challenges. MATLAB-based demonstrations illustrated multi-sensor integration for autonomous systems.

Covered material:

  • Positioning & Navigation Fundamentals: Overview of positioning, navigation, and mapping (PNM); GNSS principles (trilateration, Doppler, orientation) and limitations (multipath, interference, jamming, spoofing).
  • Inertial Navigation Systems (INS): 6-DoF/9-DoF IMUs, inertial integration, PVO estimation, drift characteristics, and attitude/heading reference systems.
  • Vision-Based Navigation: Feature detection and tracking, visual odometry principles, camera motion estimation.
  • Range-Based Navigation: LiDAR and Radar point clouds, scan matching, motion estimation from geometric alignment.
  • Local Radio Positioning: Wi-Fi, UWB, BLE, RFID; ToA, TDoA, AoA, RTT methods; indoor fingerprinting techniques.
  • SLAM Approaches: Mapping, localization, and simultaneous localization and mapping; EKF-SLAM, PF-SLAM, Graph-SLAM; sensor noise and estimation challenges.
  • Hands-on Demonstrations: MATLAB-based examples of GPS, INS, LiDAR, and visual/inertial navigation.
  • Challenges & Future Directions: Sensor drift and noise, data fusion complexity, infrastructure dependency, security threats; AI-enhanced SLAM, cooperative navigation, hybrid fusion systems, 5G/6G integration.
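The trilateration principle behind GNSS (and UWB ranging) can be sketched in 2-D: subtracting one range equation from the others linearizes the problem into a small linear system. The anchor positions and the true position below are invented for illustration:

```python
import math

def trilaterate_2d(anchors, ranges):
    """Position from three 2-D anchors and measured ranges. Subtracting
    the first range equation from the other two yields a linear 2x2
    system, solved here with Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    r0, r1, r2 = ranges
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = r0**2 - r1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = r0**2 - r2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Anchors at three corners of a 100 m square; true position (30, 40)
anchors = [(0, 0), (100, 0), (0, 100)]
truth = (30.0, 40.0)
ranges = [math.dist(a, truth) for a in anchors]
print(trilaterate_2d(anchors, ranges))  # ≈ (30, 40)
```

With noisy ranges or more than three anchors, the same linear system is solved in a least-squares sense, which is where the Kalman-filter machinery of GNSS/INS fusion takes over.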

Threat Modelling Techniques for Connected and Autonomous Vehicles (CAVs)

Date: 24 October 2025,   Presenter: Dr. Jason Jaskolka (Professor, Carleton University)

Purpose: Introduce structured threat modelling methods for identifying assets, classifying threats, ranking risks, and deriving mitigations for Connected and Autonomous Vehicle systems.

Introduction: This workshop introduced structured threat modelling approaches for Connected and Autonomous Vehicles (CAVs), focusing on asset identification, risk classification, and mitigation design. It covered STRIDE and DREAD methodologies, system decomposition using data flow diagrams, and real-world CAV attack scenarios. Practical exercises and tool demonstrations emphasized security-by-design principles for autonomous systems.

Covered material:

  • Threat Modelling Fundamentals: Core security engineering concepts; four key threat modelling questions; security objectives (confidentiality, integrity, availability, authenticity, accountability).
  • Threat Modelling Process: Decompose the application, identify assets, determine and rank threats, define controls and mitigations.
  • System Representation: Use of Data Flow Diagrams (DFDs) and concept models to represent assets, data flows, privilege boundaries, and system interactions.
  • STRIDE Threat Classification: Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege; mapping threats to system components.
  • DREAD Risk Ranking: Damage, Reproducibility, Exploitability, Affected Users, Discoverability; quantitative threat prioritization.
  • Threat Analysis Techniques: Attack trees, asset-based analysis across perception, network, and application layers.
  • CAV Case Studies: Example attacks including GPS spoofing, Sybil attacks, denial-of-service on ECUs/gateways, firmware manipulation, and sensor data tampering.
  • Controls & Mitigations: Authentication, authorization, encryption, digital signatures, logging/auditing, least privilege, filtering, throttling, secure firmware validation.
  • Tools & Practical Exercises: Microsoft Threat Modelling Tool, OWASP Threat Dragon, STRIDE-based table exercise for autonomous taxi system.
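DREAD ranking reduces to simple arithmetic; a sketch that scores and orders a few hypothetical CAV threats (the ratings below are invented for illustration, not from the workshop exercise):

```python
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    """Average of the five DREAD factors, each rated 1-10."""
    return (damage + reproducibility + exploitability
            + affected + discoverability) / 5

# Hypothetical threats for an autonomous taxi (illustrative ratings)
threats = {
    "GPS spoofing":          dread_score(9, 6, 5, 8, 6),
    "ECU denial-of-service": dread_score(8, 7, 4, 7, 5),
    "Firmware tampering":    dread_score(10, 3, 3, 9, 4),
}
for name, score in sorted(threats.items(), key=lambda kv: -kv[1]):
    print(f"{score:.1f}  {name}")
```

The resulting ordering tells the security team which STRIDE-identified threats to mitigate first.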

Environmental Perception Systems in Autonomous Vehicles

Date: 17 November 2025,   Presenter: Dr. Marzieh Amini (Professor, Carleton University)

Purpose: Introduce environmental perception architectures for autonomous vehicles, with emphasis on multi-sensor fusion and robustness under adverse Canadian weather conditions.

Introduction: This workshop explored environmental perception systems in autonomous vehicles, focusing on multi-modal sensor fusion and robustness under adverse weather conditions common in Canada. Dr. Amini presented research from the Intelligent Sensing and Perception Lab on camera, LiDAR, radar, thermal imaging, and GNSS/INS integration, highlighting their comparative strengths and limitations. The session included real research case studies on fog-aware object detection, thermal pedestrian recognition, radar-based detection under severe conditions, and LiDAR-based infrastructure risk monitoring, emphasizing practical challenges and future directions for all-weather autonomous driving.

Covered material:

  • Autonomous Driving Stack Overview: Perception, localization, planning, and control pipeline; levels of driving automation (SAE Level 0–5); operational design domain (ODD) and restricted operational domain (ROD).
  • Perception in Adverse Weather: Challenges of rain, snow, fog, glare, and low illumination; impact on object detection, localization, and safe navigation.
  • Camera-Based Perception: Object detection (YOLO), semantic segmentation (SegNet), tracking (optical flow + Kalman filter), lane detection; limitations of RGB cameras in degraded visibility.
  • Multi-Sensor Modalities:
    – LiDAR (3D point cloud, accurate depth, illumination-invariant)
    – Radar (long-range detection, weather robustness, velocity estimation)
    – Thermal cameras (vulnerable road user detection at night/snow)
    – Ultrasonic sensors (short-range detection, parking/blind spot)
    – GNSS/INS integration (centimeter-level localization)
  • Sensor Performance Comparison: Literature-based and experimental evaluation of sensor reliability across weather conditions; effect-level scoring and metric-based comparison.
  • Fog-Aware Object Detection: Adaptive YOLO framework with Haziness Degree Evaluator (HDE) for conditional defogging; improved detection under fog without degrading clear-weather performance.
  • Thermal vs. RGB Detection Study: Comparative analysis of pedestrian and cyclist detection; higher mAP performance using thermal imaging under low-light and snowy conditions.
  • Radar-Based Object Detection: FMCW radar processing pipeline; backbone/detector comparisons (ResNet, VGG, YOLOv4/v8); IoU trade-offs under low-resolution radar constraints; computational complexity considerations.
  • Cross-Weather Domain Shift: Dataset limitations, labeling challenges for radar/LiDAR, simulator limitations, and domain adaptation challenges.
  • LiDAR-Based Environmental Risk Assessment: 3D point cloud classification (PointCNN) for vegetation encroachment detection on electrical transmission lines; proximity analysis and risk reporting.
  • Research Outlook: Sensor–model co-evolution, computational constraints, simulator development, and future directions for all-weather autonomous perception.
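
The camera-based tracking step listed above (detections smoothed by a Kalman filter) can be sketched with a minimal one-dimensional constant-velocity filter. The frame interval and noise covariances here are illustrative assumptions, not parameters from the presented research.

```python
import numpy as np

dt = 0.1                                   # frame interval (s), assumed
F = np.array([[1, dt], [0, 1]])            # state transition: [position, velocity]
H = np.array([[1, 0]])                     # only position is measured
Q = 0.01 * np.eye(2)                       # process noise covariance (assumed)
R = np.array([[0.25]])                     # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])               # initial state estimate
P = np.eye(2)                              # initial state covariance

def kf_step(x, P, z):
    """One predict + update cycle given a new position measurement z."""
    # Predict: propagate state and covariance one frame forward
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Feed a few noisy detections of an object drifting steadily forward
for z in [0.1, 0.22, 0.29, 0.41, 0.52]:
    x, P = kf_step(x, P, np.array([[z]]))
```

After the five updates the filter converges toward the measurement trend and estimates a positive velocity, which is what makes it useful for smoothing jittery per-frame detections.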



Machine Learning Foundations and Applications

Date: 2 December 2025,   Presenters: Dr. Omair Shafiq (Professor, Carleton University) & Heba Zahran

Purpose: Introduce foundational concepts of machine learning and deep learning, connect them to responsible AI principles, and demonstrate a practical convolutional neural network (CNN) application for traffic light classification in autonomous vehicles

Introduction: This workshop provided a comprehensive overview of machine learning foundations and their application in autonomous vehicle systems. The first session covered supervised and unsupervised learning, deep learning architectures, and the full machine learning lifecycle, emphasizing the importance of responsible AI principles such as fairness, reliability, transparency, and human-centered design. The second session featured a hands-on demonstration of a CNN-based traffic light classification system using real-world driving data, highlighting preprocessing techniques, model training, performance evaluation, and practical deployment considerations in intelligent transportation systems.

Covered material:

  • Digital Age & Big Data Context: Exponential growth of data (volume, velocity, variety, veracity, value); role of cloud computing and big data in enabling modern machine learning and AI systems
  • AI, Machine Learning, and Deep Learning Hierarchy: Distinction between AI, machine learning, deep learning, and generative AI; supervised vs. unsupervised learning paradigms
  • Supervised Learning Foundations: Classification concepts; training/validation/test splits; confusion matrix; evaluation metrics (accuracy, precision, recall, F1-score, ROC-AUC)
  • Unsupervised Learning Techniques: Association rule mining (Apriori algorithm); market basket analysis; clustering (K-means); centroid updates and convergence criteria
  • Neural Networks & Deep Learning: CNNs for image-based feature extraction; RNNs for sequential/temporal data; hierarchical feature learning for complex perception tasks
  • Responsible AI Principles: Fairness and bias mitigation; reliability and safety; privacy and security; inclusivity; transparency and explainability; accountability; human-centered AI design; relevance across the full ML lifecycle
  • Hands-On Demonstration – Traffic Light Classification:
    – Dataset: LISA Traffic Light Dataset (90K real-world images)
    – Data preprocessing: bounding box cropping, normalization, augmentation
    – CNN architecture: convolutional layers, ReLU, pooling, dropout, flattening
    – Training: cross-entropy loss, optimizer tuning, epoch-based validation
    – Performance: ~99% test accuracy; confusion between red and yellow classes due to imbalance
  • Deployment Considerations: Overfitting mitigation, class imbalance handling, computational constraints for edge devices, trade-offs between model complexity and real-time feasibility
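
The supervised-learning evaluation metrics listed above all follow directly from the confusion matrix. A minimal sketch with made-up binary-classification counts:

```python
# Illustrative confusion-matrix counts: true/false positives and negatives
tp, fp, fn, tn = 90, 10, 5, 95

accuracy  = (tp + tn) / (tp + fp + fn + tn)     # fraction of correct predictions
precision = tp / (tp + fp)                      # of predicted positives, how many were right
recall    = tp / (tp + fn)                      # of actual positives, how many were found
f1        = 2 * precision * recall / (precision + recall)  # harmonic mean

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")
```

For a multi-class task like traffic light classification, the same quantities are computed per class and averaged, which is also where class imbalance (e.g. fewer yellow examples) shows up in the per-class scores.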

Alphaba Smart Bus and Related Autonomous Vehicle Design Challenges

Date: 12 January 2026,   Presenter: Dr. Jun Steed Huang (Professor, Carleton University)

Purpose: Familiarize students with large-scale autonomous public bus system design, focusing on safety architecture, multi-sensor fusion, communication security, and global AV development standards.

Introduction: This workshop introduced the large-scale design and deployment of autonomous public bus systems, emphasizing safety architecture, multi-sensor fusion, and layered redundancy strategies. It covered global regulatory standards, real-time control systems, V2X integration, and industrial collaboration models. The Alphaba Smart Bus case study illustrated practical design challenges and deployment considerations for commercial autonomous vehicles.

Covered material:

  • Global AV Landscape & Standards: Review of European (AUTOSAR), Japanese (ISO 26262–based systems), and U.S. (NHTSA AV Test Initiative) regulatory frameworks and safety approaches
  • CBSF Initiative (China Bus System of the Future): Government-led autonomous bus project integrating new energy, AI, wireless charging, V2X, and large-scale deployment across multiple cities
  • Perception & Sensor Fusion: GPS + INS navigation; camera, radar, LiDAR processing; heterogeneous computing; multi-sensor spatiotemporal fusion and stochastic decision theory
  • 7 Safety Lines Architecture: Layered safety zones (90m wireless to 0.5m touch); graded braking and deceleration strategy; background learning and redundancy design
  • Vehicle Control & Actuation: Electro-hydraulic power steering (EPS), electronic braking system (EBS), high-speed liquid-cooled motor, Ethernet-over-CAN architecture
  • Real-Time Algorithms: MEMS LiDAR processing; signed GPS encryption; secure communication protocols
  • Safety & Physical Design Innovations: Reverse airbag bumper; bus lane labeling; layered perception redundancy (7+1 safety lines)
  • Industrial & International Ecosystem: Collaboration with NXP, RoboSense, SoftBank, Scania, BYD; deployment plans in China and overseas markets
  • Comparative AV Developments: Case studies of global AV platforms (Google, GM, Korean, Chinese, Canadian, U.S. trucks; ADAS and LiDAR advancements).
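
The graded braking and deceleration strategy behind the layered safety zones can be sketched as a simple distance-to-deceleration lookup. The zone boundaries and deceleration values below are illustrative assumptions, not the production Alphaba parameters; only the outermost (90 m) and innermost (0.5 m) distances come from the 7-safety-lines description above.

```python
ZONES = [            # (minimum obstacle distance in m, commanded deceleration in m/s^2)
    (90.0, 0.0),     # wireless alert zone: monitor only
    (40.0, 0.5),     # early coast-down
    (20.0, 1.5),     # gentle service braking
    (10.0, 3.0),     # firm braking
    (4.0, 5.0),      # hard braking
    (0.5, 8.0),      # emergency braking
]

def commanded_deceleration(distance_m: float) -> float:
    """Return the braking level for the zone the obstacle distance falls in."""
    for boundary, decel in ZONES:
        if distance_m >= boundary:
            return decel
    return 9.0       # inside touch distance: maximum braking

print(commanded_deceleration(100.0))  # obstacle far away: alert only
print(commanded_deceleration(25.0))   # mid zone: gentle service braking
```

Ordering the zones from farthest to nearest means the first matching boundary wins, so braking intensity increases monotonically as the obstacle closes in.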