The rapid progression of autonomous vehicles (AVs) necessitates the implementation of advanced safety measures to ensure their dependable and secure operation in a variety of environments. A critical element in enhancing AV safety is the precise classification and prediction of the behavior of surrounding objects, such as other vehicles, pedestrians, cyclists, and static obstacles. This project is dedicated to developing a robust object behavior classification framework utilizing machine learning and deep learning techniques. By integrating sensor data from radar and cameras, the proposed system processes real-time environmental information to accurately identify and categorize object behaviors. The project underscores the importance of feature extraction, data fusion, and model optimization in achieving high classification accuracy. Moreover, it addresses challenges related to dynamic environments, occlusions, and varying weather conditions, providing solutions to improve the robustness of the classification system.
Designing the simulation scenarios requires the Automated Driving Toolbox. Its Driving Scenario Designer app is used to create artificial driving scenarios for testing autonomous vehicles: through a drag-and-drop interface, users place roads, vehicles, pedestrians, and other actors, configure sensors, and generate synthetic object detections.
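Scenarios of this kind can also be built programmatically with the toolbox's drivingScenario API. The sketch below shows the general shape of one such test case; the road centers, speeds, and trajectories are illustrative assumptions, not the project's actual scenario definitions.

```matlab
% Minimal sketch: one driving scenario built programmatically with the
% Automated Driving Toolbox (illustrative values throughout).
scenario = drivingScenario('SampleTime', 0.1, 'StopTime', 10);

% Straight two-lane road (road centers are assumed values).
roadCenters = [0 0 0; 100 0 0];
road(scenario, roadCenters, 'Lanes', lanespec(2));

% Ego vehicle driving straight at 15 m/s.
egoVehicle = vehicle(scenario, 'ClassID', 1);
trajectory(egoVehicle, [5 -2 0; 95 -2 0], 15);

% A second vehicle cutting in ahead of the ego vehicle (a "risky" actor).
otherVehicle = vehicle(scenario, 'ClassID', 1);
trajectory(otherVehicle, [30 2 0; 60 -2 0; 95 -2 0], 20);
```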
A diverse dataset containing both safe and risky scenario test cases is required to train and test the machine learning model, so 21 scenario test cases have been developed for this project.
Safe scenario:
Risky scenario:
Radar and camera sensors are mounted on the ego vehicle to acquire the data, positioned so that they cover as much of the scenario as possible. The detections they produce for each scenario test case are stored in a MATLAB function, from which the sensor data can later be extracted.
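A minimal sketch of such a sensor configuration is shown below, using the toolbox's drivingRadarDataGenerator and visionDetectionGenerator objects; the mounting positions and fields of view are illustrative assumptions, not the project's exact settings.

```matlab
% Sketch of mounting a radar and a camera on the ego vehicle.
% Positions and fields of view are assumed, chosen for wide forward coverage.
radar = drivingRadarDataGenerator('SensorIndex', 1, ...
    'MountingLocation', [3.7 0 0.2], ...   % front bumper (assumed)
    'FieldOfView', [90 5]);                % wide azimuth coverage

camera = visionDetectionGenerator('SensorIndex', 2, ...
    'SensorLocation', [1.9 0], ...         % near the rear-view mirror (assumed)
    'MaxRange', 100);

sensors = {radar, camera};
```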
A Simulink model is built from these scenario test cases.
The scenario environment is clearly visible in the bird's-eye plot, which gives a full view of the scenario, including the sensor coverage.
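A bird's-eye plot of this kind can be produced with the toolbox's birdsEyePlot API, as in the sketch below. It reuses the radar and egoVehicle objects from the earlier sketches, and the axis limits are illustrative.

```matlab
% Sketch: bird's-eye plot with road boundaries, actor outlines,
% and the radar's coverage area (axis limits are assumed values).
bep = birdsEyePlot('XLim', [-20 100], 'YLim', [-20 20]);
olPlotter = outlinePlotter(bep);
lbPlotter = laneBoundaryPlotter(bep, 'DisplayName', 'Road boundaries');
caPlotter = coverageAreaPlotter(bep, 'DisplayName', 'Radar coverage');

% Draw the radar coverage area from the sensor's own properties.
plotCoverageArea(caPlotter, radar.MountingLocation(1:2), ...
    radar.RangeLimits(2), radar.MountingAngles(1), radar.FieldOfView(1));

% Plot actor outlines and road boundaries from the ego vehicle's viewpoint.
[position, yaw, length, width, originOffset, color] = targetOutlines(egoVehicle);
plotOutline(olPlotter, position, yaw, length, width, ...
    'OriginOffset', originOffset, 'Color', color);
plotLaneBoundary(lbPlotter, roadBoundaries(egoVehicle));
```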
The scenario data is then exported to a MATLAB function, which contains all of the sensor and scenario readings. The sensor and scenario data from all the test cases are loaded into a single file so that the machine learning algorithms can be applied to them.
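A sketch of this aggregation step is given below. The scenario function names are hypothetical placeholders, and the [allData, scenario, sensors] return signature is assumed to follow the format of the Designer's MATLAB-function export.

```matlab
% Sketch: aggregating the exported scenario functions into one file.
% Function names are hypothetical; each returns the recorded data
% for one test case (abbreviated to two of the 21 cases here).
scenarioFcns = {@scenario_safe_01, @scenario_risky_01};
allScenarioData = cell(numel(scenarioFcns), 1);

for k = 1:numel(scenarioFcns)
    % allData is a struct array with fields such as Time, ActorPoses,
    % and ObjectDetections for every simulation time step.
    [allData, ~, ~] = scenarioFcns{k}();
    allScenarioData{k} = allData;
end

save('allScenarioData.mat', 'allScenarioData');  % single file for training
```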
The Scenario Reader block reads the positions of actors and roads from scenario files created with the Driving Scenario Designer app and outputs this information in either the ego vehicle coordinate system or the world coordinate system.
All the scenario datasets are loaded into a single file so that the sensor data can be studied and the machine learning algorithm applied to it. For training, each record must then be labeled as safe or risky.
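The labeling step might look like the sketch below. The single range-based feature and the per-scenario labels are simplifying assumptions made for illustration, and the assumed detection measurement layout is noted in the comments.

```matlab
% Sketch: turning the loaded data into labeled training features.
load('allScenarioData.mat', 'allScenarioData');
labels = ["safe"; "risky"];   % one label per scenario, in file order (assumed)

X = [];
Y = strings(0, 1);
for k = 1:numel(allScenarioData)
    for t = 1:numel(allScenarioData{k})
        dets = allScenarioData{k}(t).ObjectDetections;
        if isempty(dets), continue; end
        % Range to the nearest detected object, assuming the first three
        % measurement elements are the object's position in ego coordinates.
        ranges = cellfun(@(d) norm(d.Measurement(1:3)), dets);
        X = [X; min(ranges)]; %#ok<AGROW>
        Y = [Y; labels(k)];   %#ok<AGROW>
    end
end
```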
The object behavior classification system operates by continuously monitoring the sensor readings and the bird's-eye plot to detect sudden changes in the movement of the other actors in the scenario that indicate a potential accident or collision. A machine learning model is trained on this sensor and scenario data; applying machine learning algorithms in MATLAB requires the Statistics and Machine Learning Toolbox, whose built-in functions for the various algorithms are used to implement the model.
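As one illustration, the sketch below trains a support vector machine on the labeled features (X and Y from the previous sketch) with a hold-out split. The choice of SVM and its parameters are assumptions; any of the toolbox's classifiers could be substituted.

```matlab
% Sketch: training and evaluating a classifier with the Statistics
% and Machine Learning Toolbox (SVM chosen here for illustration).
cv = cvpartition(Y, 'HoldOut', 0.2);          % 80/20 train/test split
mdl = fitcsvm(X(training(cv), :), Y(training(cv)), ...
    'KernelFunction', 'rbf', 'Standardize', true);

% Evaluate on the held-out portion.
predicted = predict(mdl, X(test(cv), :));
accuracy  = mean(predicted == Y(test(cv)));
confusionmat(Y(test(cv)), predicted)
```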
For testing, a new scenario test case is developed and checked with the object behavior classification model. As an example, a risky scenario test case is created and the trained model is applied to it.
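A sketch of this evaluation step follows, reusing the trained model mdl and repeating the training-time feature extraction on a hypothetically named exported test scenario.

```matlab
% Sketch: classifying a new test scenario (function name is hypothetical).
[testData, ~, ~] = scenario_test_risky();

% Extract the same per-time-step feature used during training.
Xtest = [];
for t = 1:numel(testData)
    dets = testData(t).ObjectDetections;
    if isempty(dets), continue; end
    ranges = cellfun(@(d) norm(d.Measurement(1:3)), dets);
    Xtest = [Xtest; min(ranges)]; %#ok<AGROW>
end

% Flag the scenario as risky if any time step is classified risky.
labelsOut = predict(mdl, Xtest);
if any(labelsOut == "risky")
    disp('Object behavior classified as: RISKY');
else
    disp('Object behavior classified as: SAFE');
end
```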
Test scenario:
Output:
3D model:
Safe scenario:
Risky scenario:
Implementation video: