10:00 - 12:00 | Thu 23 Aug | Fredensborg | ThA3
A framework is presented for verifying an agent's decision making in autonomous driving applications by checking the agent's logic for instability and inconsistency. The framework verifies the decisions of a rational agent implemented in Natural Language Programming (NLP) and based on the belief-desire-intention (BDI) paradigm, using sEnglish and Jason code. The main results are methods for verifying the correctness of real-time agent decisions expressed as computation tree logic (CTL) formulae. The methods rely on the Model Checker for Multi-Agent Systems (MCMAS) verification tool. To test the new verification system, an autonomous vehicle (AV) capable of planning, navigation, object detection and obstacle avoidance using a rational agent has been modelled and simulated. The agent's decisions are based on information received from mono-cameras and a LiDAR sensor, which feed into the logic-based decisions of the AV. The model of the AV and its environment has been implemented in the Robot Operating System (ROS) and the Gazebo virtual reality simulator.
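As a rough illustration of the kind of property such a framework checks (the formula below is a hypothetical example, not taken from the paper), an obstacle-avoidance requirement could be written in CTL as:

```latex
% Hypothetical CTL safety/liveness property for the AV agent:
% on every path, whenever an obstacle is detected, an avoidance
% manoeuvre is eventually selected on all continuations.
\mathbf{AG}\,\bigl(\mathit{obstacle\_detected} \rightarrow \mathbf{AF}\,\mathit{avoid\_manoeuvre}\bigr)
```

A model checker such as MCMAS evaluates formulae of this shape over a finite-state model of the agent and its environment, reporting whether the property holds in all reachable states.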