The detection of persons in vehicles includes not only determining the number of passengers, but also their exact positioning and the gestures they make. This information is important for recognising out-of-position (OOP) situations and for the targeted activation of new, adaptive safety systems that protect passengers regardless of their position. It is well known that drivers perform additional tasks (so-called secondary or non-driving tasks) when driving manually. Apart from the fact that these tasks may pose a safety risk (the driver has to divide their attention between driving and other activities), performing them causes drivers to move their head, arms and upper body, sometimes leaving the ideal upright position required for optimum protection in the event of an airbag deployment.
With the increasing automation of vehicles, drivers will no longer need to continuously monitor the environment. They will have more freedom to interact with other passengers and to perform other activities. As a result, the number of out-of-position situations will increase substantially, posing a challenge for new adaptive safety systems. Our work on the detection of passengers, objects and animals inside vehicles (passenger cars and public transport) therefore aims at testing and optimising deep convolutional neural network architectures for pose recognition and estimation. The focus is on constructing a graph of human body segments for the exact positioning of passengers. The distance of the individual body segments from a given point or region in the vehicle interior is of particular importance. The research group is also working on achieving accurate and robust real-time positioning across different scenarios (varying numbers of passengers, clothing characteristics and movement amplitudes).
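The distance computation on the body-segment graph described above can be sketched as follows. This is a minimal illustration, not the group's implementation: the joint names, 3D coordinates, airbag mounting point and safety threshold are all hypothetical placeholders chosen for the example.

```python
import math

# Hypothetical skeleton estimate: joint name -> 3D position in a
# cabin-fixed coordinate frame (metres). Values are illustrative only.
skeleton = {
    "head":           (0.45,  0.00, 1.10),
    "neck":           (0.48,  0.00, 0.95),
    "left_shoulder":  (0.50,  0.20, 0.90),
    "right_shoulder": (0.50, -0.20, 0.90),
    "torso":          (0.55,  0.00, 0.70),
}

# Edges of the body-segment graph (kinematic links between joints).
edges = [
    ("head", "neck"),
    ("neck", "left_shoulder"),
    ("neck", "right_shoulder"),
    ("neck", "torso"),
]

# Assumed mounting point of the driver airbag module (cabin frame, metres).
airbag_point = (0.20, 0.00, 0.95)

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def joint_distances(skel, ref):
    """Distance of every joint to a reference point in the interior."""
    return {name: distance(pos, ref) for name, pos in skel.items()}

def is_out_of_position(skel, ref, threshold=0.30):
    """Flag an OOP situation when a critical joint comes closer to the
    reference point than the (assumed) safety threshold in metres."""
    critical = ("head", "neck")
    dists = joint_distances(skel, ref)
    return any(dists[joint] < threshold for joint in critical)

print(joint_distances(skeleton, airbag_point))
print(is_out_of_position(skeleton, airbag_point))
```

In a real system the joint coordinates would come from the pose-estimation network each frame, and the threshold would be derived from the geometry of the restraint system rather than fixed by hand.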