In-vehicle occupant detection includes not only determining the number of passengers, but also their exact position and posture. This information is important for recognising Out-of-Position (OoP) situations and for the targeted activation of new, adaptive restraint systems that can protect passengers according to their position. It is well known that drivers perform additional tasks (so-called secondary or non-driving tasks) while driving manually. Apart from the safety risk these tasks pose (the driver must divide their attention between driving and other activities), performing them causes drivers to move their head, arms and upper body, sometimes leaving the ideal upright position required for optimal protection in the event of an airbag deployment.
With increasing vehicle automation, drivers will no longer need to monitor the environment continuously. They will have more freedom to interact with other passengers and to perform other activities. As a result, the number of Out-of-Position situations will rise sharply, posing a challenge for new adaptive restraint systems. Our work on the detection of passengers, objects and animals inside vehicles (passenger cars and public transport) therefore aims at testing and optimising deep convolutional neural network architectures for pose recognition and estimation. The focus lies on constructing a human body segment graph for the exact positioning of passengers; in particular, the distance of individual body segments to a given point or region in the cabin is of great importance. The research group is also working on achieving accurate and robust real-time positioning under varying conditions (number of passengers, clothing characteristics and movement amplitude).
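The core geometric idea, measuring the distance of body segments to a reference point in the cabin and flagging proximity, can be sketched as follows. This is a minimal illustration, not the group's actual pipeline: the keypoint names, the cabin-frame coordinates, the airbag reference point and the 0.25 m proximity threshold are all assumptions for the example.

```python
import math

# Assumed cabin-frame reference point (metres) and proximity threshold;
# both are illustrative values, not from the text.
AIRBAG_POSITION = (0.0, 0.6, 0.9)
OOP_THRESHOLD_M = 0.25

def distance(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def check_out_of_position(keypoints, ref=AIRBAG_POSITION, threshold=OOP_THRESHOLD_M):
    """Return {keypoint_name: distance} for keypoints closer to ref than threshold.

    `keypoints` maps body-segment names to 3D positions, e.g. as produced
    by a pose-estimation network and transformed into cabin coordinates.
    """
    return {
        name: round(distance(pos, ref), 3)
        for name, pos in keypoints.items()
        if distance(pos, ref) < threshold
    }

# Hypothetical pose: occupant leaning forward toward the airbag module.
pose = {
    "head":     (0.05, 0.55, 0.75),
    "shoulder": (0.10, 0.45, 0.35),
    "hand":     (0.02, 0.58, 0.85),
}
print(check_out_of_position(pose))
```

In a real system the per-frame keypoints would come from the pose-estimation network, and the flagged distances would feed the restraint-system logic rather than a print statement.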