Poster: Indoor Inertial-Based Fall Prediction and Pedestrian Tracking for the Elderly

Recently, with the growing elderly population, fall prediction has gained increasing attention. However, only a few studies have addressed both fall prediction and pedestrian tracking, especially in indoor environments. This study proposes a novel prototype for simultaneous fall prediction and indoor pedestrian tracking using smartphone-based inertial sensors. It is also the first to introduce visual calibration of inertial data for fall prediction. The prototype has been tested in a single room with normal walking and falling activities. The localization accuracy is 0.06 m, and a fall action can be identified about 350 ms to 400 ms before collision.


INTRODUCTION
Falls are the principal cause of injuries for the elderly. The duration spent on the floor after a fall directly impacts its severity. Therefore, there is a pressing need for a fall prediction system that promptly recognizes potential falls and minimizes the time spent on the ground post-fall. While prior studies have predominantly concentrated on fall detection, only a limited number have delved into fall prediction [2,6,8]. They usually attached multiple inertial measurement units (IMUs) to the body and identified falls by applying thresholds, Support Vector Machines (SVM), or Neural Networks (NN) [2,6,8]. The threshold method is the most widely investigated [2]. This study also uses a threshold algorithm for fall prediction. Moreover, it introduces visual calibration of inertial data, which has rarely been used in previous studies.
To receive timely help, the faller's location is also of great importance. However, few studies address simultaneous tracking with fall recognition. Some studies have utilized GNSS when detecting falls (e.g., [3]), but it is unfeasible for indoor positioning. Later studies have used Wi-Fi [5], ZigBee [7], or Ultra-wideband (UWB) [4] for localization while detecting falls. However, this may introduce additional data processing, resulting in more computation. Instead, this study uses inertial data with a modified Pedestrian Dead Reckoning (PDR) algorithm, enabling simultaneous tracking and fall prediction on smartphones. A prototype of an indoor fall prediction and tracking system has been developed, mainly using smartphone-based inertial data. The major contribution is the modified PDR, which allows tracking and fall prediction in parallel with different threshold ranges. The surveillance visual data is applied for daily inertial calibration to avoid error accumulation while providing absolute localization [1], which is the second contribution of this study.

SYSTEM DESIGN
This study has developed a novel Vision-aided Inertial Fall Prediction and Pedestrian Tracking (VINFD-PT) system. The system can be divided into three modules: vision-based heading calibration by YOLOv8, smartphone-based inertial data processing, and map-based real-coordinate integration. It is developed based on a 3D Passive Vision-aided Inertial Sensing System (3D PVINS) [1]. The main novelty is a multi-threshold design with two separate sliding windows for acceleration processing, as the acceleration peaks of walking activities and fall events are clearly distinguishable. The advantage of this design is higher data-utilization efficiency: the processed accelerations are shared across tasks, so no additional data processing is required when carrying out multiple tasks. The current design focuses on only two types of activities, i.e., walking and falls, targeting the solitary elderly in their own dwellings or nursing-home dormitories. The system's robustness can later be improved by introducing recognition of more daily activities with the assistance of processed visual data. The overall design can be found in Figure 1. The detailed equipment setup is similar to that in Yan, et al. [1], with surveillance cameras facing one side of the room (sampling rate 17 Hz) and smartphone-based sensors attached to the body (sampling rate 100 Hz). However, this time the smartphone is placed in the waist pocket instead of being held in the hand. Throughout the procedure, the participant begins with regular walking, encounters a sudden forward fall, recovers and stands up, and then resumes normal walking. This sequence is repeated 20 times to derive average values.

Fall Prediction and Step Detection
Step detection is based on PDR, as described in Yan, et al. [1]. Fall prediction can also be accomplished through PDR, as a fall can be segmented into four phases: maintaining balance, losing balance, impact, and regaining balance [6]. This bears resemblance to the four phases observed in PDR-based step detection. Moreover, both tasks employ similar acceleration processing by aggregating the three-axis accelerations a_x(t), a_y(t), and a_z(t) over time, with the gravity effect removed (1) [1].
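As an illustration of this preprocessing step, the three-axis aggregation with gravity removal can be sketched as follows. This is a minimal sketch, not the authors' implementation; the function name and the simple constant-gravity subtraction are assumptions for illustration.

```python
import math

GRAVITY = 9.81  # m/s^2; assumed static gravity component

def synthetic_acceleration(ax, ay, az):
    """Aggregate three-axis accelerometer samples into a single
    synthetic magnitude series, subtracting the gravity component."""
    return [math.sqrt(x * x + y * y + z * z) - GRAVITY
            for x, y, z in zip(ax, ay, az)]
```

A phone at rest reading 9.81 m/s^2 on one axis thus yields a synthetic value near zero, so walking and fall peaks stand out from the baseline.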
The synthetic magnitude |a*(t)| is then processed by applying two pre-defined thresholds, for step detection and fall identification respectively, together with two different sliding windows. The sliding window for fall prediction is larger than that for step detection, as a fall involves a longer motion process. A zero-crossing approach is then applied to detect the respective cycles simultaneously for both step detection and fall prediction (Figure 2). It has been observed that both the captured step points and the occurrence of a forward fall event can be identified concurrently, allowing a fall to be recognized approximately 350 ms to 400 ms before the collision on average.
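The dual-threshold, dual-window, zero-crossing idea can be sketched roughly as below. All threshold and window values here are placeholder assumptions (the paper does not publish its exact parameters), and the segmentation logic is a simplified stand-in for the authors' pipeline.

```python
def detect_events(signal, step_thresh=2.0, fall_thresh=8.0,
                  step_win=50, fall_win=100):
    """Scan a synthetic-acceleration series; upward zero crossings
    delimit candidate motion cycles, and each cycle's peak is compared
    against two thresholds: a large peak flags a fall, a smaller one
    counts as a step. The fall window is the longer of the two."""
    steps, falls = [], []
    # indices where the signal crosses zero upward (cycle boundaries)
    crossings = [i for i in range(1, len(signal))
                 if signal[i - 1] < 0 <= signal[i]]
    for start, end in zip(crossings, crossings[1:]):
        peak = max(signal[start:end])
        if peak >= fall_thresh and end - start <= fall_win:
            falls.append(start)
        elif peak >= step_thresh and end - start <= step_win:
            steps.append(start)
    return steps, falls
```

Because both tasks read the same processed signal, no extra per-task preprocessing is needed, which mirrors the data-sharing advantage described above.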

Data Integration
The processed step events are then integrated with headings calibrated by visual tracking and finally mapped to real coordinates based on WGS 1984. The details can be found in Yan, et al. [1]. In addition, the accelerations are calibrated daily using map information. With calibrated headings and step lengths, the average accuracy of indoor positioning is about 0.06 m, with a precision of about 0.08 m.
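The core dead-reckoning update behind this integration can be sketched as a per-step 2-D position advance along the calibrated heading. This is a generic PDR update under assumed names and a fixed step length, not the paper's calibrated model.

```python
import math

def pdr_update(position, heading_rad, step_length=0.7):
    """Advance a 2-D position by one detected step along the
    (visually calibrated) heading; step_length here is a fixed
    placeholder, whereas the paper calibrates it per user."""
    x, y = position
    return (x + step_length * math.cos(heading_rad),
            y + step_length * math.sin(heading_rad))
```

Chaining this update over the detected step events, then anchoring the track to map coordinates, yields the absolute positions reported above.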

DISCUSSION
This study has provided a novel VINFD-PT system for simultaneous indoor tracking and fall prediction, with relatively high prediction and positioning accuracy. However, more experiments are still needed for other activities, e.g., sitting down, standing up, and walking on stairs, which may cause confusion for fall prediction and step detection. In future studies, the changing pattern of the body's centre of mass may also be introduced to provide supplemental information for behaviour recognition, while sharing the processed bounding-box coordinates to reduce additional video data processing. Secondly, as the system has only been tested in one room, further experiments in multiple rooms with longer tracks are also needed. Future experimental scenarios may involve moving across multiple rooms with and without the assistance of visual monitoring. Besides, this system has only been tested on a single user, as it mainly targets the alone-living elderly population. The multi-person situation may be slightly different; in that case, a general calibration model for smartphone-based sensors will be established, providing initial calibrations based on a database collected from both videos and digital maps. This may also help to reduce the data processing burden for real-time applications, making the system more feasible for smartphone-based applications.

CONCLUSION
This study has developed a novel prototype for fall prediction and indoor tracking, which has been tested in a room, achieving fall prediction 350 ms to 400 ms before collision and a localization accuracy of 0.06 m. Future studies will further investigate other activities that may cause confusion for fall prediction, while introducing more experimental scenarios, such as longer and more complicated tracks.

Figure 2: An example of processed accelerations for parallel step detection and fall recognition.