Development of Bed Exit Alarm via Web Camera Utilizing Image Processing and Artificial Intelligence

This research focuses on the development of an innovative system for detecting instances of patients falling out of bed using advanced image processing and artificial intelligence techniques. By analyzing real-time images from a webcam placed in the patient's bed area, the system identifies specific key points on the patient's body to assess fall risks. The study involved testing twelve sleep patterns representing safe sleeping positions and six patterns simulating scenarios with a risk of falling. The system demonstrated remarkable accuracy in detecting fall risks, triggering warnings and alarms when key points associated with falling were not detected. Importantly, the system avoided false alarms caused by incorrect detections. These findings contribute to the improvement of bed exit alarm systems and have significant implications for patient safety.


INTRODUCTION
Bed exit alarm systems are essential in healthcare facilities to prevent patient falls by alerting caregivers when a patient attempts to leave the bed. However, false positives, where the alarm is triggered without an actual incident, present a significant challenge in these systems: they not only cause unnecessary disruptions but also undermine the reliability and effectiveness of the alarm. This research therefore aims to investigate and address false positive alarms in bed exit alarm systems by applying advanced image processing techniques. By leveraging computer vision algorithms and pattern recognition, these techniques enable more accurate analysis of patient movements and postures, improving the system's ability to distinguish between genuine bed exits and false alarms. This approach has the potential to enhance the reliability and effectiveness of bed exit alarm systems, ensuring that caregivers are alerted only when there is a genuine risk of a patient fall.

Several studies have applied image processing techniques to reduce false positive alarms in bed exit systems. Liu S. and Ostadabbas S. developed a non-intrusive vision-based tracking system capable of continuously monitoring and tracking human sleeping postures over time. By analyzing top-view videos captured by an off-the-shelf camera, their system, iPoTH, achieved accuracy rates of up to 91.0% in a simulated hospital environment and 93.6% in a home-like setting when tested with actual human participants (1). In a similar vein, Koudou G.B. et al. explored an accelerometer-based approach to monitor long-lasting insecticidal net (LLIN) usage, showcasing its potential for studying LLIN use behaviors and informing malaria prevention programs; they suggested that further research on accelerometer placement, measurement frequency, and advanced machine learning techniques could improve its accuracy (2). Furthermore, Bauer P. et al. introduced Ocuvera, a 3D camera-based system designed to monitor hospitalized patients at risk of falling during bed exits. This automated approach predicts likely bed exits and promptly alerts nursing staff, providing adequate lead time to prevent falls and addressing the limitations of existing fall prevention technologies (3). Additionally, Matthies D. J. C. et al. presented a do-it-yourself (DIY) bed sensor mat with an accuracy of 85% in detecting user postures, enabling applications such as pressure ulcer prevention, bed-exit detection, diabetes detection, and sleep apnea mitigation (4,10).

In this research, webcam image processing was employed as a novel method to detect occurrences of patients falling out of bed. By utilizing computer vision algorithms, the proposed approach aims to minimize false alarms while allowing caregivers to continuously monitor the patient's posture and evaluate their risk level.

METHODOLOGY
In this study, the Pose Estimator function was used to detect and monitor patient postures, with a specific focus on those that could result in falls or pose risks to patients (5,6). The Pose Estimator function was integrated into a MATLAB program, which enabled efficient processing of the collected data and visualization of the results through a user-friendly graphical user interface (GUI). This approach supported effective analysis and interpretation of the data and provided valuable insights into patient postures and potential risks.

Experimental Design
The experimental design involved two situations simulating different patient-safety scenarios. In the first, the patient was assumed not to be at risk of falling from the bed: the patient lay in a comfortable supine position without extending their arms or attempting to leave the bed, and the Pose Estimator function was used to detect and analyze all defined key points on the patient's body. In the second, the patient was assumed to be at risk of falling: the patient lay in a lateral position or performed movements that could lead to a fall, such as reaching for objects beyond the bed area. The Pose Estimator function was again used to detect and analyze the defined key points, with the absence of one or more of these points indicating a potential risk of falling.

Detection of Body Key Points
This study employed the Pose Estimator function implemented in MATLAB for the detection and analysis of body keypoints. The Pose Estimator utilizes deep learning techniques that are widely adopted in computer vision and human pose estimation research. The research methodology consisted of the following key steps.
Step 1: The Pose Estimator function was initialized using a suitable library or framework (7,8). This initialization loaded pre-trained deep learning models specifically designed for human pose estimation, from which keypoint information was extracted for the input data.
Step 2: Body keypoints were detected from video frames captured by a webcam. The Pose Estimator function was applied to each frame, producing a comprehensive set of keypoint coordinates representing joints such as the shoulders, elbows, wrists, hips, knees, and ankles.
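The per-frame detection of Step 2 can be sketched as follows. The paper's MATLAB implementation is not reproduced here; this is a minimal Python sketch in which `estimate_pose` is a stub standing in for the pre-trained Pose Estimator model, and the keypoint names follow the common 17-point human pose skeleton.

```python
# Keypoint names follow the common 17-point human pose skeleton.
KEYPOINT_NAMES = [
    "nose", "right_eye", "left_eye", "right_ear", "left_ear",
    "right_shoulder", "left_shoulder", "right_elbow", "left_elbow",
    "right_wrist", "left_wrist", "right_hip", "left_hip",
    "right_knee", "left_knee", "right_ankle", "left_ankle",
]

def estimate_pose(frame):
    """Stub pose estimator: a real model would return the keypoints it
    actually detects; here every keypoint is placed at the frame center."""
    h, w = len(frame), len(frame[0])
    return {name: (w // 2, h // 2) for name in KEYPOINT_NAMES}

def detect_keypoints(frames):
    """Step 2: apply the estimator to each captured frame."""
    return [estimate_pose(frame) for frame in frames]

frames = [[[0] * 640 for _ in range(480)]]   # one dummy 480x640 grayscale frame
results = detect_keypoints(frames)
print(len(results[0]))                       # 17 keypoints per frame
```

In the real system the stub would be replaced by the deep-learning estimator, and `frames` would come from the webcam stream rather than a dummy array.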
Step 3: To visualize the outcomes, graphical interfaces and video players, such as the DeployableVideoPlayer object in MATLAB, were employed. These tools enabled real-time display of video frames with keypoint annotations overlaid, giving researchers a clear view of the detected keypoints and their spatial relationships. For clarity, a simplified pseudocode representation of the Pose Estimator algorithm is presented in Algorithm 1, illustrating the essential steps of the process.

The system design focused on detecting and analyzing patient postures by identifying critical joint points on the human body. Seventeen key points were selected, encompassing facial landmarks, shoulder points, waist points, and lower body points. To prioritize patient privacy, the imgaussfilt function was applied to introduce a blurring effect on the patient's face. This function uses a 2-D Gaussian filtering algorithm to smooth the image, intentionally obscuring sensitive facial features while preserving the overall posture information. By incorporating imgaussfilt into the system, accurate posture analysis was achieved without compromising patient privacy. The integration of joint point identification and the blurring effect produced a well-balanced system design that offered valuable insights into patient postural dynamics while protecting sensitive information.
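The face-blurring step can be illustrated with a short sketch. Assuming the face bounding box is known (e.g. from the head keypoints), a separable 2-D Gaussian filter is applied only to that region; this Python/NumPy version approximates what MATLAB's imgaussfilt does and is illustrative only, with the region coordinates chosen arbitrarily.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def blur_region(image, top, left, height, width, sigma=3.0):
    """Blur one rectangular region (e.g. the face) with a separable 2-D
    Gaussian, leaving the rest of the image -- and hence the posture
    information -- untouched."""
    out = image.astype(float).copy()
    region = out[top:top + height, left:left + width]
    k = gaussian_kernel(sigma, radius=int(3 * sigma))
    # Separable convolution: filter rows first, then columns.
    region = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, region)
    region = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, region)
    out[top:top + height, left:left + width] = region
    return out

img = np.zeros((100, 100))
img[40:60, 40:60] = 255.0            # sharp bright patch standing in for the face
blurred = blur_region(img, 30, 30, 40, 40)
```

Because a Gaussian kernel sums to 1, the blur only redistributes intensity inside the selected region, so sharp edges there are softened while pixels outside the region are bit-identical to the input.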

Criteria of Falling Detection
The Pose Estimator employed in this study yields the precise coordinates of 17 key points throughout the body. These key points, depicted in Figure 1, are essential for understanding the body's posture and movements. The coordinates obtained through the Pose Estimator enable a comprehensive analysis of the subject's body position, facilitating a detailed assessment of joint angles, spatial relationships, and overall body alignment. By accurately capturing the positional data of these 17 keypoints, the system gains valuable insight into the subject's posture dynamics and physical state.

The criteria and conditions for bed-fall notification are based on five key areas of the body:
• Head Area: the nose, right eye, left eye, right ear, and left ear.
• Right Shoulder Area: the shoulder on the right side of the body.
• Left Shoulder Area: the shoulder on the left side of the body.
• Right Waist Area: the waist region on the right side of the body.
• Left Waist Area: the waist region on the left side of the body.

By employing this research methodology, the study aimed to evaluate the effectiveness of the Pose Estimator function in real-time posture detection and fall prevention. The experimental design facilitated the assessment of patient postures in both low-risk and high-risk scenarios, providing valuable insights into the system's capabilities and limitations in detecting potential falls.
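The five-area criterion above can be sketched as a simple presence check: the alarm fires when every keypoint belonging to some monitored area goes undetected. This Python sketch is illustrative rather than the paper's implementation; the waist areas are mapped to the hip keypoints, which is an assumption of the sketch.

```python
# Area-to-keypoint grouping per the five monitored body areas; the waist
# areas are mapped to the hip keypoints (an assumption of this sketch).
AREAS = {
    "head": ["nose", "right_eye", "left_eye", "right_ear", "left_ear"],
    "right_shoulder": ["right_shoulder"],
    "left_shoulder": ["left_shoulder"],
    "right_waist": ["right_hip"],
    "left_waist": ["left_hip"],
}

def missing_areas(detected):
    """Return the monitored areas in which no keypoint was detected."""
    return [area for area, points in AREAS.items()
            if not any(p in detected for p in points)]

def alarm_on(detected):
    """The alarm fires when at least one monitored area is fully undetected."""
    return bool(missing_areas(detected))

# Supine patient: at least one keypoint visible in every area -> alarm off.
safe = {"nose": (320, 100), "right_shoulder": (280, 180),
        "left_shoulder": (360, 180), "right_hip": (290, 300),
        "left_hip": (350, 300)}
# Lateral patient reaching beyond the bed: right side undetected -> alarm on.
risky = {"nose": (320, 100), "left_shoulder": (360, 180),
         "left_hip": (350, 300)}
print(alarm_on(safe), missing_areas(risky))  # False ['right_shoulder', 'right_waist']
```

Reporting the specific missing areas, rather than a bare on/off flag, matches the GUI behavior described later, where the undetected body region is shown alongside the alarm status.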

EXPERIMENTAL RESULTS
In this research, a webcam was used to identify the bed area. Various sleeping positions were examined, including both normal postures and postures that presented a risk of falling out of bed. Figure 2 shows the twelve normal sleeping positions observed during the study; in each of these, the patient lay on their back in a comfortable position. The implemented system performed well: it accurately detected all five key points in every pattern, without any false alarms caused by incorrect detections. Details of these experimental scenarios are provided in Table 1.

The study also examined six sleep patterns associated with the risk of falling off the bed, as depicted in Figure 3; Table 2 provides specific information about each experimental pattern. Based on these experiments, a detection system was developed to identify the absence of important key points and, when such a situation arises, to trigger an alert. The system performed accurately across all six experimental scenarios.

The system's results as displayed on the graphical user interface (GUI) screen are illustrated in Figure 4. When the absence of a specific body part is detected, the system reports the undetected area and displays an image alert. If the patient is not at risk of falling off the bed, the system shows the patient's image with an indication that the alarm is off. Conversely, if the patient is at risk, the system shows the patient's image together with the important body joints that were not detected, an activated alarm status, and a notification sound.

DISCUSSION AND CONCLUSION
This study aimed to develop an innovative system for detecting incidents where patients fall out of bed, employing advanced techniques in artificial intelligence and image processing. By analyzing real-time images from a webcam placed in the patient's bed area, the system identified specific key points on the patient's body. The system's performance was evaluated on specified sleep patterns: twelve patterns represented safe sleeping positions, while six additional patterns simulated scenarios with a risk of falling. This approach allowed for comprehensive testing of the system's capabilities; the system triggered warnings when key points associated with falling were not detected and avoided false alarms caused by incorrect detections. For future work, there is potential to enhance the bed exit alarm system by integrating the webcam with a force sensor, which could further improve the accuracy and effectiveness of detecting instances where patients fall out of bed. To ensure its practical application, it is recommended that the prototype be implemented in a real hospital setting and that its performance be evaluated in different environments. This ongoing development and practical implementation are essential steps toward ensuring patient safety and optimizing the system's functionality.

Figure 1 :
Figure 1: The Key Point Representation

Figure 2 :
Figure 2: Twelve Sleeping Patterns of Normal Scenario
Figure 4: System-detected result with a missing important part (key points)

Figure 3 :
Figure 3: Six Sleeping Patterns of Falling Risk Scenario

Figure 4 :
Figure 4: Example of GUI

Table 1 :
Information of simulated sleeping patterns of normal scenario and its detection result

Table 2 :
Information of simulated sleeping patterns of falling-risk scenario and its detection result