Shehroz S. Khan 1, Bing Ye 2, Kristine Newman 3, Andrea Iaboni 1,2, Alex Mihailidis 1,2
1 Toronto Rehabilitation Institute, Canada
2 University of Toronto, Canada
3 Ryerson University, Canada
With the increase in the population of older adults, the number of people living with dementia also increases. People living with dementia (PLwD) exhibit various behavioral and psychological symptoms, with agitation and aggression being among the most common. Aggressive patients with dementia can harm themselves, other patients, and staff. In the past, researchers have used actigraphy to detect incidents of agitation and aggression in persons with dementia. However, actigraphy-based solutions consider only body-movement indicators. In this abstract, we present a novel multi-modal sensing framework installed at the Psychiatric Geriatric Ward at Toronto Rehabilitation Institute, Canada. This framework uses video cameras, a wearable device (for both movement and physiological data), motion and door sensors, and pressure mats to collect various types of data that may be used to detect and predict incidents of agitation and aggression in people with dementia. So far, we have collected data from 11 PLwD, corresponding to more than 300 days of recordings and around 250 agitation incidents, i.e., less than one agitation event per day on average. Agitation events may last from a few minutes to several hours. Nevertheless, agitation events are heavily outnumbered by the normal-activity data. This large class imbalance makes detecting and predicting agitation and aggression with supervised machine learning algorithms very challenging. Labeling this massive dataset is also a key challenge, as labeling quality can affect the prediction results. At this point, we are exploring different feature extraction strategies and machine learning approaches, such as unsupervised learning, anomaly detection, and deep autoencoders, to address this problem.
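The abstract does not specify an implementation, but the anomaly-detection idea it mentions can be illustrated with a minimal sketch: train a reconstruction model on normal-activity data only, then flag samples whose reconstruction error exceeds a threshold. The sketch below is hypothetical — it uses synthetic 10-dimensional features standing in for the multi-modal sensor data, and a closed-form linear autoencoder (PCA via SVD) rather than a deep autoencoder, which would follow the same error-thresholding logic.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical stand-in data: "normal" activity lies near a 3-D subspace
# of a 10-D feature space; "agitation-like" samples fall outside it.
W = rng.normal(size=(3, 10))
normal_train = rng.normal(size=(500, 3)) @ W + 0.1 * rng.normal(size=(500, 10))
normal_test = rng.normal(size=(100, 3)) @ W + 0.1 * rng.normal(size=(100, 10))
anomalies = 3.0 * rng.normal(size=(20, 10))

# Linear autoencoder fitted in closed form via SVD (equivalent to PCA):
# the top singular vectors act as tied encoder/decoder weights.
mean = normal_train.mean(axis=0)
_, _, vt = np.linalg.svd(normal_train - mean, full_matrices=False)
components = vt[:3]  # keep 3 latent dimensions

def reconstruction_error(x):
    """Project onto the learned subspace and measure how poorly x is rebuilt."""
    centered = x - mean
    rebuilt = centered @ components.T @ components
    return np.linalg.norm(centered - rebuilt, axis=1)

# Threshold chosen from normal training data only -- no agitation labels needed,
# which is the point when labeled events are scarce.
threshold = np.percentile(reconstruction_error(normal_train), 99)

print("flagged normals:", np.mean(reconstruction_error(normal_test) > threshold))
print("flagged anomalies:", np.mean(reconstruction_error(anomalies) > threshold))
```

A deep autoencoder would replace the SVD step with a learned nonlinear encoder/decoder, but the decision rule (threshold on reconstruction error, calibrated on normal data) is the same, which is what makes the approach attractive under the heavy class imbalance described above.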