Details for the MobiCom4AgeTech workshop program are provided below. The MobiCom registration desk will open at 7:30am. Workshop attendees should stop by the MobiCom registration desk to obtain their badge before proceeding to the workshop room. All MobiCom4AgeTech sessions will take place in the Pan American Room. See the venue page for site details. The full MobiCom workshop and conference program is available here.
Start | End | Session
8:30am | 8:35am | Welcome and Opening Remarks
- Benjamin M. Marlin, Professor of Computer Science, University of Massachusetts Amherst.
- Rebecca Krupenevich, Program Official, Division of Behavioral and Social Research, National Institute on Aging.
8:35am | 9:20am | Keynote
- George Demiris, Mary Alice Bennett University Professor and Penn Integrates Knowledge University Professor; Associate Dean for Research and Innovation, University of Pennsylvania School of Nursing.
9:20am | 10:00am | Oral Session 1: Alzheimer's Disease and Related Dementias
- Xiaomin Ouyang, Assistant Professor, Department of Computer Science & Engineering, Hong Kong University of Science and Technology.
  Detecting Digital Biomarkers of Alzheimer's Disease with AI-Powered Multimodal Sensing Systems.
  Abstract: Over 55 million people worldwide have Alzheimer's Disease (AD), a degenerative and irreversible brain disorder. Unfortunately, 75% of AD patients remain undiagnosed, as current diagnostic methods are usually intrusive and labor-intensive. Our work focuses on detecting digital biomarkers that capture the behavioral symptoms of AD for early diagnosis. We will first introduce the design and deployment of ADMarker, an end-to-end system that integrates multimodal sensing technologies and federated learning algorithms to identify multidimensional digital biomarkers in natural living environments. The system has been deployed in a clinical trial involving over 100 elderly subjects. Next, we will discuss how we tackle real-world challenges in such systems, including distributed and imperfect IoT data, limited resources, and scalability. By addressing these issues, our system accurately detects digital biomarkers and enables early AD identification, providing a new platform for longitudinal health monitoring and personalized intervention.
- FAN Xiubin, PhD student, City University of Hong Kong.
  Experiences of Deploying a Citywide Crowdsourcing Platform to Search for Missing People with Dementia.
  Abstract: In this talk, we will present a crowdsourcing platform designed to enhance the safety of individuals with dementia (PwD) and share the unique lessons and experiences gained from its deployment. As cognitive decline increases the risk of PwD becoming lost, we have developed an innovative search system in Hong Kong that uses customized Bluetooth Low Energy (BLE) tags and a network of mobile volunteers to enable efficient search efforts. Since its launch in 2019, our system has supported over 3,100 families and successfully located 254 missing individuals, demonstrating the potential of technology to improve the lives of aging populations and foster dementia-friendly communities.
10:00am | 10:30am | Break; poster and demo viewing
10:30am | 11:10am | Oral Session 2: Activity Recognition and Tracking
- Md Touhiduzzaman, PhD Student, Virginia Commonwealth University.
  Wireless Sensing-based Daily Activity Tracking System Deployment in Low-Income Senior Housing Environments.
  Abstract: Maintaining independence in daily activities and mobility is critical for healthy aging. Older adults who are losing the ability to care for themselves or ambulate are at high risk of adverse health outcomes and decreased quality of life. It is essential to monitor daily activities and mobility routinely and capture early decline before clinical symptoms arise. Existing solutions use self-reports or technology-based approaches that depend on cameras or wearables to track daily activities; however, these solutions have various issues (e.g., bias, privacy, the burden of carrying and recharging devices) and are not well suited to seniors. In this study, we discuss a non-invasive, low-cost wireless sensing-based solution to track the daily activities of low-income older adults. The proposed solution relies on deep learning-based fine-grained analysis of ambient WiFi signals and is non-invasive compared to existing video- or wearable-based solutions. We deployed the system in real senior housing settings for a week and evaluated its performance. Our initial results show that this low-cost system can detect a variety of participants' daily activities with an accuracy of up to 76.90%.
- Soumita Ghosh, PhD Student, Virginia Commonwealth University.
  Wi-Limb: Recognizing Moving Body Limbs Using a Single WiFi Link.
  Abstract: Fine-grained analysis of wireless signals for human activity recognition has gained considerable traction recently. The unique changes that different activities cause in ambient wireless signals make it possible to recognize these fingerprints with deep learning classification methods. Most existing work considers a set of physical activities or gestures and tries to recognize each as a separate class. However, this makes the classification task harder, especially as the number of activities grows and when the activities involve movements of the same body parts. To address this, we decompose each physical activity into the limbs and body parts it involves and study a one-by-one recognition solution. We propose a Generative Adversarial Network (GAN)-based hierarchical method that not only recognizes the involved body limbs and facilitates the recognition of complex activities, but also mitigates temporal effects in the collected signal data, providing a generalized solution. Our experimental evaluation shows that the proposed hierarchical limb-recognition model can recognize unknown physical activities with a small Hamming loss using WiFi signal data from just a single transmitter-receiver link.
11:10am | 12:25pm | Panel Discussion: Wearable and Wireless Sensing for AgeTech
- VP Nguyen, Assistant Professor of Computer Science, University of Massachusetts Amherst; Director, Wireless & Sensor Systems Laboratory.
- Yaxiong Xie, Assistant Professor, Department of Computer Science and Engineering, University at Buffalo, SUNY; Director, Next Generation Mobile-Network Lab.
- Jie Xiong, Principal Researcher, Microsoft Research Lab - Asia; Associate Professor of Computer Science, University of Massachusetts Amherst.
12:25pm | 2:00pm | Lunch; poster and demo viewing
2:00pm | 2:45pm | Keynote
- Niteesh K. Choudhry, Professor of Medicine, Harvard Medical School; Professor, Department of Health Policy and Management, Harvard T.H. Chan School of Public Health; Executive Director, Center for Healthcare Delivery Sciences, Brigham and Women's Hospital.
2:45pm | 3:45pm | Oral Session 3: Cardiovascular Monitoring
- Alexander Gherardi, PhD Student, Department of Computer Science and Engineering, University at Buffalo.
  SigmoidOxy: A Light-weight Mobile Perfusion Tool for Diabetic Foot Management.
  Abstract: Diabetic foot ulcers (DFUs) represent a significant global health challenge for the elderly, with high mortality rates and complications. While imaging technologies like NIRS and hyperspectral imaging have improved wound assessment in clinical settings, their cost and large size limit their use in the home and primary care. Existing mobile solutions, on the other hand, capture only secondary biomarkers like color and wound size. This paper introduces SigmoidOxy (or σ(Oxy)), a novel smartphone-based perfusion tool for DFU management. SigmoidOxy extracts oxygenation information from standard RGB images captured by smartphone cameras, applying hyperspectral reconstruction models to infer oxygenation. We evaluate SigmoidOxy's performance on the SPECTRALPACA dataset, finding an average Pearson's R of 0.72 and an average mean absolute error of 0.239 when comparing sigmoid oxygenation signals, and analyze its sensitivity to ischemia on the DFUC2021 dataset.
- Tao Chen, Postdoctoral Associate, Department of Computer Science, University of Pittsburgh.
  Remote Cardiac Auscultation with Conventional Earphones.
  Abstract: Adults over 65 account for 80% of COVID deaths in the United States. In response to the pandemic, federal and state governments and commercial insurers are promoting video visits, through which the elderly can access specialists at home over the Internet without the risk of COVID exposure. However, current video visit practice relies only on video observation and conversation; the specialist cannot assess the patient's health condition by performing auscultation. This talk presents my work addressing this key missing component of video visits with Asclepius, a hardware-software solution that turns the patient's earphones into a stethoscope, allowing the specialist to hear the patient's fine-grained heart sounds (i.e., PCG signals) during video visits. To achieve this, we contribute a low-cost plug-in peripheral that repurposes the earphone's speaker as a microphone and uses it to capture the patient's minute PCG signals from the ear canal. Because PCG signals suffer strong attenuation and multipath effects when propagating from the heart to the ear canal, we propose efficient signal processing algorithms coupled with a data-driven approach to de-reverberate the raw PCG receptions and further correct their amplitude and frequency distortion. We implement Asclepius on a 2-layer PCB and follow an IRB protocol to evaluate its performance with 30 volunteers. Our extensive experiments show that Asclepius can effectively recover phonocardiogram (PCG) signals with different types of earphones. Objective blind testing and subjective interviews with five cardiologists further confirm the clinical efficacy and efficiency of our system. PCG signal samples, benchmark results, and cardiologist interviews can be found at: https://asclepius-system.github.io/
- Colin Barry, Chief Technology Officer, Billion Labs Inc.; PhD student, Electrical and Computer Engineering, UCSD.
  VibroBP: Oscillometric Blood Pressure Measurements on Smartphones using Vibrometric Force Estimation.
  Abstract: VibroBP is a smartphone-based solution for measuring blood pressure (BP) using the oscillometric method. Measuring BP requires measuring (1) the pressure applied to the artery and (2) the local blood volume change. VibroBP accomplishes this by performing an oscillometric measurement at the finger's digital artery, whereby a user presses down on the phone's camera with steadily increasing force. The camera captures the blood volume change using photoplethysmography. We devised a novel method, Vibrometric Force Estimation (VFE), for measuring the force applied by the finger without specialized smartphone hardware. VFE relies on the phenomenon that a vibrating object is damped when an external force is applied to it; this is recreated using the phone's own vibration motor, with the resulting damped vibration measured by the Inertial Measurement Unit (IMU). A cross-device reliability study with three smartphones of different manufacturers, shapes, and prices showed similar force estimation performance across all models. In an N = 24 validation study of the BP measurement, the smartphone technique achieved an MAE of 9.35 mmHg and 7.94 mmHg for systolic and diastolic BP, respectively, compared to an FDA-approved BP cuff. The vision for this technology is not necessarily to replace existing BP monitoring solutions, but to offer a downloadable smartphone application that could serve as a low-barrier hypertension screening tool fit for widespread adoption.
3:45pm | 4:00pm | Break
4:00pm | 4:40pm | Oral Session 4: Multimodal Assessment and Intervention
- Laureano Moro-Velázquez, Assistant Research Professor, Department of Electrical and Computer Engineering, Johns Hopkins University.
  Multimodal Evaluation of Neurodegenerative Disorders with Portable Equipment.
  Abstract: Neurodegenerative conditions significantly impact our speech, eye movements, handwriting, and gait patterns, resulting in dysarthria, changes in vocabulary, reading problems, micrographia, and festination, among many others. These signs can be measured and used for early diagnosis of disease as well as for long-term monitoring. Our research aims to leverage multimodal characterization of patients using portable equipment that captures all modalities simultaneously. The resulting unimodal and multimodal metrics can be used to detect and monitor neurodegenerative diseases such as Alzheimer's and Parkinson's disease, and to differentiate Parkinson's disease from mimicking disorders.
- Eun Kyoung Choe, Associate Professor, College of Information, University of Maryland, College Park.
  Speech in Motion: Multimodal Interactions in Personal Health Technology.
  Abstract: In this talk, I will present how multimodal interactions, such as combining speech and touch, can create inclusive and accessible health tracking experiences for individuals with diverse health tracking needs. I will first showcase how speech enables older adults to collect rich data with low burden. Next, I will introduce multimodal feedback as a way to transform stroke survivors' limb performance data into actionable insights, offering therapeutic support. By leveraging multimodal interactions, our research aims to lower the data collection burden and improve accessibility while enhancing the overall user experience.
4:40pm | 5:25pm | Panel Discussion: AgeTech Funding and Accelerators
- Partha Bhattacharyya, Program Director, Division of Behavioral and Social Research, National Institute on Aging.
- Amelia Hay, VP of Startup Programming and Investments, AgeTech Collaborative™ from AARP.
- Abid Siddiqui, Principal, AI Fund.
5:25pm | 5:30pm | Closing Remarks
- Deepak Ganesan, Professor of Computer Science, University of Massachusetts Amherst.