A Feasibility Study Investigating the Use of Machine Learning to Analyze Facial Imaging, Voice and Spoken Language for the Capture and Classification of Cancer Pain

Overview

Background:

- Pain related to cancer can be widespread, have debilitating effects on daily life, and interfere with otherwise positive outcomes of targeted treatment.

- This study is chiefly motivated by the need to develop and validate objective methods for measuring pain using a model that is relevant, in breadth and depth, to diverse patient populations.

- Inadequate assessment and management of cancer pain can lead to functional and psychological deterioration and negatively impact quality of life.

- Research on objective pain measurement scales based on automated detection of facial expression using machine learning is expanding but has been limited to certain demographic cohorts.

- Machine learning models perform poorly when training sets lack adequate diversity, including visibly different faces and facial expressions. This gives the proposed study an opportunity to lay a guiding foundation by constructing a more general and generalizable model based on faces of varying sex and skin phototype.

Objectives:

- The primary objective of this study is to determine the feasibility of using facial recognition technology to classify cancer-related pain in a demographically diverse set of patients with cancer who are participating in a clinical trial.

Eligibility:

- Adults and children (12 years of age or older) with histologically or cytologically proven advanced malignancies who are undergoing treatment for cancer.

- Participants must have access to an internet-connected smartphone or computer with a camera and microphone and must be willing to pay any charges from their service provider/carrier associated with use of the device.

Design:

- The design is a single-institution, observational, non-interventional clinical study at the National Institutes of Health Clinical Center.

- All patients will participate in the same activities in two different settings (remotely and in-clinic) for a three-month period.

- At home, patients will use a mobile application for self-reporting of pain and will audio-visually record themselves reading a passage of text and describing how they feel. In the clinic, patients will perform the same activities with optimal lighting and videography, along with infrared video capture.

- Visual (RGB) and infrared facial images, audio signal, self-reported pain, and natural-language verbalizations of how patients feel will be captured. Audio and video data will be annotated with self-reported pain and clinical data to create a supervised machine learning model that learns to automatically detect pain (see the labeling sketch after this list).

- Care will be taken with the study sample to include a diversity of genders and skin types (a proxy for racial diversity) to establish broad applicability of the model in the clinical setting. Additionally, video recordings of patients' natural-language descriptions of their pain and how they feel will be transcribed and automatically processed against the Patient-Reported Outcomes version of the Common Terminology Criteria for Adverse Events (PRO-CTCAE) library to explore the presence and progression of self-reported adverse events (a term-matching sketch also follows below).
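
The design bullets above describe the supervised learning workflow at a high level. The following is a minimal sketch, under stated assumptions, of how annotated recordings could be paired with self-reported pain scores to train a classifier; the feature extractor, data layout, classifier choice, and pain-score bins shown here are illustrative and not specified by the protocol.

```python
# Minimal sketch: pairing per-recording features with self-reported pain
# labels to train a supervised classifier. Feature extraction from RGB/IR
# video and audio is represented by a placeholder; the 0-10 pain-score
# binning mirrors the cohort definitions (0, 1-3, 4-6, 7-10).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(recording):
    """Placeholder for facial-expression, infrared, and audio features
    (e.g., facial action units, voice prosody). Hypothetical."""
    return np.asarray(recording["features"], dtype=float)

def pain_bin(worst_pain_score):
    """Map a 0-10 self-reported 'worst pain' score to the study's bins."""
    if worst_pain_score == 0:
        return 0  # no pain
    if worst_pain_score <= 3:
        return 1  # mild
    if worst_pain_score <= 6:
        return 2  # moderate
    return 3      # severe

# Synthetic stand-in for annotated recordings (real data would come from
# the at-home app and in-clinic capture sessions).
rng = np.random.default_rng(0)
recordings = [{"features": rng.normal(size=16), "worst_pain": int(p)}
              for p in rng.integers(0, 11, size=200)]

X = np.stack([extract_features(r) for r in recordings])
y = np.array([pain_bin(r["worst_pain"]) for r in recordings])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```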

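As a rough illustration of the transcript processing described in the last design bullet, the sketch below matches transcribed patient speech against a small, hypothetical subset of symptom terms; the actual PRO-CTCAE item library and the study's matching rules are not reproduced here.

```python
# Sketch: flagging symptom mentions in a transcribed patient description.
# The term list below is a small illustrative subset, not the actual
# PRO-CTCAE item library.
import re
from collections import Counter

SYMPTOM_TERMS = {
    "pain": ["pain", "aching", "hurts", "hurting", "sore"],
    "fatigue": ["fatigue", "tired", "exhausted", "no energy"],
    "nausea": ["nausea", "nauseous", "sick to my stomach"],
    "numbness_tingling": ["numbness", "tingling", "pins and needles"],
}

def match_symptom_terms(transcript: str) -> Counter:
    """Count mentions of each symptom category in a transcript."""
    text = transcript.lower()
    counts = Counter()
    for symptom, phrases in SYMPTOM_TERMS.items():
        for phrase in phrases:
            counts[symptom] += len(
                re.findall(r"\b" + re.escape(phrase) + r"\b", text))
    return counts

example = ("My hands have been tingling all week and the aching in my back "
           "hurts more at night. I feel exhausted most days.")
print(match_symptom_terms(example))
```
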
Study Type

  • Study Type: Observational
  • Study Design
    • Time Perspective: Prospective
  • Study Primary Completion Date: March 1, 2022

Arms, Groups and Cohorts

  • 1DF/NoPain_IV-VI_Female
    • Worst pain in past month = 0; Skin Type IV-VI, Female
  • 1DM/NoPain_IV-VI_Male
    • Worst pain in past month = 0; Skin Type IV-VI, Male
  • 1LF/NoPain_I-III_Female
    • Worst pain in past month = 0; Skin Type I-III, Female
  • 1LM/NoPain_I-III_Male
    • Worst pain in past month = 0; Skin Type I-III, Male
  • 2DF/MildPain_IV-VI_Female
    • Worst pain in past month = 1-3; Skin Type IV-VI, Female
  • 2DM/MildPain_IV-VI_Male
    • Worst pain in past month = 1-3; Skin Type IV-VI, Male
  • 2LF/MildPain_I-III_Female
    • Worst pain in past month = 1-3; Skin Type I-III, Female
  • 2LM/MildPain_I-III_Male
    • Worst pain in past month = 1-3; Skin Type I-III, Male
  • 3DF/ModPain_IV-VI_Female
    • Worst pain in past month = 4-6; Skin Type IV-VI, Female
  • 3DM/ModPain_IV-VI_Male
    • Worst pain in past month = 4-6; Skin Type IV-VI, Male
  • 3LF/ModPain_I-III_Female
    • Worst pain in past month = 4-6; Skin Type I-III, Female
  • 3LM/ModPain_I-III_Male
    • Worst pain in past month = 4-6; Skin Type I-III, Male
  • 4DF/SeverePain_IV-VI_Female
    • Worst pain in past month = 7-10; Skin Type IV-VI, Female
  • 4DM/SeverePain_IV-VI_Male
    • Worst pain in past month = 7-10; Skin Type IV-VI, Male
  • 4LF/SeverePain_I-III_Female
    • Worst pain in past month = 7-10; Skin Type I-III, Female
  • 4LM/SeverePain_I-III_Male
    • Worst pain in past month = 7-10; Skin Type I-III, Male

Clinical Trial Outcome Measures

Primary Measures

  • Feasibility of using facial recognition technology to classify pain
    • Time Frame: 3 months
    • The primary objective of this study is to determine the feasibility of using facial recognition technology to classify pain in a demographically diverse set of patients with cancer who are participating in a clinical trial.

Participating in This Clinical Trial

Inclusion Criteria

  • Ability of subject to understand and willingness to sign a written informed consent document.
  • Male or female subjects (including NIH staff) aged greater than or equal to 12 years.
  • Patients with histologically or cytologically proven advanced cancer who are undergoing treatment for cancer.
  • Patient must be under active cancer treatment on a protocol at NIH.
  • Must have access to a smartphone (iPhone or Android) with a data plan and/or access to wireless internet (Wi-Fi), or a computer with a camera, microphone, and internet access, and must be willing to use their device and assume any associated charges from service providers.

Exclusion Criteria

  • Patients with brain or central nervous system (CNS) metastases. However, if a patient has completed curative radiotherapy or surgery and has remained asymptomatic for the prior three months, then he/she will be eligible to participate.
  • Patients with Parkinson's disease.
  • Known current alcohol or drug abuse.
  • Any psychiatric condition that would prohibit the understanding or rendering of informed consent.
  • Non-English speaking subjects.

Gender Eligibility: All

Minimum Age: 12 Years

Maximum Age: N/A

Are Healthy Volunteers Accepted: No

Investigator Details

  • Lead Sponsor
    • National Cancer Institute (NCI)
  • Provider of Information About this Clinical Study
    • Sponsor
  • Overall Official(s)
    • James L Gulley, M.D., Principal Investigator, National Cancer Institute (NCI)
  • Overall Contact(s)
    • Susan G Wroblewski, R.N., (240) 858-3217, wroblewskis@mail.nih.gov
