Evolving video capture from novel eCOA to validated assessments

Video clinical outcome assessments (vCOA) provide meaningful endpoints for tracking disease progression: they enable novel eCOAs at home and in hospital, increasing the frequency of assessments conducted in real-world settings.

Alongside these novel means of data capture, Aparito’s data and machine learning capabilities enable automated analysis of video recordings, assisting clinicians with new measures such as the time taken for each phase of the Timed Up-and-Go (TUG) or the number of mouthfuls per minute in a dysphagia assessment.
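
As a minimal illustration of the kind of measure such automated analysis can produce, the sketch below turns a per-frame phase labelling (assumed here to come from a video model; the phase names, the `phase_durations` helper and the 30 fps frame rate are hypothetical, not Atom5™ specifics) into seconds spent in each TUG phase.

```python
# Minimal sketch: per-frame phase labels -> seconds per TUG phase.
# Assumes a video model has already labelled each frame; 30 fps is illustrative.

FPS = 30  # assumed frame rate

def phase_durations(frame_labels, fps=FPS):
    """Return seconds spent in each phase, in order of first appearance."""
    counts = {}
    for label in frame_labels:
        counts[label] = counts.get(label, 0) + 1
    return {label: n / fps for label, n in counts.items()}

# Example: 2 s sit-to-stand, 3 s gait, 1.5 s 180-degree turn.
labels = ["sit_to_stand"] * 60 + ["gait"] * 90 + ["turn_180"] * 45
print(phase_durations(labels))
# {'sit_to_stand': 2.0, 'gait': 3.0, 'turn_180': 1.5}
```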

Scale for the Assessment and Rating of Ataxia at home (SARAhome)

The Scale for the Assessment and Rating of Ataxia (SARA) was digitised as SARAhome in collaboration with DZNE.

SARAhome takes five of the eight domains included in the hospital-based SARA scale:

  1. gait
  2. stance
  3. speech*
  4. finger-to-nose
  5. fast alternating hand movements

SARAhome is a standardised, validated assessment in Atom5™, available for blinded, randomised scoring by independent central assessors and for pose estimation analysis.

*Automatic machine-learning ratings for the speech domain are under development (Grobe-Einsler M et al., 2023).
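
To illustrate the kind of quantity pose estimation can derive for the fast alternating hand movements domain, the sketch below estimates the dominant movement frequency from a vertical wrist-keypoint trajectory. It assumes keypoints have already been extracted per frame; the FFT-based estimate and frame rate are illustrative and not the validated SARAhome scoring method.

```python
import numpy as np

FPS = 30  # assumed frame rate

def alternation_rate_hz(wrist_y, fps=FPS):
    """Estimate the dominant frequency (Hz) of a fast-alternating
    hand-movement trial from the vertical wrist keypoint trajectory."""
    y = np.asarray(wrist_y, dtype=float)
    y = y - y.mean()                                   # drop the static offset
    spectrum = np.abs(np.fft.rfft(y))
    freqs = np.fft.rfftfreq(len(y), d=1 / fps)
    return float(freqs[np.argmax(spectrum[1:]) + 1])   # skip the DC bin

# Synthetic 10 s trial with the hand alternating at 3 Hz.
t = np.arange(0, 10, 1 / FPS)
print(alternation_rate_hz(np.sin(2 * np.pi * 3 * t)))  # ≈ 3.0
```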

Video Timed Up-and-Go (vTUG)

Adapting the traditional Timed Up & Go into a standardised, home-based video assessment, we focus on each of its phases, i.e.

  1. sit to stand
  2. gait
  3. 180-degree turn
  4. gait
  5. stand to sit

rather than the traditional total time to complete alone.

vTUG is a standardised test that uses video capture and pattern recognition to enable objective, sensitive, high-frequency assessments.
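
As an illustration of how a single phase might be timed from video, the sketch below estimates sit-to-stand duration from a per-frame hip-height signal. It assumes keypoints have already been extracted by a pose-estimation model; the onset and upright thresholds (5% and 95% of the height range) and the frame rate are illustrative assumptions, not Atom5™’s validated method.

```python
import numpy as np

FPS = 30  # assumed frame rate

def sit_to_stand_time(hip_height, fps=FPS):
    """Estimate sit-to-stand duration from a per-frame hip-height signal.

    `hip_height` starts with the participant seated; timing runs from the
    first clear upward movement until the hips are nearly fully raised.
    """
    h = np.asarray(hip_height, dtype=float)
    seated, rise = h[0], h.max() - h[0]
    start = np.argmax(h > seated + 0.05 * rise)   # movement onset
    end = np.argmax(h > seated + 0.95 * rise)     # near-upright
    return (end - start) / fps

# Synthetic trajectory: seated for 1 s, rising over ~1.2 s, then standing.
t = np.arange(0, 3, 1 / FPS)
hip = np.clip((t - 1.0) / 1.2, 0.0, 1.0)
print(round(sit_to_stand_time(hip), 2))  # ≈ 1.1 s
```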

Video Hand Opening Time (vHOT)

vHOT adapts the traditional hospital-based Hand Opening Time (HOT) into a standardised home- or clinic-based assessment, available with frame-by-frame analysis and pose estimation analytics.
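
As a minimal sketch of a frame-by-frame measure, the code below estimates hand opening time from a per-frame hand-aperture signal (for example, a thumb-tip-to-index-tip distance derived from hand keypoints). The 90% open threshold, the aperture definition and the frame rate are illustrative assumptions rather than the clinical HOT protocol.

```python
import numpy as np

FPS = 30  # assumed frame rate

def hand_opening_time(aperture, open_fraction=0.9, fps=FPS):
    """Estimate hand opening time from a per-frame hand-aperture signal.

    `aperture` starts at the closed-fist frame; timing runs until the
    aperture first reaches `open_fraction` of its full range.
    """
    a = np.asarray(aperture, dtype=float)
    closed, fully_open = a[0], a.max()
    target = closed + open_fraction * (fully_open - closed)
    return int(np.argmax(a >= target)) / fps

# Synthetic trial: the hand opens steadily over roughly 1.5 s.
t = np.arange(0, 3, 1 / FPS)
aperture = np.clip(t / 1.5, 0.0, 1.0)
print(round(hand_opening_time(aperture), 2))  # ≈ 1.37 s
```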

Feeding & Eating Evaluation viDeo analysiS (FEEDS)

Feeding & Eating Evaluation viDeo analysiS utilises Atom5™ to

  1. Identify specific points on the face and hands
  2. Apply machine learning techniques to characterise age-dependent eating skill and technique
  3. Analyse chewing and swallowing coordination

Example measures include “time taken for plate-to-mouth action” (or alternative gestures) and “number of mouthfuls per minute”.
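
As one hypothetical example of how such a measure could be computed, the sketch below counts hand-to-mouth events from a per-frame distance between a hand keypoint and a mouth keypoint and converts the count to a per-minute rate. The distance units, the nearness threshold and the frame rate are assumptions for illustration, not the FEEDS algorithm itself.

```python
import numpy as np

FPS = 30  # assumed frame rate

def mouthfuls_per_minute(hand_to_mouth_dist, near_threshold=0.1, fps=FPS):
    """Count hand-to-mouth events and normalise them to a per-minute rate.

    Each contiguous run of frames with the hand closer to the mouth than
    `near_threshold` is counted as one mouthful.
    """
    d = np.asarray(hand_to_mouth_dist, dtype=float)
    near = d < near_threshold
    events = int(np.sum(near[1:] & ~near[:-1])) + int(near[0])
    minutes = len(d) / fps / 60
    return events / minutes

# Synthetic 60 s clip with 12 brief hand-to-mouth approaches.
d = np.ones(60 * FPS)
for k in range(12):
    d[k * 150:k * 150 + 15] = 0.05  # hand near the mouth for 0.5 s
print(mouthfuls_per_minute(d))      # 12.0
```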

Ready to explore video eCOA for your studies?

Download our brochure and prepare to unlock the potential of video assessments via Atom5™!