R. James Cotton
@peabody124.bsky.social
1.1K followers 950 following 64 posts
Physiatrist, neuroscientist, gait and movement, brain and robotics enthusiast. Assistant Professor at Northwestern and Shirley Ryan AbilityLab
Pinned
peabody124.bsky.social
While rehabilitation is moving towards an era of big data, we still lack a framework to analyze this data to improve outcomes. We took a stab at this, which we call a "Causal Framework for Precision Rehabilitation." arxiv.org/abs/2411.03919
peabody124.bsky.social
More demos and code available at intelligentsensingandrehabilitation.github.io/MonocularBio...

JD did a great job creating a Gradio demo, so try it out and let us know what you think.

And here is a video of JD going on a celebratory run now that the preprint is out :)
peabody124.bsky.social
Excitingly, in addition to producing accurate kinematics, we can measure the gait deviation index from these videos. We find it is quite sensitive across a number of different clinical backgrounds, and even more responsive after neurosurgical interventions than the standard clinical outcome measure (mJOA).
Clinical Validity of Smartphone-Based Gait Deviation Index. A) Hip and knee flexion angles of clinical and control groups. B) GDI separates groups at risk of falls as determined by the Berg Balance Scale. C) GDI correlates with 10 Meter Walk Test performance (r = 0.82). D) GDI of lower-limb prosthesis users (LLPUs) and knee osteoarthritis (KOA) participants is significantly lower than that of control populations; further, GDI of transfemoral amputees is significantly lower than that of transtibial amputees. E) GDI collected in clinical settings correlates (r = 0.47) with the mJOA, a clinically used ordinal questionnaire.
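For anyone less familiar with the gait deviation index: it summarizes how far a walk's kinematics fall from a control distribution, scaled so controls average 100. A minimal sketch of the standard computation (after Schwartz & Rozumalski 2008; illustrative, not our pipeline code, and the array shapes are the original paper's convention):

import numpy as np

def gait_deviation_index(gait_vector, control_vectors, n_features=15):
    # gait_vector:     concatenated kinematic traces for one walk (the
    #                  original uses 9 angles x 51 gait-cycle points = 459)
    # control_vectors: (n_controls, 459) gait vectors from controls
    # Reduced basis: leading right singular vectors of the control data
    _, _, vt = np.linalg.svd(control_vectors, full_matrices=False)
    basis = vt[:n_features].T                  # (459, n_features)

    subj = gait_vector @ basis
    ctrl = control_vectors @ basis
    ctrl_mean = ctrl.mean(axis=0)

    # ln distance from the average control gait, z-scored against controls
    raw = np.log(np.linalg.norm(subj - ctrl_mean))
    ctrl_raw = np.log(np.linalg.norm(ctrl - ctrl_mean, axis=1))
    z = (raw - ctrl_raw.mean()) / ctrl_raw.std()

    # Controls get mean 100 and SD 10; lower values = more deviant gait
    return 100 - 10 * z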
peabody124.bsky.social
Central to this was extending our end-to-end differentiable biomechanics approach to fit both 2D and 3D keypoints measured from images. The fitting can also account for smartphone rotation measured by our Portable Biomechanics Platform, which makes this easy to integrate into clinical workflows.
Quality Measures of Single Camera Fitting. A) Kinematic traces from smartphone video (red/blue) compared to ground truth (gray dashed) during walking. B) Joint angle errors across populations for select lower limb angles. n denotes the number of unique individuals in each cohort and v denotes the number of total videos for that cohort. C) Select joint angle errors with respect to camera view angle show that sagittal plane angles have the lowest error with sagittal camera views, and frontal angles have the lowest error with frontal views. D) Pelvis translation (RTE) extracted from handheld smartphone video compared to ground truth during functional gait assessments.
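To give a flavor of what fitting 2D keypoints while accounting for phone rotation looks like, here is a hedged PyTorch sketch (the camera model, names, and weighting are illustrative assumptions, not our released implementation):

import torch
import torch.nn.functional as F

def reprojection_loss(joints_3d, keypoints_2d, conf, R_cam, t_cam, K):
    # joints_3d:    (J, 3) model joint centers from forward kinematics, m
    # keypoints_2d: (J, 2) detected keypoints, pixels
    # conf:         (J,)   detector confidences to down-weight bad points
    # R_cam, t_cam: per-frame camera rotation (3, 3) and translation (3,);
    #               the rotation is where smartphone IMU orientation enters
    # K:            (3, 3) camera intrinsic matrix
    cam = joints_3d @ R_cam.T + t_cam          # world -> camera frame
    proj = cam @ K.T
    proj = proj[:, :2] / proj[:, 2:3]          # pinhole projection, pixels
    per_joint = F.huber_loss(proj, keypoints_2d, reduction="none").sum(-1)
    return (conf * per_joint).mean()

# Example call with stand-in data (J = 17 keypoints, depth kept positive)
J = 17
loss = reprojection_loss(
    torch.randn(J, 3) + torch.tensor([0.0, 0.0, 3.0]),
    torch.rand(J, 2) * 1000, torch.rand(J), torch.eye(3), torch.zeros(3),
    torch.tensor([[1500.0, 0.0, 540.0], [0.0, 1500.0, 960.0], [0.0, 0.0, 1.0]]))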
peabody124.bsky.social
We developed a novel approach to fitting biomechanics from smartphone video that produces kinematic reconstructions within a few degrees and has been validated across a wide range of activities and clinical backgrounds.
Methods Overview. We introduce a method for biomechanically grounded movement analysis in clinical settings using a handheld smartphone. A) Researchers held a smartphone (optionally with a gimbal) while following a participant walking. Our system has no specific requirements regarding viewing angle, distance to subject, or therapist assistance. B) Recorded smartphone video and optional wearable sensor data are stored in the cloud and processed using PosePipe, an open-source package implementing computer vision models for person tracking and keypoint detection. C) To reconstruct movement, we represent movement as a function that outputs joint angles, which, combined with body scaling parameters and evaluated through forward kinematics, generate a posed biomechanical model in 3D space. This untrained model is compared to video-extracted joint locations and optionally smartphone sensor data to compute a loss. This loss guides backpropagation to iteratively refine both the kinematic trajectory and body scale. D) Initially, the representation lacks knowledge of the person's movements and scale (e.g., height, limb proportions), but after optimization, it typically tracks joint locations within 15 mm in 3D and 5 pixels in 2D.
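As a toy, self-contained illustration of panels C and D (not our actual model), here is the same idea on a 2-link planar "leg" in PyTorch: gradients through forward kinematics recover both the angle trajectory and the segment lengths from keypoints.

import torch

def forward_kinematics(angles, lengths):
    # angles: (T, 2) hip/knee flexion; lengths: (2,) thigh/shank lengths
    # Returns (T, 3, 2): hip, knee, ankle positions per frame
    hip = torch.zeros(angles.shape[0], 2)
    knee = hip + lengths[0] * torch.stack(
        [torch.sin(angles[:, 0]), -torch.cos(angles[:, 0])], dim=-1)
    ankle = knee + lengths[1] * torch.stack(
        [torch.sin(angles[:, 0] + angles[:, 1]),
         -torch.cos(angles[:, 0] + angles[:, 1])], dim=-1)
    return torch.stack([hip, knee, ankle], dim=1)

# Synthetic "video" keypoints from a hidden ground-truth trajectory and scale
T = 100
t = torch.linspace(0, 2 * torch.pi, T)
true_angles = torch.stack([0.4 * torch.sin(t), 0.6 + 0.5 * torch.cos(t)], dim=-1)
keypoints = forward_kinematics(true_angles, torch.tensor([0.45, 0.40]))

# The representation starts uninformed about the movement and body scale
angles = torch.zeros(T, 2, requires_grad=True)
lengths = torch.ones(2, requires_grad=True)
opt = torch.optim.Adam([angles, lengths], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = ((forward_kinematics(angles, lengths) - keypoints) ** 2).mean()
    loss.backward()                 # gradients flow through the kinematics
    opt.step()

print(lengths.detach())             # ~ [0.45, 0.40] after optimization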
peabody124.bsky.social
Enjoyed presenting on "The Good, The Bad, and the Ugly: AI for SCI Clinicians" with @ryansolinskymd.bsky.social and @josezariffa.bsky.social. Great enthusiasm from the crowd on the topic, lively discussion, and nice follow-up from the precourse.
bsky.app/profile/ryan...
ryansolinskymd.bsky.social
AI integration in spinal cord injury medicine precourse at the ASIA2025 meeting. Led by @peabody124.bsky.social and Dr. Sarah Brueningk. Learning lessons from other successful examples in Cancer, Alzheimer’s, Cardiology.
peabody124.bsky.social
However, there is more work to do actually validating these against EMG recordings (we have these in many of our trials from our wearable sensor platform), and I suspect it will take a lot of work to really tune things up.

Still, finding clinically sensible patterns is a promising first step.
peabody124.bsky.social
The imitation learning policy is trained to replicate the kinematics from 30+ hours of markerless motion capture by driving a muscle-driven model, with some regularization on muscle activation. Through training, it learns muscle activation patterns that reproduce those kinematics.
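A minimal sketch of what such a per-step objective can look like (illustrative assumptions, not KinTwin's exact reward terms): reward tracking of the reference kinematics while penalizing squared muscle activation, so efficient activation patterns emerge.

import numpy as np

def imitation_reward(qpos, qpos_ref, act, w_track=1.0, w_act=0.01):
    # qpos, qpos_ref: (n_dof,) simulated and reference joint angles
    # act:            (n_muscles,) muscle activations in [0, 1]
    tracking = np.exp(-w_track * np.sum((qpos - qpos_ref) ** 2))
    effort = w_act * np.sum(act ** 2)   # regularization on muscle activation
    return tracking - effort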
peabody124.bsky.social
Looking forward, we hope to combine things like this.

E.g., using BiomechGPT to understand user requests about movement and then running simulations in the physics simulator via imitation learning.

Either way, we're really starting to see promise for foundation models in biomechanics.

Stay tuned :)
peabody124.bsky.social
Particularly exciting was evidence of positive transfer learning as we increased the set of tasks it is trained on.

Of course, it also makes lots of mistakes (the person in that video is using a crutch!). Lots of work to do, and we are just starting to explore the opportunities from this approach.
peabody124.bsky.social
We were super excited to see how well this performed across a range of tasks, even with fairly sparse annotation.

It's doing a great job at things like activity classification, which can be rather challenging for impaired movements, and more subtle things like inferring likely diagnoses.
peabody124.bsky.social
The next paper is BiomechGPT arxiv.org/abs/2505.18465 with @antihebbiann.bsky.social and Ruize Yang, which trains a language model to be fluent in tokenized movement sequences. This draws inspiration from MotionGPT but focuses on benchmarking performance on clinically meaningful tasks.
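"Tokenized movement sequences" means the continuous kinematics are quantized into a discrete vocabulary the language model can read alongside text. A minimal sketch of the idea (approaches like MotionGPT learn the codebook with a VQ-VAE; this is just nearest-neighbor quantization for illustration):

import numpy as np

def tokenize_motion(joint_angles, codebook):
    # joint_angles: (T, D) kinematic trajectory
    # codebook:     (K, D) learned code vectors; each frame maps to its
    #               nearest code, yielding a discrete motion vocabulary
    dists = np.linalg.norm(joint_angles[:, None, :] - codebook[None], axis=-1)
    return dists.argmin(axis=1)        # (T,) token ids, interleavable with text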
peabody124.bsky.social
Shoutout to recent related work from @trackingskills.bsky.social group arxiv.org/abs/2503.14637

Great to see growing enthusiasm in this space
bsky.app/profile/trac...

And shoutout to MyoSuite for pushing the neuromuscular modeling in MuJoCo
peabody124.bsky.social
Here is another example. It also captures some imperfections, like little foot slips, that we want to improve.
peabody124.bsky.social
Since then, we've tuned it up to handle anthropometric and muscle scaling. Still lots of work to do further tuning this: there are many things we aren't yet scaling, such as mass and inertia, and we aren't yet optimizing w.r.t. the EMG data we have from our wearable sensors.
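For context, one common heuristic for the muscle side of this (roughly what OpenSim-style scaling does; our implementation details may differ) stretches each muscle-tendon unit with its path while preserving the fiber-to-tendon proportion:

def scale_muscle_tendon(opt_fiber_len, tendon_slack_len, path_scale):
    # Scale a muscle-tendon unit's resting properties by the change in its
    # path length, preserving the fiber/tendon ratio. Mass and inertia
    # (which, as noted above, we don't yet scale) are a separate problem.
    total = opt_fiber_len + tendon_slack_len
    fiber_frac = opt_fiber_len / total
    new_total = total * path_scale
    return new_total * fiber_frac, new_total * (1 - fiber_frac)

# e.g. a 5% longer muscle path
print(scale_muscle_tendon(0.10, 0.25, 1.05))   # -> (0.105, 0.2625)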
peabody124.bsky.social
The first is KinTwin arxiv.org/abs/2505.13436, which trains torque-driven and muscle-driven policies to replicate the movements of intact and impaired gait. It detects clinically meaningful features like propulsion asymmetries and muscle timing.

Teaser from a few months back: bsky.app/profile/peab...
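One of those clinically meaningful features, propulsion asymmetry, can be summarized very simply. A hedged sketch of one common definition (not necessarily KinTwin's exact metric) compares the propulsive impulse between legs:

import numpy as np

def propulsion_asymmetry(grf_ap_left, grf_ap_right, dt):
    # grf_ap_*: anterior-posterior ground reaction force traces (N) for each
    # leg over matched strides, positive = propulsive; dt: sample period (s)
    impulse_l = np.clip(grf_ap_left, 0, None).sum() * dt   # impulse, N*s
    impulse_r = np.clip(grf_ap_right, 0, None).sum() * dt
    return (impulse_l - impulse_r) / (impulse_l + impulse_r)  # 0 = symmetric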
peabody124.bsky.social
Over the last few years we have been developing methods for markerless motion capture of biomechanics and getting them into clinics, such as at @abilitylab.bsky.social.

We are now developing foundation models from these large datasets and testing what this enables. Two recent preprints:
peabody124.bsky.social
It was also the inaugural meeting of Julius Dewald and Bob Sainburg's Society for Neuromechanics in Rehabilitation (SoNMiR), and it was great to present on biomechanics in rehabilitation and arxiv.org/abs/2411.03919. Very exciting to see this society bringing this community together.
SoNMIR Program – Rehabweek 2025 – Chicago
rehabweek.org
peabody124.bsky.social
@jdpeiffer.bsky.social and Tim Unger also won second prize for their ICORR talk, "Differentiable Biomechanics for Markerless Motion Capture in Upper Limb Stroke Rehabilitation: A Comparison with Optical Motion Capture" arxiv.org/abs/2411.14992.
peabody124.bsky.social
With Kyle Embry and the @abilitylab.bsky.social C-STAR team we also organized a half-day workshop "From Motion to Meaning: AI-Enabled Biomechanics for Rehabilitation" showcasing work from Georgios Pavlakos, Eni Halilaj, Vikash Kumar, Chris Awai, and Pouyan Firouzabadi.
peabody124.bsky.social
With Dailyn Despradel, Derek Kamper, @marcslutzky.bsky.social, and @dougweberlab.bsky.social we organized a workshop on EMG biofeedback. Very much enjoyed the engaged discussion on how to disseminate these technologies into the real world.
peabody124.bsky.social
Finally recovered from RehabWeek in Chicago, which was fantastic. It was a very successful week for the Intelligent Sensing and Rehabilitation lab at @abilitylab.bsky.social.

(p.s. apologies to anyone below on Bluesky whom I couldn't find/failed to tag)
peabody124.bsky.social
Looking forward to presenting on what we can do with large-scale biomechanics data in rehabilitation in the SoNMiR #RehabWeek 2025 session this afternoon! @abilitylab.bsky.social
Screenshot from a chat interface for biomechanics