Michael Beyeler
@mbeyeler.bsky.social
1.1K followers 410 following 440 posts
👁️🧠🖥️🧪🤖 Associate Professor in @ucsb-cs.bsky.social and Psychological & Brain Sciences at @ucsantabarbara.bsky.social. PI of @bionicvisionlab.org. #BionicVision #Blindness #LowVision #VisionScience #CompNeuro #NeuroTech #NeuroAI
Posts Media Videos Starter Packs
Pinned
mbeyeler.bsky.social
Excited to share that I’ve been promoted to Associate Professor with tenure at UCSB!

Grateful to my mentors, students, and funders who shaped this journey and to @ucsantabarbara.bsky.social for giving the Bionic Vision Lab a home!

Full post: www.linkedin.com/posts/michae...
Epic collage of Bionic Vision Lab activities. From top to bottom, left to right:
A) Up-to-date group picture
B) BVL at Dr. Beyeler's Plous Award celebration (2025)
C) BVL at The Eye & The Chip (2023)
D/F) Dr. Aiwen Xu and Justin Kasowski getting hooded at the UCSB commencement ceremony
E) BVL logo cake created by Tori LeVier
G) Dr. Beyeler with symposium speakers at Optica FVM (2023)
H, I, M, N) Students presenting conference posters/talks
J) Participant scanning a food item (ominous pizza study)
K) Galen Pogoncheff in VR
L) Argus II user drawing a phosphene
O) Prof. Beyeler demoing BionicVisionXR
P) First lab hike (ca. 2021)
Q) Statue for winner of the Mac'n'Cheese competition (ca. 2022)
R) BVL at Club Vision
S) Students drifting off into the sunset on a floating couch after a hard day's work
mbeyeler.bsky.social
Good eye! You’re right, my spicy summary skipped over the nuance. Color was a free-form response, which we later binned into 4 categories for modeling. Chance level isn’t 25% but adjusted for class imbalance (majority-class frequency). Definitely preliminary re: “perception,” but it beats stimulus-only!
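For the curious, here is a minimal sketch (with made-up labels, not the study’s data) of how a majority-class baseline replaces the naive 25% chance level when the 4 color categories are imbalanced:

```python
import numpy as np

# Hypothetical free-form color reports, already binned into 4 categories
labels = np.array(["white", "white", "white", "yellow", "white", "blue", "orange"])

# With 4 balanced classes the naive chance level would be 25%; with class
# imbalance, the relevant baseline is the majority-class frequency instead.
classes, counts = np.unique(labels, return_counts=True)
chance_level = counts.max() / counts.sum()
print(f"Adjusted chance (majority-class) baseline: {chance_level:.0%}")
```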
mbeyeler.bsky.social
Thanks! I hear you, that thought has crossed my mind, too. But IP & money have already held this field back too long... This work was funded by public grants, and our philosophy is to keep data + code open so others can build on it. Still, watch us get no credit & me eat my words in 5-10 years 😅
mbeyeler.bsky.social
Together, this argues for closed-loop visual prostheses:

📡 Record neural responses
⚡ Adapt stimulation in real-time
👁️ Optimize for perceptual outcomes

This work was only possible through a tight collaboration between 3 labs across @ethz.ch, @umh.es, and @ucsantabarbara.bsky.social!
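As a rough illustration of the closed-loop idea above, here is a toy control loop; every function, shape, and constant is a stand-in, not the preprint’s actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def record_responses(stim):
    """Stand-in for neural recording: a toy linear response plus noise."""
    return 0.8 * stim + rng.normal(scale=0.05, size=stim.shape)

def perceptual_error(response, target):
    """Stand-in for a perceptual outcome metric (here just MSE to a target)."""
    return float(np.mean((response - target) ** 2))

target = np.ones(16)   # desired response pattern (hypothetical)
stim = np.zeros(16)    # stimulation parameters to adapt
lr = 0.5

for step in range(50):
    response = record_responses(stim)          # 📡 record neural responses
    err = perceptual_error(response, target)   # 👁️ evaluate perceptual outcome
    stim += lr * (target - response)           # ⚡ adapt stimulation on the fly

print(f"final error: {err:.4f}")
```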
mbeyeler.bsky.social
And here’s the kicker: 🚨

If you try to predict perception from stimulation parameters alone, you’re basically at chance.

But if you use neural responses, suddenly you can decode detection, brightness, and color with high accuracy.
Three bar charts show how well different models predict perception of detection, brightness, and color. Using only the stimulation parameters performs worst. Including brain activity recordings—especially pre-stimulus activity—makes predictions much better across all three perceptual outcomes.
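A toy sketch of that comparison, using synthetic data and off-the-shelf scikit-learn (purely illustrative, not the paper’s decoder or data): stimulation-only features carry little label information, while neural-response features decode well.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
y = rng.integers(0, 4, size=n)        # e.g., 4 binned color categories
stim_feats = rng.normal(size=(n, 6))  # amplitude, frequency, ... (uninformative here)
neural_feats = np.eye(4)[y] + rng.normal(scale=0.5, size=(n, 4))  # informative by construction

for name, X in [("stimulation only", stim_feats), ("neural responses", neural_feats)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name}: {acc:.2f} accuracy")
```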
mbeyeler.bsky.social
We pushed further: Could we make V1 produce new, arbitrary activity patterns?

Yes ... but control breaks down the farther you stray from the brain’s natural manifold.

Still, our methods required lower currents and evoked more stable percepts.
Figure showing the ability of different methods to reproduce target neural activity patterns and the limits of generating synthetic responses. Left: A target neural response (bottom-up heatmap) is compared to recorded responses produced by linear, inverse neural network, and gradient optimization methods. In this example, the inverse neural network gives the closest match (MSE 0.74) compared to linear (MSE 1.44) and gradient (MSE 1.49). Center: A bar plot of mean squared error across all methods shows inverse NN and gradient consistently outperform linear and dictionary approaches. Right: A scatterplot shows that prediction error increases with distance from the neural manifold; synthetic targets (red) have higher error than natural targets (blue), illustrating that the system best reproduces responses within the brain’s natural activity space.
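One common way to quantify “distance from the natural manifold” (the paper may define it differently) is the reconstruction error under a low-dimensional model fit to natural responses, e.g. PCA:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Toy "natural" responses living near a 5-D subspace of a 96-channel space
natural = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 96))
pca = PCA(n_components=5).fit(natural)

def manifold_distance(x):
    """Distance of a target pattern from the PCA-approximated natural manifold."""
    recon = pca.inverse_transform(pca.transform(x[None]))[0]
    return float(np.linalg.norm(x - recon))

print(manifold_distance(natural[0]))            # ~0: on-manifold target
print(manifold_distance(rng.normal(size=96)))   # larger: synthetic, off-manifold target
```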
mbeyeler.bsky.social
Prediction is only step 1. We then inverted the forward model with 2 strategies:

1️⃣ Gradient-based optimizer (precise, but slow)
2️⃣ Inverse neural net (fast, real-time)

Both shaped neural responses far better than the conventional 1-to-1 mapping.
Figure comparing methods for shaping neural activity to match a desired target response. Left: the target response is shown as a heatmap. Three methods—linear, inverse neural network, and gradient optimization—produce different stimulation patterns (top row) and recorded neural responses (bottom row). Gradient optimization and the inverse neural network yield recorded responses that more closely match the target, with much lower error (MSE 0.35 and 0.50) than the linear method (MSE 3.28). Right: a bar plot of mean squared error across methods shows both gradient and inverse NN outperform linear, dictionary, and 1-to-1 mapping, approaching the consistency of replaying the original stimulus.
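Strategy 1️⃣ in a nutshell: backpropagate through the frozen forward model to find a stimulation pattern whose predicted response matches the target. A minimal PyTorch sketch, with a random linear layer standing in for the trained network (shapes and learning rate are arbitrary):

```python
import torch

torch.manual_seed(0)

# Frozen stand-in for the trained forward model f: stimulation -> predicted response
forward_model = torch.nn.Linear(16, 64)
for p in forward_model.parameters():
    p.requires_grad_(False)

target = torch.randn(64)                     # desired neural response pattern
stim = torch.zeros(16, requires_grad=True)   # stimulation parameters to optimize
opt = torch.optim.Adam([stim], lr=0.1)

for step in range(200):
    opt.zero_grad()
    loss = torch.mean((forward_model(stim) - target) ** 2)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.4f}")
```

Strategy 2️⃣ amortizes this search: train a second network that maps target responses directly to stimulation patterns, trading a little precision for real-time speed.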
mbeyeler.bsky.social
We trained a deep neural network (“forward model”) to predict neural responses from stimulation and baseline brain state.

💡 Key insight: accounting for pre-stimulus activity drastically improved predictions across sessions.

This makes the model robust to day-to-day drift.
Figure comparing predicted and true neural responses to electrical stimulation. Left panels show two example stimulation patterns (top), predicted neural responses by the forward neural network (middle), and the actual recorded responses (bottom). The predicted responses closely match the true responses. Right panels show bar plots comparing model performance across methods. The forward neural network (last bar) achieves the lowest error (MSE) and highest explained variance (R²), significantly outperforming dictionary-based, linear, and 1-to-1 mapping approaches.
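A bare-bones sketch of such a forward model (layer sizes, electrode count, and shapes are placeholders, not the paper’s architecture); the key point is that pre-stimulus activity enters as an input alongside the stimulation pattern:

```python
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Toy forward model: (stimulation pattern, pre-stimulus activity) -> predicted response."""
    def __init__(self, n_electrodes=96, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_electrodes, hidden),  # stimulation + baseline, concatenated
            nn.ReLU(),
            nn.Linear(hidden, n_electrodes),      # predicted response per channel
        )

    def forward(self, stim, baseline):
        return self.net(torch.cat([stim, baseline], dim=-1))

model = ForwardModel()
stim = torch.rand(8, 96)       # batch of stimulation patterns
baseline = torch.rand(8, 96)   # pre-stimulus activity for the same trials
pred = model(stim, baseline)   # predicted responses, shape (8, 96)
```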
mbeyeler.bsky.social
Many in #BionicVision have tried to map stimulation → perception, but cortical responses are nonlinear and drift day to day.

So we turned to 🧠 data: >6,000 stim-response pairs over 4 months in a blind volunteer, letting a model learn the rules from the data.
Diagram of the experimental setup for measuring electrically evoked neural activity. A stimulation pattern is chosen across electrodes on a Utah array (left). Selected electrodes deliver 167 ms trains of 50 pulses at 300 Hz (middle left), sent via stimulator and amplifier into the visual cortex of a participant (middle). Neural signals are recorded before and after stimulation across all channels, producing multi-unit activity traces (MUAe). The difference between pre- and post-stimulation activity (ΔMUAe) is computed (middle right) and visualized as a heatmap across electrodes, showing localized increases in neural responses (right).
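For readers unfamiliar with the ΔMUAe measure in the diagram, here is a toy version of the computation (sampling rate, window lengths, and array shapes are placeholders):

```python
import numpy as np

fs = 1000                                      # sampling rate in Hz (assumed)
n_channels, n_samples = 96, 2000
muae = np.random.rand(n_channels, n_samples)   # multi-unit activity envelope (toy data)

stim_onset = 1000                              # sample index of stimulation onset
pre = muae[:, stim_onset - 300:stim_onset].mean(axis=1)    # mean activity before stimulation
post = muae[:, stim_onset:stim_onset + 300].mean(axis=1)   # mean activity after stimulation
delta_muae = post - pre                        # ΔMUAe per electrode, shown as a heatmap
```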
mbeyeler.bsky.social
👁️🧠 New preprint: We demonstrate the first data-driven neural control framework for a visual cortical implant in a blind human!

TL;DR Deep learning lets us synthesize efficient stimulation patterns that reliably evoke percepts, outperforming conventional calibration.

www.biorxiv.org/content/10.1...
Diagram showing three ways to control brain activity with a visual prosthesis. The goal is to match a desired pattern of brain responses. One method uses a simple one-to-one mapping, another uses an inverse neural network, and a third uses gradient optimization. Each method produces a stimulation pattern, which is tested in both computer simulations and in the brain of a blind participant with an implant. The figure shows that the neural network and gradient methods reproduce the target brain activity more accurately than the simple mapping.
Reposted by Michael Beyeler
joshuasweitz.bsky.social
NSF GRFP is out 2.5 months late w/key changes

1. 2nd year graduate students not eligible.

2. "alignment with Administration priorities"

3. Unlike prior years, they DO NOT specify the expected number of awards... that is a BIG problem.

a brief 🧵 w/receipts

www.nsf.gov/funding/oppo...
NSF Graduate Research Fellowship Program (GRFP)
www.nsf.gov
Reposted by Michael Beyeler
mariusschneider.bsky.social
🚨Our NeurIPS 2025 competition Mouse vs. AI is LIVE!

We combine a visual navigation task + large-scale mouse neural data to test what makes visual RL agents robust and brain-like.

Top teams: featured at NeurIPS + co-author our summary paper. Join the challenge!

Whitepaper: arxiv.org/abs/2509.14446
Mouse vs. AI: A Neuroethological Benchmark for Visual Robustness and Neural Alignment
Visual robustness under real-world conditions remains a critical bottleneck for modern reinforcement learning agents. In contrast, biological systems such as mice show remarkable resilience to environ...
arxiv.org
mbeyeler.bsky.social
As federal research funding faces steep cuts, UC scientists are pushing brain-computer interfaces forward: restoring speech after ALS, easing Parkinson’s symptoms, and improving bionic vision with AI (that’s us 👋 at @ucsantabarbara.bsky.social).

🧠 www.universityofcalifornia.edu/news/thrilli...
Thrilling progress in brain-computer interfaces from UC labs
UC researchers and the patients they work with are showing the world what's possible when the human mind and advanced computers meet.
www.universityofcalifornia.edu
mbeyeler.bsky.social
Curious though - many of the orgs leading this effort don’t seem to be on @bsky.app yet… Would love to see more #Blind, #Accessibility, and #DisabilityJustice voices here!
mbeyeler.bsky.social
Excited to be heading to São Paulo for the World Blindness Summit 2025! 🌎✨

Looking forward to learning from/connecting with blindness organizations from around the globe.

👉 wbu.ngo/events/world...

#WorldBlindnessSummit #Inclusion #Accessibility #Blindness #DisabilityRights
World Blindness Summit & WBU General Assembly - World Blind Union
wbu.ngo
mbeyeler.bsky.social
I appreciate the effort to improve the review process! Wondering what’s being done to address poor-quality reviews (the “too many paragraphs in Related Work”→Weak Reject ones)… e.g. #NeurIPS added strong steps to uphold review integrity (neurips.cc/Conferences/...) that #CHI2026 could learn from
Reviewer Code of Conduct - NeurIPS 2025
neurips.cc
mbeyeler.bsky.social
👁️⚡ I spoke with Dr. Jiayi Zhang about her Science paper on tellurium nanowire retinal implants—restoring vision and extending it into the infrared, no external power required.

New materials, new spectrum, new possibilities.
🔗 www.bionic-vision.org/research-spo...

#BionicVision #NeuroTech
Bionic Vision - Advancing Sight Restoration
Discover cutting-edge research, events, and insights in bionic vision and sight restoration.
www.bionic-vision.org
mbeyeler.bsky.social
At #EMBC2025? Come check out two talks from my lab in tomorrow’s Sensory Neuroprostheses session!

🗓️ Thurs July 17 · 8-10AM · Room B3 M3-4
🧠 Efficient threshold estimation
🧑🔬 Deep human-in-the-loop optimization

🔗 embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS
Program – EMBC 2025
embc.embs.org
Reposted by Michael Beyeler
bionicvisionlab.org
👁️⚡ Headed to #EMBC2025? Catch two of our lab’s talks on optimizing retinal implants!

📍 Sensory Neuroprostheses
🗓️ Thurs July 17 · 8-10AM · Room B3 M3-4
🧠 Efficient threshold estimation
🧑🔬 Deep human-in-the-loop optimization

🔗 embc.embs.org/2025/program/
#BionicVision #NeuroTech #IEEE #EMBS #Retina
Program – EMBC 2025
embc.embs.org
Reposted by Michael Beyeler
bionic-vision.org
🔬👁️ The next-gen #PRIMA chip in action: subretinal surgery training in 🇩🇪 with the Science Corps team, Prof. Yannick Le Mer, and Prof. Dr. Lars-Olof Hattenbach.

3D digital visualization + iOCT = a powerful combo for precision subretinal implant work.
#BionicVision #NeuroTech

📸 via Dr. Mahi Muqit
A group of surgeons in blue scrubs and surgical masks are performing a procedure in a clinical wetlab setting. Dr. Muqit (seated) operates under a ZEISS ARTEVO® 850 surgical microscope, with others observing and assisting nearby. A large monitor and medical equipment are visible in the background, along with surgical instruments on a sterile table. The environment is dimly lit, with overhead lights providing focused illumination on the surgical field.
A surgeon in blue scrubs, surgical gloves, and a hair cover is seated and operating under a ZEISS ARTEVO® 850 surgical microscope. He is performing a delicate procedure on a blue surgical model using forceps, while another masked assistant supports from behind. The operating table is covered with a sterile green drape, and medical tubing and instruments are visible around the setup. The environment is dimly lit, highlighting the precision of the surgical training.
A wide view of a surgical training room shows multiple surgeons in blue scrubs and masks working around a ZEISS ARTEVO® 850 digital microscope. One seated surgeon is actively operating on a subretinal surgery model, while others observe and assist. A large overhead visualization arm and a table with imaging and surgical equipment are prominently visible. The lighting is dim except for the illuminated surgical field, emphasizing the precision and focus of the wetlab environment.
Two surgeons in blue scrubs and surgical caps are seated at a ZEISS ARTEVO® 850 digital microscope in a dimly lit operating room. A large monitor displays a high-resolution OCT scan, showing detailed cross-sections of ocular tissue. A green surgical drape, tubing, and imaging equipment are visible around the operating station. The scene highlights the integration of real-time imaging in subretinal surgical training.
mbeyeler.bsky.social
Thrilled to see this one hit the presses! 🎉

One of the final gems from Dr. Justin Kasowski’s dissertation, showing how checkerboard rastering boosts perceptual clarity in simulated prosthetic vision. 👁️⚡️

#BionicVision #NeuroTech
bionicvisionlab.org
👁️🧠 New paper alert!

We show that checkerboard-style electrode activation improves perceptual clarity in simulated prosthetic vision—outperforming other patterns in both letter and motion tasks.

Less bias, more function, same safety.

🔗 doi.org/10.1088/1741...

#BionicVision #NeuroTech
Raster patterns in simulated prosthetic vision. On the left, a natural scene of a yellow car is shown, followed by its transformation into a prosthetic vision simulation using a 10×10 grid of electrodes (red dots). Below this, a zoomed-in example shows the resulting phosphene pattern. To comply with safety constraints, electrodes are divided into five spatial groups activated sequentially across ~220 milliseconds. Each row represents a different raster pattern: vertical (columns activated left to right), horizontal (rows top to bottom), checkerboard (spatially maximized separation), and random (reshuffled every five frames). For each pattern, five panels show how the scene is progressively built across the five raster groups. Vertical and horizontal patterns show strong directional streaking. Checkerboard shows more uniform activation and perceptual clarity. Random appears spatially noisy and inconsistent.
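One simple way to build a checkerboard-style raster schedule for a 10×10 array, so that no two neighboring electrodes fire in the same frame (illustrative only; the paper’s exact grouping may differ):

```python
import numpy as np

rows, cols = np.indices((10, 10))    # 10×10 electrode grid, as in the simulation
n_groups = 5                         # electrodes split into 5 raster groups (~220 ms total)

# Diagonal (checkerboard-like) assignment: horizontally, vertically, and diagonally
# adjacent electrodes always land in different groups.
groups = (rows + 2 * cols) % n_groups

# Electrodes in group g are activated together in raster frame g, then the cycle repeats.
for g in range(n_groups):
    print(f"group {g}: {np.count_nonzero(groups == g)} electrodes")
```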
Reposted by Michael Beyeler
optica-fvm.bsky.social
👁️🧠 It’s not too late to submit your abstract to Optica’s Fall Vision Meeting (FVM) 2025!
📍 Minneapolis/St Paul, Oct 2–5
🧑‍🏫 Featuring talks by Susana Marcos, Austin Roorda, and Gordon Legge
🍷 Kickoff at the CMRR!

🗓️ Abstracts due: Aug 8
🔗 www.osafallvisionmeeting.org

#VisionScience #VisionResearch
Optica Fall Vision Meeting
Oct 2-5 2025 University of Minnesota, Twin Cities, MN
www.osafallvisionmeeting.org
mbeyeler.bsky.social
I have fond memories from a summer internship there - such a unique place, both geographically & intellectually. Sad to see it go
Reposted by Michael Beyeler
bionic-vision.org
👁️🧠 Big step forward for #BionicVision: Science has submitted a CE mark application for the PRIMA retinal implant. If approved, it would be the first #NeuroTech to treat geographic atrophy, a late-stage form of age-related macular degeneration #AMD.

🔗 science.xyz/news/prima-c...
Science Submits CE Mark Application for PRIMA Retinal Implant – A Critical Step Towards Making It Available To Patients | Science Corporation
Science Corporation is a clinical-stage medical technology company.
science.xyz