Oisin Mac Aodha
@oisinmacaodha.bsky.social
1.6K followers 1.2K following 61 posts
Reader in Computer Vision and Machine Learning @ School of Informatics, University of Edinburgh. https://homepages.inf.ed.ac.uk/omacaod
oisinmacaodha.bsky.social
We have some fantastic invited speakers lined up:
sites.google.com/g.harvard.ed...

In addition, we have a paper track for both novel unpublished work and previously published work (deadline: Oct 10th):
sites.google.com/g.harvard.ed...
oisinmacaodha.bsky.social
Working at the intersection of AI and Climate/Conservation?

If so, check out our upcoming workshop, which will be taking place at #EurIPS in Copenhagen in December 2025.
sites.google.com/g.harvard.ed...

@euripsconf.bsky.social
climateainordics.com
🌍 Excited to announce our Workshop on AI for Climate & Conservation (AICC) at #EurIPS2025 in Copenhagen! 🎉

📢 Call for Participation: sites.google.com/g.harvard.ed...

Confirmed speakers from Mistral AI, DeepMind, ETH Zurich, LSCE & more.

Looking forward to meeting and discussing in Copenhagen!
oisinmacaodha.bsky.social
The summary report from the US-UK Scientific Forum on Measuring Biodiversity for Addressing the Global Biodiversity Crisis is now available.

I participated earlier this year and it was really enlightening. The report explores tools, challenges, and solutions for tackling the biodiversity crisis.
nasonline.org
How do we measure life on Earth? 🌍🌱 The new summary report from the 2025 US-UK Forum on Measuring Biodiversity, hosted by the NAS and @royalsociety.org, explores tools, challenges, and solutions for tackling the #biodiversity crisis.

Read here: bit.ly/460RB7R
oisinmacaodha.bsky.social
Check out the paper for more details:

Feedforward Few-shot Species Range Estimation
arxiv.org/abs/2502.14977
Lange et al. ICML 2025
oisinmacaodha.bsky.social
This work was led by Christian Lange, with support from our amazing collaborators Max Hamilton, Elijah Cole, Alexander Shepard, Samuel Heinrich, Angela Zhu, Subhransu Maji, Grant Van Horn, and Oisin Mac Aodha.
oisinmacaodha.bsky.social
FS-SINR is efficient. At test time, it can take an arbitrary number of observations (i.e., context locations) as input, along with optional metadata, and generate a predicted range in a single forward pass of the model.
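A minimal sketch of what such a feedforward, variable-context interface might look like. All module names, dimensions, and the location encoding below are illustrative assumptions, not the released FS-SINR code (see arxiv.org/abs/2502.14977 for the real model):

```python
import torch
import torch.nn as nn

class FewShotRangeEstimator(nn.Module):
    """Toy few-shot range estimator: a transformer pools a variable number of
    context observations into one species embedding in a single forward pass."""
    def __init__(self, dim=256, n_heads=8, n_layers=4):
        super().__init__()
        self.loc_embed = nn.Linear(4, dim)  # e.g., sin/cos-encoded lon/lat
        self.range_token = nn.Parameter(torch.randn(1, 1, dim))  # learned query
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(dim, dim)

    def forward(self, context_locs, eval_locs):
        # context_locs: (B, N, 4) presence observations; N can differ per call
        # eval_locs:    (B, M, 4) locations at which to predict presence
        B = context_locs.size(0)
        tokens = torch.cat([self.range_token.expand(B, -1, -1),
                            self.loc_embed(context_locs)], dim=1)
        species_emb = self.head(self.encoder(tokens)[:, 0])        # (B, dim)
        scores = (self.loc_embed(eval_locs) * species_emb[:, None]).sum(-1)
        return torch.sigmoid(scores)                               # (B, M)

model = FewShotRangeEstimator()
probs = model(torch.randn(1, 7, 4), torch.randn(1, 1000, 4))  # 7-shot, one pass
```

The key property this illustrates: nothing is retrained for a new species. Only the context observations change, so inference is one forward pass no matter how many observations are supplied.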
oisinmacaodha.bsky.social
We obtain better performance in the few-shot setting, i.e., where only a handful of observations are available for a species. On the x-axis of this plot we vary the number of observations provided to each model for a set of species, and on the y-axis we measure the quality of the resulting range predictions.
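A hedged sketch of the evaluation sweep this plot describes: give the model k = 1, 2, ... presence observations for a species and score the predicted range against an expert map. `model`, `species_obs`, `eval_locs`, and `expert_map` are assumed inputs, and average precision is one plausible range-quality metric, not necessarily the paper's:

```python
from sklearn.metrics import average_precision_score

def few_shot_sweep(model, species_obs, eval_locs, expert_map,
                   shots=(1, 2, 5, 10, 20)):
    results = {}
    for k in shots:
        context = species_obs[:k].unsqueeze(0)             # first k observations
        preds = model(context, eval_locs.unsqueeze(0))[0]  # single forward pass
        results[k] = average_precision_score(expert_map, preds.detach().numpy())
    return results  # range quality as a function of the number of observations
```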
oisinmacaodha.bsky.social
We observe improved range prediction performance compared to existing methods, e.g., SINR from Cole et al. at ICML 2023 or LE-SINR from Hamilton et al. at NeurIPS 2024.

Top row: Gabar Goshawk
Bottom row: Black-naped Monarch
oisinmacaodha.bsky.social
In this example, we see a prediction from FS-SINR using a single presence observation as input, shown as a white dot (left). Conditioning the model with text (e.g., middle and right) can dramatically change the range predictions.
oisinmacaodha.bsky.social
FS-SINR can be conditioned on in-situ presence observations for species not seen during training, as well as on text descriptions of their ranges or images of the species, if available.
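One way such optional conditioning could be wired in, purely as illustration (the projection modules and shapes are assumptions, not the paper's implementation): extra modalities are projected into the same token space as the location observations and appended to the context sequence, so the same single forward pass handles any combination of inputs.

```python
import torch

def build_context_tokens(loc_embed, text_proj, img_proj,
                         context_locs, text_emb=None, image_emb=None):
    # context_locs: (B, N, 4) encoded presence locations; text_emb / image_emb
    # are optional precomputed embeddings from some text or image encoder.
    tokens = [loc_embed(context_locs)]               # (B, N, dim)
    if text_emb is not None:                         # e.g., a range description
        tokens.append(text_proj(text_emb)[:, None])  # (B, 1, dim)
    if image_emb is not None:                        # e.g., a species photo
        tokens.append(img_proj(image_emb)[:, None])  # (B, 1, dim)
    return torch.cat(tokens, dim=1)  # one variable-length sequence, one pass
```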
oisinmacaodha.bsky.social
In the previous video, we illustrated test-time range predictions from FS-SINR for the European Robin as we vary the number of presence observations (shown as white circles). As more observations are added, the predictions improve, becoming more similar to the expert range map (bottom right).
oisinmacaodha.bsky.social
This week at #ICML we are presenting our new work titled Feedforward Few-shot Species Range Estimation.

TLDR;
* Our model, FS-SINR, can estimate a species' range from few observations
* It does not require retraining for previously unseen species
* It can integrate text and image information
Reposted by Oisin Mac Aodha
fgvcworkshop.bsky.social
We are not missing out! FGVC12 is excited to support "FOMO25: Foundation Model Challenge for Brain MRI".
This MICCAI25 challenge is still running and there is still time to participate!

Submission deadline: August 20, 2025
Join here: fomo25.github.io
Check out the thread below👇
oisinmacaodha.bsky.social
CrossSDF: 3D Reconstruction of Thin Structures From Cross-Sections

We will be presenting our work on thin structure reconstruction at the final poster session (4-6pm) at #CVPR2025 today.

Stop by poster #457 to learn more.
oisinmacaodha.bsky.social
To explore this question, we developed a new benchmark, DepthCues, to evaluate human-like monocular depth cues in large vision models. We show that strong performance on these cues emerges in more recent, larger monocular depth models.
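A minimal sketch of one way to probe a frozen vision model for a depth cue, in the spirit of this kind of benchmark. The backbone choice, probe head, and pairwise "which patch is closer?" task format are all assumptions here, not necessarily DepthCues' actual protocol:

```python
import torch
import torch.nn as nn

# Frozen backbone; only the small probe head is trained on cue-labeled examples.
backbone = torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14').eval()
probe = nn.Linear(384 * 2, 1)  # ViT-S/14 features for a pair of patches

def probe_logit(img, idx_a, idx_b):
    # img: (1, 3, H, W) with H and W divisible by the patch size (14)
    with torch.no_grad():
        feats = backbone.forward_features(img)['x_norm_patchtokens']  # (1, P, 384)
    pair = torch.cat([feats[0, idx_a], feats[0, idx_b]], dim=-1)
    return probe(pair)  # higher logit => patch a judged closer than patch b
```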
oisinmacaodha.bsky.social
DepthCues: Evaluating Monocular Depth Perception in Large Vision Models

Do automated monocular depth estimation methods use similar visual cues to humans?

To learn more, stop by poster #405 in the evening session (17:00 to 19:00) today at #CVPR2025.
oisinmacaodha.bsky.social
MVSAnywhere: Zero-Shot Multi-View Stereo

Looking for a multi-view stereo depth estimation model which works anywhere, in any scene, with any range of depths?

If so, stop by our poster #81 today in the morning session (10:30 to 12:20) at #CVPR2025.
oisinmacaodha.bsky.social
Check out Nikolas (@tsagkas.bsky.social) and Danier's work on understanding the limitations of pre-trained visual representations for visuomotor robot learning at the Embodied AI Workshop at CVPR 2025 today.
tsagkas.bsky.social
Danier Duolikun presents our work on pre-trained visual representations for visuomotor robot learning today at #CVPR2025 in the 6th Embodied AI Workshop!
🗣️ Talk: 15:30, Room 101 D
📌 Poster: 12:00–13:30, ExHall D (#140–169)
Come say hi!

More info here: tsagkas.github.io/pvrobo/
oisinmacaodha.bsky.social
You can find Room 104E on Level 1 (i.e. street level).
oisinmacaodha.bsky.social
We have a fantastic lineup of speakers.