Cornelius Wolff
@cowolff.bsky.social
59 followers 250 following 10 posts
PhD Student at the TRL Lab at CWI, Amsterdam
Reposted by Cornelius Wolff
madelonhulsebos.bsky.social
Excited to be at ACL! Join us at the Table Representation Learning workshop tomorrow in room 2.15 to talk about tables and AI.

We also present a paper showing the sensitivity of LLMs' tabular reasoning to, e.g., missing values and duplicates, by @cowolff.bsky.social at 16:50: arxiv.org/abs/2505.07453
cowolff.bsky.social
🧪 Paper link: arxiv.org/pdf/2505.07453
📅 I’m presenting Thursday, July 31st at the TRL workshop

I’ll be around all week, so if you’re also interested in tabular learning/understanding and insight retrieval, feel free to reach out to me. I would be happy to connect! (4/4)
cowolff.bsky.social
Turns out:
🔹 BLEU/BERTScore? Not reliable for evaluating tabular QA capabilities
🔹 LLMs often struggle with missing values, duplicates, or structural alterations
🔹 We propose an LLM-as-a-judge method for a more realistic evaluation of LLMs' tabular reasoning capabilities (3/4)
cowolff.bsky.social
The paper's called:
"How well do LLMs reason over tabular data, really?" 📊

We dig into two important questions:
1️⃣ Are general-purpose LLMs robust to real-world tables?
2️⃣ How should we actually evaluate them? (2/4)
cowolff.bsky.social
Headed to Vienna for ACL and the 4th Tabular Representation Learning Workshop! 🇦🇹
Super excited to be presenting my first PhD paper there 📄 (1/4)
cowolff.bsky.social
Huge thanks to @madelonhulsebos.bsky.social for all the support on getting this work off the ground on such short notice after I started my PhD 🙏
And I am excited to keep building on this research!
📄 Paper link: arxiv.org/pdf/2505.07453
cowolff.bsky.social
What did we find?
Even on simple tasks like look-up, LLM performance drops significantly as table size increases.
And even on smaller tables, results leave plenty of room for improvement, highlighting major gaps in LLMs' understanding of tabular data and the need for more research on this topic.
cowolff.bsky.social
Furthermore, we extended the existing TQA-Benchmark with common data perturbations such as missing values, duplicates, and column shuffling.
Using this dataset and the LLM-as-a-judge, we tested response accuracy on basic reasoning tasks like look-ups, subtractions, and averages.
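For illustration, here is a minimal sketch of what such perturbations could look like on a table. This is not the benchmark's actual code; the function name, perturbation rates, and pandas-based approach are illustrative assumptions.

```python
# Illustrative sketch only -- not the code used for the extended TQA-Benchmark.
import numpy as np
import pandas as pd

def perturb_table(df: pd.DataFrame, seed: int = 0,
                  missing_rate: float = 0.1, duplicate_rate: float = 0.1) -> pd.DataFrame:
    """Apply the three perturbation types mentioned above:
    missing values, duplicated rows, and shuffled column order."""
    rng = np.random.default_rng(seed)
    out = df.copy()

    # 1) Missing values: blank out a random fraction of cells.
    mask = rng.random(out.shape) < missing_rate
    out = out.mask(mask)

    # 2) Duplicates: re-insert a random subset of rows.
    n_dup = max(1, int(len(out) * duplicate_rate))
    dup_rows = out.sample(n=n_dup, random_state=seed)
    out = pd.concat([out, dup_rows], ignore_index=True)

    # 3) Column shuffling: permute the column order.
    out = out[rng.permutation(out.columns.to_list())]
    return out
```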
cowolff.bsky.social
But just measuring whether an answer from an LLM is actually correct turned out to be surprisingly tricky.
🔍 The standard metrics? BLEU, BERTScore?
They fail to capture the correctness of the outputs in this setting.
So we introduced an alternative:
An LLM-as-a-judge to assess responses more reliably.
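For readers unfamiliar with the pattern, a minimal sketch of the general LLM-as-a-judge idea follows. It is not the paper's exact prompt or setup, and `call_llm` is a hypothetical stand-in for whatever chat-model API is used.

```python
# Minimal sketch of the general LLM-as-a-judge idea -- not the paper's exact setup.

JUDGE_PROMPT = """You are grading a tabular question-answering system.
Question: {question}
Reference answer: {reference}
Model answer: {prediction}
Does the model answer convey the same result as the reference answer?
Reply with exactly one word: CORRECT or INCORRECT."""

def call_llm(prompt: str) -> str:
    """Hypothetical helper: send `prompt` to a chat model and return its reply."""
    raise NotImplementedError("plug in your model API here")

def judge(question: str, reference: str, prediction: str) -> bool:
    """Return True if the judge model deems the prediction correct."""
    reply = call_llm(JUDGE_PROMPT.format(
        question=question, reference=reference, prediction=prediction))
    return reply.strip().upper().startswith("CORRECT")

def accuracy(examples) -> float:
    """Accuracy over a list of (question, reference, prediction) triples."""
    results = [judge(q, ref, pred) for q, ref, pred in examples]
    return sum(results) / len(results)
```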
cowolff.bsky.social
Tables are everywhere, and so are LLMs these days!
But what happens when the two meet? Do LLMs actually understand tables when they encounter them, for example in a RAG pipeline?
Most benchmarks don’t test this well. So we decided to dig deeper.👇
cowolff.bsky.social
"Can LLMs really reason over tabular data, really?"
That’s the title and central question of my first paper in my new role as a PhD student, which has been accepted to the 4th Table Representation Learning Workshop @ ACL 2025! arxiv.org/pdf/2505.07453

🧵Here’s what we found:
Reposted by Cornelius Wolff
madelonhulsebos.bsky.social
Eager to contribute to democratizing insights from tabular data? We have 2 new PhD openings! ✨

1) Fundamental Techniques in Table Representation Learning
2) Reliable AI-powered Tabular Data Analysis Systems

⏰ Apply by: 30 June 2025
📅 Start: Fall/Winter 2025
🔗 Info: trl-lab.github.io/open-positions
Reposted by Cornelius Wolff
madelonhulsebos.bsky.social
Excited to share the new monthly Table Representation Learning (TRL) Seminar under the ELLIS Amsterdam TRL research theme! It will recur every 2nd Friday.

Who: Marine Le Morvan, Inria (in-person)
When: Friday 11 April 4-5pm (+drinks)
Where: L3.36 Lab42 Science Park / Zoom

trl-lab.github.io/trl-seminar/
Talk: "TabICL: A Tabular Foundation Model for In-Context Learning on Large Data" by Marine Le Morvan