Mark Dredze
@mdredze.bsky.social
2.7K followers 380 following 66 posts
John C Malone Professor at Johns Hopkins Computer Science, Center for Language and Speech Processing, Malone Center for Engineering in Healthcare. Part-time: Bloomberg LP. #nlproc
Reposted by Mark Dredze
jhucompsci.bsky.social
Congratulations to CS faculty @mdredze.bsky.social, Jason Eisner, Peter Kazanzides, and @tom-lippincott.bsky.social
on their @jhu.edu Nexus Awards! Learn more about their funded projects here: www.cs.jhu.edu/news/compute...
Headshots of Mark Dredze, Jason Eisner, Peter Kazanzides, and Tom Lippincott.
Reposted by Mark Dredze
williamjurayj.bsky.social
🚨 You are only evaluating a slice of your test-time scaling model's performance! 🚨

📈 We consider how models’ confidence in their answers changes as test-time compute increases. Reasoning longer helps models answer more confidently!

📝: arxiv.org/abs/2502.13962
mdredze.bsky.social
I know I can improve my ARR reviews, but there really is no need for name calling. 😁
mdredze.bsky.social
Helpful
Insightful
Probing
Valuable
Thoughtful
Illuminating
Constructive

In author feedback, these are synonyms for "we hate your review."
mdredze.bsky.social
Do reviewers purposely write confusing reviews with typos to demonstrate that the review wasn't written by an LLM?
mdredze.bsky.social
Golden idea for an NLP paper: a group of llamas is called a "cria herd".

That would make a great name for an LLM method, model, or paper.

Just remember to acknowledge me in your paper.

You're welcome.
mdredze.bsky.social
Idea for GenAI app: rewrite click bait headlines to normal headlines in the browser.

Input: you’ll never guess this one company organizing the best deals of the year

Output: Amazon has a modest sale on phone chargers
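A minimal sketch of what that browser helper could look like, assuming a generic injected `call_llm` function (a placeholder, not a real API):

```python
def declickbait(headline: str, call_llm) -> str:
    """Rewrite a clickbait headline as a plain, factual one via an injected LLM call."""
    prompt = (
        "Rewrite the following clickbait headline as a plain, factual headline. "
        "State the key fact directly and do not exaggerate.\n\n"
        "Headline: " + headline + "\nRewritten:"
    )
    return call_llm(prompt).strip()

# Intended behavior (not a real model output):
# declickbait("you'll never guess this one company organizing the best deals of the year", llm)
# -> "Amazon has a modest sale on phone chargers"
```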
mdredze.bsky.social
The ARR submission checklist is already pretty extensive, but I suggest we add an additional question:

"I certify that I know the difference between \citet and \citep."
mdredze.bsky.social
ARR: Reviews are due today.

Me:
mdredze.bsky.social
I feel seen. This is why I always access my API keys from my laptop.
mdredze.bsky.social
Do you have any of those fortune cookies that mock academics?

Sure!
mdredze.bsky.social
Starting a new year and reflecting on how lucky I am to work at @hopkinsengineer.bsky.social with amazing people @jhucompsci.bsky.social @jhuclsp.bsky.social.

I was promoted to full professor in 2023, and my students presented me with this amazing poster of current and former PhD students.
mdredze.bsky.social
Examining the generated QA pairs, you can really see the difference. Our generations (bottom) look harder and more interesting.

Want to try our strategy for your own synthetic generation task? Check out our paper, being presented at #ML4H2024.
arxiv.org/abs/2412.04573
mdredze.bsky.social
Training a Clinical QA system on our data gives big improvements, whether we generate data from Llama or GPT-4o. The improvements hold both for F1 and for any overlap between the extracted and true answers.
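For context, extractive QA is usually scored with SQuAD-style token-level F1; a minimal sketch of that metric (an assumption about the exact scoring used here):

```python
from collections import Counter

def token_f1(prediction: str, truth: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.lower().split()
    true_tokens = truth.lower().split()
    common = Counter(pred_tokens) & Counter(true_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(true_tokens)
    return 2 * precision * recall / (precision + recall)

# "Any overlap" is then just token_f1(prediction, truth) > 0.
```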
mdredze.bsky.social
The generated pair has a lot of advantages: it doesn't use the same language as the report, it includes harder questions, and the answers are sometimes not in the report (unanswerable questions). The result? Harder, more diverse, and more realistic QA pairs.
mdredze.bsky.social
Second, we use a summarize-then-generate strategy. The LLM first summarizes a given clinical record in a structured format. The summary keeps the key points but loses the details, such as specific terminology and content. We then use the summary to generate a new QA pair.
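A rough sketch of the two-step idea as described above; the prompts and the `call_llm` helper are placeholders, not the paper's actual implementation:

```python
def summarize_then_generate(clinical_record: str, call_llm) -> str:
    """Two-step QA generation: summarize the record, then write a QA pair from the summary."""
    # Step 1: a structured summary that keeps the key findings but drops the
    # report's exact wording and specific terminology.
    summary = call_llm(
        "Summarize this clinical record as a short, structured list of key findings, "
        "without copying its exact wording:\n\n" + clinical_record
    )
    # Step 2: generate the QA pair from the summary rather than the original report,
    # so the question can't simply reuse the report's language.
    return call_llm(
        "Based only on the following summary, write one question a clinician might ask "
        "about this patient, along with its answer:\n\n" + summary
    )
```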
mdredze.bsky.social
We explore two strategies. First, we craft instructions to encourage QA diversity. We formulate these as constraints on the answers to the questions. It helps, but we need more.
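Illustratively, such answer constraints can simply be folded into the generation prompt; the wording below is hypothetical, not the paper's prompts:

```python
# Hypothetical answer-side constraints appended to the QA-generation prompt.
ANSWER_CONSTRAINTS = [
    "Each answer must be a short span from the record.",
    "Do not reuse an answer span from an earlier question.",
    "Vary the answer types: dates, medications, lab values, diagnoses.",
]

def constrained_prompt(clinical_record: str) -> str:
    """Build a QA-generation prompt that encourages diverse answers via explicit constraints."""
    return (
        "Write diverse question-answer pairs about this clinical record.\n"
        + "\n".join("- " + c for c in ANSWER_CONSTRAINTS)
        + "\n\nRecord:\n" + clinical_record
    )
```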
mdredze.bsky.social
We can ask an LLM to write QA pairs, but they turn out to be too easy and repetitive. They don't come close to what you can get with real data. We need more diverse data! Typical methods (e.g. annealing) don't work. What can we do?
mdredze.bsky.social
Paper at #ML4H2024!

Clinical QA can help doctors find critical information in patient records. But where do we get training data for these systems? Generating this data from an LLM is hard. 🧵
mdredze.bsky.social
Takeaways: If you can fine-tune a model on a specific clinical domain, that's great. If you can't, you should probably use models that are better overall, even if they aren't trained on clinical data.

Many more details in the paper!
arxiv.org/abs/2412.05845
Are Clinical T5 Models Better for Clinical Text?
mdredze.bsky.social
It turns out that when you have just a little supervised data, the models trained on more data and tasks, even when out of domain, do BETTER on the new clinical domain.
mdredze.bsky.social
Maybe the real advantage of domain-tuned models lies in the low-resource setting. With lots of supervised data, an out-of-domain model can do well. What about with just a few training examples?