Literally a professor. Recruiting students to start my lab.
ML/NLP/they/she.
Feeling very grateful that reviewers and chairs appreciated concise mathematical explanations, in this age of big models.
www.biorxiv.org/content/10.1...
1/2
Session 6 on Fri 12/5 at 4:30-7:30pm, Poster 2001 ("a space odyssey")
Details on this thread by the brilliant lead author @annhuang42.bsky.social below:
arxiv.org/pdf/2410.03972
It started from a question I kept running into:
When do RNNs trained on the same task converge/diverge in their solutions?
🧵⬇️
OpenAI has since made the chatbot safer, but that comes with a tradeoff: less usage.
My guess is that this will get worse as AI tech improves. For instance, fake videos of minorities committing crimes.
Minor league umpires have a higher accuracy rate on ball-strike calls than major league umpires but
(a) they are worse on easy calls and
(b) they are worse on hard calls.
blogs.fangraphs.com/your-final-p...
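If easy and hard calls cover all pitches, the only way the overall number can come out higher while both per-category rates are lower is if minor-league umps face an easier mix of calls (a Simpson's-paradox effect). A minimal sketch with made-up, purely illustrative accuracies and pitch mixes (not figures from the linked post) shows the arithmetic:

```python
# Illustrative numbers only: how overall accuracy can be higher even when
# accuracy is lower on both easy and hard calls, because of the call mix.

def overall_accuracy(share_easy, acc_easy, acc_hard):
    """Weighted accuracy given the share of easy vs. hard calls faced."""
    return share_easy * acc_easy + (1 - share_easy) * acc_hard

# Hypothetical: minor-league umps see mostly easy calls, majors see fewer.
milb = overall_accuracy(share_easy=0.90, acc_easy=0.98, acc_hard=0.80)  # 0.962
mlb  = overall_accuracy(share_easy=0.70, acc_easy=0.99, acc_hard=0.85)  # 0.948

print(f"MiLB overall: {milb:.3f}, MLB overall: {mlb:.3f}")
# MiLB comes out higher overall despite worse accuracy in both categories.
```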
And rightfully so.
So tell me, why do we put up with it when the song is “Frosty the Snowman”?
I am particularly excited about the Olmo 3 models' precise instruction-following abilities and their strong generalization on IFBench!
Lucky to have been a part of the Olmo journey for three iterations already.
Best fully open 32B reasoning model & best 32B base model. 🧵
@profsophie.bsky.social! Also, Boston is quite nice :)