Maria Valentini
@mvalentini.bsky.social
550 followers 130 following 2 posts
computer science/cognitive science PhD student @ CU Boulder • computational psycholinguistics, NLP for education, AI ethics
Reposted by Maria Valentini
carlbergstrom.com
This is the third story I've read in a month about how AI chatbots are leading people into psychological crises.

Gift link
They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling.
www.nytimes.com
Reposted by Maria Valentini
markriedl.bsky.social
I don’t really have the energy for politics right now. So I will observe without comment:

Executive Order 14110 was revoked (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence)
mvalentini.bsky.social
We focus on automatically evaluating contextual informativeness relative to multiple target words in child-directed text, with implications for improving the automatic generation of educational stories for early childhood vocabulary intervention. Can’t wait to share, and learn about others’ work :)
Reposted by Maria Valentini
mariaa.bsky.social
1. Can you stop companies from training generative AI using your data? No, not currently.
2. Is this dataset meant for training generative AI? 🤷‍♀️ but more likely for research and statistical analysis.
3. Is it ok to duplicate and distribute people’s data without agency to opt out? I’d argue no.
danielvanstrien.bsky.social
First dataset for the new @huggingface.bsky.social @bsky.app community organisation: one-million-bluesky-posts 🦋

📊 1M public posts from Bluesky's firehose API
🔍 Includes text, metadata, and language predictions
🔬 Perfect for experimenting with ML on Bluesky 🤗

huggingface.co/datasets/blu...
bluesky-community/one-million-bluesky-posts · Datasets at Hugging Face
huggingface.co
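A minimal sketch of pulling this dataset with the Hugging Face `datasets` library, for anyone who wants to experiment; the split name and column names (e.g. a "text" field) are assumptions about the schema, not confirmed by the post.

```python
# Minimal sketch: load the one-million-bluesky-posts dataset with the
# Hugging Face `datasets` library. The split name ("train") and the
# "text" column are assumptions about the schema, not confirmed above.
from datasets import load_dataset

posts = load_dataset("bluesky-community/one-million-bluesky-posts", split="train")

print(posts)      # dataset size and column names
print(posts[0])   # inspect one post record (text, metadata, etc.)

# Example exploratory step: rough average length of post texts,
# assuming a "text" column exists.
lengths = [len(p["text"]) for p in posts.select(range(1000)) if p.get("text")]
if lengths:
    print(sum(lengths) / len(lengths))
```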
Reposted by Maria Valentini
carlbergstrom.com
So many people, CS researchers included, think that you can explore how an LLM works by simply asking it to tell you what it is doing or "thinking".

Here @jennhu.bsky.social provides an excellent illustration of how that approach fails even at the most basic level.
jennhu.bsky.social
To researchers doing LLM evaluation: prompting is *not a substitute* for direct probability measurements. Check out the camera-ready version of our work, to appear at EMNLP 2023! (w/ @rplevy.bsky.social)

Paper: arxiv.org/abs/2305.13264

Original thread: twitter.com/_jennhu/stat...
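As context for the prompting-vs-probability distinction above, here is a minimal sketch (not from the paper) of what direct probability measurement looks like in practice: scoring candidate continuations by their summed token log-probabilities under a causal LM, rather than prompting the model to report a judgment. The model name, prefix, and continuations are illustrative placeholders.

```python
# Minimal sketch of direct probability measurement: sum token
# log-probabilities of a continuation under a causal LM, instead of
# asking the model which continuation it "prefers" via a prompt.
# Model, prefix, and continuations are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def continuation_logprob(prefix: str, continuation: str) -> float:
    """Sum of log P(continuation token | preceding tokens) under the model.

    Note: tokenizing prefix + continuation together is an approximation,
    since BPE merges can shift the boundary slightly.
    """
    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    full_ids = tokenizer(prefix + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    total = 0.0
    # The token at position i is predicted by the logits at position i - 1.
    for i in range(prefix_ids.shape[1], full_ids.shape[1]):
        token_id = full_ids[0, i]
        total += log_probs[0, i - 1, token_id].item()
    return total

prefix = "The cat chased the"
for candidate in [" mouse", " democracy"]:
    print(candidate, continuation_logprob(prefix, candidate))
```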