Gabriela Femenia
@gfemenia.bsky.social
1.1K followers 1.6K following 120 posts
Law Library Director and Associate Professor at Temple Beasley School of Law. Former medievalist and fan of archaic information technologies. She/her/hers
gfemenia.bsky.social
This is the advice I consistently give people about trying AI: use something you know very well already, so you can actually determine the quality of the output for yourself, and get some idea of how the sausage is being made
scalzi.com
The "AI" responses that Google gives about me and my work are consistently error-prone, which I know because I am me. If I know Google's "AI" responses give incorrect answers about things I know about, I can't trust it to give correct answers about things I don't know. So, no, I don't use it.
johngordon.bsky.social
I’m surprised you don’t use AI answer engines in research you currently do with Google
gfemenia.bsky.social
I actually bought the shirt at ALA so I can!
gfemenia.bsky.social
Shame clearly isn't enough; there should be more sanctions
emilymbender.bsky.social
When the second case of a lawyer getting caught using synthetic text extruding machines hit the news, I wondered: Don't these people gossip?? I would have thought the first case would be so embarrassing as to make things very clear.

www.404media.co/18-lawyers-c...

>>
18 Lawyers Caught Using AI Explain Why They Did It
Lawyers blame IT, family emergencies, their own poor judgment, their assistants, illness, and more.
www.404media.co
gfemenia.bsky.social
The legal market is a comparatively small part of TR's profits so I am not so sure pushback from firms would work, and law firms are very used to taking whatever "improvements" the vendors slap on anyway. We don't get a choice in academic libraries; they just turn the stuff on at will.
gfemenia.bsky.social
Some of us do, frequently, and get branded "Luddites" for our trouble.
Reposted by Gabriela Femenia
karlbode.com
Stanford researchers found that AI-generated "workslop" is actually making people less productive, in part because workers have to correct errors or decode the useful information/intent buried in a flood of auto-generated garbage:
AI-Generated “Workslop” Is Destroying Productivity
Despite a surge in generative AI use across workplaces, most companies are seeing little measurable ROI. One possible reason is because AI tools are being used to produce “workslop”—content that appea...
hbr.org
gfemenia.bsky.social
It was a whole spiral of blue-eyed soul started off by Steve Winwood. I may start regularly playing Song Algorithm Roulette on my commute instead of law and news podcasts because my mental health was much better the rest of the day.
gfemenia.bsky.social
Yesterday I had the singularly GenX satisfaction of playing one 80s song on Apple Music, and then correctly guessing what else the algorithm would supply based on shared characteristics. I may have to make a mix tape for old times' sake next.
gfemenia.bsky.social
Shocked, I tell you.
briangreenberg.net
For a while there, the news made it seem like AI was going to swallow every job whole, but the AI gold rush is hitting a wall.

⚠️ Big firm AI pilots are failing
🧠 Turns out, trust and expertise matter more than ever.

fortune.com/2025/09/10/a...
#AI #Cybersecurity #HumanSkills
'Human skills' are at a premium again now that big companies are backpedaling on error-prone AI | Fortune
AI adoption rate among large companies has dipped from a peak of 14% earlier this year to 12% as of late summer.
fortune.com
Reposted by Gabriela Femenia
audiolibrarian.bsky.social
article from today's daily journal
Reposted by Gabriela Femenia
amndw2.bsky.social
You know what else is being affected by the tariffs on parcels worth less than $800? International interlibrary loan. I'm hearing reports of libraries overseas that won't lend to the US anymore. (That's in addition to the libraries here that have shut down their ILL b/c of lost IMLS funding.)
gfemenia.bsky.social
The whole thread, and the cited Guardian article, is a must-read but this is the heart of everything.
emilymbender.bsky.social
The next time someone tells you this is just how it is/"AI" is inevitable/this junk is here to stay, please remember that the future is not yet written and we don't have to put up with this.
gfemenia.bsky.social
It is one of the tragedies of my academic life that social media was not around when I was a medievalist grad student so I could be the one to go viral with the manuscript memes
Reposted by Gabriela Femenia
elienyc.bsky.social
Here’s my write up on the most racist decision to come out of the Supreme Court in a while. The court approved of Trump’s racial profiling of Latinos with Brett Kavanaugh saying being harassed based on the color of your skin is “common sense.”
My latest in @thenation
The Supreme Court Just Gave the OK to Racial Profiling
The court’s ruling allowing ICE to resume its indiscriminate round-ups of LA’s Latino residents can only be described as one thing.
www.thenation.com
Reposted by Gabriela Femenia
wolvendamien.bsky.social
How many times, in how many contexts, from how many internal and external researchers, or from how many CEOs are people going to have to receive this message before they believe it:

"Hallucinations" are an inherent part of the large language model architecture.
www.nature.com/articles/d41...
Can researchers stop AI making up citations?
OpenAI’s GPT-5 hallucinates less than previous models do, but cutting hallucination completely might prove impossible.
www.nature.com
gfemenia.bsky.social
Do not use a spicy autocomplete tool to do a librarian's job.
cfiesler.bsky.social
But my overall thoughts on using an LLM to help with finding scholarly literature is:

(1) Assume anything in the output could be incorrect
(2) Don't assume the output is comprehensive (i.e., omg don't use an LLM for a systematic lit review)

i.e. maybe it can help, but it can't do this for you.
Reposted by Gabriela Femenia
kashhill.bsky.social
Adam Raine, 16, died from suicide in April after months on ChatGPT discussing plans to end his life. His parents have filed the first known case against OpenAI for wrongful death.

Overwhelming at times to work on this story, but here it is. My latest on AI chatbots: www.nytimes.com/2025/08/26/t...
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
www.nytimes.com
Reposted by Gabriela Femenia
babewiththepower.bsky.social
Turn Off Google AI Overview
Set "Web" as Default

👉🏼🔗: tenbluelinks.org
Reposted by Gabriela Femenia
wolvendamien.bsky.social
There is not yet wide evidence that "gen AI" actually enhances researchers' and developers' results. Some of the most headline-grabbing work that claimed it does is now actively disavowed by the institution under whose auspices it was done. That is a MASSIVE ding on the reliability of that research.
chanda.blacksky.app
This should get WIDE circulation:
MIT stating that it “has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper.”

gizmodo.com/mit-backs-aw...
MIT Backs Away From Paper Claiming Scientists Make More Discoveries with AI
The retracted paper had impressed a Nobel Prize winner in economics.
gizmodo.com
gfemenia.bsky.social
It is one of my GenXiest qualities to be suspicious of that tone and to immediately assume insincerity and ulterior motive.
faineg.bsky.social
another way in which LLMs make me feel like an alien as compared to their many fans: I find their default fawning conversational tone to be intensely, viscerally repulsive and suspicious.
toiletbones.bsky.social
for my specific style of Totally Normal Brain, a person acting weirdly fawning and inexplicably congratulatory like an llm would make me immediately skeptical of anything they say, so llms emulating that behavior as an intentional design choice is like wow i fucking hate this.