p. sampson
@pdotsamp.bsky.social
phd student-worker at penn, nsf grfp fellow, spelman alum. autonomy and identity in algorithmic systems.
they/she. 🧋 🍑 🧑🏾‍💻
https://psampson.net
Reposted by p. sampson
At the end of the day, the Black Lives Matter era was about whether people should be killed in the street, and lots of people decided yeah and put those little blue flags on their cars. It spread to everyone because it stopped for no one.
January 24, 2026 at 4:45 PM
Reposted by p. sampson
I don’t think people quite understand that the occupation by ICE in the Twin Cities has more or less erased non-white people from public life. They cannot safely exist in any space accessible to the public and moving between private places leaves folks vulnerable to kidnapping by armed masked goons.
January 16, 2026 at 10:15 PM
Reposted by p. sampson
AI companies should be extremely careful not to repeat the many mistakes that were made in, and the harms that resulted from, the adoption of personalized ads on social media and around the web. (4/5)
January 16, 2026 at 7:50 PM
truly torpedoing my turducken to see the critical theories (+ the scholars that put such work forth) that i take immense pride in drawing on in my work and am dead specific in describing simply abstracted away by an ai summary i cannot edit!!!!!! in the main archive for my contributions!!!!!!
December 17, 2025 at 7:08 PM
Reposted by p. sampson
(2) Give authors of papers the ability to edit the AI-generated summary. Ideally directly, but if that's not possible, to easily request a change.

(3) Give authors the ability to opt out of having an AI summary on their paper entirely. (Or require an opt in.)
December 16, 2025 at 11:51 PM
Reposted by p. sampson
As someone who has ::checks notes:: 117 publications in the ACM Digital Library, can confirm I was not asked nor told about this. It doesn't appear on ALL papers, and I can't intuit a rhyme or reason to which ones.

Maybe I shouldn't be, but I'm legitimately surprised there's not an opt out option.
December 16, 2025 at 11:39 PM
Reposted by p. sampson
I wrote this brief talk on why “augmenting diversity” with LLMs is empirically unsubstantiable, conceptually flawed, and epistemically harmful, and it's a nice surprise to see the organisers have made it public

synthetic-data-workshop.github.io/papers/13.pdf
December 16, 2025 at 10:57 AM
becoming one of those computer scientists who won't stop yapping about my analog hobby of choice.

(canon ae-1 · kodak ultramax 400)
September 25, 2025 at 5:01 PM
Reposted by p. sampson
the non-ironic, casual deployment of the white usage of "woke" in news articles and opinion pieces makes me want to scream
July 12, 2025 at 5:03 AM
Reposted by p. sampson
I've always felt somewhat uncomfortable with the framing of AI risk around the actions of "malicious actors". Because sometimes the malicious actor is the company that built the thing. And the model is causing harm because it was successfully steered into doing what its creators wanted it to do.
July 9, 2025 at 5:38 PM
the research homies of all time are cooking btw 👇🏽👇🏽👇🏽
I'm incredibly fortunate to have had the opportunity to work with this team. Truly one of the best collaborative experiences I have had to date (special s/o to our MVP @mkgerchick.bsky.social for leading this)!

Check out Marissa's talk on our paper "auditing the audits" if you're at #FAccT2025!

⬇️
Excited to be at #FAccT2025 and honored to see that our work “auditing the audits”, resulting from one of the first enacted AI laws in the U.S., received an honorable mention! I’ll be presenting this work on Wednesday at 11 AM Athens time: programs.sigchi.org/facct/2025/p...
June 23, 2025 at 4:13 PM
Reposted by p. sampson
y'know it's kind of a tell that they think celebrating the end of slavery is DEI
June 19, 2025 at 3:56 PM
Reposted by p. sampson
my hottest educational take is that schools should actively, non-punitively teach students how to admit when they don't know something, aren't sure or have made a mistake, with various teaching frameworks adapted to support this ideal, because people who can't admit fault are breaking the world
June 18, 2025 at 9:13 PM
Reposted by p. sampson
Work can be so endlessly rigorous about the technical half of the word but the socio half just stands in for anything and everything
June 11, 2025 at 5:36 PM
Reposted by p. sampson
the mainstream media has been steadily buying into the idea that science and academia are somehow "captured" by left and liberal ideologues, rather than the more obvious explanation that measured analysis leads people away from reactionary thought
June 12, 2025 at 2:08 AM
Reposted by p. sampson
anyway, if your problem with technology was those inconvenient people who do technical work, and how they ask questions, have opinions, take ethical stances, etc.
then it might be especially tempting to just replace them with ai. fewer tedious coffees, for one thing.
April 30, 2025 at 4:06 PM
Reposted by p. sampson
Important points at the @ach.bsky.social ACH2025 conference by Mohammad Suliman about the gaps in accessibility for research and programming. Taking notes for our new ACH Anthology!
June 12, 2025 at 5:49 PM
“fort robert e. lee” lmao give us irredeemable loser energy.
June 11, 2025 at 12:55 PM
mood.
"I will constitute the field." is such a mic drop. Set using some of printshop's shiny newish type (Centaur); quote from Louise Glück's "Witchgrass":
June 3, 2025 at 7:37 PM
Reposted by p. sampson
I catch blocks if I talk about C19 but:

There’s a new, more contagious strain spreading—“razor blade throat” is one symptom. But, most spread is SYMPTOMLESS. You DON’T KNOW you’re spreading it.

More spread, more mutations & variants → more disabled & dead.

Put a mask on in public. We keep us safe.
June 3, 2025 at 6:14 AM
Reposted by p. sampson
There are 2 previous historical cases of countries destroying their science and universities, crippling them for decades: Lysenkoism in the USSR and Nazi Germany. The Trump administration will be the 3rd.
It's not just budgets but research, institutions, expertise, and training the next generation.
May 31, 2025 at 4:43 AM
Reposted by p. sampson
Few tech critiques have aged as well as Phil Agre saying "Oh come on. Face recognition will work well enough to be dangerous, and poorly enough to be dangerous as well." Obviously lands well beyond face rec.
June 1, 2025 at 6:13 PM