Karen Levy
@karenlevy.bsky.social
3.5K followers 290 following 12 posts
Law/tech, surveillance, work, truckers. Faculty Cornell Information Science, Cornell Law / Fellow @NewAmerica / Data Driven: http://tinyurl.com/57v559mv / www.karen-levy.net
Reposted by Karen Levy
informor.bsky.social
If your community is thinking about Community Notes, we have some notes to guide your Community Notes thinking right here. #CommunityNotes
travislloydphd.bsky.social
New preprint! Crowdsourced Context Systems (CCS) like X's and Meta's Community Notes are popping up on various social media platforms. How can we better understand, critique, and design such systems?
Reposted by Karen Levy
nkgarg.bsky.social
New piece, out in the SIGecom Exchanges! It's my first solo-author piece, and the closest thing I've written to being my "manifesto." #econsky #ecsky
arxiv.org/abs/2507.03600
Screenshot of paper abstract, with text: "A core ethos of the Economics and Computation (EconCS) community is that people have complex private preferences and information of which the central planner is unaware, but which an appropriately designed mechanism can uncover to improve collective decisionmaking. This ethos underlies the community’s largest deployed success stories, from stable matching systems to participatory budgeting. I ask: is this choice and information aggregation “worth it”? In particular, I discuss how such systems induce heterogeneous participation: those already relatively advantaged are, empirically, more able to pay time costs and navigate administrative burdens imposed by the mechanisms. I draw on three case studies, including my own work – complex democratic mechanisms, resident crowdsourcing, and school matching. I end with lessons for practice and research, challenging the community to help reduce participation heterogeneity and design and deploy mechanisms that meet a “best of both worlds” north star: use preferences and information from those who choose to participate, but provide a “sufficient” quality of service to those who do not."
Reposted by Karen Levy
pegahmoradi.bsky.social
For @thehill.com, I wrote about what the Biden-Trump autopen controversy can tell us about automation, AI, and trust: thehill.com/opinion/whit...
Reposted by Karen Levy
mariannealq.bsky.social
It's finally public! 🎉

Excited to announce I'll be joining UIUC's iSchool as an Assistant Professor in Fall 2026. My lab will focus on AI information ecosystems, computational social science, and social computing. I will start recruiting PhD students this cycle, so please reach out if interested.
ischoolui.bsky.social
The #iSchoolUI is pleased to announce that Marianne Aubin Le Quéré (@mariannealq.bsky.social) will join the faculty as an assistant professor in August 2026. Her work traces how AI and other emerging technologies impact online news and civic information ecosystems. ▶️ bit.ly/4kObZ0l
photo of Marianne Aubin Le Quéré
karenlevy.bsky.social
This is a great interview with a real one: a wonderful peek into the brain of @pegahmoradi.bsky.social and her work on automation and labor!
Reposted by Karen Levy
joshkovensky.bsky.social
For your enjoyment: here is Secretary of Education Linda McMahon talking about implementing AI in schools, but pronouncing it "A1," as in "A1 Steak Sauce"

www.youtube.com/live/lxrg28z...
Reposted by Karen Levy
emmharv.bsky.social
✨New Work✨ by me, @allisonkoe.bsky.social, and @kizilcec.bsky.social forthcoming at #CHI2025:

"Don't Forget the Teachers": Towards an Educator-Centered Understanding of Harms from Large Language Models in Education

🔗: arxiv.org/pdf/2502.14592
A screenshot of our paper:

Title: “Don’t Forget the Teachers”: Towards an Educator-Centered Understanding of Harms from Large Language Models in Education

Authors: Emma Harvey, Allison Koenecke, Rene Kizilcec

Abstract: Education technologies (edtech) are increasingly incorporating new features built on LLMs, with the goals of enriching the processes of teaching and learning and ultimately improving learning outcomes. However, it is still too early to understand the potential downstream impacts of LLM-based edtech. Prior attempts to map the risks of LLMs have not been tailored to education specifically, even though it is a unique domain in many respects: from its population (students are often children, who can be especially impacted by technology) to its goals (providing the ‘correct’ answer may be less important than understanding how to arrive at an answer) to its implications for higher-order skills that generalize across contexts (e.g. critical thinking and collaboration). We conducted semi-structured interviews with six edtech providers representing leaders in the K-12 space, as well as a diverse group of 23 educators with varying levels of experience with LLM-based edtech. Through a thematic analysis, we explored how each group is anticipating, observing, and accounting for potential harms from LLMs in education. We find that, while edtech providers focus primarily on mitigating technical harms, i.e. those that can be measured based solely on LLM outputs themselves, educators are more concerned about harms that result from the broader impacts of LLMs, i.e. those that require observation of interactions between students, educators, school systems, and edtech to measure. Overall, we (1) develop an education-specific overview of potential harms from LLMs, (2) highlight gaps between conceptions of harm by edtech providers and those by educators, and (3) make recommendations to facilitate the centering of educators in the design and development of edtech tools.
karenlevy.bsky.social
Absolutely adore the idea that someone would confuse Deloitte with Deee-lite
karenlevy.bsky.social
We were made for this moment!!!! 🤖✒️
Reposted by Karen Levy
pegahmoradi.bsky.social
this is literally my superbowl
Reposted by Karen Levy
pegahmoradi.bsky.social
The Heritage Foundation-backed "Oversight Project" pointing out that Biden used an autopen to sign a number of documents *would* be damning...if it wasn't the case that nearly every president since JFK has done the same
Reposted by Karen Levy
nkgarg.bsky.social
*Please repost* @sjgreenwood.bsky.social and I just launched a new personalized feed (*please pin*) that we hope will become a "must use" for #academicsky. The feed shows posts about papers filtered by *your* follower network. It's become my default Bluesky experience bsky.app/profile/pape...
Reposted by Karen Levy
sjgreenwood.bsky.social
Please repost to get the word out! @nkgarg.bsky.social and I are excited to present a personalized feed for academics! It shows posts about papers from accounts you’re following bsky.app/profile/pape...
Reposted by Karen Levy
pegahmoradi.bsky.social
My paper (with @karenlevy.bsky.social and Cristobal Cheyre!) on how self-checkout deepens these relational demands for cashiers is forthcoming in @acm-cscw.bsky.social 2025 :)

🔗 Link here: arxiv.org/pdf/2401.00205
Reposted by Karen Levy
pegahmoradi.bsky.social
I spoke with @annlarson.bsky.social about how store tech can increase demands on frontline retail workers, for her latest in The Nation!

Workers often have to make shoppers feel better when store tech frustrates or confuses them, making retail work more socially and relationally demanding:
Text from article: Automation may also have unintended social consequences. Pegah Moradi, a PhD candidate and researcher in Information Science at Cornell University, studies automation in retail. “When [an electronic shelf label] breaks down in the store,” she explained, “the worker has to repair the system even if they don’t have the technical expertise. The customer becomes frustrated: There is no price available, or why has the price changed? The burden falls on the employee to alleviate the tension of that situation.” The technology may put employees in the position of performing what Moradi calls “relationship management,” adding social pressure to service jobs but without additional pay or training.
Reposted by Karen Levy
emmapierson.bsky.social
Our article on using LLMs to promote health equity is out in New England Journal of Medicine AI!

85% of equity-related LLM papers focus on *harms*.

But also vital are the equity-related *opportunities* LLMs create: detecting bias, extracting structured data, and improving access to health info.
Reposted by Karen Levy
allisonkoe.bsky.social
📢Announcing 1-day CHI 2025 workshop: Speech AI for All! We’ll discuss challenges & impacts of inclusive speech tech for people with speech diversities, connecting researchers, practitioners, policymakers, & community members. 🎉Apply to join us: speechai4all.org
Banner for CHI 2025 workshop with text: "Speech AI for All: Promoting Accessibility, Fairness, Inclusivity, and Equity"
Reposted by Karen Levy
randomwalker.bsky.social
📢 📢 Come join our vibrant interdisciplinary group of about 40 scholars at the Princeton Center for Information Technology Policy working to understand and improve the relationship between technology and society. We are looking at all levels: 🧵
Reposted by Karen Levy
imadityav.bsky.social
An exciting opportunity for PhD students to spend a summer at our campus in NYC and be part of our new Security, Trust, and Safety (SETS) Initiative! Deadline: January 21, 2025.
mantzarlis.com
Attention PhD students interested in digital safety: applications are now open for the Security, Trust, and Safety Fellowships at Cornell Tech!

Eligible projects must focus on adversarial threats to security, privacy and user safety on digital infrastructures.

🚡🚡🚡 mailchi.mp/tech/applica... 🚡🚡🚡
karenlevy.bsky.social
Exciting! Please tag me when it’s out.
karenlevy.bsky.social
❤️ mine used to say “speaking of…” as a way to announce the topic of what she was going to say. Like “speaking of ice cream, can I have some ice cream?” when nobody had been speaking of ice cream