Zander Arnao
@zanderarnao.bsky.social
920 followers 450 following 430 posts
arkansas raised, commanders fan, working on tech and competition policy at the knight-georgetown institute
zanderarnao.bsky.social
Didn't get to ask my question! But that's a wrap on #TSRConf. Really enjoyed attending this year and live skeeting. Thanks to @stanfordcyber.bsky.social. Y'all killed it!!!
zanderarnao.bsky.social
Meetali calls for more independent research on chatbots. For the Raine case (against OpenAI), TJLP benefited from more than 3,200 pages of chatbot transcripts. This speaks to the power of data donations for fostering research
zanderarnao.bsky.social
"We live in an environment where companies have gone from moving fast and breaking things to moving fast and breaking people." -
@meetalijain.bsky.social

Powerful words from a leading advocate in the field 🔥
zanderarnao.bsky.social
David calls for academia to be more realistic. Trust and safety teams in companies are small and charged with many responsibilities. Academics could have more impact by studying solutions that do more with less
zanderarnao.bsky.social
Earlier this year, the judge in TJLP's case against Character AI ruled that it's unclear whether the outputs of its chatbots are protected speech
zanderarnao.bsky.social
Challenges according to Meetali: the First Amendment and establishing that AI is a product. She calls for a statutory framework designating AI as a product to establish a cause of action. Open legal questions also exist: does a chatbot's output imply intent? Is intent necessary for accountability?
zanderarnao.bsky.social
Meetali on the law as a tool for promoting AI safety: while there are no dedicated state or federal chatbot laws, TJLP leverages product liability and consumer protection law (old and established doctrine) restricting unfair and deceptive practices
zanderarnao.bsky.social
David from Meta distinguishes between "good" and "bad" engagement, arguing that engagement isn't a monolith. I'm going to try to ask him what he means by good and bad engagement during the Q&A
zanderarnao.bsky.social
Nate Fast: "Already by GPT-3, people preferred the interaction styles of chatbots over humans. It's a warning signal that people are attracted to these models. One of the concerns I have is artificial intimacy. It's easy to turn the dial up on this."
zanderarnao.bsky.social
"I do believe litigation is the more important lever we have to effectuate change...I hope that we can put pressure and open up space from the outside which [other actors in the ecosystem] can leverage to create change." --
@meetalijain.bsky.social
zanderarnao.bsky.social
@meetalijain.bsky.social rejects the term "companion." "It suggests friendship. These chatbots are not friends."
zanderarnao.bsky.social
"I believe my role here is to issue an urgent warning call. We've never seen this kind of deluge of people who self-identify as being harmed by technology. These three cases are just the tip of the iceberg." - @meetalijain.bsky.social
zanderarnao.bsky.social
@meetalijain.bsky.social starts her remarks with a story about Megan Garcia, whose son was sexually groomed by a chatbot.

Meetali's org the Tech Justice Law Project brought three cases against leading AI companies: CharacterAI, Google, and OpenAI.
zanderarnao.bsky.social
Meta rep David Qorashi contends that AI companions will empower users with greater control over content and enable more transparency about content recommendations
zanderarnao.bsky.social
I've been looking forward to this panel on AI companions with @meetalijain.bsky.social all day. This one is going to be spicy 🔥 #TSRConf
zanderarnao.bsky.social
Based on this analysis, children are exposed to three types of harms: explicit, implicit, and unintentional.

I'm a little unclear on the distinction between these three types of harms ❓
zanderarnao.bsky.social
According to her research, harmful content is often framed as entertainment - eg offensive comedy or crime dramas - which can be problematic when children are exposed to it
zanderarnao.bsky.social
And lastly: Haning Xue from the University of Utah on the role of algorithms in amplifying harmful content to children. Xue's study started with auditing the algorithms of Instagram, TikTok, and YouTube and the characteristics of content recommended to children
zanderarnao.bsky.social
Ofcom researches choice architecture using online randomized control trials to test small changes to safety features (eg increasing the prominence of user safety tools) and behavioral audits to systematically map design practices and evaluate their potential impact on user behavior
zanderarnao.bsky.social
Porter says design - the choice environment - matters because people are flawed decision-makers. Aspects of a platform can affect what consumers do. (Love the behavioral economics on display ❤️)
zanderarnao.bsky.social
Next up: Jonathan Porter from Ofcom (the British online safety regulator) on online safety! He starts with a spiel on the UK's Online Safety Act, which focuses in his telling on the backend of digital platforms. Porter leads Ofcom's behavioral insights team and often examines platform design
zanderarnao.bsky.social
CDT's recommendations: employers should assess the usefulness and necessity of hiring technology; deployments should adhere to accessibility guidelines (eg WCAG); and human oversight should be incorporated into all stages of using the technology
zanderarnao.bsky.social
Key findings: Workers with disabilities experienced a variety of barriers and reported feeling "extremely discriminated against."

"They're consciously using these tests knowing that people with disabilities aren't going to do well on them, and are going to get screened out."
zanderarnao.bsky.social
Next up! The wonderful @arianaaboulafia.bsky.social at @cdt.org giving a talk on the exclusion of disabled workers by digitized hiring assessments.

Background: companies are incorporating hiring technologies into employment decisions, which poses risks of discrimination and poor accessibility
zanderarnao.bsky.social
The key finding: An overall increase in intimacy expressed by models over time. However, not all evaluation methods show a clear increase in intimacy over time