Stanford CIS
@stanfordcis.bsky.social
97 followers 45 following 20 posts
Stanford Center for Internet & Society. See also @vanschewick.bsky.social
stanfordcis.bsky.social
Omer Tene & Lars Oleson explore privacy by design for facial recognition in their latest @iapp.bsky.social post. They suggest the technology shouldn't leak personal information any more than your dog, who also recognizes you but doesn't sell out your privacy iapp.org/news/a/recog...
IAPP
iapp.org
stanfordcis.bsky.social
@hartzog.bsky.social and @markpmckenna.bsky.social argue for more precision in references to “scale” with regard to technology. Differentiating between “scale as more” and “scale as different,” they suggest the two have different implications for regulation papers.ssrn.com/sol3/papers....
Taking Scale Seriously in Technology Law
Issues of scale—the relationship between the amount of an activity and its associated costs and benefits—permeate discussions around law and technologies. In...
papers.ssrn.com
stanfordcis.bsky.social
FTC v. Amazon goes to trial this week over dark patterns - design tricks that make you accidentally subscribe and struggle to cancel. "The question is when design crosses the line where a reasonable consumer doesn't have a fair shot of understanding what's going on," says @andreamm.bsky.social bit.ly/4mvJV2f
The 'dark patterns' at the center of FTC's lawsuit against Amazon
This week, the trial starts in a consequential FTC lawsuit against Amazon. The suit alleges that Amazon for years "tricked" people into buying Prime memberships that were purposefully hard to cancel.
www.npr.org
stanfordcis.bsky.social
CIS Affiliate Alex Feerst argues that learning - by humans or AI - isn't a copyright-relevant act in his latest Foundation for American Innovation paper. "Regulate outputs, not inputs; legalize learning." www.thefai.org/posts/promot...
Promote the Progress, Legalize Learning | The Foundation for American Innovation
The Foundation for American Innovation.
www.thefai.org
stanfordcis.bsky.social
xAI workers training Grok report encountering CSAM requests because the company allows explicit content, unlike other AI companies that block such requests. "If you don't draw a hard line at anything unpleasant, you have a more complex problem," says @riana.bsky.social bit.ly/4gyVbcK
Behind Grok's 'sexy' settings, workers review explicit and disturbing content
Workers say they've faced sexually explicit content while xAI has marketed Grok to be deliberately provocative. Experts say the company should be cautious.
bit.ly
Reposted by Stanford CIS
stanfordcis.bsky.social
A recent article from CIS Affiliate Omer Tene highlights increasing enforcement of privacy regulation by the California Privacy Protection Agency (CPPA) and state Attorneys General (AGs), and the requirement for businesses to honor consumer opt-out requests www.goodwinlaw.com/en/insights/...
Multistate Privacy Enforcement Sweep Puts Global Privacy Control in the Spotlight | Insights & Resources | Goodwin
State AGs and CPPA crack down on weak opt-out tools and push for stricter data risk assessments in online advertising. Read more in Goodwin's alert.
www.goodwinlaw.com
Reposted by Stanford CIS
daphnek.bsky.social
When people first hear about "jawboning" -- meaning government pressure to suppress speech through threats and other extra-legal measures, like what the FCC is doing now -- they always want to talk about "coercion" by the govt.

I've always thought this is a red herring. Current events show why. 1/
stanfordcis.bsky.social
"...most schools are not yet addressing the risks of AI-generated child sexual abuse materials with their students. When schools do experience an incident, their responses often make it worse for the victims"says @stanfordhai.bsky.social fellow @riana.bsky.social hai.stanford.edu/news/how-do-...
How Do We Protect Children in the Age of AI? | Stanford HAI
Tools that enable teens to create deepfake nude images of each other are compromising child safety, and parents must get involved.
hai.stanford.edu
stanfordcis.bsky.social
CIS Affiliate @hartzog.bsky.social argues that facial recognition technology is the most dangerous surveillance tool ever invented & given the unique threats this tool poses to privacy, civil liberties... and democracy, the only appropriate response is a ban papers.ssrn.com/sol3/papers....
Normalizing Facial Recognition Technology and The End of Obscurity
This article argues that facial recognition technology is the most dangerous surveillance tool ever invented. Given the unique threats this morally suspect tool
papers.ssrn.com
Reposted by Stanford CIS
rcalo.bsky.social
If you're in Boston September 8, I'll be speaking about my new book at MIT Media Lab (11:00 AM) and Boston University School of Law (4:00 PM). global.oup.com/academic/pro...
global.oup.com
Reposted by Stanford CIS
rcalo.bsky.social
My new book Law and Technology: A Methodical Approach is available for pre-order ($40) over at Oxford University Press. The book explores what scholars and society can do about emerging technology. global.oup.com/academic/pro... (new link)
stanfordcis.bsky.social
"To at least preserve the option of using this color in connection with automated driving, safety regulators around the world should be on the lookout for turquoise—in new vehicles, in imported vehicles, and in retrofitted vehicles," says CIS Affiliate BW Smith cyberlaw.stanford.edu/blog/2025/08...
Turquoise lamps on cars that cannot drive themselves
Back in 2011, as Nevada was developing regulations for automated driving, there was debate about whether vehicles should have a special external signal to indicate that they are in automated driving m...
cyberlaw.stanford.edu
Reposted by Stanford CIS
techpolicypress.bsky.social
There are many contexts where “truthiness” won’t do, writes Riana Pfefferkorn. “From human rights activists to camera manufacturers, from academics to public servants, a lot of people are working very hard to keep it possible for society to tell what’s real from what’s fake.” It's an uphill battle.
The Ongoing Fight to Keep Evidence Intact in the Face of AI Deception | TechPolicy.Press
There are many contexts, from private interactions between individuals to the financial markets, where “truthiness” won’t do, writes Riana Pfefferkorn.
buff.ly