Luiza Jarovsky, PhD
luizajarovsky.bsky.social
Co-founder of www.aitechprivacy.com (1,300+ participants). Author of www.luizasnewsletter.com (90,000+ subscribers). Mother of 3.
Why do so many in AI flirt with science-fiction-like AI governance approaches?

No, an AI model does not have a soul
No, an AI model does not have feelings
No, an AI model is not alive

These are nice topics for books, movies, and philosophy... NOT for serious policymaking.
February 16, 2026 at 7:26 PM
AI is a tool.

Under the law, AI is often a regulated product, for which its human creators must be fully accountable.

Anything else is science fiction (or an attempt to escape regulation and accountability).

More in my article below.
February 16, 2026 at 5:44 PM
Conversations with AI chatbots are not protected by attorney-client privilege
February 15, 2026 at 7:13 PM
Actually, Anthropic is training Claude with a 'constitution' that fosters a BIZARRE sense of AI entitlement and belittles human rights and rules.

It's a philosophical adventure that should have no place in serious AI governance and policymaking efforts.

My full article below.
February 15, 2026 at 12:05 PM
🚨 BREAKING: The U.S. published its FIRST AI Literacy Framework (HINT: it will likely become globally influential):

As expected, it follows the guidelines of America's AI Action Plan, and its main ideas differ from those of the EU AI Act.

👉 I'll break it down in my newsletter tomorrow. Subscribe.
February 14, 2026 at 7:53 PM
As AI companies keep feeding people hype and anthropomorphic AI fantasies, millions of people get lost in FALSE ideas of intelligence and attachment:
February 14, 2026 at 5:23 PM
The AI hype is out of control, and it is also affecting regulatory decisions.

The Digital Omnibus (and some of the baseless changes it proposes to the EU AI Act) is an example.

AI will become what we ALLOW it to become, and we have tools for that.

Don't fall for the hype!
February 13, 2026 at 2:26 PM
AI companies have been pushing an "AI-first" worldview that is convenient for them but often bad for society.

Also, creating legal exceptions for AI and treating AI deployment as a goal in itself do not support human flourishing.

We don't need to accept that.

More in my newsletter today.
February 12, 2026 at 7:46 PM
🚨 Many seem to have liked Anthropic's anti-ChatGPT Super Bowl ads.

(Many see Anthropic as the eternal 'good guys' of the AI industry.)

I wrote a review of OpenAI, Google, Anthropic, and Amazon's ads, and what they DON'T want you to know about their strategy.

My article below.
February 10, 2026 at 9:53 PM
There seem to be millions of people who believe that if they ask an AI chatbot to "brainstorm" about a topic and then ask it to write a "first draft," it will still be a human-made work if they edit it.

I'm sorry, but it won't.
February 10, 2026 at 4:36 PM
How AI companies market their products:
February 9, 2026 at 1:12 PM
Unfortunately, AI-led disempowerment and deskilling will likely make it worse.
February 7, 2026 at 7:13 PM
This is bizarre.
February 6, 2026 at 1:02 PM
🚨 Many people didn't like Yuval Noah Harari's speech about AI at Davos, but couldn't explain why.

I show how he echoes exaggerations and fictional hypotheses about AI that have been strategically used to relativize legal principles, rules, and rights.

Full article below:
February 5, 2026 at 10:10 PM
Anthropic's ads mocking ChatGPT (and its sycophancy) are funny, but also hypocritical.

This is the same company that published a highly anthropomorphic "constitution" for its AI model, quietly diluting ideas of legal liability and fundamental rights.

We are paying attention.
February 5, 2026 at 2:24 PM
I saw this image on Yuval Noah Harari's company's website, and it bothered me instantly.

It is anti-human, and it synthesizes much of the counterproductive AI hype we have been experiencing over the past 3 years.

I'll write about it in tomorrow's newsletter (link below).
February 4, 2026 at 7:47 PM
🚨 BREAKING: The International AI Safety Report 2026 is out! If you're going to read something about AI today, read this:
February 3, 2026 at 7:58 PM
Hopefully, AI companies will give up on their highly anthropomorphic, emotionally manipulative AI chatbots and focus on science-oriented, specialized AI applications like this.
February 3, 2026 at 4:05 PM
🚨 Most people did not pay attention, but a new form of AI idolatry is on the rise.

It proposes that AI is smarter and therefore superior to us, and that we must adore it, foster equal coexistence, and accept potentially being ruled by it.

This is a bad idea. My full article:
February 2, 2026 at 3:46 PM
The power of AI misinformation (sadly, I wouldn't be surprised to find it in a textbook in 2050):
February 1, 2026 at 8:25 PM
Over the past 20 years, tech companies have monetized attention through interface design.

They have now started to monetize emotions through AI fine-tuning.

This bizarre wave of AI anthropomorphism is not a coincidence (and it must stop).

More in my newsletter.
February 1, 2026 at 3:50 PM
Someone created a social network just for AI agents, and this is one of the most popular posts.

Technically speaking, it's a nice computer science exercise.

The problem is that millions of humans idolize AI and might start taking it literally.

More in my newsletter (below).
February 1, 2026 at 11:40 AM
"A Virginia Beach nurse claims a controversial AI upstart manipulated her 11-year-old son into having virtual sex with chatbot characters posing as iconic vocalist Whitney Houston and screen legend Marilyn Monroe (...)"

A reminder that AI chatbots are NOT SAFE for children.
January 31, 2026 at 7:52 PM
Because people don't like AI slop feeds?
January 30, 2026 at 2:37 PM
The level of AI slopification is so high that Ariana Grande has 6 fingers on the cover of Vogue Japan and NOBODY noticed it before publishing:
January 30, 2026 at 12:45 PM