No, an AI model does not have a soul
No, an AI model does not have feelings
No, an AI model is not alive
These are nice topics for books, movies, and philosophy... NOT for serious policymaking.
Under the law, AI is often a regulated product, for which its human creators must be fully accountable.
Anything else is science fiction (or an attempt to escape regulation and accountability).
More in my article below.
It's a philosophical adventure that should have no place in serious AI governance and policymaking efforts.
My full article below.
As expected, it follows the guidelines of America's AI Action Plan, and its main ideas differ from those of the EU AI Act.
👉 I'll break it down in my newsletter tomorrow. Subscribe.
The Digital Omnibus (and some of the baseless changes it proposes to the EU AI Act) is an example.
AI will become what we ALLOW it to become, and we have tools for that.
Don't fall for the hype!
Also, carving out exceptions for AI and treating AI deployment as a goal in itself don't support human flourishing.
We don't need to accept that.
More in my newsletter today.
(Many see them as the eternal 'good guys' of the AI industry)
I wrote a review of OpenAI, Google, Anthropic, and Amazon's ads, and what they DON'T want you to know about their strategy.
My article below.
I'm sorry, but it won't.
I show how he echoes exaggerations and fictional hypotheses about AI that have been strategically used to relativize legal principles, rules, and rights.
Full article below:
This is the same company that published a highly anthropomorphic "constitution" for its AI model, quietly diluting ideas of legal liability and fundamental rights.
We are paying attention.
It is anti-human, and it synthesizes much of the counterproductive AI hype we have been experiencing over the past 3 years.
I'll write about it in tomorrow's newsletter (link below).
It proposes that AI is smarter than us and therefore superior, and that we must adore it, foster equal coexistence with it, and accept potentially being ruled by it.
This is a bad idea. My full article:
They have now started to monetize emotions through AI fine-tuning.
This bizarre wave of AI anthropomorphism is not a coincidence (and it must stop).
More in my newsletter.
Technically speaking, it's a nice computer science exercise.
The problem is that millions of humans idolize AI and might start taking it literally.
More in my newsletter (below).
A reminder that AI chatbots are NOT SAFE for children.