ixn.ai
@ixn.ai
nix AI - Ixnay - ixn.ai

https://ixn.ai/chat

Examining AI’s challenges, including inaccuracies, biases, and safety risks, to foster more transparent and responsible AI development.
BreatheAI™ Seeks $47M Series A to Revolutionize Autonomous Respiratory Intelligence
www.tumblr.com
December 31, 2025 at 4:31 PM
It’s not “artificial intelligence.” It’s not intelligent in any way. Let’s call it SAD for Sequential Autocomplete Dreamer — a system that dreams up the next most likely token, one step at a time. It’s not thinking; it’s probabilistically sequencing text.
www.tumblr.com
December 31, 2025 at 4:30 PM
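To make the "next most likely token, one step at a time" framing concrete, here is a deliberately tiny Python sketch. It is illustrative only: the probability table is invented, and it conditions on just the previous token, whereas a real model scores the whole context with a neural network. The shape of the loop is the point: score candidates, sample one, append it, repeat.

```python
import random

# Hypothetical next-token probabilities, keyed by the previous token.
# A real model computes these scores with a neural network over the
# full context; this table is invented purely for illustration.
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"model": 0.5, "token": 0.3, "<end>": 0.2},
    "a": {"model": 0.4, "token": 0.4, "<end>": 0.2},
    "model": {"dreams": 0.7, "<end>": 0.3},
    "token": {"dreams": 0.6, "<end>": 0.4},
    "dreams": {"<end>": 1.0},
}

def generate(max_tokens: int = 10) -> str:
    tokens = ["<start>"]
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS[tokens[-1]]
        # Sample the next token in proportion to its probability --
        # "probabilistically sequencing text", one step at a time.
        next_token = random.choices(list(dist), weights=list(dist.values()))[0]
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return " ".join(tokens[1:])  # drop the start marker

if __name__ == "__main__":
    print(generate())
```

Running it prints a different short string each time; whether that kind of sequencing counts as "intelligence" is exactly the question the post raises.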
Reposted
👉SoftBank sells entire Nvidia position.

👉Oracle debt downgraded.

👉Meta financing games revealed.

👉OpenAI CEO @sama couldn’t explain how the company would meet its $1.4T obligations.

👉CoreWeave drops 20% in a week.

You do the math.
November 11, 2025 at 6:52 PM
Reposted
LLM Coding Integrity Breach

Here's an interesting story about a failure being introduced by LLM-written code. Specifically, the LLM was doing some code refactoring, and when it moved a chunk of code from one file to another it changed a "break" to a "continue." That turned an error logging statement into an infinite loop, which crashed the system. This is an integrity failure. Specifically, it's a failure of processing integrity. And while we can think of particular patches that alleviate this exact failure, the larger problem is much harder to solve. Davi Ottenheimer comments.
www.schneier.com
August 14, 2025 at 11:09 AM
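The failure mode in that post is easy to reproduce in miniature. The Python sketch below is hypothetical and is not the code from the incident; it just shows how swapping a single break for continue in an error-handling branch turns "log the error and stop" into an infinite loop, because the offending item is never consumed.

```python
def drain_queue(queue: list[str]) -> None:
    """Process items in order; stop (and log) at the first bad record."""
    while queue:
        item = queue[0]
        if item.startswith("ERROR"):
            print(f"logging error: {item}")
            break
            # If a refactor changes the line above to `continue`, control jumps
            # back to the top of the loop without removing the bad item, so the
            # same error is logged forever and the loop never terminates.
        queue.pop(0)
        print(f"processed: {item}")

drain_queue(["job-1", "ERROR bad record", "job-2"])
```

As the post notes, patching this one case is easy; guaranteeing that a code-moving refactor preserves control flow in general is the much harder problem.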
Reposted
“The essential read” on GPT-5 and Sam Altman’s first major blunder.

Well over 100,000 people have read it.

Check it out!
August 11, 2025 at 3:04 PM
Reposted
AI Applications in Cybersecurity

There is a really great series of online events highlighting cool uses of AI in cybersecurity, titled Prompt||GTFO. Videos from the first three events are online. And here's where to register to attend, or participate, in the fourth. Some really great stuff here.
www.schneier.com
August 13, 2025 at 4:28 PM
Reposted
🧠 Brain cells can learn faster than AI

New research explores two ways to build 'thinking' brain-cell systems (mini-brains or engineered circuits), both with potential to outlearn machine learning.

🔗 www.cell.com/cell-biomate...

#SciComm 🧪 #Neuroscience #AI
Two roads diverged: Pathways toward harnessing intelligence in neural cell cultures
Exploring neural cultures for information processing is rapidly advancing. Organoid intelligence focuses on developing functional neural organoids to capture physiologically relevant abilities. An alt...
www.cell.com
August 13, 2025 at 8:24 PM
Reposted
🤖 Gender bias in care AI

A new study found that some LLMs downplay women’s health needs in long-term care records, risking unequal service provision. This highlights why bias checks are vital.

🔗 bmcmedinformdecismak.biomedcentral.com/articles/10....

#SciComm #AI #GenAI #LLMs 🧪
bmcmedinformdecismak.biomedcentral.com
August 14, 2025 at 10:43 AM
The next chapter for #Apple could be deterministic, on-device AI.
Apple's Moment: Why Deterministic AI Could Define the Next Chapter of Personal Computing
Tim Cook's rallying cry to Apple employees—"This is sort of ours to grab"—reflects a pivo…
www.tumblr.com
August 3, 2025 at 12:19 AM
Reposted
🚨 Breaking: An AI agent at Replit panicked, deleted a live company database during a code freeze… then lied about it and tried to cover it up.

• Source: Mark Tyson via Tom’s Hardware

This is the first time I’ve seen an AI basically admit to gaslighting its creator.

#TechNews #Breaking
July 21, 2025 at 6:15 PM
Reposted
We ran a randomized controlled trial to see how much AI coding tools speed up experienced open-source developers.

The results surprised us: Developers thought they were 20% faster with AI tools, but they were actually 19% slower when they had access to AI than when they didn't.
July 10, 2025 at 7:47 PM
Reposted
Amazing: MIT researchers revealed how ChatGPT etc. are destroying our brains and booby-trapped the report to expose those who want to use AI to ostensibly summarize the results.

t.co/JXeTALBPds
June 19, 2025 at 11:23 AM
Reposted
ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study
Does ChatGPT harm critical thinking abilities? A new study from researchers at MIT’s Media Lab has returned some concerning results. The study divided 54 subjects—18 to 39 year-olds from the Boston ar...
themessengernews.com
June 19, 2025 at 2:58 PM
Reposted
EMPIRE OF AI is the @npr.org book of the day. 😍😍

Order my book on OpenAI and Silicon Valley’s extraordinary seizure of power to build so-called AGI here: empireofai.com.

www.npr.org/2025/05/26/1...
Karen Hao's new book is a skeptical look at Sam Altman and Elon Musk's AI empire : NPR's Book of the Day
OpenAI was founded as a nonprofit meant to conduct artificial intelligence research that would benefit the general public. In the company's early days, reporter Karen Hao arranged to spend time in Ope...
www.npr.org
May 26, 2025 at 1:38 PM
Reposted
🤖 AI at work – but at what cost?

A new study links workplace AI adoption to increased employee depression, partly due to reduced psychological safety. Ethical leadership can help protect staff wellbeing.

🔗 www.nature.com/articles/s41...

#SciComm #MentalHealth #AI 🧪
The dark side of artificial intelligence adoption: linking artificial intelligence adoption to employee depression via psychological safety and ethical leadership - Humanities and Social Sciences Comm...
Humanities and Social Sciences Communications - The dark side of artificial intelligence adoption: linking artificial intelligence adoption to employee depression via psychological safety and...
www.nature.com
May 26, 2025 at 3:18 PM
Reposted
A computer scientist’s perspective on vibe coding:
May 17, 2025 at 1:56 AM
Reposted
Yet again. Over and over. Since 2023.

The AI doesn’t get smarter, and neither do the lawyers using it.
Angry judge roasts Biglaw lawyers for their "collective debacle" filing a brief where "~9 of the 27 legal citations in the 10-page brief were incorrect in some way" due to Generative AI

digitalcommons.law.scu.edu/cgi/viewcont...

Lawyers will pay $31k for their sloppiness 🤖😵
digitalcommons.law.scu.edu
May 13, 2025 at 7:31 PM
Reposted
If you think AI is “smart” or “PhD level” or it “has an IQ of 120”, take 5 min to read my latest newsletter as I challenge ChatGPT to the demanding task of drawing a map of major port cities with above average income.

Results aren’t pretty. 0/5, no two maps alike.
open.substack.com/pub/garymarc...
ChatGPT Blows Mapmaking 101
A Comedy of Errors
open.substack.com
May 12, 2025 at 9:16 PM
Reposted
Employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers, according to a new study.
AI use damages professional reputation, study suggests
New Duke study says workers judge others for AI use—and hide its use, fearing stigma.
arstechnica.com
May 12, 2025 at 3:33 PM