Kyle Dent
@kdent.bsky.social
Technology & Society, Data Science, Software Engineering
I have a story on Medium about LLMs, the Turing Test, Eliza, and, most importantly, some memories of my friend and AI pioneer Danny Bobrow.

"What a decades-old chatbot, a confused executive, and a good story reveal about our stubborn misunderstanding of machine intelligence"

medium.com/ai-advances/...
Danny’s Eliza
What a decades-old chatbot, a confused executive, and a good story reveal about our stubborn misunderstanding of machine intelligence
medium.com
July 12, 2025 at 3:43 PM
In the first of many related lawsuits, a federal judge agrees with Big Tech that fair use lets them slurp up books and other copyrighted material even without authors' permission. Stay tuned as the question continues to make its way through the courts. techcrunch.com/2025/06/24/a...
A federal judge sides with Anthropic in lawsuit over training AI on books without authors' permission | TechCrunch
The ruling isn't a guarantee for how similar cases will proceed, but it lays the foundations for a precedent that would side with tech companies over creatives.
techcrunch.com
June 25, 2025 at 1:16 PM
How do you convey to users the capabilities of a system without a visible interface? Benedict Evans argues that wrapping LLMs in the right interface will help people use models in ways that are more suited to their actual capabilities.
www.ben-evans.com/benedictevan...
Building AI products — Benedict Evans
How do we build mass-market products that change the world around a technology that gets things ‘wrong’? What does wrong mean, and how is that useful?
www.ben-evans.com
June 20, 2024 at 2:19 PM
I don't see a link to the original paper, but judging from the article, just strap yourself in for a wild thrill ride of extrapolation, and then you too can believe LLMs are more than stochastic parrots.
New Theory Suggests Chatbots Can Understand Text | Quanta Magazine
Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.
www.quantamagazine.org
February 3, 2024 at 3:16 PM
Wow, I wouldn't have thought this was even possible. My first thought was that the copyright cases would benefit, but they still have to show infringing material produced by an LLM. Of course, if they get some DeepMind researchers on it...
Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia page...
www.404media.co
December 6, 2023 at 3:34 PM
Sam Altman is back and the board is out.
Sam Altman reinstated as OpenAI CEO with new board members
After days of talks and drama, an agreement was reached to bring back Sam Altman as CEO of OpenAI.
www.washingtonpost.com
November 22, 2023 at 3:03 PM
Reposted by Kyle Dent
'The Art of Insight' is officially published today (I say 'officially' because Amazon made it available a week ago, at least in the U.S.) www.albertocairo.com
November 15, 2023 at 6:53 PM
The prevailing belief is that generative AI models will keep getting better and better. Maybe not. Future models will need high-quality training data (and lots of it) to keep improving, but Big AI may have already burned through the readily available sources.
AI Companies Are Running Out of Training Data
Data is the vital force of large AI models, and thus of the industry itself. But it's also a finite resource — and companies could run out.
futurism.com
November 15, 2023 at 5:32 PM
The AI Executive Order came out today. It covers a lot, from positioning America to demonstrate by example how to balance the benefits and risks of AI, to advancing equity and civil rights. It promotes new standards for AI safety and adds requirements for companies developing very large LLMs.
FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intel...
Today, President Biden is issuing a landmark Executive Order to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI). The Executive Order est...
www.whitehouse.gov
October 30, 2023 at 4:36 PM
"Cruise, the self-driving arm of General Motors, ... [has] halted its robotaxi service across the US and [will] no longer operate its vehicles without safety drivers behind the wheel."

If only other applications of AI (e.g., hiring, sentencing, law enforcement) got as much scrutiny and oversight.
GM’s Cruise Halts All US Robotaxi Service After Suspension for Pedestrian Who Was Dragged
Cruise suspended its driverless robotaxis nationwide two days after losing its self-driving permit in San Francisco for an incident in which a pedestrian was trapped under an autonomous vehicle.
www.wired.com
October 27, 2023 at 9:58 PM
Detecting Harmful Content on Online Platforms: What Platforms Need vs. Where Research Efforts Go | ACM Computing Surveys
The proliferation of harmful content on online platforms is a major societal problem, which comes in many different forms, including hate speech, offensive language, bullying and harassment, misinform...
dl.acm.org
October 9, 2023 at 4:04 PM
"Voice assistants and advanced chatbots are only as accurate as the websites, news reports and other data they draw from across the web. These tools risk baking in and amplifying the falsehoods and biases present in their sources."
Amazon’s Alexa has been claiming the 2020 election was stolen
Alexa says the 2020 race was stolen, even as parent company Amazon promotes the voice assistant as a reliable source of election news.
www.washingtonpost.com
October 8, 2023 at 8:57 PM
Reposted by Kyle Dent
It was obvious from the outset that predictive policing was a scam (not to mention a vehicle for accelerating overpolicing), but it's really valuable to have this careful reporting of just how bad it is:

themarkup.org/prediction-b...
October 2, 2023 at 7:17 PM
Following the 2020 election, 3 out of 10 Americans believed the result was fraudulent. Today that number is ... unchanged. Fact-checking may not be working, and the effort is tapering off across social media platforms, news outlets, and independent fact-checking organizations.
Fact Checkers Take Stock of Their Efforts: ‘It’s Not Getting Better’
The momentum behind organizations that aim to combat online falsehoods has started to taper off.
www.nytimes.com
October 3, 2023 at 1:08 PM
At least with snake oil, you could empty the bottle and use it for something.
Predictive Policing Software Terrible at Predicting Crimes
A software company sold a New Jersey police department an algorithm that was right less than 1 percent of the time.
www.wired.com
October 2, 2023 at 6:28 PM
As you read about some states out-competing each other to ban the most books, don't think it won't happen near you. School districts in the Hudson Valley in New York State chalked up 200 of their own complaints seeking book removals from school libraries.
NY Schools are Banning Books. Here’s What You Can Do About it
A scene from Long Island: Three school board members return from a parent convention with a list of “objectionable” books. They search the catalog and realize that nine are in school libraries in ...
www.nyclu.org
October 2, 2023 at 6:04 PM
We may be awash in AI-generated images and text, but at least it's not protected by copyright, so that's something.
A federal judge denies copyright protection for AI-generated art
A decision from the D.C. district court means AI gets a backseat to humans for intellectual property rights
open.substack.com
August 24, 2023 at 12:33 PM
To what extent should social media companies moderate content? The next installment of “Should They or Shouldn’t They?” is coming soon to a Supreme Court near you.

If the laws passed by Florida and Texas stand, brace yourself for a flood of hate speech, misinformation and violent content.
The Biden administration urges the Supreme Court to take up content moderation cases
The Court asked the administration to file these briefs earlier.
www.theverge.com
August 19, 2023 at 1:18 PM
Tech companies and their data collection practices are something to be aware of. Here’s this week’s AI Matters newsletter. If you haven’t subscribed already, check it out and sign up.
https://open.substack.com/pub/kyledent/p/zooms-missteps-show-techs-plans-to?r=715xo&utm_campaign=post&utm_medium=web
August 18, 2023 at 12:48 PM
"... even a small “false positive” error rate means some students could be wrongly accused — an experience with potentially devastating long-term effects."

And we don't actually know what the false positive rate is. My guess is it's not real small. https://wapo.st/47ApWZV
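For a rough sense of why even a "small" false positive rate matters, here's a back-of-the-envelope sketch. The 1% rate and the essay volume are my own assumed numbers for illustration, not figures from the article.

```python
# Back-of-the-envelope estimate of wrongful AI-cheating accusations.
# Both numbers below are assumptions for illustration only; the article
# does not report the detector's actual false positive rate.
false_positive_rate = 0.01   # assumed 1% false positive rate
honest_essays = 5_000        # assumed student-written essays scanned per term

expected_wrongly_flagged = false_positive_rate * honest_essays
print(f"Expected wrongly flagged essays: {expected_wrongly_flagged:.0f}")
# -> Expected wrongly flagged essays: 50
```

Even at a rate a vendor would call small, that's dozens of students facing accusations with potentially devastating long-term effects.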
August 16, 2023 at 8:48 PM