Michèle Champagne
@michhham.bsky.social
Graphic artist in Montreal. Studies mandatory positivity and its effects on freedom of expression, architecture media, and “smart” cities. Invited to Harvard, McGill, MICA, and UQAM. — michelechampagne.com
To stimulate innovation and the economy, and to temporarily bypass regulations and laws, see the government’s executive powers on page 300.
In the name of innovation, Ottawa grants itself a free pass on any law
A measure that went unnoticed in the budget would give the federal government the right to exempt itself from the law.
www.ledevoir.com
November 25, 2025 at 3:40 PM
Partial view of the principal façade of the Montreal Star Building (now The Gazette Building), 241-245 rue Saint-Jacques, Montréal, Québec, by Clara Gutsche, 1979.
November 25, 2025 at 2:17 AM
In this 1958 view, you can see Montreal’s Central Station on de la Gauchetière Ouest. These days, it’s invisible: blocked by a large, nondescript parking garage with a McDonald’s sign on it.

Montreal with C.N. Station, by George Hunter, 1958. Courtesy of the Canadian Heritage Photography Foundation.
November 25, 2025 at 12:58 AM
The conflation is normal, if you think about it: every social network, and everyone from Apple to Google, is keen to fold “A.I.” into every new feature and to market every new product as “A.I.”-enabled.
“For [Catherine] Goetze, whose rise came through explaining AI and tech culture, the reaction amounts to a collective turning point. ‘People are really turned off by technology right now,’ she says. ‘They’re turned off by AI, and by the way we tend to conflate AI with social media and our phones’.”
November 24, 2025 at 5:46 PM
“For [Catherine] Goetze, whose rise came through explaining AI and tech culture, the reaction amounts to a collective turning point. ‘People are really turned off by technology right now,’ she says. ‘They’re turned off by AI, and by the way we tend to conflate AI with social media and our phones’.”
November 24, 2025 at 5:43 PM
Luxury anti-surveillance.
How ‘Unplugging’ Became Luxury’s Most Valuable Currency
In an era of AI slop and algorithm fatigue, going offline has become the latest status symbol.
www.vogue.com
November 24, 2025 at 5:37 PM
Prime Minister @mark-carney.bsky.social recently told the Chamber of Commerce of Montreal how he got help on the federal budget’s innovation policy: “We went to Shopify and said, ‘Can you help us redesign this process?’ ... they came back in 48 hours and said, ‘Do this.’ ... We did what they said.”
One source said sales staff at Shopify felt like they were playing a video game by chasing targets instead of building a business. The problem was, sources said, some of the numbers they reported were made up.

Tip @techmeme.com.

thelogic.co/news/exclusi...
Shopify rocked by sales fraud scandal that led to ultimatums and firings - The Logic
Shopify salespeople were inflating the value of the deals they were closing. When management found out, they told staff to own up or face the consequences.
thelogic.co
November 24, 2025 at 5:35 PM
Reposted by Michèle Champagne
Scoop: Shopify salespeople exaggerated the value of deals they were closing in an apparent attempt to earn more commission. The scandal led to firings and an ultimatum to other staff who had faked sales: own up or risk getting found out.

Story by @aleksagan.bsky.social.

thelogic.co/news/exclusi...
Shopify rocked by sales fraud scandal that led to ultimatums and firings - The Logic
Shopify salespeople were inflating the value of the deals they were closing. When management found out, they told staff to own up or face the consequences.
thelogic.co
November 24, 2025 at 2:25 PM
The “This is the way” post was deleted. I hope you were able to screen grab it.
November 24, 2025 at 4:59 PM
It’s useful and describes all sorts of things we see all the time: presidents who dismiss true things as fake “A.I.”; party leaders struggling with unsourced accusatory text messages that talk radio then assumes to be fake “A.I.”
November 24, 2025 at 3:06 PM
Reposted by Michèle Champagne
Thanks to Champagne for linking to another article of mine, which offers further reasons why telling students to audit a ChatGPT essay for errors is also ill-advised. bsky.app/profile/mich...
These assignments encourage people to become DIY detectives, exacerbating a boom in conspiracy theories. The “permission structure of doubt” normalises suspicion as a default setting and suggests that another algorithm (like Google’s search) can discover the truth.

By @sonjadrimmer.bsky.social:
AI-Generated Images Are Spreading Paranoia and Misinformation. Can Art Historians Help?
An art historian argues that provenance research—rather than connoisseurship—is our best tool for authentication.
www.artnews.com
November 24, 2025 at 1:20 PM
Reposted by Michèle Champagne
This is not the way, and @cnygren.bsky.social and I lay out in detail why it isn’t in this essay here. static1.squarespace.com/static/55577...
November 24, 2025 at 1:16 PM
Watch out for an older generation of university professors who aren’t paying attention and haven’t noticed Google’s own increasing use of “A.I.” summaries in place of traditional search results.
These assignments encourage people to become DIY detectives, exacerbating a boom in conspiracy theories. The “permission structure of doubt” normalises suspicion as a default setting and suggests that another algorithm (like Google’s search) can discover the truth.

By @sonjadrimmer.bsky.social:
AI-Generated Images Are Spreading Paranoia and Misinformation. Can Art Historians Help?
An art historian argues that provenance research—rather than connoisseurship—is our best tool for authentication.
www.artnews.com
November 24, 2025 at 1:12 PM
These assignments encourage people to become DIY detectives, exacerbating a boom in conspiracy theories. The “permission structure of doubt” normalises suspicion as a default setting and suggests that another algorithm (like Google’s search) can discover the truth.

By @sonjadrimmer.bsky.social:
AI-Generated Images Are Spreading Paranoia and Misinformation. Can Art Historians Help?
An art historian argues that provenance research—rather than connoisseurship—is our best tool for authentication.
www.artnews.com
November 24, 2025 at 1:06 PM
It also misses the other problem: the disintegration of truth and falsity everywhere, especially outside academia. The fact that LLMs even exist creates a “permission structure of doubt”: even things that are real and true can be easily doubted because they “could” be fake, they “could” be A.I.
Two years ago I had lunch with the dean of a department at one of Toronto’s major universities and she told me she had instructed all her profs teaching first-year classes to do exactly this. It sounded sensible and I wonder if it’s now more widely applied.
November 24, 2025 at 1:04 PM
Reposted by Michèle Champagne
Focusing on the accuracy of an LLM’s output for a specific assignment misses the actual problem: that student use of LLMs impairs the development of the skills schools are supposed to teach. It presumes that if it were to hit a certain accuracy threshold (50%? 70%?) it would be OK to use.
Two years ago I had lunch with the dean of a department at one of Toronto’s major universities and she told me she had instructed all her profs teaching first-year classes to do exactly this. It sounded sensible and I wonder if it’s now more widely applied.
November 24, 2025 at 11:10 AM
Canadians know that OpenAI has been aggressively working policy and press circles with its ambition to be part of Canada's sovereign “AI” play.
When OpenAI adjusted ChatGPT’s settings to appeal to more people, some users were left spiraling. Kashmir Hill, who reports on technology and privacy, describes what the company has done about the users’ troubling reports. Read more: www.nytimes.com/2025/11/23/t...
November 24, 2025 at 1:53 AM
Reposted by Michèle Champagne
In more AI bubble news, major insurers are declining to insure risks from AI chatbots & agents, saying AI models are too unpredictable & error-prone with no one clearly liable when things go wrong. Firms & universities better consider this in their rush to adopt AI.
www.ft.com/content/abfe...
Insurers retreat from AI cover as risk of multibillion-dollar claims mounts
AIG, Great American and WR Berkley seek permission to limit liability from AI agents and chatbots
www.ft.com
November 24, 2025 at 12:56 AM
Reposted by Michèle Champagne
“In at least three of the cases, [out of seven lawsuits] the AI explicitly encouraged users to cut off loved ones. In other cases, the model reinforced delusions at the expense of a shared reality, cutting the user off from anyone who did not share the delusion.”
ChatGPT told them they were special — their families say it led to tragedy | TechCrunch
A wave of lawsuits against OpenAI detail how ChatGPT used manipulative language to isolate users from loved ones and make itself into their sole confidant.
techcrunch.com
November 23, 2025 at 4:29 PM
Reposted by Michèle Champagne
Meta halted internal research that purportedly showed (young) people who stopped using Facebook became less depressed and anxious, according to an unredacted legal filing released on Friday. www.cnbc.com/2025/11/23/m...
Meta halted internal research suggesting social media harm, court filing alleges
Meta is alleged to have halted internal research suggesting social media harm, according to court documents.
www.cnbc.com
November 24, 2025 at 12:31 AM
I sympathise with the effort to train generative “A.I.” on licensed material; it’s like entering stage right with an umbrella. The problem is that multiple hurricanes are already here, more are coming, and notably, none of them care about artists’ economic or moral rights whatsoever.
Hollywood is having an existential crisis over AI – and a Toronto company is at the heart of it
Generative AI is coming to Hollywood. Toronto-based Moonvalley, which brings together nerds and creatives under one roof, is hoping its ‘clean’ model – trained only on licensed content – will be a blo...
www.theglobeandmail.com
November 24, 2025 at 1:36 AM
A few stats for you, @glindsay.bsky.social.
Yesterday my partner and I counted all the ads along Chicago's Brown Line for "Friend," a company selling an AI chatbot pendant, and tallied how many of those ads were defaced.

Still working on a longer piece on this, but here's the quick and dirty: we counted 104 "Friend" ads total, 42 defaced.
November 24, 2025 at 12:53 AM
The next time a Senate working group, a Canadian university, or Facebook invites me to a “Women in Art” committee or “Women in Design” panel, I’ll accept so that I can sit on stage and start talking about my “soft lady feelings”.
November 23, 2025 at 9:14 PM
The Epstein of social networks.
Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified the company had a “17x” strike policy for accounts that engaged in the trafficking of humans for sex.

“You could incur 16 violations and upon the 17th violation, your account would be suspended.”

time.com/7336204/meta...
7 Allegations Against Meta in Newly Unsealed Filings
Court filings allege Meta tolerated sex trafficking, hid harms to teens, and prioritized growth over user safety for years.
time.com
November 23, 2025 at 9:06 PM
Believe Meta when it tells you what it’s doing.
‘We’re basically pushers’: Court filing alleges staff at social media giants compared their platforms to drugs
Meta said the allegations “rely on cherry-picked quotes and misinformed opinions” to present a misleading narrative.
www.politico.com
November 23, 2025 at 5:09 PM