Internet Safety Labs (@internetsafetylabs.bsky.social)
November 24, 2025 at 9:13 PM

2️⃣ Note the change in language from "Programmatic Harm" to "Design-based hazards and harms". We think this lines up better with the language used in litigation, and also more accurately covers both hazards and harms baked into digital products by design. /2

Note that the majority of behaviors ISL tracks in our safety labels are more accurately understood as hazards: they are necessary conditions that, when combined with other conditions, can and will result in harm to the person using the tech. /3

Digital products are riddled with hazards that can become harms. Direct, immediate harms, e.g. suicide coaching by chatbots, are presently lower frequency, but the dumpster-fire accelerant of so-called "AI" technologies seems to be increasing immediately harmful digital product behavior. /4

3️⃣ Digital products possess the unique quality that they are capable of inflicting harms and hazards through their independent action (i.e. without human support): the products BEHAVE in a hazardous or harmful manner. In addition, humans can use digital products in a harmful manner. /5

So when we're dealing with Digital Product Safety, a responsible manufacturer must keep aware of both ends of the [not really a] spectrum, and first and foremost, the design-based behaviors. /6

4️⃣ Governance must cover at least 3 things (in no particular order): (1) constraints on digital product behavior, (2) constraints on human behavior and use of digital products (cybercrime), and (3) constraints on corporate behavior in building digital products. /7

Historically, we've mainly focused on (2) and (3). We really must get keener about codifying acceptable and unacceptable risk in digital product behavior. /fin