Justin Searls
@searls.bsky.social
1.9K followers 81 following 390 posts
Co-founded @testdouble.bsky.social in 2011, currently building https://posseparty.com Most of what you see here are crossposts from https://justin.searls.co -- email me at [email protected]
Pinned
searls.bsky.social
I'm ready to announce my next thing: POSSE Party! 🎉

It's a new service to reach your people—regardless of which social network they use—by crossposting your content from any RSS/Atom feed. Read the announcement to learn more and RSVP to be among the first to try it at launch: posseparty.com
You're invited to my POSSE party!
posseparty.com
searls.bsky.social
After a fabulously productive weekend with Codex CLI shipping a set of features that would have taken me two weeks, it is now past 9 AM on a weekday in San Francisco and it's struggling to add two numbers together again. What a world.
searls.bsky.social
Best reason to use Codex CLI over Claude Code is the limits. I've been HAMMERING gpt-5-high for >8 hours/day all week. It's the last day of the week and I haven't hit 35% of the weekly limit.

If I'd paid the same $200/mo for Claude, I'd have been locked out by day 2.
searls.bsky.social
The joy I get out of programming has slowly waned over the years, because most interesting problems are now "solved" via standard libraries, open-source dependencies, and HTTP APIs. Novel problem solving gradually gave way to "digital pipefitting" of stuff built by others.
searls.bsky.social
When I work late or long hours coding something, it is expressly NOT because I enjoy it. It's because I can't stomach the thought of wasting another day of my life on this shit.
searls.bsky.social
A cruel irony of coding agents is that everyone who blew off automated testing for the past 20 years is now telling the AI to do TDD all the time.

But because LLMs were trained on decades of their shitty tests, the agents are also terrible at testing.
searls.bsky.social
When working with a coding agent, a great periodic housekeeping task is to ask it to evaluate the codebase against the principles and values you've laid out in your CLAUDE/AGENTS/RULES files.

Agents frequently violate one's rules while coding, but will also spot those deviations after... continued
searls.bsky.social
lol I thought Codex CLI misspelled a field, but nope: TikTok actually misspelled "publicaly_available_post_id" in their API developers.tiktok.com/…
searls.bsky.social
Have now received multiple apology emails from people who called my previous post a "hit job." First time for everything. justin.searls.co/…
searls.bsky.social
People jumped to conclusions about this RubyGems thing
For context, last week I wrote a post bringing to light a number of things Andre Arko had said and done (/posts/why-im-not-rushing-to-take-sides-in-the-rubygems-fiasco/) in the past as a way to provide some context. Context that might explain why any of the principal actors involved in the RubyGems maintainer crisis (summarized well up to that point by Emanuel Maiberg (https://www.404media.co/how-ruby-went-off-the-rails/)) would take such otherwise inexplicable actions and then fail to even attempt to explain them.

Today, Jean shed some light on Shopify's significant investments in Ruby and Rails open-source (https://byroot.github.io/opensource/ruby/2025/10/09/dear-rubyists.html), and it actually paints a picture of corporate investment in open source done right. (Disclosure: I know and am friends with several people who work at Shopify on these teams, and unless they're all lying to me, they sure seem to prioritize their work based on what Ruby and Rails need, as opposed to what Shopify wants.)

Jean went a step further by contrasting Shopify's approach with the perverse incentives at play when individuals or groups receive sponsorships to do open source. He also drew a pretty clear line from those incentives to how RubyGems and Bundler maintainers reacted to Shopify's feature submissions. Read the post, it's good.
justin.searls.co
searls.bsky.social
Reading this, I feel good about not jumping to conclusions about the RubyGems situation. The narrative that Shopify is the bad guy was quick to spread, but evidence never followed. And the most compelling facts are still not public. byroot.github.io/…
searls.bsky.social
Who the hell called it a Strategic Advisory Firm and not a SWOT Team?
searls.bsky.social
This post is both correct and irrelevant. If you get paid to write "good enough" code, agents are already faster—so it's rational for employers to expect you to use them. I get 3x more done with agents, but the experience is so frustrating I hate coding now hojberg.xyz/…
searls.bsky.social
Any Fastmail users have advice on managing spam? I've always heard that its spam filter is "nearly as good as Gmail", but in practice ~10-15 cold-call B2B drip campaign emails get through to me every day, even though I always banish them to the Junk folder to train it.
searls.bsky.social
Coding agents have really improved my self esteem. It used to be that I'd get mad at myself when I couldn't get my code to work. Now I get mad at the computer when it can't get my code to work.
searls.bsky.social
TIL that the macOS Terminal app has a shortcut to open URLs. Mouse over the URL, hold Command, and double-click.

Been there for over 20 years. Damn.
searls.bsky.social
Got a few more Sora invites. Email me justin at searls dot co if you want one and tell me what you'd make with it.
searls.bsky.social
TFW your friend accidentally jailbreaks Sora to reveal your human verification recording blog.davemo.com/…
searls.bsky.social
Good post by Dave Mosher. It'd be great if leaders always provided the necessary clarity, but that's out of your control. Instead, equip yourself with the tools & mindset to gain alignment early rather than learn you actually had things wrong much later blog.davemo.com/…
searls.bsky.social
Is Sora the future of fiction?
I made this yesterday by typing a few words and uploading a couple of pictures to Sora (https://openai.com/sora/): https://www.youtube.com/embed/p7P_jH-TjqM

When Sora 2 was announced on Tuesday, I immediately saw it as exactly what I've wanted from AI ever since I first saw Stable Diffusion (https://en.wikipedia.org/wiki/Stable_Diffusion) in the Summer of 2022. For years, I've fantasized about breaking free from the extremely limited vocabulary of stock video libraries (as a Descript (http://descript.com) subscriber, I've long had access to Storyblocks (https://www.storyblocks.com)' library). Stitching together stock content to make explainer videos like this one (https://www.youtube.com/watch?v=EUrIK6YREmU) is fun, but the novelty wears off as you quickly burn through all three clips for "child throws spaghetti at family member." Stock video is great if you only talk about mundane household and business topics, but my twisted brain thinks up some pretty weird shit, and conveying my imagination in video would be a lot more labor-intensive than starting yet another banal YouTube channel covering economics or something.

Despite being invite-only, I got access within 24 hours (maybe because I'm a ChatGPT Pro subscriber?), and it confirmed that Sora was exactly what I'd been waiting for:

• 10-second short-form video
• 16:9 or 9:16 aspect ratios
• Downloadable and re-uploadable elsewhere (watermarked)
• Sound, including dialog (provide an exact script or let the model riff)
• Can portray your likeness and consenting collaborators'
• "Good enough" results within 3-5 prompt iterations
• Understands simple direction: film styles, camera angles, scene cuts

The only surprise was that Sora 2 shows up as a social network, not yet another chat interface or infinite search pane. You sign up, you wait, you get four invites, you and your friends have a good time, and you get notified as more friends follow you. We've seen this rollout a dozen times.
In hindsight, Sora had to be a social network. As Meta has demonstrated, nobody wants to stare at an AI Slop Feed (https://about.fb.com/news/2025/09/introducing-vibes-ai-videos/) if it doesn't feature people they know and recognize. In the abstract, "people you know and recognize" would be off the table without opt-in consent. But durable consent more or less requires a social graph paired with platform-level verification and permission settings.

"Deepfakes" have dominated the broader discussion around image generation, not only because they pose a vexing problem to civilization, but also because existing tools lack any built-in chain of trust mechanic—which limited our collective imagination to their use for political disinformation and revenge porn. But because a social network limits your videos to starring you and the close friends who've given you permission to use their likeness, OpenAI was actually able to strengthen the app's other guardrails in the process. That means that while other image and video generators let you get away with posting images of real people as a starting point to work from, Sora disallows naming any person or uploading any image of a human into a prompt. Instead, you can only @mention users on the network who have created a "cameo" avatar, who have given you permission to use it, and whose preferences don't disallow your specific prompt.

Suddenly, AI deepfakes are (relatively) safe and fun. They star you and your consenting friends. If you piss your friends off, they can delete the videos you put them in. If you keep doing it, they won't be your friends for very long. The platform will surely be under-moderated, but by defaulting to a mutual-follower consent scheme, many of the abuse vectors will be self-policing in practice. (I'm emphatically not saying here that Sora won't result in a raft of psychosis diagnoses and teen suicides, by the way. We are super-duper cooked.)
As for the success of the platform, only time will tell if the novelty of unreality wears off, or if the presence of likenesses we know and recognize paired with imagery that previously required blockbuster movie budgets will be enough to hold our attention. Based on the success of 4o image generation (https://openai.com/index/introducing-4o-image-generation/), OpenAI is betting on the latter. I suspect that the platform will only pick up steam following substantial improvements to both the model (improved temporal consistency, complex directorial technique) and the interface (longer videos, sharing tools, storyboarding, series/playlists).

## Trust in truth giving way to trust in falsity (https://justin.searls.co/posts/is-sora-the-future-of-fiction/#trust-in-truth-giving-way-to-trust-in-falsity)

Influencer-dominated video platforms have been broken for a long time, in part because their economics depend on winning an audience's trust to influence people towards doing or buying things that reward the influencer. That trust is built on the assumption that the influencer's videos are based in reality. After all, it's a real camera, pointed at a real product, from a real person the viewer has followed for months or years. Besides, why would they lie?

They lie, it turns out, because making any kind of money on these platforms is an exhausting, tenuous hustle. Maintaining an audience large enough to make a living as an influencer requires constantly feeding the beast with content. The sponsors that pay the most are the ones whose products won't get as much reach without a high-trust endorsement, which results in the most scammy products and services offering the highest rates. This pushes influencers to make not-so-truthful claims, even if it means selling their audience down the river in the process. Sora sidesteps all of this because it's all lies all the time.
People's trust in institutions and the veracity of what we see on screens is already at an all-time low, and the spread of AI-generated video that's this good will only erode that trust further. Sora-the-platform doesn't accept real videos, but videos generated by Sora-the-model will quickly begin infecting "real" spaces like YouTube, TikTok, and Instagram—causing people's trust in what they see on those platforms to fall even further. This dynamic gives OpenAI an absolutely diabolical epistemic edge, where the Sora app can be authoritatively false while the other platforms can never be authoritatively true. Users will be able to let their guard down using Sora, but they'll have to be more vigilant than ever on Instagram.

If this strikes you as ridiculous, consider that we're simply talking about works of fiction versus works of nonfiction. Right now, every video you consume online is assumed to be nonfiction until you read the comments and learn the video was staged, the creator's body was enhanced with AI post-processing, or the sponsored product causes rectal cancer. So even without Sora, we're currently trapped in this uncanny valley where every video platform is perceived as hosting fictional nonfiction, which results in nothing ever being fully entertaining or fully informative.

Sora, meanwhile, is a platform that can only host fiction by definition. And given that about half of media consumption (https://variety.com/2022/digital/news/cta-user-generated-content-study-1235146175/) is scripted content from legacy media companies, there's still a pretty good market for fiction out there! So if you're just looking to kill a few minutes on your phone while you wait in line at the bank, you're probably going to take the path of least resistance—it's why you open Instagram or TikTok in the first place.
But those apps offer a slurry that's part-entertainment, part-information, and part-commerce—a feed that has some funny videos, sure, but also sells you bullshit supplements, radicalizes your father, and exacerbates your daughter's body image issues. Sora might offer a path of even less resistance: mindless entertainment that allows you to safely turn your brain off. Because all the content is fake, you don't have to be on guard against being fooled. The upshot here is that when there's no platform you can trust to be real, the next best thing is a platform where you can trust everything is fake.

## What kind of content will succeed (https://justin.searls.co/posts/is-sora-the-future-of-fiction/#what-kind-of-content-will-succeed)

Let's get this out of the way: lots of content won't work on Sora—especially most influencer niches. If you're famous on Instagram for flaunting your lavish lifestyle and using it to sell garbage-tier sponsored products, what would you even do with Sora? Show off a fake house? Hawk a fake product? You can't upload "real" video to Sora, so what are your sponsors going to do when your gas station energy drink collab is made to look more like a Red Bull once it passes through the AI model? Even if you could place the products perfectly, entire categories of content that influencers have made profitable won't find much to do on Sora. Who wants fake lifestyle videos, cooking recipes, beauty/fashion tips, fitness routines, gameplay footage, or political ragebait?

What does that leave? Entertainment. The creativity and production value of Hollywood scripted entertainment, crossed with the potential for virality of democratized user-generated content. It's as if Quibi (https://en.wikipedia.org/wiki/Quibi) and Vine (https://en.wikipedia.org/wiki/Vine_%28service%29) had an AI baby. What this adds up to is that Sora is less a tool for influencers and better suited to out-of-work Hollywood script writers.
One reason Sora is unlikely to take off until it supports longer-form video and adheres more closely to script-like prompts is that engaging fiction depends on preserving authorial intent. It can already do some pretty wild shit, but Sora doesn't give creators enough to work with to compete with a TV series. We'll get some funny cutaways and creative images, but there's only so much you can do in ten seconds. But if Sora can get any kind of foothold and stick around, those limitations will be lifted. People forget this, but YouTube only supported videos shorter than 10 minutes for its first five years and 15 minutes for its first ten. Today, user-generated YouTube content is giving legacy media a run for their money on widescreen televisions (https://www.fastcompany.com/91288106/youtube-tv-living-room-what-that-means-for-creators-and-viewers) and you can barely find a video shorter than 20 minutes on the platform anymore.

But Sora doesn't need to become an overnight cultural sensation to be valuable. OpenAI will keep funding it, because the research suggests that video generation will unlock the future of general-purpose vision models (https://simonwillison.net/2025/Sep/27/video-models-are-zero-shot-learners-and-reasoners/#atom-everything), and those vision models are the key to autonomous robotics in the real world. And that's the ultimate promise of all these trillion dollar valuations.

## What can people use Sora for now? (https://justin.searls.co/posts/is-sora-the-future-of-fiction/#what-can-people-use-sora-for-now)

Setting aside speculation as to what all this means and where things are going, what OpenAI shipped this week is nothing short of extraordinary exactly as it is. For the first time, a platform can bring visual ideas to life with shockingly little effort. By boxing out "real" content, the platform will reward ingenuity and cleverness over social status and superficial aesthetics.
Here are a few ways Sora shines today:

• Short-form comedy in the spirit of Vine
• Meme/GIF generation
• Inside jokes and skits among friends
• Design inspiration (like Pinterest or Behance (https://www.behance.net))
• Stock video for B-roll and cutaways for use in other videos
• The visual equivalent of lo-fi hip-hop (the feed even has a "mood" filter)
• Visual prototyping and virtual screen tests for traditional video production
• Fan edits and shipping (https://en.wikipedia.org/wiki/Shipping_%28fandom%29) of known characters (note OpenAI changed its intellectual property policy to an explicit opt-out (https://www.digitalmusicnews.com/2025/09/30/sora-2-opt-out-option/))

Weirder ideas that come to mind:

• Capturing dreams (this morning I typed what I remembered into Sora before it faded)
• Lore and world-building videos and remixes
• Synthetic nostalgia and retro-futurism
• Visualizing an alternate life (with kids (https://arresteddevelopment.fandom.com/wiki/Mommy,_What_Will_I_Look_Like%3F), without kids, with pets, being more attractive, speaking another language)
• False childhood tapes and future video postcards
• Public hallucination challenges (e.g., the Tide Pod, cinnamon, or ice bucket challenges, but expressed through prompts and remixes)
• Psychological horror and grotesque/absurd glitch art
• Ghost messages from someone who published a cameo but has since passed away in real life
• Private messaging:
  • Families sending video greeting cards of imagined gatherings for birthdays or holidays
  • Asynchronous role-play between friends
  • Long-distance relationships visualizing co-located experiences

Is all this stuff creepy as shit? Yep. Will people actually do any of this? No clue. I'm a sicko, so I will. But the safe money is always on nothing ever changing and people sticking to whatever they're already doing.
What's less debatable is that the world has never seen anything like this, and it's unlikely we'll be able to fully process its impact until humanity has a chance to catch up (by which point, the tools will only be better). Hold your loved ones close, everybody! Shit's getting weird. 🫠
justin.searls.co
searls.bsky.social
For anyone who wants to be deepfake buddies, here's my Sora profile. 🫠 sora.chatgpt.com/…
searls.bsky.social
Without this post, it would require a conspiracy on the scale of a mass corporate takeover to explain Ruby Central's actions. In context, Occam's razor suggests a much smaller incident represented the last straw following a decade of unresolved conflict justin.searls.co/…