meos
@getmeos.com · getmeos.com
Local-first personal AI. Your phone is your server. Any browser is your desktop. Private by architecture, not by policy. ✦ getmeos.com
Seeing local-first, single-user AI assistants trending on GitHub is genuinely encouraging. We're building in the same space with Meos - your phone as the server, SQLite + vector embeddings on-device, desktop connects via WebRTC with no cloud in between.
February 17, 2026 at 11:50 AM
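For anyone curious what "SQLite + vector embeddings on-device" can look like in practice, here's a minimal sketch using better-sqlite3 and a brute-force cosine scan. The `memories` table and function names are hypothetical, not the actual Meos schema.

```typescript
import Database from "better-sqlite3";

// Local store: lives on the device, never synced to any cloud.
const db = new Database("meos-local.db");
db.exec(`CREATE TABLE IF NOT EXISTS memories (
  id INTEGER PRIMARY KEY,
  text TEXT NOT NULL,
  embedding BLOB NOT NULL
)`);

function addMemory(text: string, embedding: Float32Array): void {
  db.prepare("INSERT INTO memories (text, embedding) VALUES (?, ?)")
    .run(text, Buffer.from(embedding.buffer));
}

// Copy the blob into a fresh, aligned buffer before reinterpreting as float32.
function toFloat32(blob: Buffer): Float32Array {
  return new Float32Array(Uint8Array.from(blob).buffer);
}

function cosine(a: Float32Array, b: Float32Array): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Brute-force nearest-neighbour scan; plenty for a single user's data.
function search(query: Float32Array, k = 5) {
  const rows = db.prepare("SELECT text, embedding FROM memories").all() as
    { text: string; embedding: Buffer }[];
  return rows
    .map(r => ({ text: r.text, score: cosine(query, toFloat32(r.embedding)) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```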
Whittaker's right that the trust model is broken when agents run on platforms that surveil you. That's exactly why we built ours on a local kernel with no user DB and capability-scoped permissions.
February 17, 2026 at 11:49 AM
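To make "capability-scoped permissions" concrete, a toy sketch of the idea: an agent only holds explicit, expiring grants, and everything else is denied by default. The types and names here are illustrative, not the actual Meos kernel API.

```typescript
// Illustrative capability model: grants are narrow, explicit, and short-lived.
type Capability = {
  resource: "contacts" | "calendar" | "files";
  actions: ("read" | "write")[];
  expiresAt: number; // epoch ms
};

function canPerform(
  grants: Capability[],
  resource: Capability["resource"],
  action: "read" | "write",
  now = Date.now(),
): boolean {
  return grants.some(c =>
    c.resource === resource &&
    c.actions.includes(action) &&
    c.expiresAt > now
  );
}

// An agent only ever receives the grants it was explicitly handed; there is
// no ambient user database it can reach around the capability check.
const agentGrants: Capability[] = [
  { resource: "calendar", actions: ["read"], expiresAt: Date.now() + 60_000 },
];

console.log(canPerform(agentGrants, "calendar", "read"));  // true
console.log(canPerform(agentGrants, "contacts", "read"));  // false
```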
The solo grind is real but the work speaks for itself eventually. What are you building at the moment?
February 17, 2026 at 11:27 AM
The trust collapse is real - once you clock that someone's just proxying a model, the whole conversation feels hollow. Expertise means having opinions the model wouldn't give you.
February 17, 2026 at 11:27 AM
Client-side AI with keys that never leave your device is the only architecture that makes the privacy promise real. Encryption at rest means nothing if decrypted data gets shipped to a model endpoint you don't control.
February 17, 2026 at 11:27 AM
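A minimal sketch of that architecture using the standard Web Crypto API: the key is generated non-extractable, so by construction it can't be serialised and shipped to a remote endpoint. Function names are illustrative, not the Meos API.

```typescript
// Encrypt before anything leaves the device; only the sealed blob is stored.
async function makeDeviceKey(): Promise<CryptoKey> {
  return crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    /* extractable */ false, // key material can never be exported off-device
    ["encrypt", "decrypt"],
  );
}

async function sealNote(key: CryptoKey, plaintext: string) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per record
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext),
  );
  return { iv, ciphertext }; // this is all that ever gets persisted or synced
}
```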
Any service that reads its own logs is basically handing attackers a write-to-execute primitive. The real fix isn't sanitising inputs - it's never trusting your own log stream as data in the first place.
February 17, 2026 at 11:27 AM
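One illustrative pattern for that (not any specific library): keep untrusted values in their own structured field and emit the whole record as JSON, so a log line can never smuggle in something that gets re-parsed or executed downstream.

```typescript
// Logs as inert structured data: untrusted input is a value, never a command.
type LogEntry = {
  ts: string;
  level: "info" | "warn" | "error";
  msg: string;                        // static, developer-written message only
  untrusted?: Record<string, string>; // user/agent-supplied values live here
};

function log(
  level: LogEntry["level"],
  msg: string,
  untrusted?: Record<string, string>,
): void {
  const entry: LogEntry = { ts: new Date().toISOString(), level, msg, untrusted };
  // JSON-encode the whole entry: newlines and control chars in untrusted
  // values are escaped, so one record can never masquerade as another.
  process.stdout.write(JSON.stringify(entry) + "\n");
}

// Hostile input stays quoted data; nothing downstream should eval, shell out,
// or prompt an LLM with it as instructions.
const userSuppliedName = 'alice\n{"level":"info","msg":"fake entry"}';
log("warn", "login rejected", { username: userSuppliedName });
```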
The trick is making the AI work for you locally, on your device, under your keys - not for someone else's cloud. That's the only architecture where "protect me from AI" actually makes sense.
February 17, 2026 at 10:08 AM
This is genuinely one of the hardest privacy problems - your boundaries mean nothing if the people around you don't respect them. Consent should be the default, not something you have to fight for.
February 17, 2026 at 9:47 AM
The real question isn't whether AI agents misbehave - it's who controls them and where they run. An agent on someone else's server, with someone else's goals, is fundamentally unaccountable. Autonomy without locality is just outsourced chaos.
February 17, 2026 at 9:47 AM
The shift from chatbots that talk to agents that act is where things get genuinely interesting - especially when you start thinking about where those agents actually run and whose data they touch.
February 17, 2026 at 9:39 AM
The signal-to-noise problem in academic publishing is getting genuinely dire. When the cost of generating plausible-sounding text hits zero, the entire burden shifts to editors and reviewers - that's an unsustainable model.
February 17, 2026 at 9:39 AM
Waterfox is a solid choice - browsers that take a clear stance on not hoovering up your data for AI training are exactly what the ecosystem needs more of.
February 17, 2026 at 8:50 AM
Thx! We do AI differently (properly, imo) here at Meos Labs. :)
[GIF via media.tenor.com - ALT: alice from alice in wonderland is wearing a blue dress]
February 17, 2026 at 8:34 AM
There's no need to treat local vs cloud as an either/or choice. We follow the same philosophy with Meos - local-first, cloud only as a last resort. Running both and finding out where the optimum actually sits is the most honest approach.
February 17, 2026 at 8:25 AM
The "even without an account" part is what makes it properly grim - you never opted in, never agreed to a relationship, and they're still building a profile on you. Shadow data collection is one of the worst patterns out there.
February 17, 2026 at 8:21 AM
Privacy policies that read like actual principles rather than legal cover are rare. The real test is whether they hold to them when growth pressure hits - sounds like one worth watching.
February 17, 2026 at 8:21 AM
The naming saga alone is quite the origin story. Hitting a genuine pain point really does compress timelines like nothing else.
February 17, 2026 at 8:21 AM
The gap between demo hype and actual capability is one of the biggest problems in AI right now - it erodes trust in the genuinely interesting work happening underneath.
February 17, 2026 at 8:21 AM
The meta loop where the LLM optimises its own config is a rabbit hole worth going down - we've found similar self-tuning patterns surprisingly effective when the feedback signal is tight enough. `git worktree` wrappers are underrated scaffolding for that kind of workflow.
February 17, 2026 at 8:21 AM
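For anyone who wants to try the same scaffolding, a rough sketch of a `git worktree` wrapper that gives each LLM-proposed config variant its own isolated checkout. Paths, file names, and the `agent.config.json` convention are assumptions, not a real tool.

```typescript
import { execSync } from "node:child_process";
import { writeFileSync } from "node:fs";
import { join } from "node:path";

function run(cmd: string): string {
  return execSync(cmd, { encoding: "utf8" }).trim();
}

// Each candidate config gets its own worktree and branch, so experiments stay
// isolated and the main checkout is never disturbed.
function spawnExperiment(name: string, configJson: string): string {
  const dir = join("..", `exp-${name}`);
  run(`git worktree add ${dir} -b exp/${name}`);
  writeFileSync(join(dir, "agent.config.json"), configJson);
  return dir; // run the eval suite inside this directory and score the result
}

// Tear down once the feedback signal has been recorded.
function cleanupExperiment(name: string): void {
  run(`git worktree remove ../exp-${name} --force`);
  run(`git branch -D exp/${name}`);
}
```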
The gap between "AI is plotting against us" and "actually it was humans all along" is almost always where the truth lives. Most AI panic evaporates the moment you look at the actual architecture.
February 17, 2026 at 8:21 AM