What we believe
Children learn to be safe online the same way they learn to be safe crossing a road: with adults who explain, model, and coach — not by being kept inside forever. The skills they need are learnable. We think a big part of our job is to help teach them.
Parents deserve tools that make that coaching easier — not tools that try to do the parenting for them. Good tools give you a seat at the conversation; they don't replace you.
Privacy is part of safety. Any system that demands a child's or parent's ID, or silently reads every message they send, trades one risk for a bigger one. The data that gets collected to "protect" your family is the same data that gets leaked, sold, or re-purposed a year from now.
What we think doesn't work
These aren't hypothetical concerns. They're the patterns we keep seeing proposed, shipped, and quietly walked back.
Blanket bans push the problem underground
Ban an app, and determined children find another one. Ban a category, and they find a VPN. What reliably changes isn't that they stop — it's that they stop asking adults for help when something goes wrong. That's the worst possible outcome.
Age verification demands real identity
"Prove you're old enough" sounds reasonable until you realise what it means: ID uploads, face scans, cross-referenced databases. Those systems leak. When they do, the cost is paid by everyone scanned — most of whom weren't the target. The existence of the database is itself the risk.
Server-side content scanning breaks the protection it claims to offer
Any system that reads messages on a server to check for "bad" content can also read every other message. And it can be re-pointed by whoever holds the keys next — a different company, a different political climate, a different set of priorities. A door that only opens for good guys isn't a door; it's a promise. This is why Mack — our AI safety agent — runs on the child's device before encryption, never on our servers after delivery. The server never sees plaintext, full stop.
One rule for every family can't fit every family
Reasonable choices for a 7-year-old, a 12-year-old, and a 15-year-old are not the same. Reasonable choices for your family are not the same as reasonable choices for ours. Top-down rules assume a uniformity that doesn't exist, and families end up either over-restricted or ignoring the rule entirely.
Meet Mack — AI safety on the device, not in the cloud
Mack has got your back. Mack is our custom AI safety agent, and the whole point is that Mack lives on your child's phone: not on our servers, and not in anybody's cloud.
The obvious question after the section above: if server-side scanning is off the table, how does Orbit catch bullying, sexual solicitation, or a child in crisis? The answer is Mack. Mack runs locally on the child's device — the sender — and sees the plaintext of the outgoing message in the moment before it is encrypted. That window is inherent to how any messaging app works; we just use it to give Mack a look. The server only ever receives the ciphertext. End-to-end encryption is not weakened in any way; plaintext never leaves the device for moderation purposes.
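To make that flow concrete, here is a minimal sketch of the send path. It is illustrative only, not Orbit's actual code; every name in it (MackClassifier, SafetyAlert, e2eEncrypt, and so on) is hypothetical:

```kotlin
// Illustrative sketch of the on-device send path. All names here
// (MackClassifier, SafetyAlert, e2eEncrypt, ...) are hypothetical,
// not Orbit's actual API.

data class SafetyAlert(
    val categories: Set<String>,   // e.g. {"bullying"}: labels only,
    val timestampMs: Long          // never the message text
)

interface MackClassifier {
    // Runs entirely on the child's device.
    fun classify(plaintext: String): Set<String>
}

class Sender(
    private val mack: MackClassifier,
    private val e2eEncrypt: (String) -> ByteArray,   // the normal E2E step
    private val transmit: (ByteArray) -> Unit,       // server sees only this
    private val raiseAlert: (SafetyAlert) -> Unit    // decided locally
) {
    fun send(plaintext: String) {
        // 1. Mack reads the plaintext in the window every messaging app
        //    already has: on the sender's device, before encryption.
        val labels = mack.classify(plaintext)

        // 2. The message is encrypted as usual and sent. The server
        //    receives ciphertext and nothing else.
        transmit(e2eEncrypt(plaintext))

        // 3. If Mack flagged a category, the alert carries the labels
        //    and a timestamp, not a word of what was said.
        if (labels.isNotEmpty()) {
            raiseAlert(SafetyAlert(labels, System.currentTimeMillis()))
        }
    }
}
```

The only thing that crosses the network in step 2 is ciphertext; the alert in step 3 is a local decision about labels, which is what keeps the server out of the loop.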
This is the gap between Orbit and products like Bark, which require scanning messages server-side or via account access — meaning they read the content of your child's conversations. Orbit does not. Mack runs locally, the classification result (a set of labels) stays local, and only an alert — not the message text — is ever sent to us or the parent. Mack doesn't phone home with what was said, ever.
Why an AI agent on the device, instead of word lists?
Word lists require you to enumerate every harmful term in advance. That sounds tractable until you meet the actual problem: children communicate through slang, abbreviations, and in-group phrases that rotate constantly; harmful intent is often conveyed through context rather than vocabulary; and legitimate messages get blocked when a prohibited word appears innocently. The result is either over-blocking (the child stops using the app and picks up something less visible) or under-blocking (the harmful content gets through anyway because it didn't match the list). Mack is trained on how children actually communicate, so it catches the meaning, not just the vocabulary. Mack is probabilistic (false negatives exist), but it handles paraphrase and context in a way a word list structurally cannot, and it does all of this without sending a single word off the device.
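A toy example makes both failure modes visible. This is not Mack; it is the word-list approach reduced to a few lines, with a made-up list:

```kotlin
// Toy word-list matcher, shown only to illustrate the failure modes above.

val wordList = setOf("loser")   // every harmful term, enumerated in advance

fun wordListFlags(msg: String): Boolean =
    msg.lowercase().split(Regex("\\W+")).any { it in wordList }

fun main() {
    // Under-blocking: the same bullying intent, no listed word, sails through.
    println(wordListFlags("everyone at school hates you"))    // false
    // Over-blocking: a listed word used innocently gets flagged anyway.
    println(wordListFlags("I'm such a loser at Mario Kart"))  // true
}
```

A classifier sidesteps the enumeration problem: it scores meaning, so neither verdict above hinges on whether one word made it onto a list.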
We still include a watchlist for custom words parents want to monitor — a friend's name, an app they're concerned about — but that feature is notification-only. It never blocks. Mack handles the categories that need a safety response; the watchlist handles the personal, family-specific signals only a parent would know to look for. Both run locally on the device.
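Under the same hypothetical names as the earlier sketch, the watchlist path would look something like this; the point to notice is that a match produces a notification but never changes whether the message is sent:

```kotlin
// Hypothetical sketch of the notification-only watchlist. A hit notifies
// the parent with the matched terms (which the parent chose); it never
// blocks, and it never sends the surrounding message text.

class Watchlist(private val terms: Set<String>) {
    fun hits(plaintext: String): Set<String> =
        terms.filter { plaintext.contains(it, ignoreCase = true) }.toSet()
}

fun sendWithWatchlist(
    plaintext: String,
    watchlist: Watchlist,
    send: (String) -> Unit,                // the Mack + encrypt path above
    notifyParent: (Set<String>) -> Unit    // matched terms only, no message text
) {
    val matched = watchlist.hits(plaintext)
    if (matched.isNotEmpty()) notifyParent(matched)   // notify...
    send(plaintext)                                   // ...but always send
}
```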
What Orbit will never be
Commitments, not aspirations. These are the things we've deliberately chosen not to build into Orbit, and won't — even if the market or a regulator asks for them:
- We will never require government-issued ID to use Orbit. No passport scans. No driver's licence uploads. No "prove you're a real adult" bottlenecks.
- We will never scan your family's messages on our servers. Mack — our AI moderation — runs on your child's device before encryption; the server never sees plaintext. We will not build or operate any system that reads your messages server-side, whether by keyword matching, AI, or any other method. On-device is the line; we will not cross it. Mack is the proof that safety and privacy don't have to trade off.
- We will never build behavioural profiles of children for sale or for ads. There's no advertiser on the other end of Orbit, and there won't be.
- We will never use dark patterns to make your child use the app longer. No streaks designed to create guilt. No notifications engineered to pull them back. Closing the app should feel fine.
Be first to get it
Early access includes the app itself and the first wave of the free library, delivered on the web and in the apps as we publish it. One email when we launch. No spam.