
Psst… We received a tip…

Meta just patented your ghost…
You are scrolling late at night when his name appears in the comments. Same profile photo. Same style. Same little sarcastic jab he always used when someone posted a bad take. Your stomach drops because you were at his funeral six months ago.
You read it twice, then a third time, like your eyes are the problem. Is this some sick joke? Your brain races to find a normal explanation before it accepts the creepy one. It's Meta's new feature that can keep dead people's profiles alive.
This week's patent from Meta describes a language model trained on a user's past behavior to predict and generate social responses in that user's style, a digital-ghost idea that Meta has patented and may still never deploy.
Here’s the inside scoop


Meta filed this patent for a system that turns past behavior into predicted behavior.
And that is what makes it interesting. The creepy part is possible (and even explicitly mentioned as a use case in the patent itself), but the real signal is that Meta wrote down a repeatable way to train a model on one person's interaction history, then use it to generate that person's likely response to new social content.
HOW IT WORKS:
At a high level, the system starts with a normal language model that already knows how to produce text. Then the platform feeds it your history. What you wrote and what you did. Comments, reactions, shares, and other platform actions can all become training material, depending on what data is allowed to be used.
The patent also describes a permissions layer, which is a big detail people skip. The user can set boundaries on what kinds of interactions may be used for training, so in theory the system can exclude certain data types like private or sensitive categories if the platform is designed that way.
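That permissions layer can be pictured as a simple allow-list over interaction types. This is a minimal sketch, not Meta's implementation; the `Interaction` type and `ALLOWED_KINDS` set are hypothetical names chosen for illustration.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    kind: str   # e.g. "comment", "reaction", "share", "private_message"
    text: str

# Hypothetical user-set boundary: only these interaction types
# may be used as training material.
ALLOWED_KINDS = {"comment", "reaction", "share"}

def training_corpus(history, allowed=ALLOWED_KINDS):
    """Keep only the interactions the user has permitted for training."""
    return [i for i in history if i.kind in allowed]

history = [
    Interaction("comment", "nice take"),
    Interaction("private_message", "keep this between us"),
    Interaction("share", "check this out"),
]
corpus = training_corpus(history)  # private_message is excluded
```

The point of the sketch is that exclusion happens before training ever sees the data, which is what makes the permissions detail more than a UI checkbox.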
Next comes the personalization step. Meta takes that base model and retrains it on the target user's data so the model starts to learn a pattern for that one person.
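Conceptually, that personalization step means the model's behavior shifts toward one person's habits. A real system would update the weights of a language model; as a toy stand-in, here is a frequency-based "style profile" that learns a user's characteristic phrases from their permitted history. All function names are hypothetical.

```python
from collections import Counter

def style_profile(user_texts):
    """Toy stand-in for fine-tuning: tally the user's word habits.
    A production system would instead continue training an LM's weights."""
    words = Counter()
    for text in user_texts:
        words.update(text.lower().split())
    return words

def most_characteristic(profile, n=2):
    """The n words this user leans on most, a crude 'voice' signature."""
    return [word for word, _ in profile.most_common(n)]

profile = style_profile(["lol nice", "lol ok sure", "nice one lol"])
```

Even this crude version shows the shape of the idea: the same base machinery, pointed at one person's data, starts producing outputs biased toward that person's patterns.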
After that, a bot watches for content that is relevant to the user, such as posts in a feed or items the platform ranks as likely important.
When it finds a candidate post, the system builds a prompt with context: who posted it, how the user knows them, prior interactions around the same content, and other surrounding information that helps the model pick a response that feels more like the user and less like generic AI sludge.
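Assembling that context is ordinary string-building. This sketch shows the rough shape of such a prompt under assumptions of my own (field names and wording are invented, not taken from the patent):

```python
def build_prompt(post, author, relationship, prior_interactions):
    """Assemble social context so the model's reply can reflect
    who is involved and what has already happened in the thread."""
    lines = [
        f"Post by {author} ({relationship}): {post}",
        "Prior interactions on this thread:",
        *[f"- {p}" for p in prior_interactions],
        "Reply in the user's voice:",
    ]
    return "\n".join(lines)

prompt = build_prompt(
    post="Hot take: tabs beat spaces.",
    author="Alex",
    relationship="close friend",
    prior_interactions=["user reacted 'haha' to Alex's last post"],
)
```

The design choice worth noticing: the personalization lives partly in the fine-tuned model and partly in this prompt, because relationship context is what separates "plausible comment" from "comment this person would actually leave."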
Then the model predicts the interaction. That prediction can be a text response, like a comment, and it can also be a platform action, like a reaction or another engagement move. The system can then use a bot to carry out that predicted response.
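Because the prediction can be either text or a platform action, the execution step is essentially a dispatcher. A minimal sketch, assuming a hypothetical `(action, payload)` prediction format of my own invention:

```python
def execute(prediction):
    """Carry out the model's predicted interaction via a bot account.
    Assumes predictions look like ('comment', text) or ('reaction', kind)."""
    action, payload = prediction
    if action == "comment":
        return f"posted comment: {payload}"   # bot writes the comment
    if action == "reaction":
        return f"added reaction: {payload}"   # bot taps the reaction
    return "no action"                        # unknown prediction: do nothing

log = execute(("reaction", "haha"))
```

Note the default branch: a system like this needs a safe "do nothing" path, since an unrecognized prediction acted out in public is worse than a missed engagement.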
There is also a more advanced twist in the patent where multiple versions of the user model can be trained for different ages or life stages. That is Meta trying to claim more ground, including versions of you over time.
Publishing the future

The most likely outcome may be that Meta never ships the full thing, and the patent still matters a lot. It has been reported that Meta currently has no plans to implement this feature. That sounds backwards until you remember what patents are really for at this level. They are options, leverage, and fences around territory you may want later.
That matters because this patent does solve a real problem, even if the obvious version of it feels creepy. Platforms, creators, and high-volume communicators all face a scale problem. Too many messages, too many notifications, too much social maintenance, and not enough human time to keep up.
But the same thing that makes it useful also makes it unpopular. The product value comes from sounding like you and acting like you. The public backlash comes from sounding like you and acting like you. If people cannot tell when they are talking to a person versus a trained proxy, trust on the platform starts to rot.
Regulators are already moving on adjacent categories like AI companions, youth safety, and synthetic content disclosure. Even if Meta never deploys a "social ghost" product, the legal and policy climate around emotionally persuasive AI and unlabeled synthetic interaction is getting tighter, not looser. See Reuters on New York and California drawing early lines around AI companions, Reuters on Australia's AI age-crackdown push, and the European Commission on its Article 50 transparency code of practice process.
Meta may also be patenting this because timing is everything. Today, the reputational risk could outweigh the upside. Tomorrow, the same underlying system might be reframed into safer products like private drafting tools, accessibility aids, or consent-based delegation systems. Filing the patent now preserves the option to pivot later when the market and rules are clearer.
So yes, this could be popular in a narrow sense. It solves labor, speed, and engagement problems that platforms and creators actually have. But it could also be deeply unpopular for the exact same reason, because people do not just want efficient social interaction. They want to know who is really there. (www.ofcom.org.uk)
And that is the real market question. Is this a product category, or is it mostly a pressure wave that creates demand for compliance tools and safer human-in-the-loop versions? Meta may be betting that even if the headline product never ships, the underlying IP still sits at the center of that future.
New York and California were among the first U.S. states to draw legal lines around AI companion systems, including disclosure and safety-oriented requirements. Regulation of AI is arriving early, around adjacent harms, which raises the cost of shipping anything that feels deceptive or emotionally manipulative.
A system like this also has an obvious scam angle, where a scammer can mimic how a person talks, what they usually reply to, and when they tend to respond. This makes phishing, fraud requests, and impersonation pitches much harder to spot. Regulators are already warning that voice cloning makes scams more convincing precisely because people trust familiar voices and patterns, including calls that sound like your boss or a family member asking for money or sensitive information. (Consumer Advice)
Italian police froze nearly €1 million after a businessman was tricked by fraudsters who allegedly used AI to mimic the voice of Italy's defence minister, Guido Crosetto. The scammers reportedly targeted multiple high-profile business figures and persuaded at least one victim to send money by making the calls sound official and urgent.
The patent press travels far and wide…

Extra! Extra! Read All About It!
The market for "AI that talks like a human on your behalf" is a stack of adjacent markets that already get budget: social media management, automated business messaging, customer support automation, and conversational commerce. Different labels, same buyer pain. Too many conversations, not enough humans.
You can see that in the public company numbers first. Sprinklr manages digital conversations at scale, selling enterprise software that big brands use to run social media, customer care, and digital customer experience across channels. It reported $796.4 million in fiscal 2025 revenue, including $717.9 million in subscription revenue.
Manychat builds automated messaging tools for businesses and creators across Instagram, WhatsApp, Messenger, TikTok, and similar channels. It raised $140 million in 2025 and said total funding reached $163.3 million. TechCrunch also reported Manychat had about 1.5 million customers across 170 countries, sends "billions" of messages annually, and was operating around break-even.
Gupshup is a business messaging and conversational AI platform used by companies to run customer communication across channels, especially in mobile-first markets. TechCrunch reported a $60 million 2025 round (equity plus debt), alongside company claims of 50,000+ customers in 100+ countries and 120+ billion messages annually. Earlier, Reuters reported Gupshup raised $100 million in 2021 at a $1.4 billion valuation.
Big checks are going to tools that help brands and large pages answer DMs and inbound messages faster, with better conversion and less headcount. Same underlying direction. Much safer product framing… as long as you can trust the bot won’t misrepresent your product to the customer!
Then there are non-social media customer engagement examples. Parloa is a German startup building AI customer service agents for contact centers and enterprise support teams. Reuters reported it raised $350 million in 2026 at a $3 billion valuation, with total funding above $560 million, and ARR above $50 million.
However, none of these numbers prove there is a giant market for the most controversial version of Meta's patent, a system that acts like a specific person in public social spaces. What they do prove is that the adjacent markets are already large, global, and well funded.
So the investor signal is about identity-aware automation rather than about funding ghost accounts. Think reply drafting, approval workflows, brand voice systems, and compliance tooling that proves what the AI generated and who approved it.
The paper boy always delivers

Meta may never ship this patent, and the flashy version may be too hot to touch, but automation in a unique personal voice could still be useful, especially for anyone operating at high communication volume.
What would your industry do with a system that can respond in your voice, and where would you draw the line? Ready to read the source for yourself? Dive into US 12,513,102 B2.
Know someone who'd love this?
Forward this to the person in your life who immediately texted you about the ChatGPT thing, the deepfake thing, or the "did you see what Meta did" thing. They'll thank you. Probably. And if they don't, at least you now have a documented record of their bad taste.
Tell us what you think!
We're a small publication with strong opinions and thick skin. Help us get better: reply to this email with what you genuinely like about Hot off the Patent Press, and if anything’s not working for you.
For the nerds

Manychat taps $140M to boost its business messaging platform with AI with TechCrunch: A useful look inside the commercial side of how automated replies for brands and creators can scale into a global business long before "digital ghost" products go mainstream.
AI companions meet the law: New York and California draw the first lines with Reuters: See how lawmakers are starting to draw legal boundaries around AI systems that involve parasocial relationships with humans.
Deepfake Defences 2: The Attribution Toolkit with Ofcom: Learn how labels, metadata, provenance, and watermarking are being treated as practical tools for synthetic media disclosure, and where those tools can fail in the real world.
Code of Practice for providers of general-purpose AI models: transparency and copyright-related rules with European Commission: Explore the EU's transparency work around AI-generated content to understand why disclosure and compliance infrastructure may matter as much as the model itself.
German AI startup Parloa triples valuation to $3 billion in latest fundraise with Reuters: A strong investor signal from the adjacent customer-service AI market, where buyers already pay for high-volume, human-like response systems.
