
AI together

Today, using AI is almost always a solo endeavor.

From the very start, forty years ago, the internet (inter + network) has amplified the exchange of information. But the first cycles of AI have been just one person at a time. No one joins in your chats, no one sees them. It’s just you and a disembodied voice in a 1:1 chat.

At purple.space, we’re testing a new approach. In some of our discussion categories, we’ve invited one or two AI voices to join the conversation. The result is a powerful new sort of interaction among peers. Instead of going off on irrelevant tangents or isolating us from reality, the presence of multiple people engaging together changes the tone and focus of the conversation.

AI together has some useful features:

  1. important conversations don’t get stalled. It’s always there to add one more cycle of progress.
  2. voices aren’t overlooked. In high school and online, the cool kids get all the attention. Not here.
  3. nuance can be surfaced. The AI isn’t trained to look for applause or clicks. But it remembers what’s come before.
  4. we’re less likely to spiral. Many of the negative consequences of consumer AI usage are caused by loneliness, isolation and gullibility.
  5. patience is a virtue. Instead of rushing to move on, the conversations can continue and become deeper.

More generally, the AI business models of many smaller companies that use the big LLMs as tools are fundamentally flawed. They’re busy selling cost reduction, which works for a while, but value creation happens in a network. If you’re offering someone a chance to save time and money, they’ll switch the moment a competitor offers them a chance to save even more time or money. But if you can build a useful network, a system that works better when you’re in it, then your brand gains traction.

I wrote this thought piece four months ago; I’m sharing it here in case it’s helpful:

But is it a business?

An opportunity for bootstrapped startups that seek to create value using AI.

Most AI success stories to date are about cost reduction or speed improvement. A startup offers businesses a way to get more done with fewer people, replacing customer service or programming teams with bots. The upside of cost reduction is that it’s a very easy sale—give the client a free sample, once it’s demonstrated to work, they have an instant benefit in switching.

The downsides: it’s difficult to win a race to the bottom, since someone can always promise more savings than you. And it’s finite—once the savings are made, there’s no incremental value left to create. And the human costs are real and persistent.

What’s worth seeking instead is something generative. A use of AI that doesn’t just reduce costs but creates value. It opens new opportunities, leads to growth, connection, and utility.

A note on “worth paying for.” Most bootstrappers aim at price-sensitive customers and then wonder why growth is hard. The insight they miss: people and organizations with expensive problems and real resources don’t just tolerate paying more for things they value—they prefer it. Premium pricing signals seriousness. The market to aim for is one where the problem is real, the budget exists, and the solution creates something they couldn’t get otherwise.

What people actually pay for. At the foundation of almost every premium purchase are three primal drives: status (I matter, people like me see me as significant), affiliation (I belong, there are people like me and they accept me), and freedom from fear (I am safe, the threat is not coming). Freedom from fear may be the most primitive—you can’t pursue status or affiliation while in survival mode. And most premium purchases are freedom from fear, wearing a costume.

Built on those roots is a middle layer of things that reliably deliver one or more of what people seek: legitimacy, transformation, belonging to a narrative, control, certainty, protection, trust, health and longevity, and leverage.

And an outer layer that’s instrumental—things people buy because they deliver the middle layer: access, capital, time, attention, convenience, efficiency, delight, new experiences, beauty.

Commodities—food, shelter, sex, addictive substances—sit outside this hierarchy. They don’t build toward the three roots; they temporarily suppress the anxiety that comes from not having them.

The implication: an AI business worth building delivers something from the middle layer, justified by the outer layer.

What businesses actually pay for. The hierarchy for individual consumers doesn’t translate directly to organizational purchases. Three drives sit at the foundation of almost every business buying decision:

Avoid blame — if this goes wrong, it won’t be my fault. The IBM principle: nobody ever got fired for buying the market leader. The champion inside the organization needs a defensible story before they’ll act.

Claim credit — I brought something that worked, and people noticed. The flip side of blame avoidance, and the engine of every internal champion. If your solution lets someone look good, they’ll sell it for you.

Reduce uncertainty — we can plan around this, and the chaos goes down. Organizations pay significant premiums for the ability to forecast, commit, and stop worrying.

Built on those roots is a middle layer of things organizations reliably invest in: growth, efficiency, compliance, competitive advantage, talent, morale, resilience, optionality, speed, legitimacy, and relationships.

And an outer layer that justifies the middle: cost savings, time savings, data, access, convenience, integration, reporting, support.

New vs. repeat purchases require different approaches. Repeat purchases are won by switching costs, relationships, and the network effect. New purchases require someone inside the organization to become a champion, which means they need a story that serves their career, not just their company’s interests.

Not all problems are equally interesting. Some purchases—like gaining market share or entering a new category—are chaotic and interesting, with room for narrative and ambition. Others—like cheaper materials or faster processing—are grinding commodities where the only story is price. Commodity buyers fear paying too much. Buyers in chaotic spaces fear making the wrong bet. These require entirely different approaches.

The forcing function. Businesses rarely lead the way on new purchases without a crisis compelling them to take action. Without a forcing function, even a perfect solution sits in the pipeline forever—committees form, pilots stall, champions get reassigned.

Three kinds of crises create forcing functions:

Competitive crisis — a rival did something, and now there’s urgency. This is the most common and the most legible to a champion inside the org. “They have it, and we don’t” is a sentence that ends meetings and allocates budgets.

Technology crisis — the old way stopped working, or a new capability made the old way look reckless. AI itself is currently creating this for many industries simultaneously. Unusually, the forcing function and the tech solution are the same thing.

Public/market upheaval — regulatory change, cultural shift, a collapse in input costs, a pandemic. These are the most powerful and least predictable. They create entirely new categories of buyers overnight.

The implication for a bootstrapper: sell into a forcing function that already exists, don’t try to create one. Organizations already feeling the crisis don’t need convincing—they need a solution that lets their champion say “I found it.” And crises often intensify the asymmetric information pattern: the people inside the crisis don’t yet know what others in similar situations have already learned.

NOTES:

Naked AI is a trap. If all you’re doing is building a gateway to Anthropic or ChatGPT, your token costs eat a significant portion of your revenue—and you have no defensible position.

Hidden prompts are insufficient. Breakthrough prompting can create real value, but there’s no protectable, convenient way to sell it as a business.

The network effect matters. Selling benefits one person at a time is brutally expensive. The breakthroughs come with projects that have the network built in—where interactions work better when your colleagues are using them too.

Asymmetric information is a pattern worth seeking. Some of the most durable advantages come not from network effects but from knowing something others don’t, or from helping a cohort pool what they know against a party that currently has structural information advantage over them.

So, a theory of profit—a framework for the kind of project worth building:

  1. Creates its own useful data stack. The data doesn’t need to be large to be valuable—it needs to be specific and trusted. Over time it informs the AI/user experience. It belongs to users and the project, not to Anthropic or competitors. And it’s built to work for users, not to trap them.
  2. Has a built-in network effect. Either an engaged peer-to-peer community (where users see each other, not just the platform) or an obvious benefit to spreading the word.
  3. Solves an expensive problem for people with resources. The value delivered goes beyond saving time or money—it might be education, reduced fear, joy, reassurance, connection, or capability expansion. And it’s priced accordingly.
  4. Is bootstrappable. Specific and conceptual rather than infrastructural. No data centers, no thousand-person teams required to get started.

The spreading problem. Most products and services don’t spread on their own. The people who use them may love them, but love isn’t enough. What causes spreading isn’t enthusiasm—it’s tension. Specifically, the tension created for the person who has the product when someone they care about doesn’t have it yet.

This tension takes two forms:

Economic tension — I have an advantage you don’t, and either I want you to have it too (if you’re on my side) or I need you to have it to work with me (if we’re collaborating). Accounting software spreads through supply chains this way. Email spread this way. If you don’t have it, we can’t do business.

Social tension — everyone I know has access to something, and I don’t, and that gap is visible and uncomfortable. Facebook spread this way. So did smartphones. The tension isn’t that I’m missing a feature. It’s that I’m visibly outside of something.

The test isn’t “would someone recommend this?” Recommendations require enthusiasm plus low social risk plus the right moment. The test is: does not having this create a gap that someone is motivated to close?

A product that creates economic tension spreads through organizations and supply chains. A product that creates social tension spreads through peer groups and cohorts. A product that creates neither—no matter how good it is—requires advertising to spread, which means it needs a budget, which means it needs a business model that can support that budget before it has scale.

The implication for an AI bootstrapper: before asking “is this good enough that people will share it,” ask “does this create a gap that makes sharing feel necessary rather than optional?”

A note on data stack reality. A network built on user data is only as good as the willingness of users to populate it. And willingness requires two things simultaneously: it has to be frictionless enough that people don’t have to think about it, and it has to feel safe enough that people don’t have to worry about it. These two conditions are almost always in tension. The more automatic the data collection, the more it feels like surveillance. The more control you give people, the more friction you add. (In Purple, we make opting in to AI conversations completely optional, fyi).

The design challenge isn’t technical—it’s social. The container has to feel like a tool you control, not a system that watches you. Most ideas that satisfy one condition fail the other.

The most promising data stacks are ones where people are already generating the data, are already comfortable with it existing somewhere, and the innovation is simply giving them better access to what’s already theirs. The forcing function for consumer data sharing may be the simplest one of all: I already feel watched. I might as well get something back.

The cautionary version of this is the email surveillance tool—a business reads all internal email and gets a report on who’s helpful, who’s toxic, who’s looking for a job. The value is real and obvious. The fear is also real and obvious. And in most organizations, the fear wins. Any data-stack business has to answer the question: who controls this, and what happens if it goes wrong? If the answer isn’t immediately reassuring, the business doesn’t get adopted.

The sponsored model. Not every valuable AI business needs the end user to pay. When the problem is real, but the affected population lacks resources, a foundation, brand, or institution with aligned interests can fund the miracle instead. The economics flip entirely: instead of acquiring thousands of customers one at a time, you close one relationship with one institution that already has the distribution, the mission, and the budget. The user gets the miracle for free. The sponsor gets impact, data, or loyalty. The business gets a defensible contract instead of a leaky funnel.

This model works when three things are true: the population being served is large and underserved, the value created is legible to an institution that cares about it, and the data generated serves both the individual user and the sponsor’s mission. A foundation pays $2,000,000, and 40,000 families of the incarcerated each get a ten-page legal document instead of a bushel of random papers—and the accumulated data starts identifying patterns in bad actors that no single case could surface alone. A bank funds a personal finance tool for its own customers. A health brand funds a fitness coach for an underserved population. The viral problem becomes easier: you don’t need users to recruit each other to pay for it, you need one institution with existing distribution to say yes.

The cheap inference model. Not every AI application needs a frontier model. The problems worth looking for going forward might not be the ones that require maximum reasoning or nuance—they’re the ones that require structure, pattern recognition, aggregation, and organization at scale. Form filling. Document organization. Transcription plus summarization. Matching similar records across large datasets. These problems are unglamorous but enormous in volume and almost entirely underserved.

The strategic advantage: Moore’s Law is on your side. The models that feel too limited today will feel adequate in eighteen months. Building on cheap open-source inference means your margins improve as the technology does, without changing your product. And “at scale” doesn’t mean huge—a single business school graduating class is enough to populate a meaningful census of what jobs actually lead where. The data stack doesn’t need to be large. It needs to be specific, trusted, and ahead of what anyone else has assembled.

/end rant.

[If you’re building something that matters, I hope you’ll consider purple.space, a small and useful community of peer support.]


The best year

Here’s an interesting thought experiment. Imagine that you would be reincarnated into the soul and body of someone on Earth, 25 years old, at random.

You won’t know what you know now, you’ll simply live their life.

What year would you choose?

Since it’s random, it’s not about picking the best year to be among the wealthiest 1% (only a one-in-a-hundred chance) or even to be a US citizen (3% chance).

You might pick an era long ago, knowing that you would likely live half as long, but without a drumbeat of media stress and anxiety…

Or you could pick a year without the black plague or civil war, and with useful medical and community innovations…

What year gives you the best odds of living what you think of as a good life? Not just that year, but the years that follow…

It forces us to consider entitlement, hope, possibility and what matters. And it’s not a bad way to set an agenda.


Trained equanimity and a bias toward action

Pay attention to what’s in front of you.

Don’t let fear contaminate your understanding of the situation.

Act with commitment.

Notice the gap between event and reaction.

Embrace the resources that are available to you.

Optimism is a belief about possible outcomes, but equanimity adds a bias toward action, regardless of what happens.

There’s enough noise, don’t create more. Simply take right action without comment or second-guessing. We can avoid a dark side driven by fear and grievance. And we don’t need a light saber.

While it’s nice to share the annual greeting, it’s unnecessary. The fourth is always with you if you choose.


Just like me, but…

The actor, artist, mathematician, pianist, speaker, leader, tech nerd: Just like me, but talented.

I’m not so sure.

It might be more accurate to say “just like me, but dedicated.”

The first approach lets us off the hook.

The second approach opens the door to possibility.


Nostalgia can be fatal

For hundreds of years, nostalgia was seen as a serious disease, with doctors across Europe scrambling for a cure. Hundreds of thousands of people died from it.

In the original understanding of the term, it was a sort of homesickness. Soldiers from Switzerland were the first to get the official diagnosis—separated from their friends, family and homes, these young men would suffer from melancholy and would waste away, sometimes fatally.

As it spread, one theory was that it afflicted people from places that were at high altitude. As more humans traveled, often under duress (for example, enslaved people kidnapped from their homes and brought by ship to the new world), the suffering increased.

It’s not hard to see how a sudden, involuntary dislocation could be debilitating. Particularly if home was a place that was insulated from sudden change and fast-moving culture.

Today, future shock is bringing a new, if milder, form of the affliction. As technology, jobs and culture shift faster than ever before, it’s understandable that many are yearning for a return to an imagined past. When the future arrives uninvited, it can feel like being pulled from a comfortable village in the middle of the night.

Knowing our peers are encountering challenges with the transitions at work or at home can give us the insight to build the scaffolding they need to find their footing. And perhaps we can offer ourselves a bit of grace as well.


[You're getting this note because you subscribed to Seth Godin's blog.]
