Remembering the ‘human factor’ in tech networks

Why designing tech networks around actual human behavior — not rational-agent models — leads to better systems.

There’s a foundational assumption in most tech network design: people are rational agents who respond predictably to incentives. Set the right rewards, apply the right penalties, and behavior follows.

Crypto and blockchain took this assumption and ran with it. Mechanism design, token economics, game theory — the entire toolkit assumes that if you get the incentives right, the system works. Humans are modeled as utility-maximizing nodes in a network, and the designer’s job is to align their utility functions with the network’s goals.

This assumption is wrong, and building on it produces systems that are brittle, alienating, and often counterproductive.

Three forces, not one

Humans aren’t motivated by a single force. They’re motivated by (at least) three, operating in different contexts and often in tension with each other:

Utility maximization — wanting the best deal. This is the force that economics models well. It’s mediated by money and markets. When you comparison-shop for a flight or negotiate a salary, you’re operating in this mode.

Reputation — wanting good future opportunities. This is mediated by status, relationships, and contracts. When you do good work to build your professional reputation, or fulfill a promise because people are watching, you’re in this mode.

Morality — wanting to do the right thing. This is mediated by values, culture, and community norms. When you return a lost wallet or help a stranger, you’re in this mode.

These three forces don’t blend smoothly. They operate in different domains, and when you introduce one into a domain governed by another, things break.

The daycare problem

The most famous example comes from Gneezy and Rustichini’s study of daycare centers in Haifa, Israel. Parents were regularly picking up their children late, creating problems for staff. The centers introduced a fine for late pickups — a straightforward economic incentive.

Late pickups doubled.

Before the fine, picking up your kid late felt like a moral failure — you were imposing on the teachers, breaking a social norm, being a bad community member. The fine replaced that moral framing with a market framing. Now late pickup had a price, and parents could simply decide whether the price was worth it. For many, it was. The fine didn’t add a cost to lateness — it replaced a higher cost (guilt, social disapproval) with a lower one (a few dollars).

When the centers removed the fine, late pickups didn’t return to the original level. The moral norm had been destroyed and couldn’t be easily rebuilt. The market framing, once introduced, stuck.

This isn’t an isolated curiosity. It’s a pattern: monetary incentives crowd out reputational and moral motivations. Paying people to donate blood reduces donations. Offering your neighbor $25 to help you move reframes the interaction from a favor (reputation/morality) to a transaction (utility) — and $25 is a bad price for a transaction.
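The core of the argument is a cost substitution, not a cost addition. A toy threshold model makes the logic concrete — the numbers here are invented for illustration, and the displacement assumption (that the fine replaces the moral cost rather than stacking on top of it) is exactly the crowding-out claim, not something the model derives:

```python
# Toy model of motivational crowding-out. All quantities are in
# arbitrary "disutility units" and are made up for illustration.
MORAL_COST = 30   # guilt + social disapproval of a late pickup
FINE = 10         # the posted fine, in the same units

def deterrent(fine_in_place: bool) -> int:
    """Perceived cost of one late pickup. Key assumption: once
    lateness has a price, the market framing displaces the moral
    framing instead of adding to it."""
    return FINE if fine_in_place else MORAL_COST

def is_late(benefit: int, fine_in_place: bool) -> bool:
    """A parent is late whenever the benefit of the extra time
    exceeds the perceived cost (a crude threshold rule)."""
    return benefit > deterrent(fine_in_place)

# A parent who values the extra time at 20 units:
# without the fine, 20 < 30, so they hurry to arrive on time;
# with the fine, 20 > 10, so they simply pay and come late.
assert not is_late(20, fine_in_place=False)
assert is_late(20, fine_in_place=True)
```

Under this framing, any fine set below the pre-existing moral cost makes lateness cheaper, not more expensive — which is why removing the fine later doesn’t help: the displacement in `deterrent` has already happened.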

The implications for network design

Most crypto networks are designed almost entirely around utility maximization. Stake tokens, earn rewards, get slashed for misbehavior. The assumption is that if the token economics are right, the network will be healthy.

But healthy networks — the ones people actually want to participate in — rely on all three forces. People contribute to open source for reputation and moral satisfaction, not just money. People moderate communities because they care, not because they’re paid. People build things that don’t have clear financial returns because they believe in what they’re building.

When you slap token incentives on top of these behaviors, you risk the daycare effect. The person who was moderating out of community pride becomes a paid moderator — and when the pay isn’t worth it, they stop. The developer who was contributing for reputation starts optimizing for token rewards instead, and the quality of contribution changes. The community that was held together by shared values becomes a collection of economic agents optimizing for returns.

Design for emergence, not control

The alternative isn’t to ignore incentives. It’s to stop treating incentive design as the primary lever and start treating it as one lever among several — and often not the most important one.

Networks should be designed to let behavior emerge rather than to prescribe it. Instead of trying to engineer the “right” behavior through mechanism design, build systems that:

Support communities, don’t define them. A network protocol should provide infrastructure for communities to form and govern themselves, not impose a single governance model. Different communities will have different norms, and that’s fine — it’s the same variation we see in the physical world.

Make space for non-financial motivation. Not every contribution needs a token reward. Reputation systems, recognition, and social norms are powerful motivators that token incentives can destroy. Be very careful about which behaviors you financialize.

Embody values, don’t just incentivize outcomes. The best networks have a culture — a shared sense of what matters and how things should be done. This culture can’t be engineered through token economics. It emerges from the people involved, the decisions the project makes, and the norms that develop over time.

Design for adaptation. No set of incentives will be right forever. Build systems that can evolve as the community’s needs and values change, rather than locking in a fixed mechanism at launch.

Beyond rational agents

The crypto space has enormous potential to build better coordination infrastructure. But realizing that potential requires a more honest model of human behavior — one that accounts for the full range of what motivates people, not just the slice that fits neatly into economic models.

Markets are powerful. They’re also narrow. They work well in specific contexts (allocating scarce resources, facilitating exchange) and poorly in others (building trust, fostering community, inspiring meaningful work). The art of network design is knowing which force to lean on in which context — and knowing that leaning on the wrong one can permanently damage the others.

Better mechanism design alone won’t get us there. We need more human design.