Who killed the micropayment? A history.

Amber Case
16 min read · Mar 18, 2021


30 years into the web’s history, micropayments still aren’t a part of our daily lives. It’s one of the great unsolved mysteries — why didn’t this promising system take off, and who, if anyone, is responsible for its early demise?

As the world wide web soared in the 1990s, many of the systems envisioned by its early architects saw rapid development. But micropayments, which had once been described as an essential nutrient for any web system, seemed to be dead in the water.

There are arguments for micropayments. One is that they might enable the missing middle for creator content, by enabling purchases too small to warrant either a subscription or a one-time $10 purchase. Examples include unlocking a single paywalled article without signing up for an expensive subscription, or paying small amounts for individual songs streamed on a service like Spotify.

The Missing Middle — Micropayments could power a previously un-monetized category of creative content on the web

Implementing micropayments was difficult in the past, mostly because it required building an expensive storefront system, and the transaction cost for credit card processing could be $0.45 for every dollar, making payments of less than $1 prohibitively expensive.
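The arithmetic makes the problem concrete. A minimal sketch, using the flat $0.45 fee from above as an illustrative figure (real card-processing fees vary by provider and usually mix a flat fee with a percentage):

```python
def effective_fee_rate(amount, flat_fee=0.45):
    """Fraction of a payment consumed by a flat per-transaction fee."""
    return flat_fee / amount

# On a $100 purchase, a flat $0.45 fee is under half a percent.
large = effective_fee_rate(100.00)

# On a $0.25 micropayment, the fee exceeds the payment itself.
small = effective_fee_rate(0.25)
```

Any fixed per-transaction cost dominates as the payment shrinks, which is why sub-dollar payments were a non-starter on card rails.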

Image Credit: Coil

And while the Interledger Protocol is a technology that finally addresses this transaction cost, the question still stands. Are micropayments a viable business model? If so, when and why do they make sense?

To get the full context of where micropayments come from, we need to look at the history of the Internet and the web alongside the history of micropayments themselves.

There were multiple attempts at building a micropayment system throughout the 90s, all of which ended up going in different directions than intended — if not failing outright.

Is there a party responsible for this early death of micropayments? Is there a single guilty party, or many? Was it an act of intentional sabotage, or just an accident that came with the unpredictability of the time?

In this article, we’ll be taking a look at a wide variety of players, examining their roles in the evolution of web transactions. This first installment will look at the years before 2000 — when the web was young, experimental, and unpredictable.

Micropayments in the 1960s

The term “micropayment” was coined by Ted Nelson in 1960. Nelson is a major figure in early network development, who came up with many of the concepts and much of the vocabulary that would become central to the web.

After prototyping the very first concept of a user network as far back as 1960, he coined many important terms like hypertext, a concept which would eventually prove fundamental to the navigation of the web.

As Nelson sometimes describes it, he came up with micropayments to make hypertext possible, seeing the system as a sort of economic smoothing that would help various systems run efficiently.

Ted Nelson coined the term micropayment in the 60s

In Nelson’s own words:

“There will be rights holders (authors, publishers) and content purchasers. Somehow royalty payments to each author/publisher must be automatic, to simplify and lubricate transactions.”

Nelson saw that contemporary transaction systems were poorly equipped for the digital age. In particular, contemporary copyright laws weren’t set up to handle works by multiple copyright holders — for example, a project made with public assets like clipart, stock photos, or sound effects.

A more versatile system would need to be put in place, to accurately split the profits between the contributors. Income generated by online traffic could only be represented in fractional amounts, something which traditional currencies were bound to run into trouble with.

Nelson has been vocal about the fact that the current iteration of the web is not quite the one he envisioned. But he still believes some version of what he predicted may still come true.

This system would work alongside TransCopyright, another of Nelson’s concepts which would use metadata inside of the media to help protect ownership of creator content.

Quotation and other re-use of content was something Nelson saw as an essential part of network culture, but creators could be financially damaged if their work was used without credit.

This uncredited re-use isn’t always intentional — creators who rely on third party content don’t have a reliable network for tracing assets back to their source.

TransCopyright was a way to break anything on the web into atom-sized content, while ensuring that the original owner’s payment ID was part of the file. Payments for re-used content would stream back to the original creators and be shared among all parties responsible for a finished product.

For example: a photo of a model could be re-used in many contexts, with a portion of its revenue shared with the model, stylist, photographer, and makeup artist, as well as anyone responsible for spreading the photo after the fact.

These rules would apply at any scale: even a single sentence or photo quoted from an article would still carry the original owner’s payment ID, so payments could stream back down the line to the original content holder and be shared with whoever remixed or referenced the content in the new article.
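A minimal sketch of how such a revenue split might work. The contributor names and share values below are invented for illustration; this is not Nelson’s actual design:

```python
# Hypothetical shares for the photo example above, summing to 1.0.
CONTRIBUTORS = {
    "model": 0.30,
    "stylist": 0.20,
    "photographer": 0.40,
    "makeup_artist": 0.10,
}

def split_payment(amount_cents, shares):
    """Divide a payment (in whole cents) among contributors by share."""
    payouts = {who: round(amount_cents * share) for who, share in shares.items()}
    # Assign any rounding remainder to the largest shareholder.
    remainder = amount_cents - sum(payouts.values())
    top = max(shares, key=shares.get)
    payouts[top] += remainder
    return payouts

payouts = split_payment(100, CONTRIBUTORS)  # splitting a $1.00 payment
```

The hard part Nelson anticipated is not the division itself but carrying the payment IDs with the content wherever it travels.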

NSFNET — A Brief History of the Web before the Web

In the 70s and 80s, there were a series of early prototype networks and protocols: ARPANET, TCP/IP, and others. But since these were not designed for commercial traffic, there was no room for experimenting with micropayments.

Then in 1985, the National Science Foundation started NSFNET — a government-sponsored program which hoped to guide the development of connected networks as a federal asset.

Throughout the late 80s, the NSFNET only allowed government services to use its resources. While the program built much of the foundations of what would later become the web, it was not without controversy.

In particular, some advocates of an open environment were concerned by the tendency to award federal research grants to already-large companies like IBM, instead of the smaller innovators that many felt had a better understanding of the web’s potential.

When the NSF lifted its ban on commercial traffic in 1991, it opened the door for a new chapter — dominated by AOL, Compuserve, and dozens of other huge service providers which seemed to pop up almost overnight.

T1 NSFNET Backbone, c. 1991

But it wasn’t all corporate bloat. That same year, Tim Berners-Lee debuted the World Wide Web, a web more in line with Ted Nelson’s idealist principles. Berners-Lee had already made numerous contributions to the web, including HTTP in ’89 — which included the famous 402 error code, a response dedicated to failed payments.

402 Payment Required — before web monetization existed, this code was declared and reserved for future use

Berners-Lee saw commercial traffic on the web as an inevitability, and was setting up systems to accommodate it before the market even existed. Even if the 402 error would remain unused for decades, its very existence shows that the web’s early architects viewed transactions as a core part of the concept.
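That reservation survives to this day: 402 is still defined in every modern HTTP stack. Python’s standard library, for instance, ships it as a named constant:

```python
from http import HTTPStatus

# The status code reserved decades ago for failed payments.
status = HTTPStatus.PAYMENT_REQUIRED
print(status.value, status.phrase)  # prints "402 Payment Required"
```

A server can return it today exactly as it could have in the 90s; what has always been missing is a standard payment mechanism for the client to satisfy it with.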

The NSFNET had plentiful resources to explore innovative systems like micropayments, but they chose instead to leave commercial pursuits to other parties. The reasons are unclear — they may have been limited as a federal entity, or have simply been more interested in the exchange of data than of currency.

Regardless: in mid 1995, NSFNET reverted to a research network — ending its control over web regulations. Rather than handing over control to another organization, the NSF left the web to be a self-regulating entity during a period of remarkable growth.

This may have been a questionable decision, as it gave a large amount of power to commercial internet providers — not exactly the independent innovators that Ted Nelson and his contemporaries had seen as the backbone of internet systems.

Is the NSF guilty of not doing more to prepare the web for its eventual commercial role? Did they seal the death warrant on alternative payments by allowing self-regulation — essentially handing over the most power to anyone with the resources to claim it?

The truth is, it’s not that straightforward. The reduced regulations also opened the door for smaller, fresher organizations with an eye on independent content and a philosophy regarding what the web could (and should) be.


The World Wide Web Consortium (W3C) is an international community that develops open standards to ensure the long-term growth of the Web

One of the most notable guiding forces of the early web was yet another creation of Tim Berners-Lee — the World Wide Web Consortium, or W3C, which was created in 1994 in collaboration with the MIT Laboratory for Computer Science.

The idea: to create a collaborative environment among the many different groups and individuals involved, and to propose technical specifications for developing ideas — including a fresh look at transactions.

According to the Consortium:

“One important aspect of “micropayments” is that the definition varies with the audience.”

As a collective of various member organizations, and not a single entity, the Consortium understood this as a core concept not just for micropayments, but the web in general. With so many potential applications for the technology, any one-size-fits-all solution would be missing the point.

The W3C understood that micropayments could provide a framework for many different areas which would make up the future web. Possible revenue streams, such as advertising, would have to run on a system of fractional transactions.

And in the coming millennium, there was a chance that there would be completely new, unpredictable forms of revenue — all of which could develop naturally if running on a more liquid form of currency.

Many of the W3C’s other innovations certainly did end up surfacing by 2000 — in particular, search engines grew from a useful tool to the default method of navigation. These engines were financed by their own traffic — a type of revenue which micropayments were particularly well-suited for.

But somehow, the W3C’s work in the micropayment field failed to gain much traction, and the Consortium stopped supporting micropayments in 1999.

Considering how much money and other resources were being poured into web startups at the time, this is still a bit of a mystery. Was the W3C concerned that a focus on financial systems would distract from their other pursuits? Was the “micro” in the term off-putting for commercially-oriented parties who liked to think in big numbers?

This is still one of the big missing pieces of the puzzle. However, it’s possible that the W3C’s decision had less to do with its own disinterest in micropayments than with what was going on at other organizations in the late 90s.

Carnegie Mellon’s NetBill Project

In 1997, Carnegie Mellon financed the NetBill research project, with a focus on distributed transactions — operations performed across two or more data repositories.

This focus was seen as essential to online transactions, which would have to keep information consistent across a potentially huge number of different databases.

According to the now-archived NetBill website:

“To meet the requirements of flexibility and scalability, and so that independent parties can easily be involved in the commerce transactions, NetBill is being designed as a system of systems. NetBill depends on an infrastructure of authentication, certificate management, internet access (including DNS lookup), databases, real-time customer service and dispute resolution servers, etc.”

In other words: there are many different components needed to make up a healthy micropayment system, each of which has to be built with the others in mind.
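The consistency requirement can be illustrated with a toy all-or-nothing commit: every data store must agree before any of them applies a change. This is a generic sketch of the idea, not NetBill’s actual protocol:

```python
class Store:
    """A single data repository participating in a distributed write."""

    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.committed = []

    def prepare(self, record):
        # Phase 1: vote on whether this store can apply the record.
        return self.healthy

    def commit(self, record):
        # Phase 2: apply the record, only after every store voted yes.
        self.committed.append(record)

def distributed_write(stores, record):
    """Write `record` to all stores, or to none of them."""
    if all(s.prepare(record) for s in stores):
        for s in stores:
            s.commit(record)
        return True
    return False
```

Multiply this pattern by authentication, certificates, DNS, billing, and dispute resolution, and the “system of systems” phrase starts to look less like jargon and more like an engineering budget.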

The complexity of this multitude of interlocking systems may have been a factor in the lack of traction for micropayments in the mid-90s — after all, the web was still young enough that many financiers and investors were adopting old-fashioned, simplistic approaches, like treating dot-com domains as if they were real estate.

And yet, all of the components which made up Netbill’s “system of systems” have since become core components of all online traffic. Authentication in particular is always going to be an essential service, and one which will need to be constantly revised and reinvented.

Before the end of the year, NetBill had been purchased by a company called CyberCash. It would end up being incorporated into a much more familiar name — PayPal, arguably one of the services most equipped to handle micropayments.

But as much as PayPal was derived from early micropayment concepts, it was still mainly used for traditional purchases — exchanges of goods and services — rather than as a backbone for the more fluid systems imagined by earlier pioneers.

Could NetBill have pushed its technology in a different direction, and really paved the way for proper micropayments?

The answer is “maybe” — but of course, this is true of countless other companies as well.


The 2001 documentary “Startup.com” paints an accurate picture of the climate of the late 90s. It follows the 1998 startup govWorks, Inc., a company with the visionary idea of creating a universal system allowing citizens to pay for power bills, parking tickets, and public services online.

GovWorks CEO Kaleil Tuzman appearing on CNN Finance during the dot-com boom.

A service like this would be useful in any time period, as it could use the web to make life easier, even for people with barely any interest in technology.

But in 2000, it rapidly imploded, losing money before eventually selling itself to a competitor.

The biggest factor may have been the question of regulation across multiple governments — how could a single company deal with the very different rules across 50 states and nearly 20,000 cities?

This failure not only matches up with some of Nelson’s earlier points (“Laws will not change, and must be recognized in the system design”), but also provides an interesting contrast with the current climate — where many tech giants have enough pull to navigate regulations across countless cities and counties.

The govWorks story isn’t directly connected to micropayments — but it illustrates that, even if the conceptual elements are all in place, an ambitious concept can still fall flat on its face.

The world has changed a lot since then — most notably, people have become very used to paying for all variety of services online, including civic fees like power bills, fines, and licensing fees.

Online payments have been adopted by cities and other public entities, showing that there’s a desire to make this sort of payment easier whenever possible. And yet, these transactions still come in the form of many disconnected systems — a far cry from the unified approach dreamed of by govWorks and its micropayment-minded contemporaries.

Of course, govWorks, Inc. was a relatively minor player, and couldn’t have been responsible for the death of micropayments. But their story is illustrative of just how many promising tech companies were trying to reinvent existing systems — and dropping the ball so hard it would shatter.


By 1999, even large tech players like Compaq and IBM were trying their hands at micropayments.

IBM had of course been a dominant force in the computing world for decades, including during the NSFNET years. A core part of their micropayment system would have been a special type of hypertext link — one which, rather than simply pointing to another page, would trigger a micropayment.

It wasn’t just the big players exploring this area. Many smaller companies tried to implement systems in line with this philosophy: Pay2See, Flooz, Millicent, First Virtual, DigiCash, Beenz, and countless others.

Many of these were NetBill-esque systems which ran on the idea of pre-paid accounts — not unlike the current practice of paying an up-front cost for Internet bills — but none of them seemed to have any staying power.

After being adopted by only a single bank in Missouri, DigiCash went bankrupt. First Virtual shifted to email and messaging. Flooz found itself at the center of a Russian money-laundering scheme, with an estimated 19% of its traffic being fraudulent; all of its credits became worthless and non-refundable.

Stories like these offer another clue as to why micropayments didn’t take off right away. The industry was flooded by a multitude of new financial approaches, many of which crashed early.

What incentive did investors have to take a chance on reinventing finance after countless flops? Especially when there were other profitable portions of the tech world which ran on familiar systems? Did the concept of micropayments simply collapse on itself, due to so many competitors racing to perfect the system first?


Even considering some of these early misfires, it’s hard for those of us who are interested in micropayments to understand why they wouldn’t have caught on in the 90s.

Andrew Odlyzko giving a presentation at Kent State, 10 October 2014.

Maybe someone with a less idealistic perspective can provide a bit of an answer. Polish-American mathematician Andrew Odlyzko was a noted micropayment skeptic who researched and published papers on emerging trends in e-commerce in the late 90s.

In 2003, he provided the audience at a Financial Cryptography conference with some strong opinions on why micropayments didn’t take:

“The obstacles to micropayment adoption have little to do with technology, and are rooted in economics, sociology and psychology.”

The problem isn’t just that micropayments are hard for audiences to understand — the concept runs contrary to the way that familiar systems work. Even if there is a vocal community which is ready to migrate to a new way of doing things, the average person is going to be more inclined towards the imperfect system they’re already familiar with.

For a new system to be attractive to the average user, it can’t just be slightly better than what came before. It needs to be 10 times better, and as easy to use and understand as possible.

Clay Shirky

Odlyzko wasn’t even the most vocal critic of micropayments. That honor would go to writer Clay Shirky, who in 2000 unleashed “The Case Against Micropayments”, a heavy-hitting essay that shaped emerging opinion.

In it, he wrote:

“Micropayment systems have not failed because of poor implementation; they have failed because they are a bad idea. Furthermore, since their weakness is systemic, they will continue to fail in the future.”

Shirky saw the entire concept as fundamentally flawed — in particular, the desire to simplify transactions, which he considered naive. A system built for countless small transactions, Shirky argued, would give its users more to worry about, not less.

“A revolution doesn’t happen when society adopts new tools. It happens when society adopts new behaviors.”

Many credit Odlyzko and Shirky with pretty much killing off the potential of micropayments with their critiques. However, Odlyzko wasn’t entirely dismissive of micropayments. He later wrote:

“(Micropayments) are most likely to succeed if they piggyback on top of something that is already widely used, such as cell phones, or (in some places) mass-transit smart cards. When offered as an additional feature for something that is already carried by most of the population, micropayments might be able to overcome the usual chicken and egg problem.”

The idea of a complete reinvention may be a hard sell for many audiences, but attaching it to something they already understand may be an important gateway.

American cartoonist Scott McCloud, an early advocate of micropayments, published rebuttals to Shirky’s arguments. While he agreed that early implementations of micropayments were failures, he saw those missteps as growing pains rather than evidence of a systemic flaw.

In 2003, McCloud wrote:

“Another factor contributing to micropayments’ dismal first round was the simple fact that until very recently, few users were willing to pay for content while they still felt that they were paying with their time… Selling premium content to those users was as futile as selling pay channels to TV owners in 1952.”

This might be the key — the problem with early micropayments had less to do with the technology or cultural climate than with the fact that it was impossible for audiences at the time to predict what they would need 20 years down the line.


When elevators were introduced, they created a whole new job market for elevator operators. The idea that passengers would press the buttons themselves didn’t come about until operators went on strike.

There are echoes of this story in newer technologies as well. Cell phones initially had loud, overwhelming ringtones, because people were used to the sound of a landline ringing and couldn’t imagine knowing they had a call without it.

In the early 2000s, there was widespread apprehension about putting any kind of credit card info onto the web — you wouldn’t give those numbers out to a stranger, so how could you be comfortable entering them on a website?

But now, over 75% of Americans have purchased something online (Marist Institute, 2018), entering their card info and usually storing it for later use. When you consider that there may be safer, more secure ways of having payment information sent, it seems a little backwards that we’re still entering strings of numbers by hand with each purchase.

The key with normalizing any technology is to get it to become an invisible, pass-through technology. A light switch is such a core part of our everyday life that we don’t even notice when we’re pressing it.

In the 90s, the web was still a very fresh concept. To experience it, you would either have to have a dedicated phone line, or simply not use it at any time you were waiting for a call. It was impossible to pay attention to the web without thinking about how it would affect the rest of your life.

But now, the web is undeniably a core part of our lives. We use it to watch media, for social interaction, and for an increasingly large percentage of our purchases and payments.

In the 90s, we might just not have been ready for a system of universal payments. But in the 2000s, we started using the web for almost everything.

The twist ending to this story: the micropayment isn’t dead after all. It just took a different form in its evolution than predicted.

It may not replace ads or take over the web just yet, but it’s ready for a period of exponential growth, as the nature of online sales has already changed dramatically.

Micropayments stand to become a new financial interface — one where creators and consumers are both able to participate in the web economy.

This article is the first in a series looking at the past, present, and future of micropayments and Web Monetization, made possible by the Mozilla Foundation and Coil.

What do you think about the future of micropayments? Share your thoughts below!



Amber Case

Design advocate, founder of the Calm Technology Institute, speaker and author of Calm Technology. Former Research Fellow at MIT Media Lab and Harvard BKC.