James Love and Tim Hubbard
______________________________________________________________________
This piece was originally published as a chapter in Code: Collaborative Ownership and the Digital Economy. Edited by Rishab Aiyer Ghosh. MIT Press, Cambridge, 2005. (pp. 207-229).
______________________________________________________________________
Introduction
Fueled in part by revolutions in information technologies and social concerns over access to medicine, there is a growing awareness that business models and legal and trade frameworks for knowledge goods need to change. In some cases this has occurred, including radical and disruptive innovations. In other cases, older approaches are entrenched, but face growing criticism on a variety of grounds, some focusing on efficiency and efficacy, others on grounds of fairness.
The landscape of these disputes is highly varied. The rise of the Internet as a global system for communication and the World Wide Web as a platform for publishing, the importance of GNU/Linux and other free and open software, the Human Genome Project (HGP), the single nucleotide polymorphisms (SNPs) consortium and other open medical databases, the open journals movement, the open sharing of Global Positioning System (GPS) data, and the rapid acceptance and much-litigated deployment of peer-to-peer file-sharing technologies are just a few areas where social movements, governments, donors, or firms have embraced mechanisms that make information and technologies available to a global public for free.
Among economists, a public good is one that, regardless of its cost to produce, is not rival in consumption. That is to say, the marginal cost of sharing the good is zero, and the use of the good by an additional person does not diminish the availability of the good to others. Another aspect of the economics definition concerns the ability to prevent others from benefiting from the good, sometimes referred to as nonexclusivity of consumption. Few goods meet both criteria perfectly. Some goods are a mixture of private and public benefits. Other goods are nonrival in consumption, but can be managed to exclude access by those who do not pay. For example, television broadcasting, weather reports, databases, music, software, and other goods that are not rival in consumption can be managed so that they are essentially private goods. These are sometimes called quasi-public goods.
Public goods have always attracted the interest of economists, because the price system is, at least theoretically, an inferior way to provide such goods. When the marginal cost of providing a good is zero, the most economically efficient price, on the margin, is also zero. But such goods are often not costless to create. Therein lies the dilemma. How does one allocate resources to create goods that will have a zero price?
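The dilemma can be stated in a compact, stylized form; the notation below is ours and is offered only as an illustration of the argument in the preceding paragraph, not as anything from the original chapter:

```latex
% Stylized sketch (our notation): a knowledge good with a fixed creation cost F > 0
% and a zero marginal cost of serving one more user.
\[
  MC = 0 \;\Rightarrow\; p^{*} = MC = 0 \qquad \text{(the efficient price on the margin)}
\]
\[
  \text{revenue at } p^{*} \;=\; p^{*} \cdot q \;=\; 0 \;<\; F
  \qquad \text{(so efficient pricing cannot recover the cost of creation)}
\]
```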
Noneconomists use different terms, and sometimes raise different issues. "Information wants to be free" was one popular way of framing the issue, and it implies more than just economically efficient pricing of a good is at stake. Newt Gingrich, as a newly elected Speaker of the House of Representatives, promised "we will change the rules of the House to require that all documents and all conference reports and all committee reports be filed electronically as well as in writing, and that they cannot be filed until they are available to any citizen who wants to pull them up. Thus, information will be available to any citizen in the country at the same moment it is available to the highest-paid Washington lobbyist" (Gillespie and Schellhas 1994). For Gingrich, this was a matter of fairness. By providing the practical means to make Congressional information a public good, he sought to reduce political corruption and empower citizens.
Former President Ronald Reagan and the U.S. National Aeronautics and Space Administration (NASA) embraced the public sector provision of a particular public good as necessary for safe civil aviation. On September 1, 1983, the Soviet Union shot down Korean Airlines Flight 007 when it entered Soviet airspace. President Reagan ordered the military and NASA to freely share the signals from the U.S. Global Positioning System (GPS), in order to prevent similar tragedies. Later, NASA was asked to consider charging users for their access to the GPS signal. But NASA concluded that it was more valuable to society as a free good, noting that once the GPS signal was available to the public, a plethora of new nonmilitary and nonaviation uses of the signal were discovered, leading to an estimated $8 to $15 billion in new GPS products and services. NASA saw the free provision of the GPS service as an effective mechanism to encourage technological development and industrial growth.
Richard Stallman has made a career out of promoting the development of free software. Stallman says "'Free software' is a matter of liberty, not price," and that one should think of "free" as in "free speech," not as in "free beer." (Free Software Foundation 1996).
Among software developers, the benefits of making software code freely available were expressed most famously by the phrase "Given enough eyeballs, all bugs are shallow," which Eric Raymond called "Linus's Law." Raymond was referring to Linus Torvalds, the creator and leader of the Linux development effort, who was talking about the benefits of releasing software code early and often, and being "open to the point of promiscuity." In Raymond's account:
Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. "Somebody finds the problem," he says, "and somebody else understands it." (Raymond 1999)
Raymond has emphasized the difference between making the code transparent (open) and making it free, but for many the distinction is lost. To a generation that has seen the explosive success of the Internet, which runs on free and open protocols and much free software, and of the World Wide Web, which is also based upon free and open standards, the term "open source" is often used, despite Raymond's efforts, as a synonym for free, or more generally as a metaphor for new systems for creating public goods.
The life sciences field is now experimenting with a variety of "open medicine" initiatives, most notably open databases and open academic journals, often justified on the grounds that greater openness leads to better and faster scientific progress. Linus Torvalds's claim that "with enough eyeballs, all bugs are shallow" has resonated with researchers who pushed to have the sequencing of the human genome be free of patents and freely available to researchers globally (Sulston and Ferry 2002), as well as with a diverse group of stakeholders who have supported a plethora of other new "open medicine" databases (Cukier 2003) and journals. In launching the new Public Library of Science journal PLoS Biology, Patrick Brown, Michael Eisen, and Harold Varmus explained the rationale for a new publishing model for journals. One consideration was clearly to offer researchers a new strategic model for reducing the costs of journals. But they were also seeking to expand the usefulness of the information itself (for example, see Brown, Eisen, and Varmus 2003):
Freeing the information in the scientific literature from the fixed sequence of pages and the arbitrary boundaries drawn by journals or publishers – the electronic vestiges of paper publication – opens up myriad new possibilities for navigating, integrating, "mining," annotating, and mapping connections in the high-dimensional space of scientific knowledge ... Consider how the open availability and freedom to use the complete archive of published DNA sequences in the GenBank, EMBL, and DDBJ databases inspired and enabled scientists to transform a collection of individual sequences into something incomparably richer. With great foresight, it was decided in the early 1980s that published DNA sequences should be deposited in a central repository, in a common format, where they could be freely accessed and used by anyone. Simply giving scientists free and unrestricted access to the raw sequences led them to develop the powerful methods, tools, and resources that have made the whole much greater than the sum of the individual sequences. Just one of the resulting software tools – BLAST – performs 500 trillion sequence comparisons annually! Imagine how impoverished biology and medicine would be today if published DNA sequences were treated like virtually every other kind of research publication – with no comprehensive database searches and no ability to freely download, reorganize, and reanalyze sequences. Now imagine the possibilities if the same creative explosion that was fueled by open access to DNA sequences were to occur for the much larger body of published scientific results.
More recently, some have suggested that the role of open medicine can be expanded to address drug development, making it also possible to address ethical concerns over access to medicine (Hubbard and Love 2003, 2004a, 2004b).
For all of these attempts to create or maintain public goods, there are problems of financing the effort. In some cases, work is done by individuals working in a personal capacity as volunteers (Slashdot.org editors, many free software developers), or supported in their efforts by employers (i.e., some free software coding, the Internet Engineering Task Force (IETF), and the work of many other standards organizations) (Benkler 2002). In other cases, governments, foundations, or corporate benefactors contribute (GPS, MEDLINE, the Human Genome Project, the World Wide Web Consortium (W3C), etc.). These are fairly traditional sources of finance for public goods, even if the social organization for producing the good is novel, as is the case for many of the new collaborative software projects, for example.
There are also efforts to identify new sources or mechanisms for financing
public goods, either to protect existing public goods efforts, or to expand or create
new projects. As has long been the case (see for example Musgrave 1959), there
are important and controversial aspects of expanding the role of governments in
funding, managing, or regulating such projects, and these issues are also
complicated by the fact that many important public goods projects are truly global.
This chapter examines the problem of financing public goods in three settings. Two
efforts combine a degree of state coercion in mandating funding, with a
decentralized and competitive private sector model for allocating funds. The first is
the problem of compensating artists in a world where the most efficient distribution
systems are peer-to-peer file-sharing networks. The second concerns the problems
of funding the development of new drugs and other medical inventions. Finally, a
proposal for new intermediators to facilitate voluntary collective action to finance
public goods is considered.
Competitive Intermediators
In simple economic models, markets are made up of producers and
consumers. But more realistic assessments include the corporate entities involved
in distribution, marketing, and finance. This is particularly important for knowledge
goods. Marketing and distribution functions are socially and economically important
in their own right. Commercial entities that are positioned between the creators and
the users often take the lion's share of the sales, and engage in activities that are
fundamentally hostile to the interests of the creators (low compensation, unfair
work-for-hire contracts, disregard for moral rights, etc.) and consumers (high prices, noninteroperable standards, misleading quality claims, nonmeritorious product
differentiation, etc.). Parties that finance the creation of knowledge goods influence
profoundly the choices of goods that are available. Products that have high utility to
consumers do not necessarily attract the most investment, due to a number of well-
known market failures.
The reliance upon commercial organizations to market, distribute, and
finance knowledge goods is often the path of least resistance, even when the
inefficiencies are overwhelming, such as in the market for client software for personal
computers, academic journals, or the wasteful private mechanisms for financing new
drug development. There are also important ideological and practical reasons why private
commercial markets play such an important role. Historically, many think of government-
supplied or controlled goods as the primary alternative to a market dominated by private
sellers and consumers. There are nontrivial risks of inappropriate government controls
and inefficiency. We don't want the government to have too much say in what artistic
works and other knowledge goods are created, and we seek to avoid undue
centralization and bureaucracy. With this in mind, we consider models that rely upon
new institutions that are private, decentralized, and that compete with each other.
In looking at various models for funding public goods, we considered institutions
such as pension funds or stock exchanges that provided a variety of services to the
public: pension funds make professional money management accessible to individual
investors, and financial exchanges such as the New York Stock Exchange provide
private regulation of transparency of investments, making capital markets more efficient.
In the Netherlands, the state has historically allocated public funding for competing
nonprofit broadcasting organizations on the basis of the number of subscribers they can
attract. The BBC in England is an independent organization that is funded by mandatory
contributions from everyone who has a television set. In Finland and Germany, the state
requires contributions to religious organizations, in part to provide some public services,
but allows citizens to choose which congregation will receive their money. There are
proposals in the United States to fund private primary schools with government tax-supported "vouchers" to finance competitive choices for primary education. Indeed,
there are many such cases utilizing a wide array of strategies, where public goods are
provided by nongovernment entities, with sustainable financing.
Compensating Artists in a World with Peer-to-Peer File Sharing
In 1999, Shawn Fanning launched Napster, a peer-to-peer (P2P) software client
with a centralized server. Users who downloaded the Napster software and linked to the
Napster server could connect with others who were willing to share digital music files.
Despite accompanying press coverage that generally described Napster's main activity
as illegal infringement under copyright laws, more than 80 million persons registered to
use the service. Anyone with an Internet connection could freely download a vast sea of
digital MP3 files. The songwriters, performers, producers, and investors who owned and
controlled these works were getting nothing.
By December 1999, Napster was embroiled in litigation with the Recording
Industry Association of America (RIAA) and a number of owners of copyrighted musical
works. The music and film industry had anticipated that digital technologies would be a
problem, but they were stunned at the widespread success of Napster. The various
legal strategies undertaken against Napster were successful in shutting down that
particular service, but not before a number of alternative P2P clients were developed,
including clients such as Gnutella or eMule that did not rely upon centralized servers, or
offshore networks such as Kazaa, which presented new jurisdictional legal problems. The
music and film industry undertook frantic searches for new technological and legal fixes,
and braced themselves for an ongoing battle of wits between the owners of music,
clever hackers, and a public that was clearly willing to participate in large-scale
anonymous sharing of copyrighted musical works.
For millions of listeners, and even for many musicians and songwriters, the P2P
technologies represented something more interesting than a license to steal. The highly
oligopolistic music industry was charging hefty prices for music, but it was also not
passing on much of the revenue to the songwriters and performers (Henley 2004). The
promotion of music was centered on a small number of acts, often packaged and
managed by major labels like commodities. The industry frequently allowed beautiful
performances to languish or disappear, as distribution efforts were highly selective. For
listeners, if one did not hear music in an overly commercialized and concentrated radio
market, or on a handful of cable television stations, it was difficult to experiment or learn
about new artists or performances.
For millions of P2P users, the experience was far richer than simply stealing
music. It was a chance to enjoy music in a different and better way, free from the
music industry's massive marketing efforts. It was in the nature of the search technology that one typically found multiple performances of a song, often by
unfamiliar artists. A search would lead listeners to try out a new artist, collaboration, or
genre of music.
The music industry and listeners debated and wondered if the "copyright police"
would outsmart the hackers who wrote new file-sharing software programs. And if the
P2P technologies were uncontrollable, would musicians be forced to rely upon a new "gift" economy, where listeners would volunteer to compensate artists for works?
Eventually the industry sought to embrace the sale of Internet-downloaded
music through such services as Apple's iTunes, but the limitations of the older
systems of distribution were still evident. The concentration of the commercial music
industry, the unfair contracts between musicians and distributors of music, and the constraints of the limited-catalogue, pay-per-listen model left many listeners and
some artists wondering if there was a missed opportunity to build an entirely new
and different way of sustaining artists.
While P2P technologies were embraced first as a triumph of the technology
over the law, almost as a deliberate rebellion against the state, there was also a
serious discussion of P2P as a candidate for a compulsory license. In the past, a
wide range of "new" technologies for disseminating and listening to music had
benefited from compulsory licenses, including those for player pianos, jukeboxes, radio, and the use of songs on records, compact discs, and other recorded music. The U.S.
Congress considered various legislative proposals for a compulsory license on P2P
clients, and some countries, most notably Canada, declared that levies on digital
storage media and devices would compensate artists for P2P-downloaded music (for
an antilevy view see EICTA 2004).
There is an extensive trade framework to regulate the uses of compulsory
licenses for copyright or related rights. The WTO TRIPS agreement on the trade-related aspects of intellectual property (Uruguay Round Agreement 1994) says:
"WTO TRIPS Article 13 (copyright) Limitations and Exceptions: Members shall
confine limitations or exceptions to exclusive rights to certain special cases which do
not conflict with a normal exploitation of the work and do not unreasonably prejudice
the legitimate interests of the right holder."
The Berne Convention for the Protection of Literary and Artistic Works, which
the United States has signed, says national governments can issue compulsory
licenses to use musical works, but only within national boundaries, and in return for
equitable remuneration.
Article 11bis:
(2) It shall be a matter for legislation in the countries of the Union to determine
the conditions under which the rights mentioned in the preceding paragraph
may be exercised, but these conditions shall apply only in the countries where
they have been prescribed. They shall not in any circumstances be prejudicial
to the moral rights of the author, nor to his right to obtain equitable
remuneration which, in the absence of agreement, shall be fixed by competent
authority.
Article 13:
(1) Each country of the Union may impose for itself reservations and conditions
on the exclusive right granted to the author of a musical work and to the author
of any words, the recording of which together with the musical work has already
been authorized by the latter, to authorize the sound recording of that musical
work, together with such words, if any; but all such reservations and conditions
shall apply only in the countries which have imposed them and shall not, in any
circumstances, be prejudicial to the rights of these authors to obtain equitable
remuneration which, in the absence of agreement, shall be fixed by competent
authority.
The Rome Convention for the Protection of Performers, Producers of
Phonograms, and Broadcasting Organizations, which has been signed by
seventy-one countries, but not the United States, also discusses the use of com-
pulsory licenses.
Article 15
[Permitted Exceptions: 1. Specific Limitations; 2. Equivalents with copyright]
1. Any Contracting State may, in its domestic laws and regulations, provide for
exceptions to the protection guaranteed by this Convention as regards:
(a) private use;
(b) use of short excerpts in connection with the reporting of current events;
(c) ephemeral fixation by a broadcasting organization by means of its own
facilities and for its own broadcasts;
(d) use solely for the purposes of teaching or scientific research.
2. Irrespective of paragraph 1 of this Article, any Contracting State may, in its
domestic laws and regulations, provide for the same kinds of limitations with
regard to the protection of performers, producers of phonograms and
broadcasting organizations, as it provides for, in its domestic laws and regulations, in connection with the protection of copyright in literary and artistic
works. However, compulsory licenses may be provided for only to the extent to
which they are compatible with this Convention. (Rome 1961)
In a series of workshops in New York (Love 2002) and Banff, Canada, a group of artists, lawyers, and economists looked at practical issues of how a compulsory license might work and, like most such inquiries, discussed how one might set or collect fees,
with alternatives such as levies on purchases of computer equipment or bandwidth, or
various systems for subscription services, based either upon a flat rate or the amount of
downloaded music. Some thought the fees should be paid directly from general tax
revenue. There was no group consensus about these issues, but there was an
appreciation that it would be good to structure the fee so that it was in some sense free
on the margin (similar to how one now pays for cable television or subscriber-based
radio services), and that it would be a positive feature if listeners could freely
experiment with unknown artists or music types, thus contributing to discovery, growth,
and opportunities for new artists.
But this was only part of the problem. How would the money be distributed to artists? In the traditional approaches, the compensation would be based upon the actual usage of works. The more popular songs and performers would get the most money. This could be based upon very granular measurements of downloaded music, raising privacy concerns, or a method based upon sampling of downloads.
To the artists in the Blur/Banff discussion, the allocation of funds based upon
usage was considered flawed. Such allocations would mimic the market, but the market was not ideal. There was much discussion of the so-called Britney Spears effect: most of the
money now goes to a handful of famous artists, making them fabulously wealthy while
other artists barely eke out an existence. Some artists even wanted a portion of
revenues allocated in a random, lottery-like fashion. Every artist would have at least
some chance of leading the good life. There was considerable interest in allocating at
least some of the funds to projects that are not successful in the marketplace, such as
experimental music, the recording of folk music, or even to the support of infrastructure,
such as performance centers or public recording studios. Some saw a role for artists in allocating funds, perhaps by recognizing the contributions of those who had influenced the art in an important way, or by ensuring that studio musicians and others who supported
the more famous artists were compensated more fairly. Another possibility would be to
have some of the funds allocated by governments or elites, who would make sure that
opera, avant-garde music, or other types of music were supported. But as indicated
before, there were obvious problems in relying on either government or elites to control
allocations, as unpopular or controversial views would be vulnerable to repression or
censorship.
Listeners Would Have to Pay, But Could Choose Who They Paid
To counter the dangers of government control over allocations, or the lack of
legitimacy of elites to allocate funds, there was a proposal that listeners themselves
could directly or indirectly decide who received funds. Listeners would not have the
discretion to avoid the compulsory licensing fee, but they would decide who would
receive the money. There were several variations on this theme, including proposals
that listeners would choose artists directly, or they would choose projects or
intermediators that supported musicians.
The role of the intermediators was discussed at length. There are, after all, many areas where buyers or sellers now choose intermediators for various tasks. For example, as noted before, companies that sell stock choose exchanges on which to list their shares, and the various exchanges compete against each other for the public's trust. The more trusted the exchange, the greater its access to investor support. But even closer to home are the
various institutions created to collectively manage the rights of copyrighted musical
works. These vary considerably from country-to-country depending upon domestic legal
traditions. Some are for-profit while others are nonprofit. Some institutions are purely
voluntary, while in other cases the state mandates participation in the collective
management organization. Contributions to the collective rights organizations may
come from governments, or directly or indirectly from listeners (or performers) of works.
And quite relevant to the Blur/Banff discussions, some of the collection societies seek to
mimic a market allocation, while others set aside portions of funds for a variety of
nonmarket allocations, including cultural affairs, special pensions for artists, or political
activity (Ficsor and World Intellectual Property Organization 2002).
It was proposed that at least part of the compensation to artists would be channeled through intermediators, and moreover that the intermediators would
compete against each other, on the basis of their objectives, competence, and cultural
sensitivities, offering listeners very different alternatives for how the money would be
distributed. Listeners would decide (and continually reevaluate) where to put their
money, effectively choosing the groups that did the best job in supporting artists.
Anything would be possible. For example, an intermediator might propose to:
1. give all the money to performances of a specific genre of music, such as African
music, American jazz, or performances of classical music,
2. ensure that 15% of the revenue supported retired blues artists that are
down on their luck,
3. allocate all money on the basis of the volume of downloads, or
4. allow the listeners to directly allocate fees to specific artists, to mention
only a few possibilities.
Governments could possibly regulate the intermediators, on such issues as
transparency and accountability, not unlike government oversight over securities
exchanges.
Governments could also have the money allocated in a mixed system, with some fixed allocations and some user-determined allocations. For example (see the sketch following this list), governments might require that:
1. At least 30% of fees be allocated on the basis of traditional, usage based, distributions,
2. At least 10% support noncommercial music productions,
3. At least 5% be contributed to a retirement fund for burned-out musicians, or
4. There be a minimum contribution to session musicians.
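As a purely illustrative sketch of how such a mixed scheme might be computed, the short program below splits a pool of license fees between government-mandated set-asides and listener-chosen intermediators. The percentages, intermediator names, and dollar amounts are invented, and the session-musician minimum is omitted for brevity; the chapter proposes the idea but specifies no implementation.

```python
# Illustrative sketch only: a hypothetical mixed allocation of compulsory-license
# fees, combining government-mandated set-asides with listener-directed choices.
# All percentages, names, and totals are invented for illustration.

def allocate_fees(total_fees, listener_choices):
    """Split a pool of license fees between mandated set-asides and
    listener-chosen intermediators.

    listener_choices maps an intermediator name to the share of the
    listener-directed money steered to it; shares should sum to 1.
    """
    mandated = {
        "usage_based_distribution": 0.30,   # traditional, download-weighted payouts
        "noncommercial_production": 0.10,   # experimental recordings, folk music, etc.
        "musicians_retirement_fund": 0.05,  # pension pool for retired artists
    }
    allocations = {name: total_fees * share for name, share in mandated.items()}

    # Whatever is not mandated is steered by listeners to competing intermediators.
    discretionary = total_fees * (1 - sum(mandated.values()))
    for intermediator, share in listener_choices.items():
        allocations[intermediator] = discretionary * share
    return allocations


if __name__ == "__main__":
    choices = {"african_music_fund": 0.40, "jazz_collective": 0.35, "direct_to_artists": 0.25}
    for recipient, amount in allocate_fees(1_000_000, choices).items():
        print(f"{recipient:30s} ${amount:,.0f}")
```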
Experiment, Evaluate, and Learn
In the beginning, it would be important to experiment with different approaches,
and also to evaluate and consider changes. There was a proposal to create a role for
musicians and songwriters to bargain with listeners over key features of the allocation
system, including
1. the price of the compulsory license,
2. the minimum allocations to various systems, or
3. systems of compensation that are fairer than current market outcomes.
The Blur/Banff discussions were seeking to find a way that the listeners and
artists could build a new social contract that would compete with and possibly replace
the current system of distributing and marketing music. It would seek to liberate the art
from the consequences of marketing the art as a commodity. If the P2P model was
successful, the expenditures on marketing would fall, and the greater share of
resources would be available to artists themselves.
Supporting Health Care R&D
The inability of the music, drug, and other industries (such as scientific publishing) to modify their monopoly-based business models to address substantial unmet needs has in all cases led to conflict. The discussion of new business models for the music
industry has been driven by its struggle with consumers and their adoption of P2P. In
the pharmaceutical industry the equivalent struggle is more complex but has initially
been with countries rather than consumers.
Although much of the underlying research that leads to new drugs comes out of
academic institutions funded by government grants, development work has mostly been
carried out by pharmaceutical companies, who have been allowed to obtain patents on
the resulting products. Such patents have allowed companies to charge prices for drugs
that bear no relation to their cost of manufacture, justified as paying back the cost of the
research and development (R&D). However since patents have state-based jurisdiction,
companies have not been able to obtain such patent protection in all countries. This has
allowed some countries, notably India, to develop generic copies of drugs and manufacture them at marginal cost prices. The highly competitive generics industry that has developed has provided access to lifesaving drugs at affordable prices for millions of people; however, by its very nature it has not contributed to initial R&D costs. The
international conflict over this issue led the drug industry to spearhead lobbying for
international agreements to extend and standardize patent protection globally (Drahos
and Braithwaite 2002). This led directly to the World Trade Organization (WTO)
agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) (Uruguay
Round Agreement 1994) that requires all but the least-developed economies to issue
patents on medicines by January 1, 2005.
The period of development of the TRIPS agreement, and increased use of
patents in bioscience in general, has paralleled the emergence of the AIDS crisis, which
has drawn worldwide attention to the consequence of the drug access problem (Correa
2000). Even consumers in rich countries have found themselves unable to afford or gain
access to the newest drugs as prices have risen (Families USA 2003). While TRIPS is certainly one way to address inequities in contributions to drug R&D, it does so at the price of removing generic competition, thereby increasing worldwide drug prices and
creating inequities in terms of access to treatments. It also makes the drug development
system rely too much on patent-based marketing monopolies at a time when it is being
shown that they are a hugely inefficient way of purchasing R&D, with most investment
going to "me too" drugs, and as little as 1.5-3% of drug sales being spent on research
leading to treatments that are better than existing therapies (Love 2003; Hubbard and
Love 2004a, 2004b). Finally, there is growing concern that the increasing number of
patents is in itself inhibiting new research (CIPR 2002; Royal Society 2003). These
issues have led to debate over alternatives or modifications to TRIPS as a mechanism for supporting health-care R&D, even by the WTO itself in its 2001 Doha Declaration on
TRIPS and Public Health, which stated that the TRIPS "should be interpreted and
implemented in a manner supportive of WTO members' right to protect public health
and, in particular, to promote access to medicines for all" (Doha WTO Ministerial 2001).
From the success and competitive efficiency of the generics industry it is clear
that patents are not required to ensure a sustainable supply of drugs at marginal cost
prices. The only real benefit to consumers of marketing monopolies and their worldwide
extension via TRIPS is to enforce contributions by all to the cost of R&D. However, data
from drug sales reveals a surprising uniformity in the fraction of a country's GDP that is
spent on drugs, regardless of its population's per capita income (Love 2003). This
suggests a potential modification to the TRIPS agreement to allow countries an
alternative way to contribute to global health-care R&D by ensuring that a fixed fraction
of their GDP is being spent on supporting health care R&D. Meeting such a GDP-based
R&D spending norm could release a country from the current TRIPS obligation of
allowing patents that block generic drug manufacture, thus enabling all drugs to be
accessible at marginal cost prices. The GDP-based norm could be set under WHO
(World Health Organization) auspices and the WHO or the World Trade Organization
(WTO) could monitor compliance.
Using trade agreements to guarantee sustained national contributions to global
health-care R&D in this simple way would ensure that funding for new R&D continues
even if all drugs are sold at marginal cost prices. At the same time it would allow
countries flexibility on implementation and encourage local R&D capacity building. The
outstanding questions are how to implement systems that efficiently collect the funds required by the GDP norm and how to use them to fund innovation in a way that rewards
success in the absence of a marketing monopoly.
One obvious approach for governments to meet their new R&D contribution
obligations is for them to collect funds for drug development via taxation and use new or
existing R&D funding agencies to manage the new resources. However, in countries
with a private health insurance system this may be anathema, and many everywhere
will also worry that centralized national drug development agencies taking decisions on
R&D priorities and allocation of funds would be bureaucratic and inefficient. Management
of drug development is not only about saying yes to promising R&D proposals, but also
saying no. Defenders of the current system say the market may do a poor job of priority
setting, but it may do a good job of saying no to projects that have too little likelihood of
success.
A possible alternative that does not have such potential weaknesses is a
financing scheme that would work through the types of competitive intermediators
discussed. In this case the intermediator's role would be to manage R&D assets on behalf
of consumers. Individuals (or employers) would be required to make minimum
contributions into R&D funds, much as there are mandatory contributions to social
security or health insurance, or to pension funds. Government would set the required
contribution (in order that the country meet the TRIPS-mandated GDP threshold), but
the individual (or employer) would be free to choose the particular intermediator that
received their contributions.
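A minimal sketch of the arithmetic follows, assuming a hypothetical GDP-based norm. The norm fraction, GDP figure, number of contributors, and intermediator names are all invented for illustration; the chapter proposes the mechanism but sets no specific numbers.

```python
# Illustrative sketch only: a hypothetical GDP-based health-care R&D norm.
# Contributions are mandatory, but contributors choose which intermediator
# receives their money. All numbers and names below are invented.

def required_contribution(gdp, norm_fraction, existing_public_rnd, contributors):
    """Return the additional per-contributor payment needed for a country to
    meet an R&D spending norm expressed as a fraction of GDP."""
    target = gdp * norm_fraction
    shortfall = max(0.0, target - existing_public_rnd)
    return shortfall / contributors


def route_contributions(total_raised, intermediator_shares):
    """Split the mandatory contributions among the intermediators chosen by
    contributors; shares should sum to 1."""
    return {name: total_raised * share for name, share in intermediator_shares.items()}


if __name__ == "__main__":
    # Hypothetical country: $500 billion GDP, a 0.25% norm, $0.5 billion already
    # spent publicly, and 100 million contributors (individuals or employers).
    per_head = required_contribution(500e9, 0.0025, 0.5e9, 100e6)
    print(f"Required additional contribution per contributor: ${per_head:,.2f}")

    pool = per_head * 100e6
    shares = {"prize_fund": 0.5, "open_research_consortium": 0.3, "small_grants_agency": 0.2}
    for name, amount in route_contributions(pool, shares).items():
        print(f"{name:25s} ${amount:,.0f}")
```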
In this model, intermediators would control the allocation of resources to
companies and academics carrying out R&D, but not carry it out themselves (as this
would be a conflict of interest). Instead each intermediator would concentrate on
embracing the business model for resource allocation that it believed was the most
efficient for drug development. This could be a system based upon cash prizes for R&D outputs (see note 1), micromanaged small grants, peer-reviewed open research projects, or other
innovations in financing R&D. The intermediator would also adopt its own system of
priority setting. The employer groups or individuals who were required to contribute to
R&D funds would make decisions based upon their assessment of the intermediator's
prowess in developing new treatments. Since in all cases the final product would be a
public good, not owned by any investor, the incentive would be to develop products that
represented therapeutic advances, rather than the profitable "me too" products that
consume most of the current R&D resources.
Intermediators could also adopt "open" research agendas, since the ability to raise
money would not be linked directly to product sales. If employers or individuals believed
open research was more productive than proprietary R&D, more money would flow to
open R&D projects. As a result of implementing such a system, consumers would enjoy
huge savings from the reduction in wasteful marketing practices, which empirically are
far larger than R&D outlays. Moreover, waste within the R&D process would be reduced:
there is enormous evidence that current marketing practices have led to a growing
corruption of the evidence base, as academic researchers enter into business
agreements with private drug developers, and carry out and report questionable
research that promotes products rather than advances science.
How well would intermediators manage R&D funds? This depends in large
measure on how well the contributors can evaluate the intermediator's performance.
Here there are several important policy interventions. First, how much and what type of
transparency is needed to ensure that contributors have reliable and useful information in
order to evaluate the performance of intermediators? Second, what is the optimal policy
on entry? Would a small number of competing intermediators be better than a world
with free entry and larger numbers of competitors? Should individuals pick the
intermediators directly, or should the decision makers be employee groups aggregated
together to have the economies of scale to finance due diligence of R&D intermediators?
And would it be better to limit intermediators to nonprofit bodies, or to limit the amount of
overhead?
Looking at the success of the public goods projects such as the open source
software (OSS) movement and the human genome project (HGP) and the way they
have been managed, one of the most important factors appears to be the effect of
complete data transparency. In both cases this is mandated. In the OSS case the
availability of the code is enforced via licenses such as the GNU General Public License (Free Software Foundation 1989). In the HGP case the DNA sequence being collected was
available to all within 24 hours of collection as a result of an agreement between funding
agencies and sequencing centers: the Bermuda agreement (Sulston and Ferry 2002).
In R&D, one of the greatest drivers is free exchange of knowledge. In both OSS and
HGP the free availability of the different types of data (source code and DNA sequence)
has driven progress in the respective fields.
However, one of the most interesting side effects of such transparency is the way
it keeps both producers and managers honest, which can lead to less
distortion in the allocation of resources to different projects. This is exactly what an
intermediator model requires to be successful in the absence of other market pressures.
Anyone is free to analyze the data, check conclusions, and release their findings to all.
In OSS projects anyone is free to evaluate the quality of the source and decide to
contribute to a project, branch the code if they think they can improve on what is already
there, or start from scratch if they think they can do better. In the case of the genome
project, continuous release of data allowed output and quality information to be
monitored independently by funding agencies, which was a driver for decision making in
this directed project (Hubbard 2003). However, the availability of the data upon which
these assessments were based allowed others both within and outside the project to
extend and challenge that analysis. Exposing your data (complete with errors) to all
actually turns out to be a largely positive experience and appears to lead to greater trust
(Wellcome Trust 2003). Data secrecy is used frequently as a method of competition
between innovators; however, if mechanisms for evaluating R&D outputs are based on
the continuous release of project data, greater transparency will be rewarded
automatically.
Matching Funds: eBay Meets the Public Domain
The models given above are based in part on nonvoluntary mandates from the state to
finance knowledge goods. The next and final model will be entirely voluntary. The
proposal was developed in a 2002 Rockefeller Bellagio dialogue on collective
management of intellectual property goods (Rockefeller Foundation 2002). The working
premise was that there exists significant willingness to pay for a wide range of public
goods, but that transaction costs are often too high to organize those who would
voluntarily contribute. The commercial market is one mechanism to organize buyers, and
it is most commonly organized by the sellers, rather than the purchasers of goods.
Sellers nearly always withhold access to the goods from those who do not pay. In those
cases where the buyers organize the market, buyers are often motivated to obtain
better prices for themselves, often in markets for private goods, such as cooperative
grocery stores or credit unions.
The market for privately provided public goods exists, but it is too small. There is
a significant number of private nonprofit charities that solicit and spend contributions,
and individuals and corporate entities often contribute time and in-kind resources to
create goods such as public databases, listservers (or listservs), free software, public
domain computer software protocols, or other information goods. The new open journals movement is an attempt to organize authors to financially support publishers
that place materials in the public domain. These and countless other efforts have
provided great value to society. But in general, because the financing mechanisms are
more efficient for private goods, society invests too much in private goods, and too little in
public goods.
The Matching Funds proposal is to create a new institutional framework that
would make it easier to match willing funders and willing suppliers of public goods. The
institutional framework would be an intermediator called Matching Funds (MF). The role
of MF would be to provide due diligence on proposals for new public goods, and if the
review was positive, to list the projects for subscribers. Each project would have a
description of the good, a management team to produce the good, and a budget.
Subscribers could offer to contribute any amount toward the final budget, but unless
their contributions were matched by other subscribers sufficiently to fund the entire
budget, the contribution would be returned.
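The pledge-and-refund rule at the heart of the MF proposal resembles a threshold or assurance contract, and can be sketched in a few lines of code. The project name, budget, and subscriber names below are hypothetical; the chapter specifies only that subscriptions bind when the full budget is matched and are returned otherwise.

```python
# Illustrative sketch only: the core pledge-and-refund logic of the proposed
# Matching Funds (MF) intermediator. All names and amounts are invented.

from dataclasses import dataclass, field

@dataclass
class Project:
    name: str
    budget: float
    pledges: dict = field(default_factory=dict)  # subscriber -> amount pledged

    def pledge(self, subscriber: str, amount: float) -> None:
        """Record a binding offer to contribute toward the project budget."""
        self.pledges[subscriber] = self.pledges.get(subscriber, 0.0) + amount

    def settle(self):
        """If pledges cover the budget, the work is commissioned and the money
        collected; otherwise every contribution is returned to its subscriber."""
        total = sum(self.pledges.values())
        if total >= self.budget:
            return ("funded", dict(self.pledges))    # money goes to the project
        return ("refunded", dict(self.pledges))      # money goes back to subscribers


if __name__ == "__main__":
    p = Project("database of cancer drug prices", budget=50_000)
    p.pledge("foundation_a", 30_000)
    p.pledge("patients_group", 15_000)
    p.pledge("individual_donor", 6_000)
    status, ledger = p.settle()
    print(status, ledger)
```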
How It Would Work
For the MF entity to work, it would have to enjoy trust and goodwill, and also inspire confidence in the ability of its management team. We have proposed MF as a nonprofit
entity, which we believe is appropriate for this type of institution. The management of
MF should be thin. Proposals for public goods should come from the outside, either on
the supply or the demand side. For example, a group seeking to commission the
creation of a public database on pharmaceutical company mergers could propose a
specific research project, complete with a budget and a team of experts and managers
who would volunteer to negotiate contracts with individuals or corporate entities that
would actually perform the work. The MF management would review the proposal, and
if it passed this initial review, the project would be advertised on the MF page for public
comment. Anyone would be free to critique the proposal, and to offer suggestions for
modifications.
The MF management would encourage the project managers to revise the
proposal in response to the community feedback. When and if the MF management determined that the project was mature, it would be opened for subscriptions.
Subscriptions would be binding commitments to fund the project if sufficient support for
the project was forthcoming from the community of persons who wanted the project
done. If the project was fully funded, the work would be commissioned. MF would follow
the project, and allow the community to provide feedback, providing a transparent
record of the performance of the project managers and the persons or corporate entities
that did the work. Over time, competent managers or performers would enjoy greater
confidence from the MF management and contributors, and the MF management would
exclude less-competent managers from new projects.
MF would initially be supported by third-party contributions, such as from
foundations. But if MF was successful, it could charge fees to list the projects, possibly
creating a sustainable business model for public goods. At what scale might MF fund projects? The range could be very large. Contributions could come from individuals, but
also from corporate entities or governments.
Here are some examples of small- or medium-sized projects:
• A database of prices of cancer drugs in different countries.
• A public opinion survey on public attitudes on copyright extension.
• A Free/Libre and Open Source Software (FLOSS) program to help
organizations conduct secure online voting.
• A collection of course syllabi for economics classes (available under a Creative Commons license).
• Financing an information workshop on FLOSS software at WIPO.
• Hiring professional writers to improve GNU/Linux documentation.
Some larger projects that might be appropriate for a MF model would include:
• Sequencing of new genomes.
• Clinical trials that test drugs head-to-head (financed by governments or
insurance companies that insure pharmaceutical purchases).
• Litigation to bust poor quality patents.
• Purchase of a permanent global license for the latest version of the WordPerfect Suite from Corel.
The last item is not an absurd example. Indeed, what is absurd are the billions of
dollars spent by consumers to buy very pricey versions of Microsoft Office, largely to
have access to Microsoft's constantly changing proprietary standards for document
formats. If only a fraction of the cumulative licensing fees paid by local, state, and federal governments and large corporations could be diverted to a global license for a high-quality office productivity suite, such as the WordPerfect Office Suite, users would
likely switch to the new free version, and standards would change. The MF license
would obligate Corel to embrace a default document format that was based on open
standards, allowing Corel and its competitors to offer commercial products that
offered new features and improved performance, but that were interoperable with
each other. This would likely be more effective in a shorter period of time than
antitrust litigation or government regulation of Microsoft.
Conclusion
Coase (1937) pointed out in his famous essay on the nature of the firm that
we create social institutions to replace a highly individualized market outcome that is
fraught with high transaction costs and inefficiencies. However, most existing
institutions are organized to sell private goods, often at high prices, and to exclude
those who don't pay from receiving the benefits of knowledge or new technologies. If
we look toward a future of increasing equality and fairness, and if we value the free
flow of information, the benefits of sequential innovation, and the sharing of scientific
information, then we have to strive for new mechanisms to finance public goods and
new institutions that place social priorities first.
Acknowledgments
These ideas have been developed in collaboration with attendees of a series
of workshops hosted by Aventis, the Trans Atlantic Consumer Dialogue, the
Rockefeller Foundation, Médecins Sans Frontières, Oxfam, Health Action
International, and others. The views expressed in this article are those of the authors
and not necessarily of their organizations.
Note
1. For a discussion of prize models, see Wright (1983), Kremer (1998), and Shavell and van Ypersele (2001).
References
Benkler, Yochai. 2002. Coase's penguin, or, Linux and the nature of the firm. Yale Law
Journal 112(3): 369-446. (A condensed version of this article is included as Chapter
11 of this volume.)
Brown, Patrick O., Michael B. Eisen, and Harold E. Varmus. 2003. Why PLoS became
a publisher. PLoS Biology 1(1): E36. Available at: http://www.plosbiology.org/plosonline/?request=get-document&doi=10.1371%2Fjournal.pbio.0000036
Coase, Ronald. 1937. The Nature of the Firm [cited March 1, 2004]. Available at
http://people.bu.edu/vaguirre/courses/bu332/nature_firm.pdf
Commission on Intellectual Property Rights (CIPR). 2002. Commission on Intellectual
Property Rights Report. UK Department for International Development (DFID) 2002
[cited Dec. 15, 2003]. Available at: http://www.iprcommission.org/
Correa, Carlos. 2000. Integrating public health concerns into patent legislation in
developing countries [cited Dec. 17, 2003]. Available at: http://www.southcentre.org/publications/publichealth/toc.htm
Cukier, Kenneth N. 2003. Community property: Open-source proponents plant the
seeds of a new patent landscape. Acumen 1(3): 54-60.
Available at: http://www.cukier.com/writings/opensourcebiotech.html
Doha WTO Ministerial. 2001. Ministerial Declaration adopted 14th November 2001
[cited March 1, 2004]. Available at http://www.wto.org/english/thewto_e/minist_e/
min01_e/mindecl_e.htm
Drahos, Peter, and John Braithwaite. 2002. Information feudalism: Who owns the
knowledge economy? London: Earthscan.
EICTA (European Information and Communications Technology Industry Association). 2004. Available at: http://www.eicta.org/copyrightlevies/index.html
Families USA. 2003. Out of bounds: rising prescription drug prices for seniors. Families
USA Publication No. 03-106 2003 [cited Dec. 16, 2003]. Available at: http://www.familiesusa.org/site/DocServer/Out_of_Bounds.pdf?docID=1522
Ficsor, Mihaly, and World Intellectual Property Organization. 2002. Collective Management of Copyright and Related Rights, WIPO Publication No. 855. Geneva: World Intellectual Property Organization.
Free Software Foundation. 1989. GNU General Public License. [cited March 1, 2004].
Available at: http://www.gnu.org/copyleft/gpl.html
Free Software Foundation. 1996. The Free Software Definition. [cited March 1, 2004]. Available at: http://www.gnu.org/philosophy/free-sw.html
Gillespie, Ed, and Bob Schellhas, eds. 1994. Contract with America: The bold plan by Rep.
Newt Gingrich, Rep. Dick Armey and the House Republicans to change the nation. New York:
Times Books.
Henley, Don. 2004. Killing the music. Washington Post, February 17, 2004, A19.
Hubbard, Tim J. 2003. Human Genome: Draft Sequence. In D. N. Cooper (ed.), Nature
encyclopedia of the human genome. London: Nature Publishing Group.
Hubbard, Tim J., and James Love. 2003. Medicines without barriers. New Scientist June
19: 29.
Hubbard, Tim J., and James Love. 2004a. A new trade framework for global healthcare R&D. PLoS Biology 2(2): 147-150.
Hubbard, Tim J., and James Love. 2004b. We're patently going mad. Guardian, March 4, 2004, 6. Also available at: http://www.guardian.co.uk/life/opinion/story/0,12981,1161123,00.html
Kremer, Michael. 1998. Patent buyouts: A mechanism for encouraging innovation. The
Quarterly Journal of Economics 113(4): 1137-1167.
Love, James. 2002. Artists want to be paid: The Blur/Banff proposal. 02//:Blur, power at play in digital art and culture [cited March 1, 2004]. Available at: http://www.nsu.newschool.edu/blur/blur02/user_love.html
Love, James. 2003. From TRIPS to RIPS: A better trade framework to support innovation in medical technologies [cited Dec. 17, 2003]. Available at: http://www.cptech.org/ip/health/rndtf/trips2rips.pdf
Musgrave, Richard A. 1959. The theory of public finance. A study in public economy. New
York: McGraw-Hill.
Raymond, Eric S. 1999. The cathedral and the bazaar: Musings on Linux and open source by
an accidental revolutionary. Beijing and Cambridge: O'Reilly. Also available from
http://www.catb.org/~esr/writings/cathedral-bazaar/.
Rockefeller Foundation. 2002. Rockefeller Foundation initiative to promote intellectual property (IP) policies fairer to poor people. Rockefeller Foundation Dialog, Bellagio, Italy, November 20-25, 2002 [cited March 1, 2004]. Available at: http://www.rockfound.org/display.asp?context=1&Collection=1&DocID=547&Preview=0&ARCurrent=1
Rome. 1961. International ("Rome") Convention for the Protection of Performers,
Producers of Phonograms, and Broadcasting Organizations, 1961. Available at
http://www.wipo.int/clea/docs/en/wo/wo024en.htm
Royal Society. 2003. Keeping science open: The effects of intellectual property policy on the
conduct of science 2003 [cited Dec. 15, 2003]. Available at: http://www.royalsoc.ac.uk/
files/statfiles/document-221.pdf
Shavell, Steven, and Tanguy van Ypersele. 2001. Rewards versus intellectual property
rights. Journal of Law and Economics 44(2): 525-547.
Sulston, John, and Georgina Ferry. 2002. The common thread: A story of science, politics,
ethics and the human genome. London: Bantam Press.
Uruguay Round Agreement. 1994. TRIPS: Trade-Related Aspects of Intellectual Property Rights. Annex 1C of Marrakesh Agreement Establishing the World Trade Organization, signed 15th April 1994 [cited March 1, 2004]. Available at:
http://www.wto.org/english/docs_e/legal_e/27-trips_01_e.htm
Wellcome Trust. 2003. Sharing data from large-scale biological research projects: A system of
tripartite responsibility 2003 [cited March 10, 2003]. Available at: http://www.wellcome.ac.uk/en/1/awtpubrepdat.html
Wright, Brian D. 1983. The economics of invention incentives: Patents, prizes, and research contracts. American Economic Review 73: 691-707.