Sustainablog

This blog will cover some news items related to Sustainability: Corporate Social Responsibility, Stewardship, Environmental management, etc.

11.6.05

Business pushes G8 on global warming

Financial Times, 10 June 2005 - Leaders of some of the world's biggest
businesses yesterday stepped up pressure on the Group of Eight
industrialised nations with a call for action on climate change ahead of
the summit in Gleneagles next month.
Business leaders from companies such as BP, Ford and British Airways urged
the G8 nations to set up a global system for curbing greenhouse gas
emissions.
In a two-hour meeting with Tony Blair, the prime minister, business leaders
from 24 companies called for a global emissions trading system or a
similar market-based mechanism. Mr Blair has made global warming one of
the twin priorities, along with Africa, for his chairmanship of the G8
summit in Scotland.
Steve Lennon, chair of the environment and energy commission of the
International Chamber of Commerce, which represents hundreds of thousands
of companies in 130 countries, said: "We see a global system of emissions
trading as inevitable."
The UK's FTSE 100 companies play a large role in global carbon dioxide
emissions, according to a report to be published next week by Henderson
Global Investors.
The UK produces 2.2 per cent of the world's greenhouse gases, but FTSE 100
companies are responsible for 1.6 per cent of total global emissions.
Emissions from the products sold by five of the biggest UK oil and mining
companies account for more than a tenth of the world's emissions from
fossil fuels.
"A carbon-constrained world would therefore pose significant challenges
for investors in a number of UK companies," the draft report noted.
The business leaders called for a long-term international "cap-and-trade"
system or similar market-based mechanism that would set limits on how much
greenhouse gas countries could emit and define "emissions rights".
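The mechanics of such a cap-and-trade system can be illustrated with a toy model (the firm names and figures below are invented for illustration, not drawn from the article): a regulator issues a fixed number of emission allowances, every tonne emitted must be covered by an allowance, and firms with a surplus sell to firms with a shortfall.

```python
# Toy cap-and-trade model: a fixed cap is split into allowances,
# and firms trade so that every tonne emitted is covered.

class Firm:
    def __init__(self, name, allowances, emissions):
        self.name = name
        self.allowances = allowances  # tonnes of CO2 the firm may emit
        self.emissions = emissions    # tonnes it actually emits

    def balance(self):
        # positive = surplus allowances to sell, negative = shortfall to buy
        return self.allowances - self.emissions

def trade(firms, price_per_tonne):
    """Value each firm's surplus or shortfall at the market price."""
    transfers = {f.name: f.balance() * price_per_tonne for f in firms}
    # The cap binds overall only if total emissions <= total allowances.
    cap_met = sum(f.emissions for f in firms) <= sum(f.allowances for f in firms)
    return transfers, cap_met

firms = [Firm("PowerCo", allowances=100, emissions=120),  # needs 20 more
         Firm("CleanCo", allowances=100, emissions=70)]   # has 30 spare
transfers, cap_met = trade(firms, price_per_tonne=10)
print(transfers)  # PowerCo is 200 short at the margin; CleanCo holds 300 of surplus
print(cap_met)    # True: 190 tonnes emitted against 200 allowed
```

The point the business leaders are making is visible even in this sketch: once emissions carry a price, cutting them becomes a source of revenue rather than only a cost.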
Rick Samans, managing director at the World Economic Forum, which convened
the meeting with Mr Blair, said: "What companies are seeking is certainty
. . . particularly when they are dealing with investments that have a long
operating cycle. If you are going to build a power plant or factory, you
want to know what the rules will be in order to assess the risk and return
of your investment."
Apart from the US, all the G8 nations have ratified the Kyoto protocol on
climate change, which requires developed nations to cut emissions of
greenhouse gases relative to 1990 levels. President George W. Bush refuses
to discuss measures to tackle climate change beyond 2012, when the Kyoto
treaty expires.
Andrew Sentance, chief economist at British Airways, said: "The timeline
of the Kyoto protocol is not long enough to drive the long-term investment
and innovation required to reduce greenhouse gas emissions."
The statement was also signed by ABB, Alcan, BT, Cinergy, Cisco, Deloitte,
Deutsche Bank, E.ON, EADS, EDF, Eskom, Hewlett-Packard, Petrobras, UES,
Rio Tinto, Siemens, Swiss Re, Toyota, Vattenfall and Volkswagen.

Financiers facing up to climate change risks

EurActiv.com, 3 June 2005 - The insurance and investment industry has
woken up to the implications of climate change over the past few years,
recognising the huge impact it could have on the world financial market.
This article looks at the pressing issues, the reaction of the EU and what
is being done.
Background:
The financial world is beginning to worry about how changes in the weather
will change the value of investments. On 10 May 2005 an Institutional
Investor summit on climate risk was held in New York under the auspices of
the United Nations (UN). Attended by leading EU and US investors, the
summit adopted an action plan calling for:
a requirement on asset managers to assess financial risk from climate
change;
investment of 1 billion US dollars in clean technologies;
disclosure by companies of financial risk from climate change;
ranking of the world's largest companies to include their efforts in
climate change risk reduction.
These are all very laudable proposals but the question is how to implement
them. There are those who counsel immediate action to guard against future
catastrophe; those who take a wait-and-see approach; those who do not
believe climate change will even happen; and those who close their eyes to
the whole problem. If something is to be done, all these positions must be
brought together. This will involve a sustained effort from investors,
industry (both the financial and the manufacturing industries) and
governments.
Issues:
EU Lisbon Strategy
A fundamental aspect of the EU’s Lisbon Strategy is that environmental
issues and concern for the sustainability of development are at the heart
of economic progress in the European Union. For further examination of EU
policy see our LinksDossiers on Sustainable Development: EU Strategy and
EU policies on climate change.
The Commission's corporate social responsibility programme is also pushing
businesses towards contributing to sustainable development.
Investment
Investors are realising that they will have to take climate change risk
into account in deciding how to invest. The level of risk associated with
certain investments, according to geographic region or type of industry,
will alter as the effects of climate change are felt. Secondly, whether or
not companies are taking steps to reduce emissions, to promote clean
technologies etc, will have an impact on the level of investment they
attract. Also, investors may see new opportunities in companies and
technologies which, because of their awareness of climate change issues,
are more likely to ride the storm in the long run.
Insurance
The insurance industry has already had to deal with claims related to
changes in climate – floods in the UK, heat waves in France, fires in
Italy. In the UK, householders in flood-prone areas have found they are
bearing the brunt, with insurance premiums skyrocketing. In Germany, the
NGO Germanwatch has launched an initiative for an insurance-based climate
compensation scheme – an insurance policy specifically geared towards
catastrophes caused by changes in the climate.
Pensions
An issue which is clearly linked to investment is pensions. If there is a
change in the nature of investment, as the nature of risk changes,
traditional pension funds may themselves be in danger. Add to this the
ageing population and the reduction in the labour market making pension
contributions, and you have the potential for breakdown in world systems
for providing income after retirement. With the increasing capitalisation
of the banking system, and the shifting of risk that entails, it is the
individual workers, the contributors to insurance and pension funds, who
are the true owners of capital. Many commentators are now pointing out
that investment must be geared to safeguarding the long-term interests of
these true owners.
International initiatives
The past few years have seen a growing number of bodies and forums formed
to discuss this issue. Some of the leading international bodies are:
AccountAbility: international organisation aimed at enhancing individual
and corporate accountability and sustainable development.
Ceres: group of US investment funds, environmental organisations and
public interest groups campaigning for sustainable investment.
IIGCC: Institutional Investors Group on Climate Change: London-based group
focussed on climate change risks and opportunities for the European
financial market.
UNEP: United Nations Environmental Programme Financial initiative: global
partnership between UNEP and the financial sector (banks, insurers and
fund managers) working to understand the impact of environmental change on
financial performance.
Investor Network on Climate Risk: forum for discussion on climate risk.
SEFI: Sustainable Energy Finance Initiative: promotes increased investment
in energy efficiency and renewable energy.
BASE: UN-funded NGO building partnerships between finance and industry to
promote sustainable energy.
Positions
The view of the IIGCC is that "climate change is arguably the biggest
environmental risk management challenge" facing large investment firms. It
has noted that governments have been slow to recognise the crucial role
institutional investors must play in any effective climate change
programme.
Greenpeace takes the view that global monetary policy is ignoring the need
to direct investment towards environmentally beneficial projects and,
quite to the contrary, detrimental developments continue to attract
funding. It continues to lobby the World Bank and the International
Monetary Fund to change their investment policy.
In June 2004, the Association of British Insurers issued a report which
showed that claims for damage caused by storms and floods in the UK could
treble by 2050. It called for a joint approach from industry and
government on how to deal with the effects, particularly on property, of
changing weather patterns.
AccountAbility issued a report in January 2005 which concluded that the
investment community was not doing enough to include climate change risks
in investment decisions. It said that investors must recognise the need
for long-term financial planning, which takes into account environmental
and sustainability concerns, as opposed to short-term gain.
In the UK, a Climate Change Impacts Programme has been set up to help
businesses understand the impact of climate change and how to deal with
it. It has developed a software tool enabling assessors to calculate the
costs and weigh up the risks of different strategies.
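The article does not describe the tool's internals, but one standard way to weigh up such risks is to compare the expected cost of each strategy: its upfront cost plus the probability-weighted damage that remains if the event occurs. A minimal sketch, with entirely invented numbers:

```python
# Toy expected-cost comparison of adaptation strategies:
# expected cost = upfront cost + P(event) * residual damage if it occurs.

def expected_cost(upfront, p_event, residual_damage):
    return upfront + p_event * residual_damage

strategies = {
    "do nothing":     expected_cost(upfront=0,   p_event=0.2, residual_damage=1000),
    "flood defences": expected_cost(upfront=150, p_event=0.2, residual_damage=100),
}
best = min(strategies, key=strategies.get)
print(strategies)  # do nothing: 200, flood defences: 170
print(best)        # flood defences
```

On these made-up figures the defences pay for themselves in expectation, which is exactly the kind of comparison such an assessment tool is meant to make explicit.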
US researchers have called for the implementation of immediate measures,
such as a small tax on petrol, to be put aside as a society-wide insurance
to pay for the future costs of climate change.

Technology and the elderly: The world's population is getting older. How can technology help old people live independently at home?

Home alone
Jun 9th 2005
From The Economist print edition

LIKE many middle-aged people these days, Edie Stern, who lives in New
York, often finds herself worrying about an ageing parent. Her father,
Aaron, is 87 years old and lives on his own in Florida, hundreds of miles
away. “He's a very independent soul,” she says. Many people in Ms Stern's
position feel torn: they want their parents to continue to live in their
own homes and pursue their own lives, but are concerned about their
parents' growing frailties. Unlike others, however, Ms Stern can at least
feel she is doing something to help resolve this dilemma. As a researcher
at IBM, a big computer firm, she is one of many people developing new
technologies intended to make it easier, less stressful and even healthier
for older folks to continue living at home.
Demand for such technologies could be enormous, since baby-boomers are on
the cusp of retirement. About 10% of the world's population was 60 or
older in 2000—but that figure will more than double to 22% by 2050. Some
countries will be especially hard hit: 28% of the population in Italy and
Japan will be over 65 by 2030. In the rich world, there will be two old
people for every child by 2050.
Consider the daily chore of taking the right pills at the right time. As
people grow older, the combinations of medicines they must take often
become elaborate cocktails. Pills are easily confused and labels can be
hard to read. So MedivoxRx Technologies, a division of Wizzard Software,
based in Pittsburgh, has developed Rex, the talking pill bottle. Pressing
a button on its base plays back spoken prescription information, stored in
a microchip, through a miniature speaker. This information can either be
generated automatically from prescription data, or recorded directly using
a docking station: “Mum, take this arthritis pill for your shoulder pain,
but not more than three times a day.” A new version of Rex, now in the
pipeline, will warn if a bottle is opened too many times in a day.
Similarly, Bang & Olufsen's Medicom division is test marketing a device in
several European countries that helps people remember how many pills
they've taken. The Helping Hand device holds “blisterpacks”, those cards
of pills packaged under individual bubbles of plastic. When it is
medication time, Helping Hand beeps and flashes, and sensors track how
many pills have been taken and when. A read-out indicates prescription
compliance—a green light means the user is on track. Other products are
still in the research labs, such as the system devised by Hewlett-Packard
(HP) to let pharmacists print bar-codes directly on to pills. They can
then be held up to a scanner the size of a coffee cup, which says out loud
what the pill is and when to take it. A second device holds all of a
person's pills and dispenses each one at the appropriate time.
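None of these devices' internals are published in the article, but the green-light compliance read-out of a monitor like Helping Hand could work roughly as in this sketch, which simply counts the day's recorded dose events against the prescribed schedule (the class, names and schedule are all invented for illustration):

```python
# Hypothetical dose-compliance check in the spirit of a blister-pack
# monitor: log each dose event, then compare against the schedule.
from datetime import datetime

class DoseLog:
    def __init__(self, doses_per_day):
        self.doses_per_day = doses_per_day
        self.events = []  # timestamps of pills taken

    def record_dose(self, when):
        self.events.append(when)

    def status(self, day):
        """Return 'green' if the day's doses are on track, else 'red'."""
        taken = sum(1 for t in self.events if t.date() == day.date())
        return "green" if taken >= self.doses_per_day else "red"

log = DoseLog(doses_per_day=3)
today = datetime(2005, 6, 9, 8, 0)
for hour in (8, 13, 19):          # morning, midday and evening doses
    log.record_dose(today.replace(hour=hour))
print(log.status(today))          # green: all three doses recorded
```

The same log could drive the warning the new Rex is said to add, flagging a bottle opened too many times in a day as readily as one opened too few.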
Managing the chronic diseases that accompany old age, such as arthritis,
diabetes and hypertension, involves more than just popping the right
pills, however. Other new technologies focus on remote management of such
chronic diseases. Health Hero Network, for example, has developed the
Health Buddy, a dedicated computer that offers daily coaching for some 45
health conditions. The latest version has a colour screen and ports for
connecting medical sensors, such as a device for measuring diabetics'
glucose levels. Used by American health-care organisations to look after
over 5,000 chronically ill patients, the Health Buddy plugs into the phone
and sends data between patient and doctor every day. A Japanese version is
now in the works, and approval is pending in the Netherlands, a country
with a particular enthusiasm for telemedicine. The firm plans a British
launch too.
At IBM's research lab in Zurich, researchers are working on a
mobile-health toolkit to link medical devices with wireless networks.
Called mHealth, the kit could, for example, work with Bang & Olufsen's
Helping Hand so that a forgotten pill triggers a mobile-phone call. HP,
meanwhile, is working on wearable wireless sensors, the size of sticking
plasters, that could be used for remote monitoring of heart activity and
other information. The idea behind all of these monitoring systems is to
allow old people to remain in their own homes for as long as possible,
even when they are being treated for chronic illnesses, rather than moving
into a nursing home.
Another category of devices monitors non-medical activities: Has Mum got up
today? Did Dad have any breakfast? Lance Larivee, who works in the
software industry and lives in Portland, Oregon, is testing a new system
from Lusora, a start-up based in San Francisco. The Lusora Intelligent
Sensory Architecture (LISA), which will go on sale later this year, is a
collection of wireless devices including a wearable panic alarm and
various monitoring devices that are placed around the home and detect
motion, sound and temperature. Data from these devices can be accessed
securely via the internet. So Mr Larivee can, for example, check online to
see if his 87-year-old grandmother—who lives alone in Los Altos,
California—has opened the refrigerator yet today.
Living Independently, a firm based in New York, last year began selling a
similar system, called QuietCare, the development of which was funded in
part by America's National Institutes of Health and Ageing. It too
combines motion detectors with a secure website where customers can check
activity. But the system is also backed up by ADT Security Services, a
home-security firm. ADT is told what patterns of activity—or lack of
activity—should trigger particular pre-determined responses, such as
calling for an emergency doctor.
Such systems need not rely on elaborate cameras and sensors, however. Any
electronic device that is central to the daily routine can potentially be
used as a barometer of well-being. In Japan, over 2,200 people use the
i-pot system devised by Zojirushi, Fujitsu and NTT. As its name suggests,
the i-pot is an internet-connected kettle. Whenever it is used—which is
several times a day in tea-loving Japan—it sends a wireless signal to a
central server. Usage records can be checked on a secure website, and the
pot also sends a twice-daily summary by e-mail to a family member or other
designated recipient.
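The alert logic behind a passive monitor like the i-pot is not spelled out, but a minimal version might simply flag any gap between usage events longer than a chosen threshold (the event format and the 12-hour threshold here are assumptions, not the product's actual rules):

```python
# Minimal activity monitor: flag any gap between usage events
# (e.g. kettle boils) that exceeds a threshold.
from datetime import datetime, timedelta

def gaps_exceeding(events, threshold):
    """Return (start, end) pairs for gaps longer than `threshold`.
    `events` must be timestamps in ascending order."""
    alerts = []
    for earlier, later in zip(events, events[1:]):
        if later - earlier > threshold:
            alerts.append((earlier, later))
    return alerts

events = [datetime(2005, 6, 9, 7, 30),
          datetime(2005, 6, 9, 12, 0),
          datetime(2005, 6, 10, 9, 0)]  # nothing from midday to next morning
alerts = gaps_exceeding(events, threshold=timedelta(hours=12))
print(len(alerts))  # 1: the 21-hour gap would trigger a check-in
```

The appeal of this approach is that it needs no cameras or wearables: any appliance used several times a day becomes, as the article puts it, a barometer of well-being.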
Back at IBM, Ms Stern is working on something called the Friends & Family
Portal, which could tie many of these concepts together. Bringing together
health updates, a listing of doctor's appointments, chronic disease data
and other information, the portal is designed to house everything those
concerned about an elderly person would want to know. A “buddy list” keeps
everyone connected via e-mail or instant messaging. Ms Stern's father, for
example, could upload his glucose readings to the portal so that his
doctor in Florida, his daughter in New York and his son in Denver could
all see that he is keeping his diabetes in check. Patients who know that
other people are paying attention, says Ms Stern, are more likely to
follow doctor's orders. “It's a virtuous circle,” she says.
While the demand for all these technologies seems certain to grow, this
kind of monitoring inevitably raises the question of privacy—a prickly
issue that has derailed other technologies in the past. Will the elderly
tolerate a barrage of devices monitoring and tracking them, revealing
everything down to when they had breakfast or last had a cup of tea?
Richard Jones, the boss of Lusora, responds with a question of his own:
“What's a greater loss of privacy than moving out of your own house?” He
has a good point.

Only practitioners of sustainable development will get Time's business

The Vancouver Sun, 3 June 2005 - Forest companies received a warning from
Time Inc. Thursday that if their logging practices are not up to snuff,
then the world's largest magazine publisher doesn't want their business.
David Refkin, director of sustainable development for Time Inc., told the
Global Forest and Paper Summit that sustainable development is important
to the business world, and it is time for forest companies to stop viewing
it as a threat.
Time Inc., which buys 650,000 tonnes of paper a year for its publications,
does an annual review of its suppliers, evaluating them on a scale of
sustainability targets.
"Our strategy has been to reward leaders, encourage laggards, and for
those who have egregious practices: No business," Refkin said.
Refkin was part of a panel discussion on sustainability that included
outspoken environmental advocate Tzeporah Berman and native leader Garry
Oker of the Doig River First Nation.
The panelists challenged some of the assumptions industry CEOs had
presented earlier in the day about the progress the forest industry is
making in achieving sustainable practices.
Weyerhaeuser CEO Steven Rogel had said the forest industry is part of the
environmental solution, noting that wood locks up greenhouse gas-producing
carbon. And Abitibi Consolidated CEO John Weaver said all Abitibi's forest
lands will be third-party certified by the end of 2005.
"As an industry we are looking more and more to the various biodiversity
challenges we have across the forest and how we can set aside certain land
uses," Weaver said.
Refkin said overall, however, only six per cent of the world's logging is
third-party certified.
"That's a start, but you have a long way to go," he said.
Berman, who had taken part in a protest outside the summit on Wednesday,
called on the executives to collaborate with environmental organizations
to identify endangered forests and then to defer logging while solutions
for protecting habitat are found.
"It's not enough to claim you are collaborating while you continue to log
endangered forests," she said.
Berman, part of a new wave of environmental activism that focuses on
collaboration over confrontation, praised one of the CEOs, Tom Stephens of
Boise Cascade, for his role in changing British Columbia forest practices
when he was CEO of MacMillan Bloedel.
She said unlike much of the rest of the developed world, Canada still has
much of its original forest, which presents Canadian companies with a
unique opportunity to protect wildlife habitat.
"We have one of the biggest opportunities to get it right. I urge you to
work with us now for meaningful change on the ground."
Avrim Lazar, president of the Forest Products Association of Canada, said
in a later interview that he welcomed the comments from critics, saying it
is part of the dialogue the association is trying to kindle.
"What I found most interesting in Tzeporah Berman's talk is the clear
message that there is a new model for the environmental community and the
forest industry to work together.
"Ten years ago she would never be invited to speak inside and the industry
would be saying she is full of it.
"We are now some place very different. We and the environmental movement
have learned how to talk to each other.
"When we have constructive dialogue, things change on the ground." Lazar
said.

British drivers are badly done by. A road-pricing scheme could help change that

Jam yesterday
Jun 9th 2005
From The Economist print edition

THEY are hemmed in by more bus lanes, picked on by more traffic wardens,
spied on by more speed cameras, punished with more fines and soaked more
enthusiastically by the exchequer than ever before. Now, to cap it all,
British drivers are threatened with a road-pricing system that would track
their movements and bill them according to where they travel when. So much
for the freedom of the road.
The driving lobby suspects that the government would use road pricing to
screw more money out of it. But it need not be that way. Combined with a
reform of motoring taxes, road pricing could make life better, not worse,
for drivers.

Small-island blues
Britain's roads are the most clogged in Europe. The length of the queues
is not just fraying drivers' nerves but also generating pollution,
damaging health and raising businesses' costs. And it's getting worse: by
2010, traffic volumes are expected to be a quarter higher than they were
in 2000.
Road pricing is an economist's dream solution because it replaces a system
that rations road use by queuing, which wastes people's time, with one
which rations it according to the value different drivers place on their
journeys. And as demand varies, so can price: in cities at rush-hour
prices can be set high; at night and in the countryside they can be kept
low.
But road pricing has always been regarded in Britain as a politician's
nightmare. While other Europeans are used to motorway tolls, Britons have
regarded the idea of charging them for using roads as analogous to
charging them for breathing.
Ken Livingstone, the mayor of London, has gone some way to changing that.
The congestion charge he introduced two years ago has reduced traffic by
15% and increased speeds by 22% in central London without much damage to
business. The scheme's success has given the government the courage it
needed to broach the idea of extending road pricing nationally, but it
will still have to work hard to sell the idea to motorists. The best way
of doing that would be to introduce road pricing along with a reform of
the inefficient system of taxing and investing in transport.
Road taxes should charge drivers for four sorts of damage they do: to road
surfaces, to the climate, to other people's health and to other drivers by
creating jams. The two main current sorts of tax—fuel duty and vehicle
excise duty (VED)—do not do that well. VED, an annual tax, penalises some
dirty cars, while fuel duty taxes people for burning up petrol, and thus
for contributing to climate change. Neither tax, however, does much to
discourage congestion, which wastes time and damages health.
Road prices, by contrast, could be set to take into account all those four
sorts of damage. Charges would depend crucially on assumptions about the
costs of economic damage; but according to one set of calculations (see
article), mid-range assumptions about climate-change and pollution costs
would lead drivers to pay about the same overall as they do at present.
Rural drivers would pay less; urban ones would pay more but get around
faster, so saving money in other ways.
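Under such a scheme, a per-journey charge would add up a component for each kind of damage, with the congestion component varying by place and time of day. A toy calculator (all the per-kilometre rates below are invented for illustration, not from the cited calculations):

```python
# Toy per-journey road charge covering the four damage types:
# road wear, climate, health (local pollution) and congestion.

RATES_PER_KM = {"wear": 0.01, "climate": 0.02, "health": 0.01}  # pounds/km
CONGESTION_PER_KM = {("urban", "peak"): 0.30,
                     ("urban", "offpeak"): 0.05,
                     ("rural", "peak"): 0.02,
                     ("rural", "offpeak"): 0.00}

def journey_charge(km, area, time):
    """Flat damage rates plus a congestion rate set by where and when."""
    base = sum(RATES_PER_KM.values()) * km
    return base + CONGESTION_PER_KM[(area, time)] * km

# A 10 km rush-hour city trip vs. the same distance on a rural road at night:
print(round(journey_charge(10, "urban", "peak"), 2))     # 3.4
print(round(journey_charge(10, "rural", "offpeak"), 2))  # 0.4
```

Even with made-up rates the distributional point survives: the rural off-peak driver pays a fraction of what the rush-hour urban driver does, which is the reallocation the article describes.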
Once a road-pricing system was up and running, it would also provide
valuable information about demand. At present, investments in the road
system are based on clunky cost-benefit calculations. By charging for road
use, the government would discover just what people were prepared to pay
for. That would encourage private investment in roads, and might lead to a
more rational system of allocating public investment in transport of all
types. At present, roads do unreasonably badly: in 2002-03, road users
paid £26.5 billion ($48 billion) in fuel and VED, and the government spent
£7.5 billion on the roads, while rail passengers got £2.8 billion in
subsidy. What's more, a system that charged cars specifically for the
pollution they cause would undermine one of the main arguments for
subsidising rail.
Drivers' wariness about road pricing is understandable, but they would
probably benefit. They might pay more, they might pay less, but they would
almost certainly get better roads and swifter journeys. So at least they'd
spend less time fuming in jams about the injustice of it all.

Driven to radicalism
Jun 9th 2005
From The Economist print edition

A national road-pricing scheme is looking less like a fantasy and more
like reality
TRANSPORT and politics do not mix. Road and rail policy operates on
time-spans of a decade or more, offering little electoral advantage to
ministers, who are never more than five years away from losing their jobs.
So it was nice to see Alistair Darling, the transport secretary, taking
the long view last week by restarting the debate on using pricing to
reduce congestion on Britain's worsening roads.

Everyone agrees that big changes are needed. The problem is not a lack of
capacity (most roads are empty most of the time) but the lack of an
efficient system for allocating it. Cars impose big costs on society
mainly in the form of congestion, but also through pollution, wear and
tear, and road accidents. In the absence of a market, road space is
allocated by queuing, putting Britain at the top of the European
congestion league (see chart 2). The current tax system (under which
drivers pay an annual charge on their cars and high taxes on fuel) is good
at encouraging fuel efficiency, but bad at reducing congestion, which
concentrates around a few urban areas and trunk roads. As a result, rural
drivers pay too much and urban ones too little. A system that charged
individual drivers for the costs of increased traffic density, vehicle
emissions and damage to the roads would encourage more efficient use of a
scarce resource and be fairer to boot.
Road pricing is easy to do. Many European countries have toll roads.
Sweden already runs a paper permit-based system for lorries. Singapore
used permits to control access to the city centre from 1975 until 1998,
when it switched to a network of electronic sensors. The British
government is toying with something fancier: a system based on satellite
positioning. At the moment, the technology is too inaccurate to use in
cities or forests. But while a low-tech system could be built at once, the
government has set a tentative date of 2014 for any British system—by
which time better satellites will be available.
Whether it is politically feasible is another question. Economics suggests
that pricing the roads properly would make everybody better off through
faster journeys, cleaner air or cheaper travel. Yet convincing the voters
will be hard.
Fuel and car taxes are unpopular because the government earns far more
from them than it spends on roads. In 2000, a surge in fuel prices led to
blockades outside refineries that succeeded in choking off the nation's
petrol supplies. That was one of only two times in Tony Blair's
premiership that Labour's approval ratings have fallen below those of the
opposition Tories. Mr Blair has not forgotten, and fuel taxes have not
risen since in real terms.
Still, the evidence suggests it can be done. When Ken Livingstone, the
mayor of London, introduced the capital's congestion-charging scheme in
2003, he sold the policy to a wary public by promising to corral the
(fairly modest) revenues and spend them on better public transport. The
government has indicated that it is thinking of sugar-coating its scheme
by cutting fuel and car taxes so that, overall, motorists would pay no
more to the Treasury than they do now—about £26.5 billion ($48 billion) a
year in 2002-03. Pricing the roads according to public opinion rather than
the true cost of motoring sounds like a compromise. But, interestingly,
what sounds like a political trade-off between efficiency and
acceptability may be the right outcome. Stephen Glaister, a professor at
Imperial College, says that, assuming that various environmental costs lie
roughly in the middle of their likely ranges, an efficient system could
generate about as much revenue as today's taxes. Only the distribution
would change, with the heaviest users of the roads paying more.
Motorists might be mollified by assurances that they won't have to pay
more, but it would leave ministers unable to repeat the bargain made by Mr
Livingstone. A revenue-neutral system would mean no extra cash to spend on
improvements to public transport or better roads. The busiest railways and
bus routes tend to run alongside the busiest roads, and without more
capacity, the risk is that congestion would merely be shunted from cars on
to already-crowded trains and buses. Rod Eddington, British Airways' chief
executive—who has been hired by the government to think big thoughts about
the future of transport—has said that he favours lots of investment in new
infrastructure. A revenue-neutral scheme would mean the money for that
would have to come from unpopular tax hikes elsewhere.

In the long run, though, road pricing could make transport decisions
easier. Mr Glaister points out that road charging would remove one of the
main justifications for the subsidies paid to the railways and buses, as
well as highlighting the financial pressures they face. Heavily subsidised
rural lines, which already spend most of their time transporting fresh
air, would become even less viable as passengers switched to cheap
motoring. Crowded urban routes would be even more packed with refugees
from the roads, making a good case for extra investment. Diverting
capacity to where it will do the most good is a sensible policy, but,
again, there will be political problems. The countryside lobby, for
instance, will oppose cuts to rural bus and train services.
Of course, 2014 is a long way off. Road pricing has been suggested before,
first in 1964 and again in 1971. Both times the government lost its nerve.
But with traffic expected to rise inexorably over the next 20 years, doing
nothing is getting harder. Mr Darling has called for a decision before the
next election. Both the main opposition parties support the idea in
principle. Making it happen in practice will be difficult, but the worse
the traffic gets, the more appealing any solution begins to look.

Shell and energy firms submit planning application for world's largest wind farm

Associated Press, 7 June 2005 - The Royal Dutch/Shell Group of Cos. and
other energy firms applied Tuesday to build a 1.5 billion pound ($2.7
billion) electricity-generating wind farm with the potential to supply a
quarter of the homes in the British capital, making it the world's
largest.
The London Array project proposed by ShellEnergy Ltd., Duesseldorf,
Germany-based E.ON U.K. Renewables and joint venture Core Ltd. would place
some 270 wind turbines on offshore platforms where the Thames River meets
the North Sea around 60 miles outside London.
The turbines would generate around 1,000 megawatts and connect into
Britain's national grid to supply power for more than 750,000 homes,
helping meet Prime Minister Tony Blair's target of generating 10 percent
of electricity from renewable sources by 2010.
Blair's government has repeatedly expressed its commitment to offshore
wind farms as a way of cutting emissions linked to global warming. As one
of the windiest countries in Europe, Britain is naturally predisposed to
turbine power generation.
Shell said its proposed London wind farm could be built by 2010-11, if the
government grants permission before the end of 2006. The plant would avoid
emissions of up to 1.9 million tons of carbon dioxide every year and could
make up to 10 percent of the government's 2010 target, it said.
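The figures quoted above can be sanity-checked with back-of-the-envelope arithmetic. The capacity factor, per-household consumption, and grid emissions intensity below are illustrative assumptions, not figures from the article:

```python
# Rough check of the London Array figures quoted above.
# Assumed values (not from the article): offshore wind capacity
# factor ~0.35, average UK household use ~4,100 kWh/year (mid-2000s),
# displaced grid emissions ~0.6 kg CO2 per kWh.
CAPACITY_MW = 1_000
CAPACITY_FACTOR = 0.35          # assumption
HOURS_PER_YEAR = 8_760
HOUSEHOLD_KWH_PER_YEAR = 4_100  # assumption
GRID_KG_CO2_PER_KWH = 0.6       # assumption

annual_kwh = CAPACITY_MW * 1_000 * CAPACITY_FACTOR * HOURS_PER_YEAR
homes = annual_kwh / HOUSEHOLD_KWH_PER_YEAR
co2_tonnes = annual_kwh * GRID_KG_CO2_PER_KWH / 1_000

print(f"{homes:,.0f} homes")          # roughly 750,000
print(f"{co2_tonnes:,.0f} t CO2/yr")  # roughly 1.8 million
```

Under those assumptions the numbers line up with the article's "more than 750,000 homes" and "up to 1.9 million tons of carbon dioxide" claims.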
"This project will supply the equivalent of a quarter of London's domestic
load and will surely, once and for all, bury the myth that wind energy is
insignificant," said Core Director Erik Kjoer Sorenson. "Furthermore, it
is merely the first of a number of similar sized wind power schemes that
will place the U.K. market at the forefront of offshore renewable energy
development worldwide."
E.ON, the world's biggest publicly traded utility, will own a third of the
London Array project and Shell another third. The remaining third will be
held by Core, a joint venture between Farm Energy and Denmark's Energi E2
A/S. The partners will share the investment in proportion with their
equity stakes.
Greenpeace Executive Director Stephen Tindale welcomed the application.
"It is crucially important that we clean up the way we generate our energy
in response to this threat and that means developing renewables like wind
power as fast as possible," he said.
More than 140 countries around the world have signed the Kyoto Protocol on
global warming, which requires countries to sharply reduce greenhouse gas
emissions to 1990 levels by 2012. The protocol came into effect last year
despite U.S. refusal to ratify it.
Copyright 2005 Associated Press
All Rights Reserved

10.6.05

Technology that imitates nature

Technology that imitates nature
Jun 9th 2005
From The Economist print edition

Biomimetics: Engineers are increasingly taking a leaf out of nature's book
when looking for solutions to design problems
AFTER taking his dog for a walk one day in the early 1940s, George de
Mestral, a Swiss inventor, became curious about the seeds of the burdock
plant that had attached themselves to his clothes and to the dog's fur.
Under a microscope, he looked closely at the hook-and-loop system that the
seeds have evolved to hitchhike on passing animals and aid pollination,
and he realised that the same approach could be used to join other things
together. The result was Velcro: a product that was arguably more than
three billion years in the making, since that is how long the natural
mechanism that inspired it took to evolve.
Velcro is probably the most famous and certainly the most successful
example of biological mimicry, or “biomimetics”. In fields from robotics
to materials science, technologists are increasingly borrowing ideas from
nature, and with good reason: nature's designs have, by definition, stood
the test of time, so it would be foolish to ignore them. Yet transplanting
natural designs into man-made technologies is still a hit-or-miss affair.
Engineers depend on biologists to discover interesting mechanisms for them
to exploit, says Julian Vincent, the director of the Centre for Biomimetic
and Natural Technologies at the University of Bath in England. So he and
his colleagues have been working on a scheme to enable engineers to bypass
the biologists and tap into nature's ingenuity directly, via a database of
“biological patents”. The idea is that this database will let anyone
search through a wide range of biological mechanisms and properties to
find natural solutions to technological problems.

How not to reinvent the wheel
Surely human intellect, and the deliberate application of design
knowledge, can devise better mechanisms than the mindless, random process
of evolution? Far from it. Over billions of years of trial and error,
nature has devised effective solutions to all sorts of complicated
real-world problems. Take the slippery task of controlling a submersible
vehicle, for example: making refined movements with propellers is
incredibly difficult. But Nekton Research, a company based in Durham,
North Carolina, has developed a robot fish called Madeleine that
manoeuvres using fins instead.
In some cases, engineers can spend decades inventing and perfecting a new
technology, only to discover that nature beat them to it. The Venus flower
basket, for example, a kind of deep-sea sponge, has spiny skeletal
outgrowths that are remarkably similar, both in appearance and optical
properties, to commercial optical fibres, notes Joanna Aizenberg, a
researcher at Lucent Technology's Bell Laboratories in New Jersey. And
sometimes the systems found in nature can make even the most advanced
technologies look primitive by comparison, she says.
The skeletons of brittlestars, which are sea creatures related to starfish
and sea urchins, contain thousands of tiny lenses that collectively form a
single, distributed eye. This enables brittlestars to escape predators and
distinguish between night and day. Besides having unusual optical
properties and being very small—each is just one-twentieth of a millimetre
in diameter—the lenses have another trick of particular relevance to
micro-optical systems. Although the lenses are fixed in shape, they are
connected via a network of fluid-filled channels, containing a
light-absorbing pigment. The creature can vary the contrast of the lenses
by controlling this fluid. The same idea can be applied in man-made
lenses, says Dr Aizenberg. “These are made from silicon and so cannot
change their properties,” she says. But by copying the brittlestar's
fluidic system, she has been able to make biomimetic lens arrays with the
same flexibility.
Another demonstration of the power of biomimetics comes from the gecko.
This lizard's ability to walk up walls and along ceilings is of much
interest, and not only to fans of Spider-Man. Two groups of researchers,
one led by Andre Geim at Manchester University and the other by Ron
Fearing at the University of California, Berkeley, have independently
developed ways to copy the gecko's ability to cling to walls. The secret
of the gecko's success lies in the tiny hair-like structures, called
setae, that cover its feet. Instead of secreting a sticky substance, as
you might expect, they owe their adhesive properties to incredibly weak
intermolecular attractive forces. These van der Waals forces, as they are
known, which exist between any two adjacent objects, arise between the
setae and the wall to which the gecko is clinging. Normally such forces
are negligible, but the setae, with their spatula-like tips, maximise the
surface area in contact with the wall. The weak forces, multiplied across
thousands of setae, are then sufficient to hold the lizard's weight.
Both the British and American teams have shown that the intricate design
of these microscopic setae can be reproduced using synthetic materials. Dr
Geim calls the result “gecko tape”. The technology is still some years
away from commercialisation, says Thomas Kenny of Stanford University, who
is a member of Dr Fearing's group. But when it does reach the market,
rather than being used to make wall-crawling gloves, it will probably be
used as an alternative to Velcro, or in sticking plasters. Indeed, says Dr
Kenny, it could be particularly useful in medical applications where
chemical adhesives cannot be used.
While it is far from obvious that geckos' feet could inspire a new kind of
sticking plaster, there are some fields—such as robotics—in which
borrowing designs from nature is self-evidently the sensible thing to do.
The next generation of planetary exploration vehicles being designed by
America's space agency, NASA, for example, will have legs rather than
wheels. That is because legs can get you places that wheels cannot, says
Dr Kenny. Wheels work well on flat surfaces, but are much less efficient
on uneven terrain. Scientists at NASA's Ames Research Centre in Mountain
View, California, are evaluating an eight-legged walking robot modelled on
a scorpion, and America's Defence Advanced Research Projects Agency
(DARPA) is funding research into four-legged robot dogs, with a view to
applying the technology on the battlefield.

Madeleine, a swimming robot modelled on a fish

Having legs is only half the story—it's how you control them that counts,
says Joseph Ayers, a biologist and neurophysiologist at Northeastern
University, Massachusetts. He has spent recent years developing a
biomimetic robotic lobster that does not just look like a lobster but
actually emulates parts of a lobster's nervous system to control its
walking behaviour. The control system of the scorpion robot, which is
being developed by NASA in conjunction with the University of Bremen in
Germany, is also biologically inspired. Meanwhile, a Finnish technology
firm, Plustech, has developed a six-legged tractor for use in forestry.
Clambering over fallen logs and up steep hills, it can cross terrain that
would be impassable in a wheeled vehicle.
Other examples of biomimetics abound: Autotype, a materials firm, has
developed a plastic film based on the complex microstructures found in
moth eyes, which have evolved to collect as much light as possible without
reflection. When applied to the screen of a mobile phone, the film reduces
reflections, improves readability and extends battery life, since there is
less need to illuminate the screen. Researchers at the University
of Florida, meanwhile, have devised a coating inspired by the rough,
bristly skin of sharks. It can be applied to the hulls of ships and
submarines to prevent algae and barnacles from attaching themselves. At
Penn State University, engineers have designed aircraft wings that can
change shape in different phases of flight, just as birds' wings do. And
Dr Vincent has devised a smart fabric, inspired by the way in which pine
cones open and close depending on the humidity, that could be used to make
clothing that adjusts to changing body temperatures and keeps the wearer
cool.

From hit-and-miss to point-and-click
Yet despite all these successes, biomimetics still depends far too heavily
on serendipity, says Dr Vincent. He estimates that there is only a 10%
overlap between biological and technological mechanisms used to solve
particular problems. In other words, there is still an enormous number of
potentially useful mechanisms that have yet to be exploited. The problem
is that the engineers looking for solutions depend on biologists having
already found them—and the two groups move in different circles and speak
very different languages. A natural mechanism or property must first be
discovered by biologists, described in technological terms, and then
picked up by an engineer who recognises its potential.
This process is entirely the wrong way round, says Dr Vincent. “To be
effective, biomimetics should be providing examples of suitable
technologies from biology which fulfil the requirements of a particular
engineering problem,” he explains. That is why he and his colleagues, with
funding from Britain's Engineering and Physical Sciences Research Council,
have spent the past three years building a database of biological tricks
which engineers will be able to access to find natural solutions to their
design problems. A search of the database with the keyword “propulsion”,
for example, produces a range of propulsion mechanisms used by jellyfish,
frogs and crustaceans.
The database can also be queried using a technique developed in Russia,
known as the theory of inventive problem solving, or TRIZ. In essence,
this is a set of rules that breaks down a problem into smaller parts, and
those parts into particular functions that must be performed by components
of the solution. Usually these functions are compared against a database
of engineering patents, but Dr Vincent's team have substituted their
database of “biological patents” instead. These are not patents in the
conventional sense, of course, since the information will be available for
use by anyone. By calling biomimetic tricks “biological patents”, the
researchers are just emphasising that nature is, in effect, the patent
holder.
One way to use the system is to characterise an engineering problem in the
form of a list of desirable features that the solution ought to have, and
another list of undesirable features that it ought to avoid. The database
is then searched for any biological patents that meet those criteria. So,
for example, searching for a means of defying gravity might produce a
number of possible solutions taken from different flying creatures but
described in engineering terms. “If you want flight, you don't copy a
bird, but you do copy the use of wings and aerofoils,” says Dr Vincent.
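The query style described above can be sketched as a toy program. The entries, field names, and features here are invented for illustration; the article does not describe the real database's schema:

```python
# Toy sketch of querying a database of "biological patents" with a list
# of desirable features the solution must offer and undesirable features
# it must avoid. All entries and field names are hypothetical.
BIO_PATENTS = [
    {"organism": "gecko", "mechanism": "setae adhesion",
     "features": {"dry adhesion", "reusable", "no residue"}},
    {"organism": "burdock", "mechanism": "hooked seeds",
     "features": {"mechanical fastening", "reusable"}},
    {"organism": "moth", "mechanism": "anti-reflective eye surface",
     "features": {"light capture", "no coating"}},
]

def search(desirable, undesirable):
    """Return entries offering every desirable feature and none of the
    undesirable ones."""
    desirable, undesirable = set(desirable), set(undesirable)
    return [p for p in BIO_PATENTS
            if desirable <= p["features"]
            and not (undesirable & p["features"])]

hits = search(desirable={"reusable"}, undesirable={"no coating"})
print([p["organism"] for p in hits])  # ['gecko', 'burdock']
```

The real system layers TRIZ-style problem decomposition on top of such matching, but the core idea is the same: filter nature's mechanisms by required and forbidden properties rather than by species.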
He hopes that the database will store more than just blueprints for
biological mechanisms that can be replicated using technology. Biomimetics
can help with software, as well as hardware, as the robolobster built by
Dr Ayers demonstrates. Its physical design and control systems are both
biologically inspired. Most current robots, in contrast, are
deterministically programmed. When building a robot, the designers must
anticipate every contingency of the robot's environment and tell it how to
respond in each case. Animal models, however, provide a plethora of proven
solutions to real-world problems that could be useful in all sorts of
applications. “The set of behavioural acts that a lobster goes through
when searching for food is exactly what one would want a robot to do to
search for underwater mines,” says Dr Ayers. It took nature millions of
years of trial and error to evolve these behaviours, he says, so it would
be silly not to take advantage of them.
Although Dr Vincent's database will not be capable of providing such
specific results as control algorithms, it could help to identify natural
systems and behaviours that might be useful to engineers. But it is still
early days. So far the database contains only 2,500 patents. To make it
really useful, Dr Vincent wants to collect ten times as many, a task for
which he intends to ask the online community for help. Building a
repository of nature's cleverest designs, he hopes, will eventually make
it easier and quicker for engineers to steal and reuse them.

7.6.05

Burma sanctions: isolating whom? The ethics of the ongoing Burma boycott may be slightly more complex than they seem

Burma sanctions: isolating whom?
EC Newsdesk
26 May 05

The ethics of the ongoing Burma boycott may be slightly more complex than
they seem
More people are going to Burma, at least according to its government. This
is despite the fact that prominent figures such as UK prime minister Tony
Blair, Nobel Peace Prize winner and elected leader of the country Aung San
Suu Kyi and Hollywood actress Susan Sarandon are telling the world to stay
away and play the sanctions game.

The government statistics, announced in February, note an increase in
foreign visitors in 2004 of about 12.5% to 675,000.

Trade sanction busters are usually shady corporate types in Savile Row
suits, not backpackers and grannies in tour buses. In this light, might it
be time to reconsider the worth of isolating Burma?

There are reasons to reconsider. For one thing, sanctions are probably not
working. According to the Brussels-based International Confederation of
Free Trade Unions, there are 437 foreign companies operating in Burma.
These include major multinationals such as Credit Agricole, Deutsche Bank,
Hyundai, Suzuki and Total. This is despite worldwide condemnation and
official bans on trading with Burma in the US and the European Union.

And the much-hated generals are still there and their grip is as tight as
ever.

Second, sanctions don’t have a hugely successful record. In South Africa
during the apartheid regime, they may have helped, but there were a range
of other contingencies that eventually led to the breakdown of that
heinous system. In Iraq, before the US invasion, sanctions were seen by
many long-suffering Iraqis as more damaging than Saddam Hussein himself.

Third, it may be possible to operate in Burma and give only minimal
support to the generals but give considerable economic and moral support
to the Burmese themselves. It’s not so easy for those pesky tourists to
get to Burma, so those who go must have a real interest. Sure, there will
be the ignorant ones who choose to remain so, but there will also be those
who will leave some goodwill and hope and who will better understand a
troubled people.

Any review of Burma sanctions can only be done under the banner of
improving the lot of the Burmese themselves, not based on whatever lost
opportunities multinational corporations might suffer. This has to be a
humanitarian not an economic argument.

Rather than isolating all Burmese, why not isolate the generals who make
their lives a misery? Better-directed, responsible and globally
co-ordinated trade and investment than we have today might just do that.

Banking blocks that already exist, for instance, might be extended to
include offshore transfer mechanisms for the Burmese generals. Some
incoming travel might actually be useful.

Ultimately, the ability of governments to administer economic sanctions is
always fraught with problems. We may need to operate less at the
legislative level and more at the corporate culture level. Companies can
be ethical in Burma with the right mindset. Devising the means will be a
big job, but the despots of Burma and those of the future will require
more targeted and sophisticated boycotts than currently exist.

Write to the Editor at editor@ethicalcorp.com.

Who owns ideas?

The People Own Ideas!
By Lawrence Lessig June 2005

We entered the youth camp that morning by passing down a long, white
gravel road and under a wooden gate. Spread to one side, and for as far as
you could see, were rows and rows of tents. In front were scores of
showers, with hundreds of kids in swimsuits milling about, waiting to
rinse. It felt like a refugee camp.

In a sense, it was. More than a hundred thousand people had descended upon Porto
Alegre, Brazil, to attend the World Social Forum, a conference intended to
offer a progressive alternative to the much smaller, and much more famous,
World Economic Forum meeting at Davos, Switzerland (see "Letter from
Davos," April 2005).
Just past the showers was a sprawling collection of wooden huts, connected
by a canvas spread across their roofs. This was the free-software lab. To
the right, there was a training room, with more than 50 PCs arranged along
long tables. At the far end was a large screen, where 20 to 30 kids were
watching an instructor explain the workings of some video-editing
software. Every machine was running free software only--GNU/Linux as the
operating system, Mozilla as the browser, and a suite of media production
software, most of which I had never seen on any machine anywhere.
The room was being prepared for what seemed like a disco. Three DJ-like
characters were huddled over a table full of machines, testing sound and
twiddling fantastically elaborate controls. They were not DJs, however,
but VJs: video jockeys who were preparing a demonstration of the tools
they had built (as they described it) for "recycling culture." The music
would, for all I know, not have been out of place in the coolest New York
dance club; but the images were a collage of television and color
presented in a way that I had never seen before, anywhere. As the music
played, video samples were scratched across the screen. The VJ operated a
turntable-like controller, which drove powerful digital video equipment
designed to mix images, not records.
In another room, the yellow light filtering through the canvas roof bathed
another 50 machines. John Perry Barlow, former lyricist for the Grateful
Dead and cofounder of the Electronic Frontier Foundation, sat stooped over
his PowerBook chatting with someone. He looked up with a smile. "It's [New
York Times writer John] Markoff at Davos." Obviously, Wi-Fi bathed the
room as well.

Inside the room, a group of five or six Brazilians was waiting there to
meet us. A film crew waited as well. They were shooting a documentary. The
Brazilians were our guides, and I was there to understand what a "free
software lab" was all about.

Stallman's Good GNUs
Everyone who reads Technology Review must have heard of "free software."
It was on MIT's campus twenty years ago that the Free Software Foundation
was born; it was an MIT researcher, Richard Stallman, who presided at its
birth. Free software is code that carries a promise. Actually, it carries
five promises (four explicitly, and one by implication), according to the
foundation's definition of free software. Geekily numbered starting with
zero, the promises are

(0) The freedom to run the program for any purpose;
(1) The freedom to study how the program works and adapt it to your needs;
(2) The freedom to redistribute copies so you can help your neighbor;
(3) The freedom to improve the program and release your improvement to the
public, so that the whole community benefits.
Freedoms 1 and 3 imply a final, and equally important,
freedom: access to the source code of the program. Software that offers
anyone these freedoms is free; software that compromises any of them is
not.
Stallman launched his movement as a reaction to changes in the environment
within which software was written. In the world he had known, programmers
were a sort of ethical scientist. Coders worked on common problems; they
shared the knowledge that their work produced. More than 60 years ago,
sociologist Robert Merton said of science, "Incipient and actual attacks
upon the integrity of science have led scientists to recognize their
dependence on particular types of social structure"; so, too, did Stallman
believe that the freedom of programming faced "incipient and actual
attacks." Its defense, he believed, would depend upon "particular types of
social structure." He thus set out to build one: a social structure that
would help coders preserve the integrity that he thought their discipline
should have. The foundation of this structure would be a "free" operating
system, inspired by Unix, but not actually Unix (and thus cleverly named
GNU--GNU's Not Unix).
At the time, Stallman's ambition seemed to many unachievable. No single
person, and no collective of volunteers, had ever succeeded in finishing a
software project on the scale of a complete operating system. There was no
reason to believe Stallman and his followers would succeed. But they began
with first steps--the tools and scaffolding with which everything else
could be built. These included some of the most important bits of GNU,
like its compiler, the GNU Compiler Collection (GCC), and some of the most
beautiful, like the Emacs editor. And each bit was wrapped in Stallman's
single most brilliant idea: a license that would assure that the code he
was building would forever remain free.
The GNU General Public License, or GPL, is a copyright license. In the
language of the free-software movement, it is also a copyleft license.
Like any copyright license, it imposes conditions upon some uses of the
products it governs. Like any copyleft software license, it includes among
those conditions the requirement that changes to the protected code must
be shared if they are redistributed. The copyleft requirement is a benefit
for some (those who share the goal of spreading free software); it is a
curse for others (those who would like to add to the project and benefit
exclusively from what they add). Stallman bet there would be enough who
saw it as a benefit to build a free operating system.
Six years into the project, however, GNU still lacked a heart--that is,
the "kernel" of an operating system that provides control of a computer's
hardware. That part would not come from Stallman. In 1991, Linus Torvalds,
a Finnish undergraduate, announced the beginnings of a kernel governed by
the GPL. Hackers started integrating that kernel--which they dubbed
"Linux"--into GNU. By the middle 1990s, there was a full, functioning,
free operating system spreading across the Internet. By the end of the
1990s, GNU/Linux had become a powerful and free competitor to Microsoft's
Windows operating system (see "How Linux Could Overthrow Microsoft," p.
64).

Proprietary Systems
There are a million details to fill in before the story of free software
makes sense to anyone who doesn't already know it. Can free software be
used commercially? Yes, freedom promise 0 requires it. Can free software
be sold? Yes, for whatever price the market will bear. Can businesses make
money producing or supporting free software? Some think so, as the
billions invested by IBM and Hewlett-Packard suggest. Does free software
destroy the financial incentive to produce new software? Not necessarily.
Free software simply makes improvements transparent, as they are in any
number of other healthy, competitive markets.

But let's put those questions aside and focus instead upon a historical
pattern: a practice is at one time "free"; something changes; that freedom
is lost; in response, activists work to restore that freedom. Thus, coding
had been free; changes in the market had rendered it unfree; free-software
activists acted to restore that freedom.
As I listened to the Brazilians explain the free-software lab, I began to
realize that this pattern was recurring. They were doing for culture what
Stallman had done for software. The lab was not so much about "free
software." It did not, for example, teach people how to make free
software. Its aim instead was to help them build free culture using free
software. The lab offered "workshops about video editing, audio editing,
collaboration tools, [and] online collaboration," all "on top of free
software." But the objective of this teaching wasn't, or wasn't just,
better software. The objective was a different economy for culture.
Culture itself, as one Brazilian explained to me, should be free, meaning,
he said, "free as in free software."
The parallel between free software and free culture is strong, though
bringing it out will require some distinctions. For unlike software,
culture has always had an element of proprietary control. And for most of
our history, proprietary culture has actually encouraged free culture. But
changes in the way culture is owned now make necessary the free-culture
movement that Brazil is promoting. To understand that movement, we must
understand what provoked it.
Proprietary culture is rendered proprietary by a system of regulation we
call "copyright." In the U.S., copyright regulation was slight at first.
In effect, the law reached only to the printing press, and by design, it
regulated only a small proportion of creative work--just "maps, charts,
and books." Very soon, however, the scope of the law began to grow. By
1831, it covered music. In 1870, it expanded to cover paintings, statues,
and, most importantly, "derivative" works, meaning work based upon an
earlier work--a translation, for example, or a play based upon a story.

These expansions were reasonable enough, each the product of a
self-conscious legislative change. But early in the 20th century, the law
became latched to a device that would produce unimagined changes in
copyright's reach. For in 1909, through a mistake in codification
(literally: it was an error in the wording used in the statute), the
exclusive right that copyright protected was defined to be not only the
right to "publish" or "republish" but the right to "copy." That change
didn't matter much in 1909: the machines for making copies were still
printing presses, and no one believed a schoolchild writing out a poem 50
times so as to memorize it was committing a federal offense. But as the
machines that copied became more and more common, the reach of copyright
law became more and more extensive. At first it was commercial machines
that bore the burden of the law: player pianos, radio, cable TV. But in
the 1970s, and for the first time, a printing press to which the common
folk had access--the "copier"--became the target of extensive litigation.

These expansions in the law were balanced by important, built-in
limitations on copyright. "Fair use" is one important limitation. But the
most important was the product of a formality. To get the benefit of
copyright protection, an author had to "opt in" to the copyright system: a
work had to be registered; after an initial term, the registration had to
be renewed; and the work had to be marked (©). No more than 50 percent of
work published in the 19th century was registered. More than 80 percent of
that registered work was never renewed. Copyright law thus automatically
narrowed its reach to work presumptively needing the protection of
copyright. It left much published work (and the overwhelming majority
after an initial term) free.
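The two percentages quoted above combine into a striking bound, which a line of arithmetic makes explicit:

```python
# Upper bound on the share of 19th-century published work whose
# copyright was both registered and renewed, using the article's figures.
registered = 0.50      # "no more than 50 percent ... was registered"
never_renewed = 0.80   # "more than 80 percent ... was never renewed"

renewed_share = registered * (1 - never_renewed)
print(f"at most {renewed_share:.0%} of published work")  # at most 10%
```

In other words, the old opt-in formalities left roughly nine-tenths of published work free after, at most, a single initial term.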
This opt-in system was changed, however, in a series of amendments to
American copyright law beginning in 1976. After these changes, creative
work was automatically protected by a federal copyright, whether or not
the work was registered, without any need to renew the copyright, and
whether or not the work was marked with a funny little ©.
Copyright law had always been conditional. It was now unconditional. It
had always automatically narrowed its reach to work presumptively needing
the benefit of copyright protection. It now reached all work, regardless
of whether it needed any copyright protection. There was no evil
conspiracy behind this change. Its purpose was perfectly benign: to
simplify copyright law. The formalities of the old system were a bother.
Abolishing them would remove that bother. But the consequences of
abolishing these formalities were dramatic: we moved from regulating a
minority of creative work to regulating all of it. Call this copyright's
"first big change."
The second change is even more dramatic. To see the point, notice first
how little the law of copyright regulates ordinary uses of creative work.
Reading a book, for example, is not a regulated (by copyright law) use of
the book. It's a free use: reading a book creates no copy. To lend someone
a book is not a regulated use: it creates no copy. And to sell someone a
book is not a regulated use: it creates no copy. These ordinary uses are
beyond the reach of copyright law. Or put differently, copyright law
leaves these ordinary uses immune from regulation.
But in the digital world, this immunity disappears. It is the nature of
digital technologies that every use produces a copy. Thus, it is the
nature of a copyright regime like the United States', designed to regulate
copies, that every use in the digital world produces a copyright question:
Has this use been licensed? Is it permitted? And if not permitted, is it
"fair"? Thus, reading a book in analog space may be an unregulated act.
But reading an e-book is a licensed act, because reading an e-book
produces a copy. Lending a book in analog space is an unregulated act. But
lending an e-book is presumptively regulated. Selling a book in analog
space is an unregulated act. Selling an e-book is not. In all these cases,
and many more, ordinary uses that were once beyond the reach of the law
now plainly fall within the scope of copyright regulation. The default in
the analog world was freedom; the default in the digital world is
regulation. Call this copyright's "second big change."
When you tie these two big changes of copyright together, you get a "truly
profound change." Not only is the reach of the law dramatically larger
because copyright now regulates all rather than a minority of work, but
the effective scope of the law is dramatically larger because copyright
regulates all uses rather than just some. The U.S. Congress
self-consciously made one of these two big changes, but it didn't know how
far-reaching its legislation would be, since it couldn't foresee the
eventual universality of machines that copy.

Control Issues
But if in fact the scope and reach of copyright law have expanded so
radically, why hasn't anyone noticed? Why aren't copyright holders
celebrating?

Simple answer: the expansion of copyright regulation has been offset by an
equally radical diminution of its effectiveness. Though in theory an
opt-out copyright regime plus digital technology means that everything is
presumptively regulated, in practice, digital technologies have meant that
this regulation is irrelevant. Digital technologies were designed to
enable perfect copies; they were not designed to enable control over these
copies. The perfection and freedom of digital technology, including the
Internet, have thus led to a Roman feast of copyright infringement.
Digital tools make it simple to share data with your 10,000 best friends,
so people share data with their 10,000 best friends. The United States may
be a nation of drivers who stop at red lights--even deserted red lights,
in the middle of the night--but its citizens didn't even hiccup when
slurping down peer-shared creative content, copyright notwithstanding.
Whatever the law says, actions speak louder than words. And it is the
actions of ordinary users that worry the industries that depend on
copyright, such as publishing, music, film, and software.
That worry has now prompted a response, and it is this response that in
turn worries the free-culture movement. Digital technologies are only
technologies: they are made by humans, and they can be remade by humans.
If existing digital technologies have so far denied copyright holders
control over how the fruits of their creativity get used, then future
digital technologies can be remade to restore control to those same
copyright holders. In fact, that's just what is happening.
This restoration of control operates under the name "digital rights
management" (DRM). DRM, and the myriad of supporting changes in technology
that it will demand (for example, the addition of dedicated "trusted
computing" chips to new computers), will build back into the architecture
of the digital world the control that the original architecture of the
digital world disabled. Nor will it stop there: copyright owners want
more. Current DRM proposals reach far beyond the balance of proprietary
and free culture in the analog world. If enacted, they would enable a
copyright holder (or software creator) to, for example, dictate how many
times you can read the e-book you've "bought," or how many times you can
move it from one machine to another. Whatever uses you can imagine for a
digital device, imagine DRM controlling them. That's the potential of this
technology--a potential that reaches far beyond the limits of predigital
copyright regulation.
Now, economists and others of a capitalist bent (see "The Creators Own
Ideas," p. 56) will argue that it's not at all obvious that expanded
regulation would be bad. They'll tell you that in many cases, giving
property holders the power to control (or "discriminate") in uses of their
property actually increases the total wealth of society. So we shouldn't
necessarily condemn the tightening of control that DRM will produce in
cyberspace, at least if it increases social wealth.
I don't want to quibble with economists (although I do answer Richard
Epstein's objections to this essay: see "Rebuttal" on page 63). My point
is less ambitious: it is simply to remark that the prospect of such tight
controls would have seemed bizarre even three decades ago, and that we
need to think quickly to decide whether we want such controls in the space
of culture. When the Internet gives copyright owners perfect control of
their content, then, since it's all automatically copyrighted, every use
of it will presumptively require permission. We will then no longer live
in a free culture, but in a "by-permission" culture. And these permissions
will no longer be policed by courts or the law but rather by software
code.
This is the control that the free-culture movement fears. Theoretically,
digital technologies give the law the power to regulate culture to an
unprecedented extent. DRM will turn that theory into practice. Do we know
enough to conclude that the benefits of that practice will outweigh the
costs? Do we even know enough to understand the costs?

DRMatically Bad
The case against DRM comes in two flavors, the familiar and the more
obscure.

The familiar complaint is about the exclusivity of markets: maybe the
price of reading an e-book will be too high; maybe too many people will be
shut out of the market. That's a real concern in developing nations around
the world, where the cost of both proprietary code and proprietary culture
is wildly beyond the means of most people. Still, it's a concern that
market apologists can quickly dismiss (see Epstein).
But consider the second complaint against DRM--one generally missed by the
market apologists. This complaint is more fundamental: DRM abridges our
personal freedoms and inhibits cultural transmission. To appreciate it,
step back from the digital for a moment. Think instead about human culture
as a whole. Participation in cultural life involves a practice that we
could call "remix." You read a book. You tell the story to friends. You
see a movie that inspires you. You share its story with your family, to
spread that inspiration.
Remixing uses the fruits of someone else's creativity. There's no
guarantee that it does any favors to the work that is remixed. There's no
requirement that it treat the work respectfully or kindly. The freedom to
remix is a freedom to ridicule or respect. Fairness is not the measure.
Freedom is.
It is almost impossible to imagine a culture thriving if its people are
not free to engage in this kind of practice. Remixing is how culture gets
made. The acts of reading, or criticizing, or praising, or condemning bits
of culture are how we create things. This is true whether the culture is
commercial or not: you cannot limit remixing to things in the public
domain. In our tradition, we have been free to remix, whether the stuff
remixed is copyrighted or not.
This freedom, however, has been limited, historically, by an important
technological fact. Since the dawn of humankind we have been free to
remix, but the technology of remixing has been words. We use words to
remake our culture. We use words to criticize or incorporate. The ordinary
ways in which culture gets made are textual. No one restricted the freedom
to remake culture because, in free societies at least, no one purported to
restrict what ordinary people did with ordinary words.
So what happens when the ordinary ways in which culture gets remixed
change? What happens when the ordinary tools of remixing change? Do the
freedoms to remix change as well? Will we be more or less free to remix
culture in the 21st century than we were in previous centuries?
Consider how the kids in Porto Alegre think about remixing. They remix
culture with words, certainly. But they want to build the capacity to
remix more than words. They hope to use computers to remix culture. For
most of us, computers are a way to type fast. But for most of them,
computers will be a way to speak, using sounds and images, synchronized or
remixed, to make art or remake politics.

It is extremely hard to describe the new kinds of remixes that digital
technologies enable. That may be their point. You could look at some
examples at atmo.se. But if you're stuck with your imagination, then you
need to extrapolate from examples you've seen so far. Think about the very
best examples of digital media that you've experienced (perhaps the JibJab
remix of "This Land Is Your Land"), and then remember they're not likely
the product of Sony or Disney. Digital technologies have inspired an
extraordinary range of creativity, in part because they lower the media
market's barriers to entry, which in turn invites a much wider range of
participation.

We're just now beginning to see the consequences of this democratization
of artistic means. A couple years ago, for example, a young filmmaker
named Jonathan Caouette began playing with his boyfriend's iMac. The iMac
came bundled with Apple's iMovie program. Caouette was smitten with it.
And while he had never studied film, he had shot an extraordinary amount
of video growing up. He began obsessively to digitize this video. Then,
using iMovie, he remixed it. The result was a film that was the hit of
Sundance and Cannes in 2004: Tarnation. It cost Caouette just $218 to make
this film.
The point is not that anyone can make a Cannes hit. But it is enough to
recognize that many more people (indeed, millions more) could make good
films. New digital technologies could enable an explosion of creative
work.
Now there's no problem, of course, with this sort of creativity if the
underlying remixed culture is "free": Jonathan Caouette didn't have much
trouble making his film since he remixed his own footage. But what if you
wanted to use these technologies to remix copyrighted content with your
own content?
The short answer is, you couldn't. Under today's rules, remixing
copyrighted digital content is infringing the rights of the copyright
holder.
That in turn makes concrete the second, less familiar, complaint against
DRM: if the technology permits the most extreme interpretation of existing
copyright law, remixing will not become merely difficult. It will be
effectively impossible--without clearing the rights first. If content is
locked in code that requires permission before it can be reused, or
remixed, then that permission will poison the practice of remixing. A kind
of creativity--familiar since the beginning of culture--will thus be lost
to digital culture and, as digital culture occupies more and more of our
activities, to culture as a whole.
This, finally, is the link between the free-software and free-culture
movements. In both, there was a practice that was essentially free. In
both, a change in the environment of the practice removed that freedom.
With free software, the change was the rise of proprietary code. With free
culture, the change was the radical expansion of the reach of copyright
regulation. Technology made both of these changes possible. Both the
free-software and free-culture movements in turn use technology and law
(through copyright licenses) to restore the freedoms that proprietary code
and culture removed. Each proceeds through the voluntary efforts of
creators to preserve a wider range of freedoms for their successors. Each
seeks a world without the controls that the extremes of proprietary
assertion produce.

Truly Free Markets
When most people first encounter these free movements, their initial reaction is
that both are implausibly utopian. They read "free" to be a rejection of
basic economic principles.

But the economy of free software is still an economy. It produces wealth;
it inspires growth; it spreads services broadly within a society. It
functions differently than the economy of proprietary software--different
scarcities are traded--but it is still an economy. And literally billions
of dollars have been invested to make it flourish.
The same is true of free culture. Many read "free culture" to mean that
artists don't get paid. But here, too, the difference is not that one
approach (proprietary culture) builds an economy while the other (free
culture) does not. In the way that I've used the term, free culture
describes the economy that governed creative industries for at least the
first 186 years of the American republic. More importantly, proprietary
culture has never yet governed any creative economy, anywhere. No society
has ever imposed the level of control that the proprietary culture of
digital technologies and DRM would enable.
The kids in Porto Alegre were resisting economic shifts away from the old
balance that has defined the Western tradition. The economy that they
would build doesn't deny the importance of copyright (indeed, the licenses
necessary to build free software and free culture depend upon copyright).
But it revises copyright to fit a digital age more effectively. It
structures the law in light of technology, to produce the greatest
opportunity for creativity and growth that the technology might offer.
These are improvements in efficiency. They aim at increased wealth. But
there is a growing politics supporting both movements that has little to
do with efficiency or wealth. This is payback politics, tied less to ideas
than to an increasing global frustration with the United States.
The cause is not hard to see: according to the United States, Brazil, for
example, is a pirate nation. The International Intellectual Property
Alliance (which, its name notwithstanding, represents U.S. copyright
interests) estimates that this piracy cost United States copyright
industries close to $1 billion last year. Consequently, the U.S. has begun
to put pressure on Brazil. That pressure has produced an unsurprising
reaction against the stuff that makes it possible for Brazil to be a
pirate nation--proprietary code and proprietary culture.
For there's another way to reckon the cost of the proprietary. According
to the Brazilian government, for example, Brazil sends close to $1 billion
to the north each year just to pay for software licenses. So as the
Brazilians see it, tongue firmly in cheek, this proprietary stuff is a bad
thing all around--costing the U.S. $1 billion, and Brazil $1 billion as
well.
The obvious solution is to dump the proprietary stuff. So the Brazilian
government is pushing itself and the nation to substitute free software
for proprietary software. As one member of the government said during a
speech at the World Social Forum, "We're against software piracy. We
believe Microsoft's rights should be respected. And the simplest way to
respect their rights is for Brazilians everywhere to switch to free
software."
The Brazilian government is beginning to internalize the tenets of the
free-culture movement as well. Brazil's minister of culture, Gilberto Gil,
is leading a push for practical reform of the copyright system. His
ministry has launched a project called Points of Culture (Pontos de
Cultura) that will establish free-software studios, built with free
software, in a thousand towns and villages throughout Brazil, enabling
people to create culture using tools that support free cultural
transmission. If things go as planned, the result will be an archive of
Brazilian music, which will be stored in digital form and governed by a
license inspired by free software's GPL. The Canto Livre project will
"free music" made in Brazil, for Brazilians (and the world) to remix and
re-create. And like a free-software project, it achieves that freedom on
the back of copyright.
Gil is emphatically not against copyright. He's one of Brazil's most
successful musical artists, which means he has benefited greatly from
copyright. But he is also one of the very few Brazilian artists to make it
outside of Brazil. And he is convinced that a different kind of economy
might spread Brazilian creativity more broadly.
So the U.S. calls them pirates, and they reform their ways--not by more
faithfully buying our products, but by finding ways to remain creative
without infringing our rights. This is free software "ported"--as software
engineers say--to free culture, and it inspires all the hype typical of
such movements. "We're hoping," the leader of the free-software lab
explained, "everybody is going to start producing their own media content
and then they won't have to watch TV anymore."
That's a rather grand ambition, no doubt. But before you dismiss it as
mere youthful idealism, consider this: had you met Richard Stallman in
1984, would you have believed him? And remember, he didn't have the
government of the fifth-largest nation in the world behind him.

Imagine All the People
Two nights before my trip to the free-software lab, I attended a
free-software rally at the same youth camp. Really. A rally. I arrived
with Minister Gil and John Perry Barlow. The place was packed. There were
hundreds inside the tiny tent; there were many hundreds more huddled
outside. We were seated near the front, the only three with chairs. The
evening began with some lectures, then followed with some music.

You can't imagine this scene. Or at least you can't imagine this scene as
a rally for free software. I've seen free-software rallies in the U.S.
They're populated by geeks with ponytails. This was something very
different. The tent was divided evenly between men and women. Geeks were
in the minority. Most of the people at the rally were astonishingly
beautiful, and amazingly articulate. They were young and intensely
passionate. And they were chanting free-software slogans. It was Woodstock
without the mud and squalor, and with a penguin in the middle of the room.
For a bit, I was terrified a riot would break out. There was no room to
move. We were physically squeezed on all sides. I tried to imagine Donald
Rumsfeld in the same situation. One or two police stood at the back, just
in case. But the crowd was peaceful, just jubilant.
Just as Gil started to speak, however, a handful of masked protesters
appeared out of nowhere and positioned themselves right up front,
brandishing posters. They were attacking the government. They were
attacking Gil. They were supporters of pirate radio. They wanted a third
layer of freedom--free radio spectrum, in addition to free software and
free culture--and the government had resisted them. It was hypocrisy, they
screamed. I was sure it would turn ugly--until Gil did something
unimaginable in U.S. political culture. He stopped, and he engaged them.
He argued with them. He listened to their arguments. A deputy joined Gil
in the argument. They paused to listen to the protesters argue back. They
then responded again, and Gil slowly whittled the opposition down. Midway
through all this, a kid wearing a white T-shirt stood up just in front of
us. Emblazoned on the back was the slogan "This is what democracy looks
like." Eventually the crowd rose in Gil's support. They wanted more music.
The protesters yielded. Gil was asked to sing some songs.
By the end of his performance, the crowd was euphoric. Imagine a mix
between RFK and John Lennon, and you have a sense of this man's power and
charisma. As we left, the crowd left with us--mobbing Gil. Teenage girls
wanted him to sign their backs. Men and women gave him anything they had
to sign. He was grabbed again and again. If people disagreed with him, he
would stop and engage them. He argued, but always with respect.
We were finally pushed onto a golf cart and then into a government car, so
he could escape. But even here, when someone knocked on Gil's window, he
rolled it down and continued arguing. He yelled out his final words as his
driver (a man with less patience than Gil) sped away. When the window was
closed, and after a moment of silence, I tried to explain to Gil just how
extraordinary that scene appeared to American eyes. I said that I could
never imagine the equivalent in the United States, with anyone actually in
power.
"Yes, I know," he said, smiling. America, he explained, has "important"
people. "Here, we are just citizens."
These "citizens" are building something. We won't notice it until it is
big enough to see from America. But if it gets that big, nothing will stop
it. Just as the free-software movement has built an economy of free
software, the Brazilians--and others around the world--will have built an
economy of free culture, competing with, perhaps displacing, but no doubt
changing the proprietary culture that finds itself dominant now.
Lawrence Lessig is a professor of law at Stanford Law School. He is the
author of The Future of Ideas and Code and Other Laws of Cyberspace. His
most recent book, Free Culture, was released in paperback this spring and
is available as a free download at www.free-culture.org.

The Creators Own Ideas
By Richard A. Epstein June 2005

My task here is to write a response to Larry Lessig's meditation on the
free-software movement and its relationship to the general law of
copyright (see "The People Own Ideas!"). But as the dry tone of my first
sentence suggests, we have very different approaches to our common topic.
Lessig is a master at weaving personal vignettes with structural
arguments. The vignettes are intended to introduce an intimate personal
dimension to the arcane world of intellectual property. His readers
receive a gut-level education about the immense impact that legal rules
have on ordinary people, whose voices, he tells us, can only be heard
above the din if they speak in unison.

I demur. The selective imagery of eager students in Porto Alegre, Brazil,
"remixing culture" using free software does nothing to address the central
policy issues around intellectual property. The complex trade-offs needed
to govern software and copyright aren't illuminated by the artful
juxtaposition of real users of free software with the nameless stick
figures stuck with proprietary alternatives. One could as easily paint a
picture of high-spirited inner-city youths mastering Microsoft Office
under the benevolent gaze of the Bill and Melinda Gates Foundation.
Neither helps.
Private and Common Property
My qualified defense of proprietary software rests on my general approach
to property rights. It may seem odd that I see land law as a place to
begin thinking about copyright law in the digital age, but in the law,
continuity counts for more than novelty. While we always have to tend to
the differences among different forms of property, we are likely to make
fewer mistakes by proceeding carefully from established understandings.
Every legal system in history has blended two separate property regimes:
the private and common. Both are important to software and copyright.
Private property confers on individual owners exclusive rights to the
possession, use, and disposition (sale, lease, mortgage, gift) of some
given tangible resource. Virtually all civilizations start with a
decentralized system in which the person who first takes an unowned thing
is entitled to keep it against the rest of the world. Providing a plot of
land or individual object with a single, determinate owner facilitates its
effective use. The farmer who sows today knows that she can reap tomorrow,
without fearing the incursions of others. The ability to sell, lease, or
mortgage property allows for everything from a simple transfer of land
from person A to person B to the formation of complex cooperative ventures
among multiple parties. The GNU General Public License (GPL) that Lessig
so admires offers a shining example of how this last, iterative process
works.
Any system of private ownership requires state enforcement, first, to
protect private property from forced occupation, misappropriation, and
invasion, and second, to enforce voluntary deals. But any theory of
property rights that includes a key role for the state should also
emphatically reject the use of centralized state power to determine who
shall own what resource or why. Governments, for instance, should not pick
technologies.
In all legal systems, however, a system of private property rests on an
infrastructure of common property. The air we breathe, the roads we
travel, and the language we speak cannot easily be reduced to private
possession. They remain part of the commons because their separation
impedes respiration, transportation, and communication. At the edges, we
recognize useful exceptions. Although everyone may use the word "monopoly"
to describe a market with a single seller, only Hasbro may market a board
game with hotels and a jovial top-hatted mascot under that trade name. The
private creation of a trade name pulls that name out of the linguistic
commons for the limited purpose of identification.
Owners and Nonowners
These simple observations can be generalized. Property rights are
organized to minimize the obstacles to human prosperity and well-being by
maximizing the public benefit that emerges from the self-interested
actions of many individuals. Common property works when we want, say, to
travel freely on a river. Private property works when the development and
trade of separable assets creates enormous gains. But the justification
for private rights in everything from sponge cake to software has to be
social. Private property provides the right incentives for innovation,
from which nonowners benefit through voluntary exchange.
In general, private property is a great bargain for society. Let's assume
that Bill Gates's net worth is $45 billion. That's a lot, but it's no big
deal compared to the gains his customers have received from purchasing
Microsoft products. My copy of Microsoft Office may have cost me $500, but
that is a tiny fraction of my gains in productivity. The most important
gains from all forms of property, whether tangible or intellectual, accrue
to the nonowner who buys the products of the owner. The price one actually
pays for a thing is almost always less than the amount one would pay if
necessary. The difference, called consumer surplus, is a pure gain for the
buyer, and it exists because private ownership gave the seller the
incentive to create or maintain the thing. In other words, granting a
temporary patent or copyright monopoly to get the benefit of a new product
now--rather than having to wait for some free product later on--is usually
a good deal for both the producer and the consumer. This system of
property rights isn't antithetical to free software or "free culture."
Indeed, it is their very foundation. Let's start with software.
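The arithmetic behind consumer surplus is simple enough to sketch. The figures below are invented for illustration, echoing the essay's $500 copy of Microsoft Office; only the definition (surplus equals willingness to pay minus price, never negative) comes from the text.

```python
def consumer_surplus(willingness_to_pay, price):
    """The buyer's gain from a purchase: the gap between what the buyer
    would pay if necessary and what the buyer actually pays. A rational
    buyer walks away rather than accept a negative surplus, so the
    floor is zero."""
    return max(willingness_to_pay - price, 0)

# Hypothetical numbers: a buyer who values the software at $5,000 in
# productivity gains but pays only $500 captures $4,500 of surplus.
print(consumer_surplus(5000, 500))  # 4500

# If the price exceeded the buyer's valuation, no sale occurs and the
# surplus is zero rather than negative.
print(consumer_surplus(400, 500))   # 0
```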

Freeware versus Payware
Lessig's defense of free software reads more like a disquisition on good
and evil than a measured assessment of its benefits compared to
proprietary alternatives. But the "four freedoms" contained in the GNU
GPL, which governs the way much free and open-source software is
distributed, weren't inscribed on tablets brought down from Mount Sinai.
They were created in the 1980s by Richard Stallman, an MIT computer
scientist with a particular social agenda. Lessig describes these four
freedoms and their functions cogently enough in his essay. The bottom
line: free software means free and open access to a program's source code.

It sounds innocent enough. But as with every contract or license, there's
a catch. No legal system ever creates unlimited rights, and every freedom
has its correlative duties. In the case of the GPL, the kicker is that
anyone who incorporates open-source software into his work must release
any derivative work under the same license. The precise language is as
follows: "You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any part
thereof, to be licensed as a whole at no charge to all third parties under
the terms of this License." The word "must" says it all. Content protected
by a general public license is just that, licensed. Software distributed
under the GPL should not be confused with ideas, writings, and inventions
lodged happily in the public domain, available for all to use as they see
fit, without any restrictions.
The GPL, in short, is vintage capitalism. Stallman created his own
software from scratch, and the ordinary rules of property and contract let
him license that software on whatever terms he chooses, with no questions
asked. Those who don't want to play by the rules of the free-software
community are free to do business with Microsoft. Likewise, Microsoft is
free to say, "Forget that free-access stuff: you can license our software,
but you cannot see all our source code. If you don't like our conditions,
then you may switch to some open-source product." Take-it-or-leave-it
works both ways.
Why, then, prefer software bound to Stallman's particular mix of freedoms
and restraints to software that is proprietary or in the public domain?
The defenders of free software often claim that their GPL inspires
production and creativity, while proprietary software encourages secrecy.
But this is a false opposition. The law of trade secrets can also inspire
creativity; it recognizes the premise that some people will invest in a
new invention only if they can retain the exclusive right to control its
use. They can keep the code dark and sell the products made with it; or
they can license the use of the code under a confidentiality agreement.
Protecting trade secrets ensures that original creators are rewarded for
their work. By contrast, open-source projects reward those who contribute
to the code later.
So which plan is better? Perhaps the choice is not so stark. IBM makes
millions on its database and server software yet actively encourages its
customers to use the open-source Linux operating system. Sun Microsystems
is relicensing its Solaris operating system under open-source terms in
order to let its own software developers tap into the thriving open-source
community. Even Microsoft shares its code, on a limited basis, with
outside developers of Windows programs. If the copyleft movement (as it
sometimes likes to call itself) requires that all derivative work be
governed by the GPL, that's fair enough; developers know the terms of the
deal. Yet proprietary firms can profit from similar networks of license
agreements, albeit agreements that take a different form. The state's job
is to enforce both sets of arrangements as written (with the caveat that
private contracts are invalid if they create monopolies in restraint of
trade).
We can now see why Lessig's homage to free software is at odds with the
principles of a free society where people can choose whatever business
arrangements they prefer. We shouldn't praise the Brazilian government for
"pushing itself and the nation to substitute free software for proprietary
software." We would be equally wrong to urge the Brazilian government to
promote proprietary software. In free-market societies, it is wholly
illiberal for governments to take any side in controversies involving
varying business models. A government's role as a neutral arbiter is
compromised whenever it engages in propaganda to persuade folks to prefer
one type of contract to another. By putting its thumb on the scale, it
makes true competition impossible. Arbiters cannot be cheerleaders.
The same principle, I believe, applies to the state's own procurement
decisions. Governments have fiduciary duties to their citizens similar to
those that boards of directors owe to shareholders. Their job is not to
satisfy their own ideological predilections; they should buy the software
that offers the best combination of price and quality. The great threat to
free culture is not proprietary software. It is the dogmatic insistence
that one form of industrial organization is a priori better than its
rivals.

Social Reasoning
That same overconfidence about software licensing pervades Lessig's
treatment of copyright. No matter one's political beliefs, it is critical
to remember the strong economic imperatives that drive modern societies to
legislate some form of copyright protection. Just as we protect private
rights in land for the benefit of the community, not solely for a
property's owner, so too we have a social reason to protect writings and
other intellectual creations.

As John Locke would have it, a just society recognizes the natural rights
of its citizens, including the right to protection of their productive
labor. But copyright has an additional justification: it fosters huge
positive contributions to culture, in the form of novels, movies, manuals,
music, and other works. Some creators are motivated solely by the desire
to create and would be happy to distribute their works under simple terms
such as a Creative Commons license requiring attribution only. But for
most authors, compensation matters, and we increase their production by
limiting the rights of others to copy their work. Of course, authors who
claim copyright protection today necessarily build on the efforts of prior
writers. But Lessig's rhapsodic praise of free culture ignores the
necessary trade-offs between producers and users that any mature system of
copyright must take into account.

Balancing Acts
In the end, all is a search for balance. Here are the key trade-offs:

Copyright duration. The ownership of land is normally indefinite. But the
U.S. Constitution allows Congress to grant copyrights for limited
periods only. Why the difference? Because there is no sensible way to
return privately owned land to the commons. Forcing the current owner out
after 50 years would create a free-for-all, because only one owner can
possess a plot of land. But writings (including software) are different,
because many people can use them without depriving others of their use.
The only question is how long to wait before returning them to the public
domain. Lessig and I agree that the current rules are too generous. In the
19th century and for most of the 20th, U.S. copyrights initially expired
after 28 years, a rational length of time. But the 1976 Copyright Act
lengthened that period to 75 years, and the Sonny Bono Copyright Term
Extension Act (CTEA) of 1998 gave producers another windfall, adding 20
more years--even for works whose 75-year copyrights were about to expire.
Disney and the Gershwin estate didn't do anything to deserve such
extensions on their expiring copyrights. (Remember, someone with one year
left to run on a copyright gets a lot more out of a 20-year extension than
a new author whose 20 extra years start in 2080.)
Scope of protection: derivative works and DRM. Opposition to the CTEA
shouldn't translate, however, into reluctance to extend copyright
protection to derivative works like the French translation of my latest
novel. The extra income stream is an added incentive for the original
author, while rivals must compete by producing novel works rather than
derivative ones.
We should also welcome the expanded options that digital rights management
(DRM) provides for marketing new works. Forcing people to pay for films
and music on a per-use basis is a sensible response to the technologies
that allow protected works to be copied infinitely at close to zero cost.
With old-fashioned books, a work's value to a second reader is built into
the cover price. But there's no way to price an initial sale to cover
anywhere from one to a million performances of a song. Charging by use
allows for price discrimination between heavy and light users, which
neatly brings into the marketplace those low-intensity users who are
unwilling to pay the flat fee for records or tapes. DRM is no more
threatening to free culture than metered phone calls.
Nor will DRM impede the "remixing" of bits and pieces of shared experience
into new creative works. I can be inspired by Hemingway or Bellow to write
my own masterpiece, so long as it is not a derivative work. Where
copyright law falls short, it is in the leeway it gives new creators: the
old rules work well, except that an artist or author who wants to assemble
snippets from previous works into something new can find it prohibitively
expensive to acquire the rights to those snippets. What's needed is some
fine-tuning around the edges.
Fair use. Section 107 of Title 17 of the U.S. Code contains a turgid
account of the factors that determine whether any particular use of a
copyrighted work is protected as a fair use--the limited use of the work
of someone else in your own work. Fair use lets a critic quote from an
author she hopes to savage: her article will be suspect if it does not
show the basis for her judgment. Asking the author for permission won't
work because the author will deny access to his enemies and allow it to
his friends. So weakening the property right makes sense as a way to
bolster the market. Similar arguments can be made for allowing use of
copyrighted works in other cases, such as news reporting, teaching, and
research, as Section 107 now dutifully provides.
The harder problems are those like the great 1984 case of Sony v.
Universal Studios, in which the U.S. Supreme Court held that fair use
allowed people to use the Sony Betamax VCR to record television shows. The
Court ruled that Sony did not illegally aid copyright infringement because
its equipment had "substantial noninfringing uses"; in fact, the justices
reasoned that VCRs expanded the number of TV viewers. But the choice
involved hard trade-offs. To hold Sony liable might have retarded the use
of valuable new technology, but letting it off ran the risk of undermining
the TV and movie industry's ability to protect copyrights. The Supreme
Court's decision was proved right when VCRs opened up a new income stream
for copyright holders.
These same trade-offs are at issue in this year's cause célèbre, a Supreme
Court case pitting MGM against the operators of Grokster, a peer-to-peer
file-sharing program that some consumers use to download pirated copies of
songs or movies. The friend-of-the-court brief Lessig submitted in support
of Grokster shows the same habits of mind that dominate his piece for
Technology Review. The late Fred Rogers of Mr. Rogers' Neighborhood, who
testified in Sony v. Universal, wanted his works to be freely available
for noncommercial use: Lessig applauds this virtuous impulse and worries
that suits against Grokster will frustrate the wishes of people like
Rogers. So he wants Grokster to be free of liability even if individual
file sharers should be punished. Fair enough; but once again he pleads his
case through anecdote rather than solid legal reasoning.
My fundamental objection to Lessig's essay is that he argues by appealing
to attractive examples of free spirits and not from legal principle, as a
good jurist should. My son Benjamin is a young, gifted filmmaker in New
York who depends on copyright protection for his livelihood. I would never
argue, however, that we should have strong copyright protection just to
help Benjamin's career. But Lessig's arguments from anecdote do the
equivalent for his own cause. In spite of his fervor, he has not explained
why the standard view, which offers sensible if limited protection of
intellectual-property rights, is wrong. We have yet to learn why free
culture depends on free software. For me, at least, the opposite is closer
to the truth: a free society also rests on the strong protection of
proprietary software.
Richard A. Epstein is the James Parker Hall Distinguished Service
Professor of Law at the University of Chicago. He was the editor of the
Journal of Legal Studies from 1981 to 1991 and of the Journal of Law and
Economics from 1991 to 2001. His books include Skepticism and Freedom: A
Modern Case for Classical Liberalism. Epstein taught a seminar on the
intellectual origins of private property taken by none other than Lawrence
Lessig.