
A special report on telecoms in emerging markets

Mobile marvels
Sep 24th 2009
From The Economist print edition

Poor countries have already benefited hugely from mobile phones. Now get

ready for a second round, says Tom Standage


BOUNCING a great-grandchild on her knee in her house in Bukaweka, a

village in eastern Uganda, Mary Wokhwale gestures at her surroundings. “My

mobile phone has been my livelihood,” she says. In 2003 Ms Wokhwale was

one of the first 15 women in Uganda to become “village phone” operators.

Thanks to a microfinance loan, she was able to buy a basic handset and a

roof-mounted antenna to ensure a reliable signal. She went into business

selling phone calls to other villagers, making a small profit on each

call. This enabled her to pay back her loan and buy a second phone. The

income from selling phone calls subsequently enabled her to set up a

business selling beer, open a music and video shop and help members of her

family pay their children’s school fees. Business has dropped off somewhat

in the past couple of years as mobile phones have fallen in price and many

people in her village can afford their own. But Ms Wokhwale’s life has

been transformed.


Ms Wokhwale prospered because being able to make and receive phone calls

is so important to people that even the very poor are prepared to pay for

it. In places with bad roads, unreliable postal services, few trains and

parlous landlines, mobile phones can substitute for travel, allow quicker

and easier access to information on prices, enable traders to reach wider

markets, boost entrepreneurship and generally make it easier to do

business. A study by the World Resources Institute found that as

developing-world incomes rise, household spending on mobile phones grows

faster than spending on energy, water or indeed anything else.

The reason why mobile phones are so valuable to people in the poor world

is that they are providing access to telecommunications for the very first

time, rather than just being portable adjuncts to existing fixed-line

phones, as in the rich world. “For you it was incremental—here it’s

revolutionary,” says Isaac Nsereko of MTN, Africa’s biggest operator.

According to a recent study, adding an extra ten mobile phones per 100

people in a typical developing country boosts growth in GDP per person by

0.8 percentage points.


In 2000 the developing countries accounted for around one-quarter of the

world’s 700m or so mobile phones. By the beginning of 2009 their share had

grown to three-quarters of a total which by then had risen to over 4

billion (see chart 1). That does not mean that 4 billion people now have

mobile phones, because many in both rich and poor countries own several

handsets or subscriber-identity module (SIM) cards, the tiny chips that

identify a subscriber to a mobile network. Carl-Henric Svanberg, the chief

executive of Ericsson, the world’s largest maker of telecoms-network gear,

reckons that the actual number of people with mobile phones is closer to

3.6 billion.

But exact numbers are hard to come by, not least because of the continued

rapid growth in the global number of subscribers. In the year to March

2009 an additional 128m signed up in India, 89m in China and 96m across

Africa, according to TeleGeography, a telecoms consultancy. Numbers in

Indonesia, Vietnam, Brazil and Russia also grew rapidly (see chart 2).

China is the world’s largest market for mobile telephony, with over 700m

subscribers. India is adding the biggest number each month: 15.6m in March

alone. And Africa is the region with the fastest rate of subscriber

growth. With developed markets now saturated, the developing world’s rural

poor will account for most of the growth in the coming years. The total

will reach 6 billion by 2013, according to the GSMA, an industry group,

with half of these new users in China and India alone.


All this is transforming the telecoms industry. Within just a few years

its centre of gravity has shifted from the developed to the developing

countries. The biggest changes are taking place in the poorest parts of

the world, such as rural Uganda.

Not the usual suspects
Three trends in particular are reshaping the telecoms landscape. First,

the spread of mobile phones in developing countries has been accompanied

by the rise of home-grown mobile operators in China, India, Africa and the

Middle East that rival or exceed the industry’s Western incumbents in

size. These operators have developed new business models and industry

structures that enable them to make a profit serving low-spending

customers that Western firms would not bother with. Indian operators have

led the way, and some aspects of the “Indian model” are now being adopted

by operators in other countries, both rich and poor. This model provides

new opportunities, especially for Indian operators. The spread of the

Indian model could help bring mobile phones within reach of an even larger

number of the world’s poor.

The second trend is the emergence of China’s two leading telecoms-

equipment-makers, Huawei and ZTE, which have entered the global stage in

the past five years. Initially dismissed as low-cost, low-quality

producers, they now have a growing reputation for quality and innovation,

prompting a shake-out among the incumbent Western equipment-makers. The

most recent victim was Nortel, once Canada’s most valuable company, which

went bust in January. Having long concentrated on emerging markets, Huawei

and ZTE are well placed to expand their market share as subscriber numbers

continue to grow and networks are upgraded from second-generation (2G) to

third-generation (3G) technology, notably in China and India.

The third trend is the development of new phone-based services, beyond

voice calls and basic text messages, which are now becoming feasible

because mobile phones are relatively widely available. In rich countries

most such services have revolved around trivial things like music

downloads and mobile gaming. In poor countries data services such as

mobile-phone-based agricultural advice, health care and money transfer

could provide enormous economic and developmental benefits. Beyond that,

mobile networks and low-cost computing devices are poised to offer the

benefits of full internet access to people in the developing world in the

coming years.

This special report will examine each of these three trends in turn. Each

one is significant in itself but also has consequences for rich as well as

poor countries. Together they could start a second wave of mobile-led

economic development as powerful as that prompted by the original launch

of mobile phones. Their spread in poor countries is not just reshaping the

industry—it is changing the world.

Eureka moments
Sep 24th 2009
From The Economist print edition

How a luxury item became a tool of global development


What, no network?
HOW did a device that just a few years ago was regarded as a yuppie

plaything become, in the words of Jeffrey Sachs, a development guru at

Columbia University’s Earth Institute, “the single most transformative

tool for development”? A number of things came together to make mobile

phones more accessible to poorer people and trigger the rapid growth of

the past few years. The spread of mobile phones in the developed world,

together with the emergence of two main technology standards, led to

economies of scale in both network equipment and handsets. Lower prices

brought mobile phones within reach of the wealthiest people in the

developing world. That allowed the first mobile networks in developing

countries to be set up, though prices were still high.

The next big step was the introduction of prepaid billing systems, which

allow people to load up their phones with calling credit and then talk

until the credit runs out. When mobile phones first came in, subscribers

everywhere talked first and paid later (a model known as postpaid), so

they had to be creditworthy. Prepaid billing saves operators sending out

bills and chasing up debts. It helped the spread of mobile phones among

teenagers in Europe in the late 1990s because it offered parents a way of

preventing their children from running up huge bills. It also dramatically

expanded the market for mobile phones in poor countries.
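
The mechanics of prepaid billing are simple enough to capture in a short sketch. The code below is purely illustrative (the class and the figures are ours, not any operator's real billing system): credit is loaded in advance from a voucher, deducted per unit of airtime, and calls are refused once it runs out.

```python
class PrepaidAccount:
    """Illustrative prepaid billing: load credit first, talk until it runs out."""

    def __init__(self, rate_per_minute=0.02):
        self.balance = 0.0            # calling credit, in dollars
        self.rate = rate_per_minute   # illustrative; the report later cites $0.02/min in India

    def top_up(self, amount):
        """Add credit from a voucher (sold in denominations as small as $0.50)."""
        self.balance += amount

    def minutes_available(self):
        """How long the remaining credit will last."""
        return self.balance / self.rate

    def charge_call(self, minutes):
        """Deduct the cost of a call; refuse it if the credit is insufficient."""
        cost = minutes * self.rate
        if cost > self.balance:
            raise ValueError("Insufficient credit: buy a top-up voucher")
        self.balance -= cost
        return self.balance


# A $0.50 voucher buys 25 minutes of calls at $0.02 a minute.
account = PrepaidAccount(rate_per_minute=0.02)
account.top_up(0.50)
print(account.minutes_available())  # 25.0
```

No billing address, no credit check and no monthly invoice: that is what makes the model workable in a cash society.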


Themba Khumalo of MTN recalls the firm’s launch of mobile services in

South Africa in 1994, using the postpaid model. “Mobiles were initially

perceived as a niche product, for business people, unaffordable by

ordinary people,” he says, so this seemed the obvious method to adopt. But

the launch of prepaid services “changed the landscape”, he says, by

reducing the cost of owning a mobile phone. Top-up vouchers, in

denominations as small as $0.50, are now routinely sold by agents in small

shops and on street corners across the developing world. “Mobile phones

could not work in Africa without prepaid because it’s a cash society,”

says Mo Ibrahim, the Sudanese businessman who established Celtel, a pan-

African mobile group now owned by Zain, based in Kuwait. The prepaid model

requires systems to accredit and support thousands of resellers, as well

as handling the actual top-ups, says José María Álvarez-Pallete, general

manager for Latin America at Telefónica, a Spanish telecoms giant that

transferred its prepaid expertise from Spain to its Latin American

subsidiaries.

From luxury to commodity
Once the switch to prepaid was made, the biggest barrier to broader mobile

access became the cost of a handset, which was still an expensive item in

the late 1990s. But the price of a basic model steadily fell, from around

$250 in 1997 to around $20 today. As handset-makers became aware of the

scale of the opportunity in the developing world, they turned their minds

to producing low-cost models. And for those who still could not afford

their own handsets, help was at hand in the form of microfinance.

Popularised by Grameen Bank in Bangladesh, this involves making small

loans, mainly to the rural poor. In a typical example, a woman borrows

money to buy a cow and then repays the loan from the profits she makes on

selling its milk. That way she gets an income, and her neighbours are able

to buy milk.

Iqbal Quadir, a Bangladeshi who moved to America and became an investment

banker, looked at this model and had an epiphany: “A cellphone could be a

cow.” In 1997 the resulting effort to combine microfinance and mobile

phones brought forth a Bangladeshi mobile operator called GrameenPhone, a

joint venture between Grameen Bank and Telenor, a Norwegian telecoms firm.

GrameenPhone pioneered the idea of the “telephone lady”, extending loans

to women in rural villages to enable them to buy a mobile handset, an

antenna and a large battery so they could sell calls to other villagers.

Taking a small cut on each call, they were able to pay off the loan and

thereafter to use the proceeds to pay for health care and education for

their families and to develop other businesses. This “village phone” model

quickly extended mobile coverage to thousands of villages in Bangladesh.

Although telephone ladies now make up only a small proportion of

GrameenPhone’s customers—around 220,000 out of a total of 8.5m—they

account for as much as one-third of all calls because their phones are

used by many people. The Grameen Foundation, a not-for-profit organisation

set up by Muhammad Yunus, the founder of Grameen Bank, has since

replicated the village-phone model in Cameroon, Indonesia, Rwanda and

Uganda, and it has been widely copied elsewhere. In Afghanistan

telephone ladies take an average of eight months to pay off the microloan

required to buy their equipment and then earn $50-100 a month, says Karim

Khoja, chief executive of Roshan, the country’s largest operator.

The village-phone model is a good way to introduce people to the

advantages of telecommunications and provide access to start with, but it

may soon have had its day. With prices continuing to fall, the vast

majority of mobile users in the developing world now have their own

handsets. Mr Khumalo says MTN recently placed an order with a Chinese

manufacturer to supply handsets at $13 each. Still, demand for shared

phones has not dried up completely. Calling from a village phone costs

less than buying a top-up, so even people with their own handsets may

sometimes make calls from shared phones if they have run out of credit,

notes Eric Cantor of the Grameen Foundation’s Uganda office. And Mr

Khumalo points out that some of MTN’s village-phone operators now make

more money selling airtime than phone calls.

Prepaid billing and affordable handsets on their own are not enough to

ensure a rapid adoption of mobile phones, however. Another vital factor

has been the liberalisation of telecoms markets and the issuing of

licences to rival operators. As those operators compete for customers and

try to recoup the cost of building their networks, calling charges fall

and mobile adoption increases.


There is clear evidence that liberalisation drives adoption (see chart 3).

The most vivid illustration comes from a comparison between two African

countries: Ethiopia and Somalia. Ethiopia is one of the few remaining

countries where mobile telecoms remains a government-run monopoly. By the

end of 2008 the country had a “mobile teledensity” of 3.5% (ie, 3.5 mobile

phones per 100 people), compared with 40% for Africa as a whole. By

contrast, in war-torn Somalia, a similarly poor country with no

functioning government and a completely unregulated telecoms market, more

than a dozen operators have sprung up to meet demand, and mobile

teledensity is 7.9%. Even warlords want their phones to work, notes Mr

Ibrahim, so they leave networks alone: Celtel launched its networks in

Sierra Leone and the Democratic Republic of Congo during civil wars, and

both prospered.

Calling for growth
Does the spread of mobile phones promote economic development? At first

the evidence was anecdotal. There were stories about farmers and fishermen

phoning around to see where they would get the best price for their

produce, for example. Mobile phones also unlock entrepreneurship: porters,

carpenters and other self-employed workers can advertise their services on

lamp-posts and noticeboards and ask potential clients to get in touch with

them. Mr Quadir likes to tell the story of a barber in Bangladesh who

could not afford the rent for a shop, so he bought a mobile phone and a

motorbike instead, scheduling appointments by phone and going to his

clients’ homes. This was more convenient for them and he was able to serve

a larger area and charge higher fees.

Globally such micro-entrepreneurs account for 50-60% of all businesses,

and in Africa nearly 90%, says Jussi Impio, the head of Nokia’s African

research arm, based in Nairobi. Mobile phones make micro-entrepreneurs

vastly more productive: a plumber no longer has to return to his shop to

pick up messages from clients, for example. Mr Impio says he recently met

an entrepreneur with a roadside kiosk who sold underwear and ice cream,

“an interesting combination”. He had conducted a detailed study of his

company’s fortunes and found that his income had increased by 70% in the

six months after he started using a mobile phone in 2006, because basic

activities such as stock handling and negotiating prices with suppliers

became much more efficient.

It is also clear that mobile phones create new jobs, stimulate investment

and provide tax revenues for governments. Roshan is Afghanistan’s largest

private company, largest investor and largest taxpayer, and with its

network of 25,000 agents who sell top-up vouchers it is one of the

country’s largest indirect employers. Roshan’s success in Afghanistan

attracted MTN and Etisalat, two big foreign operators, who provided

further investment and created more jobs. By generating taxes, creating

jobs that are not related to opium production and promoting prosperity,

says Mr Khoja, the telecoms industry provides “a bubble of hope for

Afghanistan”.

In the past few years the anecdotal evidence has been backed up by studies

that measure the economic impact of mobile phones directly. One example is

the analysis of fish prices on the coast of Kerala, in southern India,

carried out in 2007 by Robert Jensen, an economist at Harvard University.

By examining historical price data as mobile-phone coverage was extended

down the coast between 1997 and 2001, Mr Jensen was able to show that

access to mobile phones made markets much more efficient. Fishermen could

call several markets while still at sea before deciding where to sell

instead of taking their catch back to their home market and throwing it

away if there were no buyers for it. This eliminated waste, dramatically

reduced the variation in prices along the coast, brought down consumer

prices by 4% and increased fishermen’s profits by 8%. Mobile phones paid

for themselves within two months. Mr Jensen concluded that “information

makes markets work, and markets improve welfare.”

Similarly, Jenny Aker of the University of California at Berkeley carried

out an analysis of grain markets in Niger, published in 2008, to see how

the phasing-in of mobile-phone coverage between 2001 and 2006 affected

grain prices. She found that it reduced price variations between one

market and another by a minimum of 6.4%, and often more in remote and

hard-to-reach markets. As a result, prices for consumers were lower and

profits for traders higher. During a spike in food prices in 2005 grain

was 4.5% cheaper in markets with mobile coverage.

Such microeconomic studies provide support for macroeconomic analyses that

suggest a link between mobile phones and economic growth. In a much-cited

study in 2005, for example, Leonard Waverman of the London Business School

found that an extra ten mobile phones per 100 people in a typical

developing country added 0.6 percentage points of growth in GDP per

person. Critics say that it is difficult to tell whether mobile phones are

promoting growth, or whether growth promotes the spread of mobile phones

as more people become able to afford them. But detailed analyses of micro-level market data, such as Mr Jensen’s study, demonstrate that phones really do

make people better off. As Grameen Bank’s Mr Yunus, who won the 2006 Nobel

peace prize, once put it: “When you get a mobile phone it is almost like

having a card to get out of poverty in a couple of years.”


The most recent macroeconomic study, carried out by Christine Zhen-Wei

Qiang, an economist at the World Bank, examined the effect of fixed-line

and mobile phones, as well as dial-up and broadband internet access, on

GDP per person for 120 developed and developing countries. She found that

an increase of ten percentage points in mobile-phone adoption in a

developing country increased growth in GDP per person by 0.8 percentage

points, consistent with Mr Waverman’s earlier result. According to Ms

Zhen-Wei Qiang’s research, mobile phones were more effective at promoting

growth than fixed-line phones, but less effective than internet access or

broadband (see chart 4). Since mobile phones have the greatest

penetration, however, “the aggregate impact is highest for mobile,” she

says. She also found that all telecoms technologies promoted growth more

effectively in developing countries than in developed ones. This is

because telecoms services help make markets more efficient, reduce

transaction costs and increase productivity—all areas in which developing

countries have further to go than developed ones.
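
To make those coefficients concrete, both studies can be summarised in one back-of-the-envelope relation (the symbols are introduced here purely for illustration, and extending the effect linearly to larger changes is an extrapolation):

\[
\Delta g \;\approx\; \beta \times \frac{\Delta m}{10},
\qquad \beta_{\text{Waverman}} \approx 0.6\ \text{pp},
\qquad \beta_{\text{Zhen-Wei Qiang}} \approx 0.8\ \text{pp},
\]

where m is the number of mobile phones per 100 people and g is growth in GDP per person. On Ms Zhen-Wei Qiang’s estimate, a developing country whose penetration rose from 30 to 55 phones per 100 people would gain roughly 0.8 × 2.5 = 2 extra percentage points of growth in GDP per person, other things being equal.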

Wireless freedom
But the benefits of mobile phones are not just economic; there are

political and social advantages too. FrontlineSMS, a system that allows

groups to communicate via text messages, is being used to report human-

rights violations and co-ordinate aid and conservation projects, among

many other things. Ushahidi (Swahili for “testimony”), a website set up in

response to the post-election violence in Kenya in 2008, allows mobile

phones to be used for crisis and disaster management. In India’s election

this year voters were able to use their handsets to call up information

about candidates, such as their educational background and any criminal

charges they might be facing.

Mobile phones have been used for election monitoring in countries

including Nigeria, Kenya and Sierra Leone. Reporting vote totals by phone

from polling stations to local radio stations makes it harder to fiddle

the results later. And text messaging has been used to co-ordinate

political protests in many countries. “Mobile phones play a really

wonderful role in enabling civil society,” says Mr Ibrahim, who has set up

a foundation to improve transparency and governance in Africa. “As well as

empowering people economically and socially, they are a wonderful

political tool.”

Mr Impio cites the popularity of call-in radio shows in Kenya as another

example of how mobile phones can make politics more transparent. “People

have phones, and when politics is being discussed they can call

anonymously and say things journalists cannot discuss,” he says.

“Newspapers have started to quote them, and journalists say it has given

them more freedom to discuss corruption.”

Mobile phones can also be used to root out corruption in more direct ways.

For example, Zubair Bhatti, a Pakistani bureaucrat, asked all clerks in

the Jhang district who handled land transfers to submit a daily list of

transactions, giving the amount paid and the mobile-phone numbers of the

buyer and the seller. He explained that he would be calling buyers and

sellers at random to find out whether they had been asked to pay any extra

bribes or commissions. When charges were subsequently brought against a

clerk who had asked for a bribe, the others realised that Mr Bhatti meant

business, and buyers and sellers reported a sudden improvement in service.
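
A minimal sketch of the audit step, under stated assumptions (the field names, sample size and phone numbers are invented; Mr Bhatti's actual procedure was run by hand, not by software):

```python
import random

def pick_audit_calls(daily_transactions, sample_size=5, seed=None):
    """From the clerks' daily list of land transfers (each recording the
    amount paid and the buyer's and seller's mobile numbers), choose a
    random handful of parties to phone and ask whether any extra bribe
    or commission was demanded."""
    rng = random.Random(seed)
    sample = rng.sample(daily_transactions,
                        min(sample_size, len(daily_transactions)))
    calls = []
    for record in sample:
        calls.append(record["buyer_phone"])
        calls.append(record["seller_phone"])
    return calls

# Example daily list submitted by one clerk (all values are placeholders).
today = [
    {"amount": 150000, "buyer_phone": "+92-300-0000001", "seller_phone": "+92-300-0000002"},
    {"amount": 80000,  "buyer_phone": "+92-300-0000003", "seller_phone": "+92-300-0000004"},
]
print(pick_audit_calls(today, sample_size=1, seed=1))
```

Because clerks cannot predict which transactions will be checked, every one of them has to behave as if theirs will be.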

Mr Bhatti extended the scheme to other areas, such as cracking down on

vets who demanded bribes from farmers, and has proposed that the Jhang

model, as it is now known, be adopted in other districts. “It could easily

be institutionalised with a call centre,” he says. “It could have big

vote-getting influence.”

Again, these are just a few anecdotal examples, but they illustrate the

myriad unseen ways in which mobile phones are improving people’s lives

across the world, and in the developing world in particular. New data

services that provide agricultural advice and price information, improve

the provision of health care and allow quick and easy money transfers hold

out the promise of extending the benefits of mobile phones still further.

Ericsson’s Mr Svanberg draws an analogy with the internet: only when it

had been widely adopted in the rich world were websites such as Facebook

and YouTube able to take off. Similarly, he says, once poor countries have

established comprehensive mobile coverage, and a reasonable proportion of

the population owns a handset, they have a platform from which new

services, such as farming advice and mobile money, can be launched. This

second wave of mobile-driven benefits, however, will reach its full

potential only if access can be extended even further. That, in turn, will

require mobile operators in developing countries to find new ways to cut

the cost of ownership even more.

The mother of invention
Sep 24th 2009
From The Economist print edition

Network operators in the poor world are cutting costs and increasing

access in innovative ways


Dialling low-cost innovation
PROVIDING mobile services in a developing country is very different from

doing the same thing in the developed world. For a start, there may not be

a reliable electrical grid, or indeed any grid at all, to power the

network’s base stations, which may therefore need to run on diesel for

some or all of the time. That in turn means they must be regularly

resupplied with fuel, which can be tricky in remote areas. Then there is

the challenge of running the network profitably. In Europe mobile

subscribers typically spend about $36 a month, a figure known in the

industry as the average revenue per user (ARPU). In America that figure is

$51 and in Japan $57. But in China it is only around $10, in India less

than $7 (see table 5) and in some African countries even lower. As mobile

phones get cheaper and more poor people can afford them, ARPUs across the

developing world are falling.


Operators in poor countries have responded by finding new ways to reduce

the cost of operating mobile networks and serving customers. The country

that has gone furthest down this road is India, so the result is sometimes

known as the “Indian model”, even though some of its features originated

elsewhere, and some low-cost innovations developed elsewhere have not

caught on in India. Despite an ARPU of only $6.50 and call charges of

$0.02 per minute, Indian operators have operating margins of around 40%,

comparable with leading Western operators, according to a study by

Capgemini, a consultancy. “On low-cost, innovative models, this is where

the centre of gravity is,” says Prashant Gokarn, head of strategy at

Reliance Communications, India’s second-biggest operator. Given India’s

size, its combination of poverty and rapid growth and its reputation as a

centre of technology and outsourcing, it is hardly surprising that it has

emerged as the crucible of business-model innovation.


Indian model
Outsourcing is at the heart of the Indian model, which was pioneered and

is now embodied by Bharti Airtel, India’s biggest mobile operator. All of

Bharti’s information-technology (IT) operations are outsourced to IBM; the

running of its mobile network is handled by Ericsson and Nokia Siemens

Networks (NSN); and customer care is outsourced to IBM and a group of

Indian firms. This passes much of the risk of coping with a rapidly

growing subscriber base to other parties and leaves Bharti to concentrate

on marketing and strategy. Unusually, it is not just the operation of

Bharti’s network that is outsourced but the construction as well, under a

scheme known as “managed capacity” that is now used by several Indian

operators.

When moving into a new area, Bharti requests a certain amount of calling

capacity and pays for it three months later at an agreed price per unit of

capacity, says Kunal Bajaj of BDA, a telecoms consultancy. That leaves it

up to the vendor to handle the business of designing networks, putting up

base stations and so on, giving it an incentive to build the network as

frugally as possible. Margaret Rice-Jones of Aircom, a network-planning

consultancy, says this cuts costs by ensuring that operators do not pay for

more capacity than they really need. “The old model was a bit like letting

your supermarket plan your shopping list,” she says. The vendors, for

their part, gain economies of scale because they build, run and support

networks for several Indian operators. Ericsson’s Mr Svanberg says his

firm can run a network with 25% fewer staff than an operator would need.

Bharti’s operating expenses are around 15% lower than they would be if it

were to build and run its network itself, and its IT costs are around 30%

lower, according to Capgemini.

Arguably, the Indian model should be called the Ericsson model, says Mr

Svanberg, because his firm developed it and first deployed it on a small

scale in New Zealand. But, says Mr Bajaj, “Bharti decided to do its entire

network like this, and to experiment at that scale is totally different.”

There were growing pains to start with as Bharti and its outsourced

suppliers searched for the right balance of cost- and risk-sharing.

Expanding into rural areas is especially tricky because the capacity

needed is initially very low, so Bharti typically agrees to buy a minimum

amount.

Equipment vendors make most of their profits when capacity is increased.

“You make the land grab in the early phases, and what you’re securing is

margins and revenues for the future,” says Ms Rice-Jones. The outsourced-

network model is now gaining popularity with other operators in India.

Even if they do not go as far as Bharti, they are more likely than

operators elsewhere to outsource network design, tuning and management,

says Mr Svanberg.

A second plank of the Indian model is infrastructure-sharing, in which

several operators share the metal towers on which network antennae are

mounted and which house their associated equipment, generators and so

forth. In 2007 three Indian operators, Bharti, Vodafone Essar and Idea

Cellular, pooled 100,000 of their towers in a single company, Indus

Towers. Not all the operators use all the towers (the average is about 1.5

operators per tower), but the arrangement saves the three companies having

to find new sites and build their own towers. Indus Towers will also lease

tower capacity to other operators.

Similarly, Reliance Communications has spun off its towers into a separate

unit that will offer tower capacity to other operators. This turns an

operator’s assets into a source of new revenue, says Mr Gokarn, and allows

the mobile operator to concentrate on serving customers. Tower-sharing

happens in other countries too, including Britain and America, says Greg

Jacobsen of Capgemini; and some countries, including China and Bangladesh,

have made sharing compulsory. What is unusual about India is the extent of

voluntary, market-led sharing as a way to reduce costs.

Other components of the Indian model include “lifetime” prepaid schemes,

in which customers pay a one-off fee and can then receive incoming calls

indefinitely, even if they do not make outgoing calls; widespread use of

paperless top-ups, to reduce the costs of distributing top-up vouchers;

and automatically turning off some equipment at night, when traffic

volumes fall, to reduce energy usage.

The search for new cost savings continues. Reliance is experimenting with

a “micro-call-centre” model, in which large call centres in urban areas

are replaced by a number of smaller centres in more rural areas. This

means agents can be paid less and are more likely to be able to answer

queries. Turnover is high, so the trick, says Mr Gokarn, is to reduce the

cost of training new agents. Indian operators are also keen adopters of

“green” base-station technologies, such as air cooling, solar and wind

power, and hybrid diesel-electric generators, which reduce energy

consumption and hence operating costs. “Green technology has become a hot

topic in India because it’s cheaper,” says Mr Bajaj.

Dynamic Africa
African operators, which face many of the same difficulties as those in

India, have devised some cost-lowering innovations of their own, such as

dynamic tariffing, pioneered by MTN. This involves adjusting the cost of

calls every hour, in each network cell, depending on the level of usage.

Customers can check the discount they are getting on their handsets. At

4am it can be as high as 99%. This generates calls when the network would

otherwise be little used, says Themba Khumalo of MTN Uganda. In addition

to the peak hour from 8am, he says, there is now a new peak hour from 1am

as people take advantage of cheaper calls. Customers in developing

countries are far more price-sensitive than people in the rich world,

notes Stephan Beckert of TeleGeography, so they are prepared to stay up

late to save money. Vodacom has introduced a similar scheme. In Tanzania,

says Ms Rice-Jones, it found that call volumes increase by 20-30% in areas

where dynamic tariffing is switched on.
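
A minimal sketch of how such a scheme might set prices, assuming a simple linear rule (the base rate, the rule itself and the discount cap are our assumptions; MTN's actual algorithm is not described in detail here):

```python
def dynamic_tariff(cell_load, base_rate_per_min=0.10, max_discount=0.99):
    """Illustrative dynamic tariffing: the emptier a network cell is, the
    bigger the per-minute discount advertised in that cell for the next
    hour. cell_load is current utilisation, from 0.0 (idle) to 1.0 (full)."""
    load = min(max(cell_load, 0.0), 1.0)
    discount = max_discount * (1.0 - load)   # full cell: no discount; idle cell: the cap
    return discount, base_rate_per_min * (1.0 - discount)

# At 4am a nearly idle cell might advertise a discount close to 99%...
print(dynamic_tariff(cell_load=0.0))    # roughly (0.99, 0.001)
# ...while at the 8am peak callers pay close to the full rate.
print(dynamic_tariff(cell_load=0.95))   # roughly (0.05, 0.095)
```

The effect, as Mr Khumalo describes, is to shift price-sensitive callers into hours when the network would otherwise sit idle.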

Another African innovation is “borderless roaming”, introduced by Celtel

(now Zain) in late 2006. This allows customers in Kenya, Tanzania and

Uganda to move between these countries without paying roaming charges to

make or receive calls. They can also top up their calling credit in any of

these countries. The scheme has been extended to other African countries

where Celtel operates, and rival operators such as MTN have introduced

similar offers. Borderless roaming is possible because many operators have

direct fibre-optic connections between their networks in different

countries, allowing them to act, in effect, like a single network.

Alessio Ascari, of McKinsey, a consultancy, argues that Africa, rather

than India, “is the new battlefield and the new laboratory for

development” in telecoms. The difficulties operators face are even greater

than in India, given the huge diversity and political instability in many

countries, as well as widespread poverty and fierce competition. Africa is

also interesting because local operators and regional champions are

competing with Middle Eastern operators, such as Zain and Etisalat, and

those from Europe, such as Vodafone and Orange. All of them, Mr Ascari

points out, “bring different strengths to the market”.

The wealth of innovation in India and Africa demonstrates that the Western

operators are not always best at running networks. “Each of us is learning

different pieces of the puzzle from the others,” says Mr Álvarez-Pallete

of Spain’s Telefónica. His company is transferring expertise, and indeed

managers, between its operations in Europe and Latin America. Much the

same is done at Vodafone, which has separate divisions for the developed

and the developing world. Vittorio Colao, its chief executive, says his

company is applying its European expertise in customer-profiling and

segmentation in India, for example, as customer loyalty becomes more

important. But there is also a flow of expertise in the opposite

direction, in particular in network operations. “There are a lot of

operational ideas from a cash-constrained, poor and very entrepreneurial

environment that you can immediately bring back to the developed world,”

he notes.

Perhaps the most striking example is the agreement struck between Vodafone

and Telefónica in March 2009 to share towers and other network

infrastructure in four European countries. Network-sharing is not new,

says Mr Colao, “but the confidence to do it at scale, and with a fierce

competitor, came from India. Once you see how it works in that kind of

environment, you become much more confident that you can do it in

Barcelona or Venice.” The savings are much bigger in Europe because the

cost of leasing tower sites is higher, which adds to the attraction of the

deal. An agreement reached in July by Sprint, an American operator, to

outsource the day-to-day running of its network to Ericsson can also be

seen as an example of the spread of the Indian model, argues Capgemini’s

Mr Jacobsen. Ericsson is betting that it will be able to sign similar

deals with other American operators in order to gain economies of scale.

Vodafone has outsourced more of its IT, again inspired by the Indian

example, and it is using the Indian “managed capacity” model at one of its

rapidly growing subsidiaries in Turkey. But according to Mr Colao this

model, which he likens to leasing rather than buying a car, does not work

everywhere. “In markets where you are not sure about speed and shape of

growth, the model makes sense,” he says. But in mature markets where

demand is easier to predict it can be better for operators to build new

capacity themselves. Vodafone is also taking a leaf out of the Indian

marketing book, moving its marketing chief from India, Harit Nagpal, into

a global marketing role. (Google “Zoo zoo” to see Vodafone’s popular

series of Indian television advertisements.)

The challenge now is to apply all these cost-saving lessons to connecting

the world’s remaining 3 billion people and achieving universal mobile

coverage. Within India, even the most remote areas now appear to be on the verge of commercial viability, judging by the results of two auctions

held in 2007. In each case bidders had to say how much government subsidy

they would require to expand into rural areas, with the contract going to

the lowest bidder.

In the first auction, for the right to build shared towers in 8,000 rural

locations, the average subsidy requested was 35%, much less than expected.

In the second auction, for the right to offer mobile services, many

operators submitted zero bids or even negative ones—in effect offering to

pay for the right to set up in rural areas. “The subsidies required are

not as big as everyone thought, because the companies believe there’s a

business case in being present in rural areas first,” says Mr Bajaj. In

part this reflects the cut-throat competition in the Indian market. But it

also shows that mandated tower-sharing can make the economics far more

attractive for operators in rural areas, which could be a valuable lesson

for other countries. A second round of rural expansion, with another

12,000 shared towers, has been announced.

In China tower-sharing is mandatory, which has helped reduce the cost of

expanding into rural areas. But since the three mobile operators are

state-owned, the extension of coverage is co-ordinated from the centre.

China Mobile, the largest operator, has signed an agreement with the

agriculture ministry to cover 98% of rural areas by 2012, in part to

compensate for its relative weakness in third-generation (3G) networks,

where it is being forced to adopt the home-grown and relatively immature

Chinese standard. And just as India, renowned for its technology-services

industry, has pioneered clever business models and outsourcing to get

prices down and extend access, China has used its own particular strength

as a low-cost manufacturer (see article).

Rural access elsewhere in the developing world is also likely to improve.

One hopeful sign is the merger being negotiated between Bharti and MTN,

which should accelerate the transfer of low-cost operating expertise

between India and Africa. Greater scale will also increase the combined

firm’s clout with suppliers. The deal is driven by Bharti’s and MTN’s

desire for long-term growth potential outside their existing markets,

rather than by hopes of cost savings, says Mr Bajaj. But it could promote

greater use of network outsourcing in Africa, and new techniques such as

dynamic tariffing in India.

Spreading the word
This is unlikely to be the end of Indian operators’ international

ambitions, which could spread the Indian model to other parts of the

world. So far moves into Africa by Middle Eastern operators have not been

conspicuously successful. Nick Jotischky of Informa Telecoms & Media, a

consultancy, notes that Middle Eastern operators often lack the Indian

operators’ experience with low-cost business models. Zain, for example,

was said to be looking for a buyer for its operations in sub-Saharan

Africa, many of which are making losses, to concentrate on wealthier

customers in North Africa and the Middle East. But in recent weeks it has

been negotiating to sell a 46% stake to a consortium of Indian and

Malaysian buyers. Reliance, India’s number two, held merger talks with MTN

last year.

In recent years Indian firms have made a series of bold foreign

acquisitions in industries such as steel and cars. If its telecoms giants

follow suit, their low-cost model could give them a clear competitive

advantage—and help bring mobile phones within reach of even more people.

Up, up and Huawei
Sep 24th 2009
From The Economist print edition

China has made huge strides in network equipment


Now gearing up in handsets
IN THE 1960s, when Japan emerged as a manufacturing exporter, it soon

became a byword for low cost and low quality. Much fun was made of

unreliable Japanese watches and cheap Japanese cars. But quality improved

and Japan became a powerful force in electronics, carmaking and other

industries. Today Toyota is held up as a model of efficient manufacturing,

and Japanese firms lead the world in clean technology, carmaking and

consumer electronics. China hopes to make a similar transition. For now,

foreigners think that its home-grown electronics and cars are cheap and

shoddy, as Japan’s were thought to be 40 years ago. But quality is

steadily improving and China is being taken increasingly seriously as an

innovator. The firm that embodies this new, high-tech China is Huawei, the

country’s largest telecoms-equipment company.


Founded in 1988, Huawei has risen astonishingly fast. Last year it was the

world’s fourth-largest maker of network equipment, ranked by sales (see

chart 6), and this year it is expected to move into third place, according

to BDA, a consultancy. It is already ranked a close second in optical

networking and third in mobile-network gear. Only slightly behind is ZTE,

China’s second-largest maker of telecoms equipment, founded in 1985. Last

year it was in eighth place, and it is moving up the field—not least

because Nortel, the number seven, went bankrupt in January. Both Chinese

firms specialise in network infrastructure, but they also make handsets.

In a fiercely competitive market, ZTE became the world’s sixth-largest

handset-maker last year. Its goal is to be the number three in handsets

within five years.

The two Chinese firms’ global market share is still relatively small, but

their impact on telecoms has been colossal. Together they have driven down

costs and brought about consolidation across the industry. Having offered

discounts of as much as 50%, they were in large part responsible for the

mergers in 2006 between Alcatel and Lucent and the network-equipment arms

of Nokia and Siemens, and the collapse in January 2009 of Nortel and the

sale of many of its assets to Ericsson.


Huawei and ZTE are now winning the lion’s share of equipment contracts for

China’s three third-generation (3G) mobile networks, spending on which

will total $59 billion between 2009 and 2011, according to the Ministry of

Industry and Information Technology. This will further increase their

market share, to the disappointment of Western vendors that had hoped to

benefit from China’s adoption of 3G, one of the biggest telecoms projects

in history. “The vendor community is struggling, but Huawei and ZTE are

still growing, largely on the back of the emerging markets,” says

Informa’s Mr Jotischky.

The Chinese are coming
Huawei and ZTE are not just strong at home; both firms also ventured

abroad in the 1990s, selling fixed-line equipment in Asia and Africa.

Western vendors’ interest in those regions was limited and their prices

were too high, says Zhu Xiaodong, ZTE’s technology chief in Europe. Next,

the Chinese firms began selling wireless equipment in the Middle East,

South-East Asia, Africa and Latin America. Mr Zhu, who led the team that

designed ZTE’s first mobile base-station based on the GSM standard, says

Chinese companies had two advantages in the wireless-equipment market:

much cheaper labour and, by that time, settled standards. Nokia and

Ericsson, the pioneers of the GSM standard, took years to develop the

technology; ZTE built its first base-station in six months.

Huawei was the first of the two firms to move into Europe, the home market

of Ericsson, the world’s largest telecoms-equipment supplier. At first

only smaller operators, and the eastern European subsidiaries of bigger

ones, bought its equipment, but now it supplies several leading European

operators, including Vodafone, Telefónica, T-Mobile and BT. In America

Huawei is selling 3G network gear to Cox Communications, and its equipment

is being tested by AT&T.

Customers needed time to get to know Huawei, says Edward Zhou, its

marketing chief in Europe, but now “we are accepted as a provider of

innovative solutions and high quality.” A few years ago Huawei had only a

small booth at Mobile World Congress, the industry’s biggest annual trade

show, notes Mike Thelander of Signals Research, a consultancy. This year

it had a whole building to itself, which had been Ericsson’s sole

prerogative. “It’s impressive what they’ve done in a short period of

time,” says Ericsson’s Mr Svanberg.

Perceptions of the Chinese vendors within the industry shifted suddenly

between 2004 and 2006, says Vodafone’s Mr Colao, who spent that period

working outside the industry as head of an Italian media group. “When I

left, I think I had heard of Huawei twice, but I would not have been able

to remember their name,” he says. “When I came back in 2006 they were a

supplier to Vodafone, and they are now one of the main ones.” Having got

started by offering low prices, he notes, the Chinese firms have since

gained scale and a reputation for innovation.

Huawei and ZTE led the way in something called “remote radio-head”

technology. In a mobile base-station the radio circuitry usually sits in a

cabinet and is connected by a cable to an antenna on the tower overhead.

Replacing this cable with an optical fibre, and moving the radio circuitry

into the antenna itself, eliminates power losses in the antenna cable,

cutting energy consumption by around one-third and reducing the size of

the equipment.

More recently, says Weiran Zhuang of BDA, the Chinese vendors have shown

that they can innovate by launching reconfigurable base-stations, the

functions of which are defined in software rather than hardware. That

means the base-station can be quickly rejigged to support different

mobile-network technologies, or even several such technologies at the same

time. Most mobile operators are now running 2G and 3G networks alongside

each other, using separate sets of equipment, so the prospect of being

able to replace them with a single system is enticing. América Móvil, the

largest mobile operator in Latin America, found that deploying Huawei’s

reconfigurable SingleRAN hardware reduced the power consumption of its

base-stations by 50% and the volume of equipment needed by 70%. ZTE makes

a similar system which reduces power consumption by 40% and has already

been deployed by CSL, an operator in Hong Kong. Both systems can also be

upgraded to LTE, the emerging 4G standard. This has particular appeal for

Chinese operators, which are still upgrading from 2G to 3G as 4G already

looms on the horizon.

A few years ago Huawei used to boast of its cost advantage in research and

development, mostly because its Chinese engineers commanded much lower

salaries than its rivals’ staff. But as foreign firms have shifted more of

their own R&D to China, and Huawei has expanded outside China, it is now

keen to present itself primarily as an innovator rather than a low-cost

provider. “It is a misperception to say that Huawei is a low-cost

company,” says Mr Zhou. The firm now has over 100 offices abroad and

maintains research centres in Europe, America and India as well as China.

In January Huawei topped the World Intellectual Property Organisation’s

2008 rankings for international patent applications, a sign that the

company is outward-looking and determined to defend its intellectual

property abroad.

A TD-S diversion
Even the Chinese government has been surprised by the speed at which

Huawei has established itself as an international force. Since the late

1990s the government has been pursuing an elaborate industrial policy

designed to boost the prospects for Chinese equipment-makers at home and

abroad. But the plan has fallen so far behind schedule, and Huawei and ZTE

have done so well on their own in international markets, that the entire

scheme has become almost irrelevant.

The plan involved the development and promotion of a Chinese 3G technology

called TD-SCDMA, or TD-S. A decade ago, as operators in America, Europe

and Japan prepared to build the first 3G networks, there was a fierce

argument over the merits of two rival 3G technologies: one called W-CDMA,

backed by European operators and vendors, and one called CDMA2000, backed

by American firms. It was clear that W-CDMA would predominate in Europe

and CDMA2000 in America, but both camps had their eye on foreign markets.

Chinese officials decided that China should also enter this competition

and develop its own 3G standard. By mandating its adoption in China they

could provide enough scale to get the technology established. TD-S could

then be offered to operators abroad, particularly those in Asia whose

customers might wish to roam in and out of China. Chinese equipment-makers

would enjoy a boost to their sales and would not have to pay licensing

fees to foreign vendors.

But TD-S took much longer to develop than expected. The government delayed

issuing China’s 3G licences because it wanted to ensure that TD-S would be

used for at least one of the country’s 3G networks. After years of

uncertainty it reorganised China’s various mobile and fixed-line operators

into three giant groups in 2008, in preparation for the introduction of

3G. But by this time Huawei and ZTE were doing well in foreign markets

without any help from TD-S, and the global telecoms industry was already

looking towards 4G networks, based on the LTE standard. Huawei is at the

forefront of LTE development: the world’s first LTE mobile connection was

made using the company’s equipment in June this year. But TD-S has had so

much political capital invested in it that the Chinese government could

not give up on it. So when at last it awarded 3G licences in January this

year it required China Mobile, the world’s largest operator by subscriber

numbers, to use TD-S to build its 3G network.

Because of its size, China Mobile is arguably the only operator on Earth

that could establish a new technological standard on its own, but even

this giant seems unable to make a success of TD-S. In a recent filing with

financial regulators the company admitted that “we have encountered and

may continue to encounter challenges in the deployment of our 3G services”

and that “we may not be able to effectively and economically deliver our

3G services based on this technology.” The main problem is the lack of

TD-S handsets: existing models must be completely redesigned to work with

TD-S networks. China Mobile had hoped to have 10m TD-S subscribers by the

end of 2009, but by the end of June it had signed up only 959,000. Of

these, says Mr Zhuang, only half are using TD-S handsets. The other half

are using the TD-S network to provide a mobile-broadband connection for

laptops, which seems a more promising market until more TD-S handsets

become available. The prospect that TD-S will be adopted outside China,

never bright, has now faded altogether.

Although China Mobile, Huawei and ZTE continue to talk up TD-S, they have

already devised a face-saving exit strategy: to promote a new variety of

LTE, called TD-LTE, which with enough hand-waving can claim to be derived

in some respects from TD-S. “The reality is that they are two completely

different, incompatible technologies, but it’s a nice way to get away from

TD-S, by claiming it’s an upgrade or an evolution,” says Mr Thelander.

China Mobile now requires all suppliers of 3G equipment to support smooth

evolution to LTE, says Mr Jotischky.

Vodafone and Verizon Wireless are taking part in efforts to make TD-LTE

work smoothly with the mainstream LTE standard. (Vodafone owns a small

stake in China Mobile and would like a single global 4G standard to make

roaming easier and increase economies of scale.) If TD-LTE can then be

rolled into the main LTE standard, so that LTE handsets work well on

Chinese TD-LTE networks, China Mobile will escape being hobbled by an

inferior home-grown technology motivated by political aims. In the

meantime it must push ahead with TD-S as best it can.

Both Huawei and ZTE, along with other Chinese equipment-makers such as

Datang, received government funds to support the development of TD-S. But

“by the time the TD-S cake was baked—and it never really tasted that good

—Huawei and ZTE had racked up impressive and unexpected gains,” says

Duncan Clark of BDA. Huawei, which did the minimum necessary to support

TD-S, has emerged as the strongest, whereas Datang has been far less

successful abroad. So it is difficult to argue that the TD-S project has

helped make Chinese firms more internationally competitive.

One source of concern about Huawei is its opaque ownership. The company is

privately held, and Mr Zhou insists that it is entirely employee-owned.

But its military culture, and the fact that its founder, Ren Zhengfei, is

a former army officer, have led to persistent rumours that it has close

ties with the army. Moreover, its ownership structure may be complicated

by its history of joint ventures, says Mr Clark.

The big two Chinese vendors are relatively weak in services compared with

their Western rivals, though both are pushing ahead as fast as they can.

Being able to offer services in conjunction with network equipment is

becoming more important as operators, in India and elsewhere, outsource

their network operation to reduce costs. As network gear becomes

commoditised, services offer higher margins and long contracts, notes Mr

Thelander. Like many people in the industry, he believes that only

Ericsson and Huawei are sure to be around in a decade’s time. A senior

executive at one large mobile operator says he sometimes awards contracts

to non-Chinese vendors, even if their prices are a little higher, in order

to maintain choice and competition in the market.

As Huawei goes up against Ericsson in network equipment, ZTE hopes to move

up in handsets. At the moment many of its handsets are sold by mobile

operators (including Vodafone and T-Mobile) under their own brands,

customised to the operators’ specifications. ZTE says it is willing to

work with operators, but is also preparing to push its own brand more

vigorously, particularly in western Europe. To succeed, it will need to

produce some desirable, high-specification handsets. So far, says Mr

Thelander, “I haven’t seen anything that’s wowed me.” But then only a few

years ago the Chinese vendors’ network equipment was seen as not very

exciting.

Beyond voice
Sep 24th 2009
From The Economist print edition

New uses for mobile phones could launch another wave of development


I'm not selling for that
IN A field just outside the village of Bumwambu in eastern Uganda,

surrounded by banana trees and cassava, with chickens running between the

mud-brick houses, Frederick Makawa is thinking about tomatoes. It is late

June and the rainy season is coming to an end. Tomatoes are a valuable

cash crop during the coming dry season and Mr Makawa wants to plant his

seedlings as soon as possible. But Uganda’s traditional growing seasons

are shifting, so he is worried about droughts or flash floods that could

destroy his crop. Michael Gizamba, a local village-phone operator, offers

to help using Farmer’s Friend, an agricultural-information service. He

sends a text message to ask for a seasonal weather forecast for the

region. Before long a reply arrives to say that normal, moderate rainfall

is expected during July. Mr Makawa decides to plant his tomatoes.

A few miles away in the village of Musita, Michael Malime, another

village-phone operator, explains how his customers have been using the

same service to get farming tips. Rice farmers who had trouble with aphids

texted for advice and received a message telling them how to make a

pesticide using soap and paraffin. A farmer with blighted tomato plants

learned how to control the problem by spraying the plants with a milk-

based mixture.


The Farmer’s Friend service accepts text-message queries such as “rice

aphids”, “tomato blight” or “how to plant bananas” and dispenses relevant

advice from a database compiled by local partners. More complicated

questions (“my chicken’s eyes are bulging”) are relayed to human experts,

who either call back within 15 minutes or, with particularly difficult

problems, promise to provide an answer within four days. These answers are

then used to improve the database.
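
The flow just described (match a query against a locally compiled knowledge base, escalate the rest to human experts, and fold their answers back into the database) can be sketched in a few lines; the data and function names below are illustrative, not the actual AppLab implementation:

```python
# Tiny knowledge base of the kind compiled by local partners.
KNOWLEDGE_BASE = {
    "rice aphids": "Make a pesticide using soap and paraffin and spray the plants.",
    "tomato blight": "Spray the plants with a milk-based mixture.",
}

ESCALATION_QUEUE = []   # queries waiting for a human expert to call back


def answer_query(text_message):
    """Return advice from the database, or escalate to a human expert."""
    query = text_message.strip().lower()
    if query in KNOWLEDGE_BASE:
        return KNOWLEDGE_BASE[query]
    # Unrecognised or complicated questions go to the experts, who call back
    # within 15 minutes or promise an answer within four days.
    ESCALATION_QUEUE.append(query)
    return "Your question has been sent to an expert, who will call you back."


def record_expert_answer(query, advice):
    """Expert answers are used to improve the database for future queries."""
    KNOWLEDGE_BASE[query.strip().lower()] = advice


print(answer_query("tomato blight"))
print(answer_query("my chicken's eyes are bulging"))
```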

Farmer’s Friend is one of a range of phone-based services launched in June

by MTN, Google and the Grameen Foundation’s “Application Laboratory”, or

AppLab. As well as disseminating advice in agriculture, provided by the

Busoga Rural Open Source and Development Initiative, the new services also

provide health and market information. The Clinic Finder service points

people to nearby clinics, and the Health Tips service explains the

symptoms of common diseases.

Lastly there is Google Trader, a text-based system that matches buyers and

sellers of agricultural produce and commodities. Sellers send a message to

say where they are and what they have to offer, which will be available to

potential buyers within 30km for seven days. Mr Makawa says his father

used the service to look for a buyer for some pigs, which he sold to pay

school fees. These services cost 110 shillings ($0.05) a time, the same as

a standard text message, except for Google Trader, which costs double

that. In their first five weeks the services received a total of more than

1m queries.
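
The matching rule is simple enough to sketch. The hypothetical code below keeps an offer visible to buyers within 30km of the seller for seven days; the data structure, distance helper and sample coordinates are assumptions for illustration, not Google Trader's implementation.

```python
# A hypothetical sketch of the matching rule described above: offers are
# visible to buyers within 30km of the seller and expire after seven days.
import math
from datetime import datetime, timedelta

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def visible_offers(offers, buyer_lat, buyer_lon, now):
    """Offers posted within the past seven days and within 30km of the buyer."""
    return [o for o in offers
            if now - o["posted"] <= timedelta(days=7)
            and distance_km(o["lat"], o["lon"], buyer_lat, buyer_lon) <= 30]

# Arbitrary illustrative offer and buyer location in eastern Uganda.
offers = [{"item": "pigs", "lat": 0.44, "lon": 33.20, "posted": datetime(2009, 7, 1)}]
print(visible_offers(offers, 0.50, 33.25, datetime(2009, 7, 5)))  # within range and date
```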

A web of sorts
“There is a big shift from holding a phone to your ear to holding it in

your hand,” says David Edelstein of the Grameen Foundation. “It opens the

door to information services. It’s not the web, but it’s a web of services

that can be offered on mobile devices.” As with the Village Phone project,

Grameen is trying to establish a model that can be scaled up and

replicated in other countries. Offering agricultural and health

information is more difficult than offering a phone service, however,

because such information must be localised and must take cultural

differences into account. The answer is to work closely with local

partners, says Mr Edelstein. Grameen is also experimenting with the idea

of “community knowledge workers”—local people who can help others get

access to mobile services, reading, translating and explaining text

messages where necessary, just as village-phone operators provide access

to basic communications.

Trading up
Grameen’s collaboration with MTN and Google in Uganda is just one of

dozens of services across the developing world that offer agricultural,

market and health information via mobile phones. In India, for example,

farmers can sign up for Reuters Market Lite, a text-based service that is

available in parts of India. Its 125,000 users pay 200 rupees ($4.20) for

a three-month subscription, which provides them with local weather and

price information four or five times a day. Many farmers say that their

profits have gone up as a result.

Tata Consultancy Services, an Indian IT firm, offers a service called

mKrishi, which is similar to Farmer’s Friend, allowing farmers to send

queries and receive personalised advice. “The rural population is willing

to pay substantial subscription fees to get this information multiple

times a day,” says Kunal Bajaj of BDA. There have been lots of pilot

schemes in the past, he says, but commercial offerings are now beginning

to gain ground.

Nokia, the world’s largest handset-maker, launched its own information

service, Nokia Life Tools, in India in June. In addition to education and

entertainment, it provides agricultural information, such as prices,

weather data and farming tips, that can be called up from special menus on

some Nokia handsets. The basic service costs 30 rupees a month, and a

premium service which provides detailed local crop prices in ten states is

available at twice that price. “It is in its early stages, but it has

resonated extremely well with its target audience,” says Olli-Pekka

Kallasvuo, Nokia’s chief executive.

Services to help farmers have been most widely adopted in China, where

China Mobile offers a service called Nong Xin Tong in conjunction with the

agriculture ministry, as part of its push into rural areas. It has already

signed up 50m users and is aiming for 100m within three years. The service

provides news, weather information and details of farming-related

government policies.

China Mobile also runs a website, 12582.com, that sends farmers

information about planting techniques, pest management and market prices.

The service, which costs two yuan ($0.30) a month, sends out 13m text

messages a day and has over 40m users. There are dozens of other examples

across the developing world. TradeNet, launched in Ghana in 2005, now

links buyers and sellers of agricultural products in nine African

countries; CellBazaar provides a text-based classified-ads service in

Bangladesh.

Mobile phones are also being used in health care. One-way text alerts,

sent to everyone in a particular area, can be used to raise awareness of

HIV; sending daily text messages to patients can help them remember to

take their drugs for tuberculosis or HIV. Mobile phones can be used to

gather health information in the field faster and more accurately than

paper records and help with the management of drug stocks. Camera-phones

are used to send pictures to remote specialists for diagnosis.

Bright Simons, a Ghanaian social entrepreneur, has devised a phone-based

system called mPedigree to tackle the problem of counterfeit drugs. Some

10-25% of all drugs sold are fakes, according to the World Health

Organisation, and in some countries the proportion can be as high as 80%.

Under Mr Simons’ scheme, which is being implemented in Nigeria and Ghana,

a scratch-off panel on the packaging reveals a code which can be texted to

a special number to verify that the drugs are genuine. Most mobile-health

projects are still at the trial stage, but a report compiled in 2008 by

the UN Foundation and the Vodafone Foundation documented around 50 such

projects across the developing world. Studies are now under way to

quantify their benefits.
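
The verification step lends itself to a short sketch. The code below checks a texted scratch-off code against a register of genuine packs and warns if the code is unknown or has already been used; the register and reply messages are invented for illustration and are not mPedigree's actual system.

```python
# A minimal sketch of the scratch-code check described above. The codes and
# reply texts are illustrative assumptions, not mPedigree's real data.

GENUINE_CODES = {"A1B2C3D4": "unused", "E5F6G7H8": "unused"}

def verify(code):
    status = GENUINE_CODES.get(code)
    if status == "unused":
        GENUINE_CODES[code] = "used"       # a code should verify only once
        return "OK: this pack is genuine."
    if status == "used":
        return "Warning: this code has already been checked; the pack may be a copy."
    return "Warning: code not recognised; the pack may be counterfeit."

print(verify("A1B2C3D4"))
print(verify("A1B2C3D4"))   # a second check of the same code raises a warning
```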

These new services have become feasible because mobile phones are

increasingly ubiquitous. “We are now in a new phase where we are seeing

the network effects of so many people using mobile phones,” says Mr

Simons. His system can, for example, safely assume that the pharmacist in

any given village will have a mobile phone. These text-based services,

though they fall short of full internet access, have the potential to

unlock a range of social and economic benefits to users of even the most

basic mobile phones. “There’s a lot of talk about what you can do with

more sophisticated devices, but it’s much more compelling when you focus

on the devices that people have in their hands today,” says Mr Edelstein.

Money talks
Quantifying the benefits of agricultural and health services is hard, and

such services are still in their early days in much of the world. The

mobile service that is delivering the most obvious economic benefits is

money transfer, otherwise known as mobile banking (though for technical

and regulatory reasons it is not, strictly speaking, banking). It has

grown out of the widespread custom of using prepaid calling credit as an

informal currency.

Suppose you want to send money from the city back to your family in the

country. You could travel to the village and deliver the cash in person,

but that takes time and money. Or you could ask an intermediary, such as a

bus driver, to deliver the money, but that can be risky. More simply, you

could buy a top-up voucher for the amount you want to transfer (say, $10)

and then call the village-phone operator or shopkeeper in your family’s

village and read out the code on the voucher. The credit will be applied

to the phone of the shopkeeper, who will hand cash to your family, minus a

commission of 10-20%. In some countries, where airtime can be transferred

directly from one phone to another by text message, the process is even

simpler: load credit onto your phone, then send it to someone on the spot

who in return gives cash to your intended recipient.

These methods became so widespread that some companies decided to set up

mobile-payment systems that allow real money, rather than just airtime, to

be transferred from one user to another by phone. Once you have signed up,

you pay money into the system by handing cash to an agent (usually a

mobile operator’s airtime vendor), who credits the money to your mobile-

money account. You can withdraw money by visiting another agent, who

checks that you have sufficient funds before debiting your account and

handing over the cash. You can also send money to other people, who will

be sent a text message containing a special code that can be taken to an

agent to withdraw cash. This allows cash to be sent from one place to

another quickly and easily.
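
The basic flow of cash in, transfer and cash out can be sketched in a few lines. The model below is a deliberate simplification: real schemes such as M-PESA add fees, transaction limits and identity checks, and the account and one-time-code handling here are assumptions for illustration only.

```python
# A simplified sketch of the cash-in / transfer / cash-out flow described above.
import secrets

balances = {}          # phone number -> e-money balance
pending_codes = {}     # one-time code -> (recipient, amount)

def cash_in(cash_received, phone):
    """An agent takes cash and credits the same amount of e-money."""
    balances[phone] = balances.get(phone, 0) + cash_received

def send(sender, recipient, amount):
    """Move e-money and issue a code the recipient can take to an agent."""
    assert balances.get(sender, 0) >= amount, "insufficient funds"
    balances[sender] -= amount
    code = secrets.token_hex(4)
    pending_codes[code] = (recipient, amount)
    return code            # in practice this is sent by text message

def cash_out(code, phone):
    """An agent pays out cash when shown a valid code for this recipient."""
    recipient, amount = pending_codes.pop(code)
    assert recipient == phone, "code was issued to someone else"
    return amount          # cash handed over by the agent

cash_in(20, "0772-000001")
code = send("0772-000001", "0772-000002", 10)
print(cash_out(code, "0772-000002"))   # prints 10
```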

Some mobile-money schemes also allow international remittances; others

issue participants with debit cards linked to their mobile-money accounts.

Since there are many more mobile phones and sellers of mobile airtime than

there are cash machines and bank branches, mobile money is well placed to

bring financial services within reach of billions of “unbanked” people

across the developing world.

The biggest successes in this field so far have been Gcash and Smart Money

in the Philippines, Wizzit in South Africa, Celpay in Zambia and, above

all, M-PESA in Kenya, which has become the most widely adopted mobile-

money scheme in the world. Launched in 2007 by Safaricom, Kenya’s largest

mobile operator, it now has nearly 7m users—not bad for a country of 38m

people, 18.3m of whom have mobile phones. M-PESA’s early adopters were

young, male urban migrants who used it to send money home to their

families in the country. But it has since become wildly popular and is

used to pay for everything from school fees to taxis (drivers like it

because it means they are carrying less cash around). Roughly $2m is

transferred through the system each day, with an average amount of $20.

“In markets in Kenya, stallholders are happy to take M-PESA payments. It’s

pretty dramatic,” says Bob Christen, head of the “Financial Services for

the Poor” initiative at the Bill & Melinda Gates Foundation.

Making it easier, quicker and cheaper to transfer money has enormous

social and economic benefits. Commissions are lower, and recipients no

longer have to pay for transport to towns to make withdrawals. They can

also take out funds more easily and frequently. In rural households that

have adopted mobile money, incomes have increased by 5-30%, according to

Olga Morawczynski, an ethnographer at the University of Edinburgh who has

studied M-PESA in detail. It also saves men working in the city having to

take time off to deliver the money to their families. The only drawback,

say their wives, is that some men now visit home less frequently.

A safe place for savings
M-PESA is also used as a form of savings account, even though it does not

pay interest. Having even a small cushion of savings to fall back on

allows people to deal with the unexpected, such as suddenly having to pay

for medical treatment. “An awful lot of people climb out of poverty every

year, but a lot drop back in because they have no savings, no buffer, so

when something bad happens they have to sell assets and lose a lot of

ground,” says Mr Christen. Poor people tend to save by buying livestock,

which can get sick or die, or buying gold, which can be stolen, or

investing in community-based schemes that may be fraudulent, says Timothy

Lyman of the Consultative Group to Assist the Poor (CGAP). Mobile banking

offers a more reliable alternative, he says, and could have economic

benefits comparable to those of mobile phones.

Given all these benefits, why has mobile banking taken off in Kenya and a

few other places but not elsewhere? M-PESA did not do well in neighbouring

Tanzania, for example. There were special factors that made M-PESA more

likely to work in Kenya: the unusually high cost of sending money by other

methods; the unusually large market share (80%) of Safaricom, the main

mobile operator (an affiliate of Vodafone); the regulator’s decision to

allow the scheme to proceed, even without formal regulatory approval; and,

most intriguingly, the post-election violence in the country in early

2008. M-PESA was used to transfer money to people trapped in Nairobi’s

slums at the time, and some people regarded M-PESA as a safer place to

store their money than the banks, which were entangled in ethnic disputes.

All this makes Ms Morawczynski think that Kenya’s success in mobile

banking may not be matched elsewhere. “But I hope somebody can prove me

wrong,” she says.

There are signs that her wish may soon come true. Banks and regulators,

which have been sceptical towards mobile money in many countries, are

coming around to the idea, in large part because of M-PESA’s success.

“Many of the issues that seemed to be significant stumbling blocks last

year seem less significant now, or at least more manageable,” says Mr

Lyman. There has, he says, been a “change in the comfort level” about

non-banks (ie, operators) providing financial services. “A year ago most

banks were scared—they were seeing the mobile guys taking their lunch

away,” says Dare Okoudjou, head of mobile money at MTN. But now, he says,

some banks have realised that teaming up with a mobile operator to launch

a mobile-money service will allow them to reach many more customers. After

all, mobile operators have far more powerful brands and much greater reach

than banks.

Regulators, meanwhile, are reassured by the banks’ involvement. Mobile-

money schemes generally limit balances and transfers (typically to around

$100), which helps allay fears about money-laundering. And when customers

sign up, they have to produce some form of identification. That makes the

process more formal than for buying a SIM, but less rigorous than for

opening a bank account. “We can find a balance between those two,” says Mr

Okoudjou.

MTN’s launch of a mobile-money service in Uganda in March 2009, in

partnership with Stanbic Bank, provides further cause for optimism. MTN

backed up its launch with a huge marketing campaign based around the

simple idea of sending money home, as Safaricom had previously done in

Kenya. After three months 60% of the population had heard of the service—a

level of awareness that M-PESA took a year to achieve, according to MTN.

After four months the service had signed up 82,000 users. Of the $5.1m

transferred in that period, half was in the fourth month, indicating a

rapid take-off. MTN plans to increase the number of outlets that can

handle mobile money to 5,000 by early 2010.

Banking for the unbanked
MTN’s apparent success in Uganda seems to suggest that Kenya may not be a

one-off after all. After fine-tuning its technology and procedures in

Uganda, MTN plans to introduce the service in 20 other African and Middle

Eastern countries; it has already launched in Ghana. Meanwhile Zain, which

operates in several African markets, has started its own mobile-money

service, called Zap. According to CGAP, there will be over 120 mobile-

money schemes in developing countries by the end of 2009, more than double

the number in 2008. By 2012, it predicts, some 1.7 billion people will

have a mobile phone but no bank account, and 20% of them will be using

mobile money.

Operators do not expect to make much money from mobile banking, says Mr

Okoudjou, but it can help keep customers from defecting to rivals and cut

costs by allowing people to top up their airtime directly on their phones,

as well as providing wider social and economic benefits that reflect well

on operators. Most importantly, he says, mobile banking can help the

industry repeat the huge impact made when mobile phones were first

introduced. “This is a second wave that can unleash the potential of

mobile phones again,” he says. “So we need to do this, and we need to do

it properly, and we need to do it all over.”

Finishing the job
Sep 24th 2009
From The Economist print edition

Mobile-phone access will soon be universal. The next task is to do the

same for the internet


The way forward
HOW long will it be before everyone on Earth has a mobile phone? “It looks

highly likely that global mobile cellular teledensity will surpass 100%

within the next decade, and probably earlier,” says Hamadoun Touré,

secretary-general of the International Telecommunication Union, a body set

up in 1865 to regulate international telecoms. Mobile teledensity (the

number of phones per 100 people) went above 100% in western Europe in

2007, and many developing countries have since followed suit. South Africa

passed the 100% mark in January, and Ghana reached 98% in the same month.

Kenya and Tanzania are expected to get to 100% by 2013.

Even 100% teledensity does not mean that everyone has a phone, because

many people have several handsets or SIMs. But nor is everyone a potential

customer: the under-fives, for instance, still usually manage without. But

at current rates of growth it seems likely that within five years, and

certainly within ten, everyone in the world who wants a mobile phone will

have one. 3G networks capable of broadband speeds will be

widespread even in developing countries, and even faster 4G networks will

be spreading rapidly in some places. Then what?

The next task, says Mr Touré, is to ensure that everyone who wants to can

use mobile technology to access the internet. Like many in the industry,

he predicts that this will be done using low-cost laptops, or netbooks,

connecting to the internet via mobile networks. “Mobile broadband will

become a global phenomenon—it will be the dominant form of broadband,”

says Informa’s Mr Jotischky. He thinks there could be 1.4 billion mobile-

broadband subscribers by 2014.


Meanwhile, with the falling price and size of laptops and the advancing

potential of mobile phones, the two seem to be converging in a new range

of devices that combine the power and versatility of a computer with the

portability of a phone. Already, netbooks can cost as little as $200,

making them cheap enough to be given away with long-term mobile-broadband

contracts in some countries, just as mobile handsets already are for some

users. Mobile phones, it seems, are the advance guard for mobile-broadband

networks that will extend internet access to the whole of mankind.

The combination of mobile broadband and cheap netbooks will resolve a

long-running argument within the technology industry about the relative

merits of computers and mobile phones as tools to promote development.

Leading the computer camp is Nicholas Negroponte of the Massachusetts

Institute of Technology, the man behind the $100 laptop. He and his

followers argue that bringing down the cost of laptops, and persuading

governments in developing countries to buy and distribute millions of

them, could have enormous educational benefits.


Critics of his scheme argue that it makes more sense to spend $100 on a

schoolhouse, or textbooks, or teacher training, than on a laptop. And

advocates of mobile phones, including Iqbal Quadir, who has sparred with

Mr Negroponte on the subject, point out that mobile phones provide

immediate economic benefits, which enables them to spread in a self-

sustaining, bottom-up way, without the need for massive government

funding. Mr Negroponte responds that mobile phones are not much use for

education; Mr Quadir replies that thanks to economic development driven by

mobile phones, parents can afford to educate their children. The argument,

having rumbled on for years, has now ended in compromise.

On the face of it, those in the mobile camp seem to have won. Mobile

phones are now seen as a vital tool of development, whereas Mr

Negroponte’s laptop project has failed to meet its ambitious goals. But

although his engineers have so far only managed to get the cost of their

elegant laptop down to about $150, they have shown what is possible with a

low-cost design, and helped create today’s vibrant netbook market. If

netbooks do indeed become the preferred devices to access the internet in

the developing world, Mr Negroponte will have had the last laugh. But if

those netbooks turn out to be, in effect, large mobile phones with

keyboards that access the internet via mobile networks, as also seems

likely, Mr Quadir and his camp can claim to have won the day.

Technological progress in devices and networks seems to have rendered the

debate moot: the important thing is that internet access will be on its

way to becoming as widespread as mobile phones.

Obstacles remain even to universal mobile access, and beyond that to

universal internet access. One problem is a lack of backbone links,

particularly to Africa. But a series of new cables is in the works to

improve Africa’s connectivity with the rest of the world, increasing

capacity and reducing the cost of internet access. The first of these, the

SEACOM cable, eastern Africa’s first modern submarine cable, was completed

in July.

As international links improve and network equipment becomes cheaper and

more effective, it will not be difficult to provide a low-cost mobile-

broadband service, says Vodafone’s Mr Colao. The main challenge will be to

reduce the price of access devices. “We need to come up with a mobile-data

device that costs $60-80 maximum,” he says. “Netbooks are very good, but

we need an emerging-market netbook that costs one-third of the price.”

With phones, he observes, “we got real penetration when we got below $35.

Netbooks must be below $100 in price to get real traction.” This will

require advances in neighbouring industries, such as chipmaking and

manufacturing, rather than telecoms, he points out.

The rise of the village netbook
In the meantime, notes the Grameen Foundation’s Mr Cantor, the internet

equivalent of the village-phone model could provide a stepping stone to

wider internet access in the poorest areas, just as village phones did for

telephony. The Grameen Foundation has already experimented by giving

netbooks to a few village-phone operators in Uganda so that they can sell

internet access as well as telephony. Despite the relatively slow

connection provided by Uganda’s 2G mobile networks, demand for the service

proved to be stronger than expected, and revenues were double the level

required to make the service self-sustaining.

Christine Zhen-Wei Qiang of the World Bank notes that internet-kiosk

operators in India are charging small fees for access to government

services online. This makes such services easier to get at, prevents

officials from extorting bribes and provides an income for the kiosk

operator, “so there is a revenue-generating model,” she says. It might

make sense to offer microfinance loans to entrepreneurs to buy netbooks

and provide information services. Many of the methods used to make mobile

phones more widely available seem likely to be applied to extending

internet access in the future.

As Ms Qiang’s research shows, access to the internet can provide an even

bigger boost to economic growth than access to mobile phones. But to make

the most of the internet, users have to have a certain level of education

and literacy. Its effect on development may be greater in the long term,

but is unlikely to be as sudden and dramatic as that of the spread of

mobile phones in the first decade of this century.

In the grand scheme of telecoms history, mobile phones have made a bigger

difference to the lives of more people, more quickly, than any previous

technology. They have spread the fastest and proved the easiest and

cheapest to adopt. It is now clear that the long process of connecting

everyone on Earth to a global telecommunications network, which began with

the invention of the telegraph in 1791, is on the verge of being

completed. Mobile phones will have done more than anything else to advance

the democratisation of telecoms, and all the advantages that come with it.

Technology Quarterly

A factory on your desk
Sep 3rd 2009
From The Economist print edition

Manufacturing: Producing solid objects, even quite complex ones, with 3-D

printers is gradually becoming easier and cheaper. Might such devices some

day become as widespread as document printers?


JUST before going on holiday you decide to buy a new pair of trainers. The

usual procedure would be to pop down to the shops, select a style and try

on a pair to make sure they are comfortable. Instead, imagine doing this:

designing shoes exactly the right size in the style and colour you want on

a computer, or downloading a design from the web and customising it. Then

press print and go off to have lunch while a device on your desk

manufactures them for you. On your return, your trainers are ready. But

they are not quite right. So after another fiddle on the computer you

print a second pair. Perfect.

The technology to print a pair of trainers, or at least to do so in one go

rather than in parts that have to be glued together, is not yet available.

But it is getting close. An increasing number of things, from mock-ups of

new consumer products to jewellery and aerospace components, are being

produced by machines that build objects layer by layer, just like printing

in three dimensions. The general term the industry uses for this is

“additive manufacturing”, but the most widely used devices are called 3-D

printers. Some of these printers are becoming small enough to be desktop

devices. They are making their way not just into workshops and factories,

but also into the offices of designers, architects and researchers, and

are being embraced by entrepreneurs who are using them to invent entirely

new businesses.


The 3-D printers currently available use a variety of technologies, each

of which is suited to different applications. They range in price from

under $10,000 to more than $1m for a high-end device capable of making

sophisticated production parts. Depending on the size of the object, the

material it is made from and the level of detail required, the printing

process takes around an hour for a relatively small, simple object that

would fit into the palm of your hand, and up to a day for a bigger, more

sophisticated part. The latest machines can produce objects to an accuracy

of slightly less than 0.1mm.

Terry Wohlers, a consultant based in Colorado who monitors the industry,

reckons the global market for additive manufacturing was worth $1.2

billion in 2008 and that it could double in size by 2015. He estimates

that 3-D printers of various sorts account for about 75% of sales, and

high-performance industrial machines the remainder. He expects lower-cost

3-D printers to account for as much as 90% of the market as prices fall

and performance improves. Model-making and rapid prototyping remain the

most popular uses, but all types of machines are increasingly being used

for direct manufacturing of parts for finished products, rather than just

prototypes.

Although powerful design software allows the virtual creation of 3-D

objects on a computer screen, many designers and their clients prefer to

examine, touch and hold a physical object before committing to huge

investments in manufacturing or construction. Models help take some of the

guesswork out of the process. They are traditionally crafted by hand from

materials such as clay, wood or metal. It is a slow and costly business.

Even making a non-working model of what might seem to be a relatively

simple thing, like a new sole for a shoe, is in fact a complex process. It

used to take Timberland, an American firm, a week to turn the design of a

new sole into a model, at a cost of around $1,200. Using a 3-D printer

made by Z Corporation, based in Burlington, Massachusetts, it has cut the

time to 90 minutes and the cost to $35.

The ability of 3-D printers to speed up the design process will have a big

impact on industry. “Now engineers can think of an idea, print it, hold it

in their hand, share it with other people, change it and go back and print

another one,” says David Reis, the chief executive of Objet Geometries, an

Israeli firm that makes 3-D printers. “Suddenly design becomes much more

innovative and creative.” Objet’s machines can produce not only solid

things out of plastic-type materials, but complex ones with moving parts

too, such as a working model of a bicycle chain or a small gearbox. And

they can print objects in multiple materials, such as a plastic remote-

control unit with rubbery buttons.

Little by little
The first step in all 3-D printing processes is for software to take

cross-sections through the part to be created and calculate how each layer

needs to be constructed. Different machines then take different

approaches. Most processes can trace their roots back to the earliest form

of 3-D printing: stereolithography. It was pioneered by 3D Systems, based

in South Carolina, which made the first commercially available

stereolithography machine in 1986.


Such machines build up objects, a layer at a time, by dispensing a thin

layer of liquid resin and using an ultraviolet laser, under computer

control, to make it harden in the required pattern of the cross-section.

The build tray then descends, a new liquid surface is applied and the

process is repeated. At the end, the excess soft resin is cleaned away

using a chemical bath. A related approach, which also dates back to the

1980s, is selective laser-sintering, in which a high-temperature laser is

used to melt and fuse together powdered ceramics, metal or glass, one

layer at a time, to produce the desired 3-D shape.
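
That first slicing step can be illustrated with a toy example. The sketch below cuts a simple shape, a sphere standing in for a real CAD model, into 0.1mm layers and records the solid cross-section at each height; it is a simplified stand-in for real slicing software, not any vendor's algorithm.

```python
# An illustrative sketch of slicing a part into thin horizontal layers and
# working out, for each layer, what needs to be solid. A sphere stands in
# for a real CAD model; the 0.1mm layer height is an assumption.
import math

def slice_sphere(radius_mm, layer_mm=0.1):
    """Return, for each layer height, the radius of the solid cross-section."""
    layers = []
    z = -radius_mm
    while z <= radius_mm:
        cross_section_radius = math.sqrt(max(radius_mm ** 2 - z ** 2, 0.0))
        layers.append((round(z, 3), round(cross_section_radius, 3)))
        z += layer_mm
    return layers

slices = slice_sphere(radius_mm=5.0)
print(len(slices), "layers at 0.1mm each")   # about 100 layers for a 10mm sphere
```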

Both Z Corporation and Objet, by contrast, use modified forms of inkjet

printing. Z Corporation uses the printing heads in its machine to squirt a

liquid binder onto a bed of white powder, but only in the areas where the

layer needs to be solid. Colour is applied at the same time, allowing

multicoloured objects to be created. The bed is lowered by a fraction of a

millimetre and a new layer of powder is spread and rolled. The print head

then repeats the process to create the next layer. When the process is

complete and the material is set, the loose powder is blown away with an

air jet to reveal the completed structure. The powder can be one of

several substances including plastic, a special material that can be

treated to become flexible like rubber, and casting materials suitable for

making moulds. Each layer takes 15-30 seconds to output.
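
Those layer times translate directly into build times. The rough calculation below assumes a 50mm-tall part and 0.1mm layers to show why a small object takes a few hours; the figures are illustrative assumptions, not Z Corporation's specifications.

```python
# A rough worked example of what the quoted 15-30 seconds per layer implies,
# assuming a 50mm-tall part built in 0.1mm layers.

part_height_mm = 50
layer_thickness_mm = 0.1
layers = part_height_mm / layer_thickness_mm          # 500 layers

for seconds_per_layer in (15, 30):
    hours = layers * seconds_per_layer / 3600
    print(f"{seconds_per_layer}s per layer -> about {hours:.1f} hours")
# 15s per layer -> about 2.1 hours; 30s per layer -> about 4.2 hours
```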

Objet’s machines have print heads that slide back and forth depositing

extremely thin layers of two types of liquid photopolymer. One type is

printed where the cross-section is required to be solid, and the other

where there are cavities, overhangs and other features with spaces. After

each layer is printed, an ultraviolet light-source in the print head

hardens the polymer in the areas that need to be solid, and causes the

second polymer to assume a gel-like state to provide structural support.

The build tray then moves down and the process is repeated for the next

layer. At the end, a jet of water washes away the gel-like support

material. The machine is capable of making objects out of multiple kinds

of solid photopolymer, each with different colours or properties.

Another form of 3-D printing is “fused deposition modelling”. Stratasys,

based in Minneapolis, is the market leader in this field. This approach

involves unwinding a filament of thermoplastic material from a spool and

feeding it through a moving extrusion nozzle, heating the material to melt

it and deposit it in the desired pattern on the build tray. The material

then hardens to form the solid parts required in each layer. As subsequent

layers are added the molten thermoplastic fuses to the layers below. In

areas such as overhangs, physical supports can be added and removed later,

or water-soluble materials can be deposited and then washed away.

Fred Fischer of Stratasys sees the market developing in two directions. On

one hand there will be more demand for cheaper and simpler 3-D printers

capable of quickly turning out concept models, which are likely to sit on

the desks of engineers and designers. On the other hand there will also be

demand for more elaborate machines with added features and higher

performance, the most elaborate of which will provide a cost-effective way

to manufacture thousands, and perhaps even tens of thousands, of

components. Today’s rapid prototyping, in other words, will shade into

tomorrow’s rapid manufacturing. Mr Fischer draws an analogy with the

development of document printers, which range from small, cheap devices

for home use to industrial printing presses capable of producing high-

quality glossy magazines.

Today’s largest and most expensive 3-D printing machines, capable of

directly producing complex plastic, metal and alloy components using

selective laser-sintering, are becoming increasingly popular in the

consumer-electronics, aerospace and carmaking industries. It is not just

their ability to make a small number of parts, without having to spread

the massive tool-up costs of traditional manufacturing across thousands of

items, that makes these machines useful. They can also be used to build

things in different ways, such as producing the aerodynamic ducting on a

jet-fighter as a single component, rather than assembling it from dozens

of different components, each of which has to be machined and tested.

Some 3-D printers can already be found in the workshops of artists and

enthusiasts. Jay Leno, an American television celebrity, bought a

Stratasys machine to help keep his large collection of old cars on the

road. He can scan a broken part that is no longer available into a

computer, or design a missing one from scratch, and then print out a copy

made of plastic. This can be fitted to a vehicle to check that the design

is correct. After any adjustments, a final plastic copy can either be used

by a machinist to make an exact copy from metal, or the model’s numerical

data can be fed directly into a computer-controlled milling machine. Mr

Leno’s 1907 White steam-driven car is now back on the road thanks to his

3-D printer.

Where now?
Many in the industry believe that low-cost 3-D printers for the consumer

market will eventually appear. 3D Systems launched a new model costing

less than $10,000 in May. That may sound a lot, but it is what laser

printers cost in the early 1980s, and they can now be had for less than

$100. Desktop Factory, a start-up based in Pasadena, California, hopes to

launch a 3-D printer for $4,995 that is around the same size as an early

laser printer.

Objet believes the way to the mass market is via inkjet technology, just

as it has been with 2-D printers. The ability to print different materials

with inkjet heads greatly increases not just model-making abilities but

production possibilities, too. The firm thinks it is getting close to

being able to print with engineering-quality plastics through inkjet

heads. “When we reach that point, it would allow us to go to short-term

manufacturing,” says Amit Shvartz, Objet’s head of marketing.


One of Z Corporation’s printers and (below) a finished model of a

camcorder
As with 2-D printing, many individuals and small firms may not need

sophisticated machines, especially if they can use 3-D printing bureaus to

produce their more demanding digital creations. Some of these make-to-

order services are starting to appear. Z Corporation’s machines are being

used by companies to let players of video games, including “World of

Warcraft”, “Spore” and “Rock Band”, produce colourful, 3-D models of their

in-game characters, for example. “We are at that point where people are

looking at this technology and saying ‘We can make a business out of

that’,” says Scott Harmon, head of business development at Z Corporation.

Shapeways, a firm based in the Netherlands, lets users upload designs,

choose a construction material and get a production quote. It then turns

the design into an object with a 3-D printer and ships it to the customer.

3D Systems recently set up a joint venture called MQast, which is an

online provider of aluminium and stainless-steel parts produced using its

machines. And iKix, based in Chennai, India, has equipped itself with Z

Corporation machines and set up a chain of online service-bureaus to

produce architectural models, for delivery anywhere on earth.

Mr Wohlers thinks medical applications of 3-D printing also have a lot of

potential. It is already possible to print 3-D models from the digital

slices produced by computed-tomography scans. These can be used for

training, to explain procedures to patients and to help surgeons plan

complex operations. Some hospitals have started using 3-D printing to

produce custom-made metallic and plastic parts to be used as artificial

implants and in reconstructive surgery. “It is possible to deposit living

cells through inkjet printers onto a biodegradable scaffold,” adds Mr

Wohlers. “There are a lot of problems to overcome, like the creation of

blood vessels, but eventually I think we will see replacement body parts

being printed too.”


Meanwhile, what about making those trainers? A 3-D printer cheap enough to

do that at home is probably many years away. But customising a

standardised product by changing its outward appearance, like re-skinning

a mobile phone, would be easier. “You can do that pretty much with

existing technology,” says Mr Harmon. You could also make other simple but

useful things, like a missing piece for a broken toy. And you might even

make your own 3-D printer. The RepRap project, an open-source group based

at the University of Bath in England, has produced designs for a 3-D

printer which can be built for around $700, including royalty-free designs

that can be fed into the machine to produce the plastic parts needed to

create another RepRap machine. This could be fun for the mechanically

minded. Others might want to wait until the local hardware store buys a 3

-D printer and begins to offer one-off manufacturing services on demand.


Keeping pirates at bay
Sep 3rd 2009
From The Economist print edition

Policing the internet: The music industry has concluded that lawsuits

alone are not the way to discourage online piracy


THREE big court cases this year—one in Europe and two in America—have

pitted music-industry lawyers against people accused of online piracy. The

industry prevailed in each case. But the three trials may mark the end of

its efforts to use the courts to stop piracy, for they highlighted the

limits of this approach.

The European case concerned the Pirate Bay, one of the world’s largest and

most notorious file-sharing hubs. The website does not actually store

music, video and other files, but acts as a central directory that helps

users locate particular files on BitTorrent, a popular file-sharing

network. Swedish police began investigating the Pirate Bay in 2003, and

charges were filed against four men involved in running it in 2008. When

the trial began in February 2009, they claimed the site was merely a

search engine, like Google, which also returns links to illegal material

in some cases. One defendant, Peter Sunde, said a guilty verdict would “be

a huge mistake for the future of the internet…it’s quite obvious which

side is the good side.”


The court agreed that it was obvious and found the four men guilty, fining

them a combined SKr30m ($3.6m) and sentencing them each to a year in jail.

Despite tough talk from the defendants, they appear to have tired of legal

entanglements: in June another firm said it would buy the Pirate Bay’s

internet address for SKr60m and open a legal music site.

The Pirate Bay is the latest in a long list of file-sharing services, from

Napster to Grokster to KaZaA, to have come under assault from the media

giants. If it closes, some other site will emerge to take its place; the

music industry’s victories, in short, are never final. Cases like this

also provoke a backlash against the music industry, though in Sweden it

took an unusual form. In the European elections in June, the Pirate Party

won 7.1% of the Swedish vote, making it the fifth-largest party in the

country and earning it a seat in the European Parliament. “All non-

commercial copying and use should be completely free,” says its manifesto.

So much for that plan
The Recording Industry Association of America (RIAA) has pursued another

legal avenue against online piracy: suing individual users of

file-sharing hubs. Over the years it has accused 18,000 American internet

users of engaging in illegal file-sharing, demanding settlements of

$4,000 on average. Facing the scary prospect of a federal copyright-

infringement lawsuit, nearly everyone settled; but two cases have

proceeded to trial. The first involved Jammie Thomas-Rasset, a single

mother from Minnesota who was accused of sharing 24 songs using KaZaA in

2005. After a trial in 2007, a jury ruled against her and awarded the

record companies almost $10,000 per song in statutory damages.

Critics of the RIAA’s campaign pointed out that if Ms Thomas-Rasset had

stolen a handful of CDs from Wal-Mart, she would not have faced such

severe penalties. The judge threw out the verdict, saying that he had

erred by agreeing to a particular “jury instruction” (guidance to the jury

on how they should decide a case) that had been backed by the RIAA. He

then went further, calling the damages “wholly disproportionate” and

asking Congress to change the law, on the basis that Ms Thomas-Rasset was

an individual who had not sought to profit from piracy.

But at a second trial, which concluded in June 2009, Ms Thomas-Rasset was

found guilty again. To gasps from the defendant and from other observers,

the jury awarded even higher damages of $80,000 per song, or $1.92m in

total. One record label’s lawyer admitted that even he was shocked. In

July, in a separate case brought against Joel Tenenbaum, a student at

Boston University, a jury ordered him to pay damages of $675,000 for

sharing 30 songs.

According to Steven Marks, general counsel for the RIAA, the main point of

pursuing these sorts of cases is to make other internet users aware that

file-sharing of copyrighted material is illegal. Mr Marks admits that the

legal campaign has not done much to reduce file-sharing, but how much

worse might things be, he wonders, if the industry had done nothing? This

year’s cases, and other examples (such as the RIAA’s attempt in 2005 to

sue a grandmother, who had just died, for file-sharing), certainly

generate headlines—but those headlines can also make the industry look

bad, even to people who agree that piracy is wrong.

That helps explain why, in late 2008, the RIAA abandoned the idea of suing

individuals for file-sharing. Instead it is now backing another approach

that seems to be gaining traction around the world, called “graduated

response”. This is an effort to get internet service-providers to play a

greater role in the fight against piracy. As its name indicates, it

involves ratcheting up the pressure on users of file-sharing software by

sending them warnings by e-mail and letter and then restricting their

internet access. In its strictest form, proposed in France, those accused

three times of piracy would have their internet access cut off and their

names placed on a national blacklist to prevent them signing up with

another service provider. Other versions of the scheme propose throttling

broadband-connection speeds.
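
The escalation logic is easy to sketch. The code below records infringement notices per subscriber and steps from an e-mail warning to a letter to a sanction; the thresholds and sanction names are illustrative assumptions, since actual proposals differ from country to country.

```python
# A sketch of the escalation behind "graduated response" as described above.
# The step names and the three-notice threshold are assumptions.

STEPS = ["e-mail warning", "letter warning", "sanction (throttle or disconnect)"]

strikes = {}   # subscriber id -> number of infringement notices received

def handle_notice(subscriber_id):
    """Record an infringement notice and return the action to take."""
    strikes[subscriber_id] = strikes.get(subscriber_id, 0) + 1
    step = min(strikes[subscriber_id], len(STEPS)) - 1
    return STEPS[step]

for _ in range(3):
    print(handle_notice("subscriber-42"))
# e-mail warning, then letter warning, then a sanction on the third notice
```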

All this would be much quicker and cheaper than going to court and does

not involve absurd awards of damages and their attendant bad publicity. A

British study found that most file-sharers will stop after receiving a

warning—but only if it is backed up by the threat of sanctions.

It sounds promising, from the industry’s perspective, but graduated

response has drawbacks of its own. In New Zealand the government scrapped

the idea before implementation, and in Britain the idea of cutting off

access has been ruled out. In France the first draft of the law was

savaged by the Constitutional Council over concerns that internet users

would be presumed guilty rather than innocent. Internet service-providers

are opposed to being forced to act as copyright police. Even the European

Parliament has weighed in, criticising any sanctions imposed without

judicial oversight. But the industry is optimistic that the scheme will be

implemented in some form. It does not need to make piracy impossible—just

less convenient than the legal alternatives.

But many existing sources of legal music have not offered what file-

sharers want. “In my view, growing internet piracy is a vote of no

confidence in existing business models,” said Viviane Reding, the European

commissioner for the information society, in July.

The industry is desperately searching for better business models, and is

offering its catalogue at low rates to upstarts that could never have

acquired such rights a decade ago. Services such as Pandora, Spotify and

we7 that stream free music, supported by advertising, are becoming

popular. Most innovative are the plans to offer unlimited downloads for a

flat fee. British internet providers are keen to offer such a service, the

cost of which would be rolled into the monthly bill. Similarly, Nokia’s

“Comes With Music” scheme includes a year’s downloads in the price of a

mobile phone. The music industry will not abandon legal measures against

piracy altogether. But solving the problem will require carrots as well as

sticks.

Tilting in the breeze
Sep 3rd 2009
From The Economist print edition

Energy: A novel design for a floating wind-turbine, which could reduce the

cost of offshore wind-power, has been connected to the electricity grid


Floating a new idea
FAR out to sea, the wind blows faster than it does near the coast. A

turbine placed there would thus generate more power than its inshore or

onshore cousins. But attempts to build power plants in such places have

foundered because the water is generally too deep to attach a traditional

turbine’s tower to the seabed.

One way round this would be to put the turbine on a floating platform,

tethered with cables to the seabed. And that is what StatoilHydro, a

Norwegian energy company, and Siemens, a German engineering firm, have

done. The first of their floating offshore turbines has just started a

two-year test period generating about 1 megawatt of electricity—enough to

supply 1,600 households.

Span of control
Sep 3rd 2009
From The Economist print edition

Engineering: A new generation of “smart” bridges use sensors to detect

structural problems and warn of impending danger


WHEN an eight-lane steel-truss-arch bridge across the Mississippi River in

Minneapolis collapsed during the evening rush hour on August 1st 2007, 13

people were killed and 145 were injured. There had been no warning. The

bridge was 40 years old but had a life expectancy of 50 years. The central

span suddenly gave way after the gusset plates that connected the steel

beams buckled and fractured, dropping the bridge into the river.

In the wake of the catastrophe, there were calls to harness technology to

avoid similar mishaps. The St Anthony Falls bridge, which opened on

September 18th 2008 and replaces the collapsed structure, should do just

that. It has an embedded early-warning system made of hundreds of sensors.

They include wire and fibre-optic strain and displacement gauges,

accelerometers, potentiometers and corrosion sensors that have been built

into the span to monitor it for structural weaknesses, such as corroded

concrete and overly strained joints.
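
At its simplest, a monitoring system of this kind compares readings against design limits. The sketch below flags any strain gauge that exceeds its threshold; the sensor names and limits are invented for illustration and do not describe the St Anthony Falls bridge's actual system.

```python
# An illustrative check of strain-gauge readings against design thresholds.
# Sensor names and limit values are assumptions, not real bridge data.

STRAIN_LIMITS = {"gusset-plate-12": 1500e-6, "deck-joint-03": 900e-6}  # strain, dimensionless

def check_readings(readings):
    """Return a list of warnings for sensors exceeding their limits."""
    warnings = []
    for sensor, strain in readings.items():
        limit = STRAIN_LIMITS.get(sensor)
        if limit is not None and strain > limit:
            warnings.append(f"{sensor}: strain {strain:.0e} exceeds limit {limit:.0e}")
    return warnings

print(check_readings({"gusset-plate-12": 1700e-6, "deck-joint-03": 400e-6}))
```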

On top of this, temperature sensors embedded in the tarmac activate a

system that sprays antifreeze on the road when it gets too cold, and a

traffic-monitoring system alerts the Minnesota Department of

Transportation to divert traffic in the event of an accident or

overcrowding. The cost of all this technology was around $1m, less than 1%

of the $234m it cost to build the bridge.


The new Minneapolis bridge joins a handful of “smart” bridges that have

built-in sensors to monitor their health. Another example is the six-lane

Charilaos Trikoupis bridge in Greece, which spans the Gulf of Corinth,

linking the town of Rio on the Peloponnese peninsula to Antirrio on the

mainland. This 3km-long bridge, which was opened in 2004, has roughly 300

sensors that alert its operators if an earthquake or high winds warrant it

being shut to traffic, as well as monitoring its overall health. These

sensors have already detected some abnormal vibrations in the cables

holding the bridge, which led engineers to install additional weights as

dampeners.

The next generation of sensors to monitor bridge health will be even more

sophisticated. For one thing, they will be wireless, which will make

installing them a lot cheaper.

Jerome Lynch of the University of Michigan, Ann Arbor, is the chief

researcher on a project intended to help design the next generation of

monitoring systems for bridges. He and his colleagues are looking at how

to make a cement-based sensing skin that can detect excessive strain in

bridges. Individual sensors, says Dr Lynch, are not ideal because the

initial cracks in a bridge may not occur at the point the sensor is

placed. A continuous skin would solve this problem. He is also exploring a

paint-like substance made of carbon nanotubes that can be painted onto

bridges to detect corrosion and cracks. Since carbon nanotubes conduct

electricity, sending a current through the paint would help engineers to

detect structural weakness through changes in the paint’s electrical

properties.

The researchers are also developing sensors that could be placed on

vehicles that regularly cross a bridge, such as city buses and police

cars. These could measure how the bridge responds to the vehicle moving

across it, and report any suspicious changes.

Some civil engineers are sceptical about whether such instrumentation is

warranted. Emin Aktan, director of the Intelligent Infrastructure and

Transport Safety Institute at Drexel University in Philadelphia, points

out that although the sensors generate a huge amount of data, civil

engineers simply do not know what happened in the weeks and days before a

given bridge failed. It will take a couple of decades to arrive at a point

when bridge operators can use such data intelligently, he predicts.

Meanwhile, the Obama administration’s stimulus plan has earmarked $27

billion for building and repairing roads and bridges. Just 1% of that

would pay for a lot of sensors.


The Hywind is the first large turbine to be deployed in water more than 30

metres deep. The depth at the prototype’s location, 10 kilometres (six

miles) south-west of Karmoy, is 220 metres. But the turbine is designed to

operate in water up to 700 metres deep, meaning it could be put anywhere

in the North Sea. Three cables running to the seabed prevent it from

floating away.

It is an impressive sight. Its three blades have a total span of 82 metres

and, together with the tower that supports them, weigh 234 tonnes. That

makes the Hywind about the same size as a large traditional offshore

turbine.

Even though it is tethered, and sits on a conical steel buoy, the motion

of the sea causes the tower to sway slowly from side to side. This swaying

places stress on the structure, and that has to be compensated for by a

computer system that tweaks the pitch of the rotor blades to keep them

facing in the right direction as the tower rocks and rolls to the rhythm

of the waves. That both improves power production and minimises the strain

on the blades and the tower. The software which controls this process is

able to measure the success of previous changes to the rotor angle and use

that information to fine-tune future attempts to dampen wave-induced

movement.
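
The adaptive idea can be sketched as a feedback loop that opposes the measured sway and adjusts its own gain according to whether the previous correction helped. The gains and update rule below are assumptions for illustration, not the Hywind's actual control algorithm.

```python
# A highly simplified sketch of adaptive pitch damping: correct in proportion
# to the measured sway and tune the gain based on whether the last correction
# reduced the sway. All numbers are illustrative assumptions.

class AdaptivePitchDamper:
    def __init__(self, gain=0.5, adapt_rate=0.05):
        self.gain = gain
        self.adapt_rate = adapt_rate
        self.last_sway = None

    def pitch_adjustment(self, sway_deg_per_s):
        """Return a pitch correction opposing the measured tower sway rate."""
        if self.last_sway is not None:
            # If sway grew despite the last correction, strengthen the response;
            # if it shrank, relax slightly to avoid over-correcting.
            if abs(sway_deg_per_s) > abs(self.last_sway):
                self.gain += self.adapt_rate
            else:
                self.gain = max(self.gain - self.adapt_rate / 2, 0.1)
        self.last_sway = sway_deg_per_s
        return -self.gain * sway_deg_per_s

damper = AdaptivePitchDamper()
for sway in (2.0, 1.6, 1.9, 1.1):          # tower sway rate, degrees per second
    print(round(damper.pitch_adjustment(sway), 2))
```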

If all works well, the potential is huge. Henrik Stiesdal of Siemens’s

windpower business unit reckons the whole of Europe could be powered using

offshore wind, but that competition for space near the coast will make

this difficult to achieve if only inshore sites are available. Siting

turbines within view of coastlines causes conflicts with shipping, the

armed forces, fishermen and conservationists. But floating turbines moored

far out to sea could avoid such problems. That, plus the higher wind

speeds which mean that a deep-water turbine could generate much more power

than a shallow-water one, make the sort of technology that the Hywind is

pioneering an attractive idea.

One obvious drawback is that connecting deep-water turbines to the

electrical grid will be expensive. But the biggest expense—the one that

will make or break far-offshore wind power—will probably be maintenance.

In deep seas, it will not be possible to use repair vessels that can jack

themselves up on the seabed for stability, like the machines that repair

shallow-water turbines. Instead maintenance will be possible only in good

weather. If the Hywind turbine turns out to need frequent repairs, the

cost of leaving it idle while waiting for fair weather, and of ferrying

the necessary people and equipment to and fro, will outweigh the gains

from generating more power. But if all goes according to plan, and the new

turbine does not need such ministrations, it would put wind in the sails

of far-offshore power generation.

Keeping a grip
Sep 3rd 2009
From The Economist print edition

Transport: A new type of tyre, equipped with built-in sensors, can help

avoid a skid—and could also improve fuel-efficiency

FEW sensations of helplessness match that of driving a car that

unexpectedly skids. In a modern, well-equipped (and often expensive) car,

electronic systems such as stability and traction control, along with

anti-lock braking, will kick in to help the driver avoid an accident. Now

a new tyre could detect when a car is about to skid and switch on safety

systems in time to prevent it. It could also improve the fuel-efficiency

of cars to which it is fitted.

The Cyber Tyre, developed by Pirelli, an Italian tyremaker, contains a

small device called an accelerometer which uses tiny sensors to measure

the acceleration and deceleration along three axes at the point of contact

with the road. A transmitter in the device sends those readings to a unit

that is linked to the braking and other control systems.


The accelerometers in the Cyber Tyre contain two tiny structures, the

distance between which changes during acceleration, altering the

electrical capacitance of the device, which is measured and converted into

a voltage. Powered by energy scavengers that exploit the vibration of the

tyre, the device encapsulating the accelerometers and the transmitter is

about 2.5 centimetres in diameter and about the thickness of a coin.
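
The underlying physics can be put into a few lines: treat the sensing element as a parallel-plate capacitor whose gap changes under acceleration, recover the displacement from the measured capacitance, and convert it to acceleration with a spring-mass model. All the numbers below are illustrative assumptions, not Pirelli's design values.

```python
# A simplified model of a capacitive accelerometer of the kind described above.
# Plate area, rest gap, stiffness and proof mass are illustrative assumptions.

EPSILON_0 = 8.854e-12      # permittivity of free space, F/m
PLATE_AREA = 1e-6          # 1 mm^2 plate, in m^2
REST_GAP = 2e-6            # 2 micrometre gap at rest, in m
STIFFNESS = 5.0            # spring constant of the proof-mass suspension, N/m
PROOF_MASS = 1e-7          # proof mass, kg

def gap_from_capacitance(capacitance):
    """Invert C = epsilon * A / d to recover the plate separation."""
    return EPSILON_0 * PLATE_AREA / capacitance

def acceleration_from_capacitance(capacitance):
    """Displacement of the proof mass times k/m gives the acceleration."""
    displacement = REST_GAP - gap_from_capacitance(capacitance)
    return STIFFNESS * displacement / PROOF_MASS

rest_c = EPSILON_0 * PLATE_AREA / REST_GAP
print(acceleration_from_capacitance(rest_c * 1.05))   # a ~5% capacitance rise
```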

Constantly monitoring the forces that tyres are subjected to as they grip

the road could help reduce fuel consumption by optimising braking and

suspension. Moreover, it could promote the greater use of tyres with a low

rolling-resistance, which are often fitted to hybrid vehicles. These save

fuel by reducing the resistance between the tyre and the road but, to do

so, they have a reduced grip, especially in the wet. If fitted with

sensors, such tyres could be more closely monitored and controlled in

slippery conditions.

Pirelli believes its new tyre could be fitted to cars in 2012 or 2013, but

this will depend on getting carmakers to incorporate the necessary

monitoring and control systems into their vehicles. As with most

innovations, these are expected to be available in upmarket models first,

and cheaper cars later. But if the introduction in 1973 of Pirelli’s

steel-belted Cinturato radial tyre is any guide, devices that make cars

safer will be adopted rapidly.

Trappings of waste
Sep 3rd 2009
From The Economist print edition

Materials science: Plastic beads may provide a way to mop up radiation in

nuclear power-stations and reduce the amount of radioactive waste


They want us to drop beads into the cooling system?
NUCLEAR power does not emit greenhouse gases, but the technology does have

another rather nasty by-product: radioactive waste. One big source of

low-level waste is the water used to cool the core in the most common form

of reactor, the pressurised-water reactor. A team of researchers led by

Börje Sellergren of the University of Dortmund in Germany, and Sevilimedu

Narasimhan of the Bhabha Atomic Research Centre in Kalpakkam, India, think

they have found a new way to deal with it. Their solution is to mop up the

radioactivity in the water with plastic.

In a pressurised-water reactor, hot water circulates at high pressure

through steel piping, dissolving metal ions from the walls of the pipes.

When the water is pumped through the reactor’s core, these ions are

bombarded by neutrons and some of them become radioactive. The ions then

either settle back into the walls of the pipes, making the pipes

themselves radioactive, or continue to circulate, making the water

radioactive. Either way, a waste-disposal problem is created.


Because the pipes are steel, most of the ions are iron. When the commonest

isotope of iron (56Fe) absorbs a neutron, the result is not radioactive.

The steel used in the pipes, however, is usually alloyed with cobalt to

make it stronger. When common cobalt (59Co) absorbs a neutron the result

is 60Co, which is radioactive and has a half-life of more than five years.

At present, nuclear engineers clean cobalt from the system by trapping it

in what are known as ion-exchange resins. These swap bits of themselves

for ions in the water flowing over them. Unfortunately, the ion-exchange

technique traps many more non-radioactive iron ions than radioactive

cobalt ones.

To overcome that problem Drs Sellergren and Narasimhan have developed a

polymer that binds to cobalt while ignoring iron. They made the material

using a technique called molecular imprinting, which involves making the

polymer in the presence of cobalt ions, and then extracting those ions by

dissolving them in hydrochloric acid. The resulting cobalt-sized holes

tend to trap any cobalt ions that blunder into them, with the result that

a small amount of the polymer can mop up a lot of radioactive cobalt.

The team is now forming the new polymer into small beads that can pass

through the cooling systems of nuclear power-stations. Concentrating

radioactivity into such beads for disposal would be cheaper than trying to

get rid of large volumes of low-level radioactive waste, according to Dr Narasimhan. He thinks that the new polymer could also be used to

decontaminate decommissioned nuclear power-stations where residual

radioactive cobalt in pipes remains a problem.

Nuclear power is undergoing a renaissance. Some 40 new nuclear power-

stations are being built around the world. The International Atomic Energy

Agency estimates that a further 70 will be built over the next 15 years,

most of them in Asia. That is in addition to the 439 reactors which are

already operating. So there will be plenty of work for the plastic beads,

if Drs Sellergren and Narasimhan can industrialise their process.

Air power
Sep 3rd 2009
From The Economist print edition

Energy: Batteries that draw oxygen from the air could provide a cheaper,

lighter and longer-lasting alternative to existing designs

Illustration by Belle Mellor

MOBILE phones looked like bricks in the 1980s. That was largely because

the batteries needed to power them were so hefty. When lithium-ion

batteries were invented, mobile phones became small enough to be slipped

into a pocket. Now a new design of battery, which uses oxygen from ambient

air to power devices, could provide an even smaller and lighter source of

power. Not only that, such batteries would be cheaper and would run for

longer between charges.

Lithium-ion batteries have two electrodes immersed in an electrically

conductive solution, called an electrolyte. One of the electrodes, the

cathode, is made of lithium cobalt oxide; the other, the anode, is

composed of carbon. When the battery is being charged, positively charged

lithium ions break away from the cathode and travel in the electrolyte to

the anode, where they meet electrons brought there by a charging device.

When electricity is needed, the anode releases the lithium ions, which

rapidly move back to the cathode. As they do so, the electrons that were

paired with them in the anode during the charging process are released.

These electrons power an external circuit.


Peter Bruce and his colleagues at the University of St Andrews in Scotland

came up with the idea of replacing the lithium cobalt oxide electrode with

a cheaper and lighter alternative. They designed an electrode made from

porous carbon and lithium oxide. They knew that lithium oxide forms

naturally from lithium ions, electrons and oxygen, but, to their surprise,

they found that it could also be made to separate easily when an electric

current passed through it. They exposed one side of their porous carbon

electrode to an electrolyte rich in lithium ions and put a mesh window on

the other side of the electrode through which air could be drawn. Oxygen

from the air took the place of the cobalt oxide.

When they charged their battery, the lithium ions migrated to the anode

where they combined with electrons from the charging device. When they

discharged it, lithium ions and electrons were released from the anode.

The ions crossed the electrolyte and the electrons travelled round the

external circuit. The ions and electrons met at the cathode, and combined

with the oxygen to form lithium oxide that filled the pores in the carbon.
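The article identifies lithium oxide as the discharge product. On that description, the electrode reactions during discharge can be written as follows (a textbook-style rendering, not taken from the St Andrews work; in practice lithium peroxide, Li2O2, is also a common product in such cells):

\[ \text{anode (discharge):}\quad \mathrm{Li \;\rightarrow\; Li^{+} + e^{-}} \]
\[ \text{cathode (discharge):}\quad \mathrm{4\,Li^{+} + 4\,e^{-} + O_{2} \;\rightarrow\; 2\,Li_{2}O} \]

Charging runs both reactions in reverse, releasing the oxygen back into the air.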

Because the oxygen being used by the battery comes from the surrounding

air, the device that Dr Bruce’s team has designed can be a mere one-eighth

to one-tenth the size and weight of modern batteries, while still carrying

the same charge. Making such a battery is also expected to be cheaper.

Lithium cobalt oxide accounts for 30% of the cost of a lithium-ion

battery. Air, however, is free.


--

The taxonomy of tumours
Sep 3rd 2009
From The Economist print edition

Medicine: A new technique aims to measure the activity of a tumour, and

could also help provide a new way to classify cancers

ONCOLOGISTS would like to be able to classify cancers not by whereabouts

in the body they occur, but by their molecular origin. They know that

certain molecules become active in tumours found in certain parts of the

body. Both head-and-neck cancers and breast cancers, for example, have an

abundance of molecules called epidermal growth-factor receptors (EGFRs).

Now a team from Cancer Research UK’s London Research Institute has taken a

step towards this goal. Their technique can already identify how advanced

a person’s cancer is, and thus how likely it is to return after treatment.

At present, pathologists assess how advanced a cancer is by taking a

sample, known as a biopsy, and examining the concentration within it of

specific receptors, such as EGFRs, that are known to help cancers spread.

Peter Parker had the idea of employing a technique called fluorescence

resonance-energy transfer (FRET), which is used to study interactions

between individual protein molecules, to see if he could find out not only

how many receptors there are in a biopsy, but also how active they are.


The technique uses two types of antibody, each attached to a fluorescent

dye molecule. Each of the two types is selected to fuse with a different

part of an EGFR molecule, but one will do so only when the receptor has

become active.

Pointing a laser at the sample causes the first dye to become excited and

emit energy. With an activated receptor, the second dye will be attached

nearby and so will absorb some of the energy given off by the first.

Measuring how much energy is transferred between the two dyes indicates

the activity of the receptors.
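The article does not give the quantitative relationship, but the standard Förster expression (a textbook result, not specific to this work) shows why the measurement reports proximity so sharply. The efficiency E of energy transfer between the two dyes depends on their separation r and a characteristic Förster radius R0 of a few nanometres:

\[ E = \frac{1}{1 + (r/R_{0})^{6}} \]

Because E falls off with the sixth power of distance, appreciable transfer occurs only when both antibodies sit on the same activated receptor.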

Dr Parker’s idea was implemented by his colleague Banafshe Larijani. She

and her colleagues used FRET to measure the activity of receptors in 122

head-and-neck cancers. They found that the higher the activity of the

receptors they examined, the more likely it was the cancers would return

quickly following treatment. The technique was found to be a better

prognostic tool than conventional visual analysis of receptor density.

To speed things up, engineers in the same group have now created an

instrument that automates the analysis. Tumour biopsies are placed on a

microscope slide and stained with antibodies. The system then points the

laser at the samples, records images of the resulting energy transfer and

interprets those images to provide FRET scores. Results are available in

as little as an hour, compared with four or five days using standard

methods.

Having established the principle with head-and-neck cancer, the team hopes

to extend it. They are beginning a large-scale trial to see whether FRET

can accurately “hindcast” the clinical outcomes associated with 2,000

breast-cancer biopsies. Moreover, if patterns of receptor-activation for

other types of cancers can be characterised, the technique could be

applied to all solid tumours (ie, cancers other than leukaemias and

lymphomas).

If they succeed, it will be good news for researchers who want to switch

from classifying cancers anatomically to classifying them biochemically.

Most cancer specialists think that patients with tumours in different

parts of the body that are triggered by the same genetic mutations may

have more in common than those whose tumours are in the same organ, but

have been caused by different mutations. The new approach could help make

such classification routine. That could, in turn, create a new generation

of therapies and help doctors decide which patients should receive them,

and in which combinations and doses.

---
The digital geographers
Sep 3rd 2009
From The Economist print edition

The internet: Detailed digital maps of the world are in widespread use.

They are compiled using both high-tech and low-tech methods

IT IS a damp, overcast Monday morning in Watford, an undistinguished town

north of London that seems to offer little to the casual visitor. But one

man is eagerly snapping photographs. In fact, he is working with six

high-resolution cameras, all of which are attached to the roof of the car

in which he is being driven. He sits in the passenger seat with a keyboard

on his lap, tapping occasionally and muttering into a microphone. A

computer screen built into the dashboard shows the car’s progress as a

luminous dot travelling across a map of the town. The man is a geographic

analyst for NAVTEQ, one of a small group of companies that are creating

new, digital maps of the world.

Each keystroke he makes denotes a feature in the outside world that is

added to the map displayed on the screen. New details are also recorded in

audio form. Once the journey is finished, the analyst can also pick out

new details while watching a video playback. All this information is

transferred from a server in the car’s boot to NAVTEQ’s database.


Companies such as NAVTEQ and its rivals, which include Tele Atlas and

Microsoft, always start a new map by going to trusted sources such as

local governments or mapping organisations. This information can be

corroborated using aerial or satellite photography. Only when these

sources are exhausted do they switch to the more expensive process of

gathering data themselves. The digital maps they create are used mostly by

motorists in rich countries. But the same companies are now creating maps

of the developing world, which is requiring them to do things in somewhat

different ways.

A geographic analyst in India would probably have deserted his vehicle,

finding it impractical to manoeuvre on the country’s crowded urban

streets. Instead, he would go on foot and use a pen to annotate a map

printed on paper, a technique abandoned by his Western counterparts a

decade ago. Official mapmaking in some poor countries is far from

comprehensive, leaving the likes of NAVTEQ or Tele Atlas to generate the

most accurate maps available.

The type of data that must be gathered also varies. Navigation in wealthy

Western markets generally requires gathering the information that is of

most interest to motorists. But lower levels of car ownership in poor countries make such information less relevant. Instead, the proliferation

of mobile phones in countries such as China or India, many of which

incorporate satellite-positioning chips, may make pedestrian navigation

more relevant for local customers. Mapmakers are more likely to spend time

hanging around bus stations collecting timetables, or finding the quickest

route, which is not always the most direct one, from a city’s railway

station to its main shopping street. All this information has to be

constantly refreshed, sometimes several times a year.

To reduce the cost of sending staff on such reconnaissance trips, mapping

companies are asking their customers to do more of the work. Tele Atlas,

for example, gathers data from users of satellite-navigation systems made

by TomTom, a firm based in the Netherlands. Drivers can report errors and

suggest new features, or can agree to submit data passively: the TomTom

device automatically logs their vehicle’s position, leaving a trail where

it has travelled. It is then possible to calculate the vehicle’s direction

and speed, which can help identify the class of road on which it is

travelling. Altitude measurements mean the road’s gradient can be

determined. Other information can also be deduced. If a lot of cars all

seem to be driving across what was thought to be a ploughed field, for

example, then it is likely that a new road has been built. Such detective

work keeps the company’s mapping database up to date.
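TomTom and Tele Atlas have not published their processing pipeline, but the basic arithmetic of turning an anonymous position trail into a speed, a heading and a guess at road class is straightforward. A minimal Python sketch, with invented coordinates and thresholds:

# Illustrative sketch: derive speed and heading from two consecutive,
# anonymised GPS fixes, as a passive probe-data system might. The data
# format and the road-class thresholds are invented for illustration.
import math

EARTH_RADIUS_M = 6371000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from the first fix to the second, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def classify_speed(kmh):
    """Crude guess at road class from observed speed (thresholds invented)."""
    if kmh > 90:
        return "motorway"
    if kmh > 50:
        return "main road"
    return "local road"

# Two fixes ten seconds apart (illustrative coordinates near Amsterdam).
fix_a = (52.3700, 4.8900, 0.0)   # latitude, longitude, time in seconds
fix_b = (52.3712, 4.8950, 10.0)

dist = haversine_m(fix_a[0], fix_a[1], fix_b[0], fix_b[1])
speed_kmh = dist / (fix_b[2] - fix_a[2]) * 3.6
print(round(speed_kmh), "km/h heading",
      round(bearing_deg(fix_a[0], fix_a[1], fix_b[0], fix_b[1])), "degrees ->",
      classify_speed(speed_kmh))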

In some parts of the world, however, mapmaking relies heavily on voluntary

contributions. Google’s Map Maker service, for example, makes up for the

lack of map data for much of the world by asking volunteers to provide it.

Among its contributors is Tim Akinbo, a Nigerian software developer who

got involved with the project last year. He has mapped recognisable

features in Lagos, where he lives, as well as his home town of Jos.

Churches, banks, office buildings and cinemas all feature on his map.

His working method is relatively simple. His mobile phone does not have

satellite positioning, but he can use it to call up Google Maps, see what

is on the map in a particular area and make a note of things to add. He

then goes online when he gets home to add new features.

Why should people freely give up their time to improve local maps? Mr

Akinbo explains that local businesses could use Map Maker to alert

potential customers to their existence. “They will be contributing to a

tool from which other people can benefit, as well as themselves,” he

explains. With enough volunteers a useful map can be created without the

need for fancy camera-toting cars.
--

Washing without water
Sep 3rd 2009
From The Economist print edition

Environment: A washing machine uses thousands of nylon beads, and just a

cup of water, to provide a greener way to do the laundry

Xeros

Water? Who needs it?
SYNTHETIC fibres tend to make low-quality clothing. But one of the

properties that makes nylon a poor choice of fabric for a shirt, namely

its ability to attract and retain dirt and stains, is being exploited by a

company that has developed a new laundry system. Its machine uses no more

than a cup of water to wash each load of fabrics and uses much less energy

than conventional devices.

The system developed by Xeros, a spin-off from the University of Leeds, in

England, uses thousands of tiny nylon beads each measuring a few

millimetres across. These are placed inside the smaller of two concentric

drums along with the dirty laundry, a squirt of detergent and a little

water. As the drums rotate, the water wets the clothes and the detergent

gets to work loosening the dirt. Then the nylon beads mop it up.

The crystalline structure of the beads endows the surface of each with an

electrical charge that attracts dirt. When the beads are heated in humid

conditions to the temperature at which they switch from a crystalline to

an amorphous structure, the dirt is drawn into the core of the bead, where

it remains locked in place.


The inner drum, containing the clothes and the beads, has a small slot in

it. At the end of the washing cycle, the outer drum is halted and the

beads fall through the slot; some 99.95% of them are collected.

Because so little water is used and the warm beads help dry the laundry,

less tumble drying is needed. An environmental consultancy commissioned by

Xeros to test its system reckoned that its carbon footprint was 40% smaller than that of the most efficient existing systems for washing and drying

laundry.

The first machines to be built by Xeros will be aimed at commercial

cleaners and designed to take loads of up to 20 kilograms. Customers will

still be able to use the same stain treatments, bleaches and fragrances

that they use with traditional laundry systems. Nylon may be nasty to

wear, but it scrubs up well inside a washing machine.

--

Hard act to follow
Sep 3rd 2009
From The Economist print edition

Environment: Making softwoods more durable could reduce the demand for

unsustainably logged tropical hardwoods

Kebony ASA

Kebony’s product is furfuryl
ONE of the reasons tropical forests are being cut down so rapidly is

demand for the hardwoods, such as teak, that grow there. Hardwoods, as

their name suggests, tend to be denser and more durable than softwoods.

But unsustainable logging of hardwoods destroys not only forests but also

local creatures and the future prospects of the people who live there.

It would be better to use softwood, which grows in cooler climes in

sustainably managed forests. Softwoods are fast-growing coniferous species

that account for 80% of the world’s timber. But the stuff is not durable

enough to be used outdoors without being treated with toxic preservatives

to protect it against fungi and insect pests. These chemicals eventually

wash out into streams and rivers, and the wood must be retreated.

Moreover, at the end of its life, wood that has been treated with

preservatives in this way needs to be disposed of carefully.

One way out of this problem would be an environmentally friendly way of

making softwood harder and more durable—something that a Norwegian company

called Kebony has now achieved. It opened its first factory in January.


Kebony stops wood from rotting by placing it in a vat containing a

substance called furfuryl alcohol, which is made from the waste left over

when sugarcane is processed. The vat is then pressurised, forcing the

liquid into the wood. Next the wood is dried and heated to 110°C. The heat

transforms the liquid into a resin, which makes the cell walls of the wood

thicker and stronger.

The approach is similar to that of a firm based in the Netherlands called

Titan Wood. Timber swells when it is damp and shrinks when it is dry

because it contains groups of atoms called hydroxyl groups, which absorb

and release water. Titan Wood has developed a technique for converting

hydroxyl groups into acetyl groups (a different combination of atoms) by

first drying the wood in a kiln and then treating it with a chemical

called acetic anhydride. The result is a wood that retains its shape in

the presence of water, and is no longer recognised as wood by grubs that

would otherwise attack it. It is thus extremely durable.

The products made by both companies are completely recyclable and environmentally friendly, and the treated woods are actually harder than most tropical hardwoods. The strengthened softwoods can be used in

everything from window frames to spas to garden furniture. Treated maple

is also being adopted for decking on yachts. The cost is similar to that

of teak, but the maple is more durable and easier to keep clean.

Obviously treating wood makes it more expensive. But because it does not

need to receive further treatments—a shed made from treated wood would not

need regular applications of creosote, for example—it should prove

economical over its lifetime. Kebony reckons that its pine cladding, for

example, would cost a third less than conventionally treated pine cladding

over the course of 40 years. Saving money, then, need not be at the

expense of helping save the planet.

--
Memories are made of this
Sep 3rd 2009
From The Economist print edition

Computing: Memory chips based on nanotubes and iron particles might be

capable of storing data for a billion years

FEW human records survive for long, the 16,000-year-old Paleolithic cave

paintings at Lascaux, France, being one exception. Now researchers led by

Alex Zettl of the University of California, Berkeley, have devised a

method that will, they reckon, let people store information electronically

for a billion years.

Dr Zettl and his colleagues constructed their memory cell by taking a

particle of iron just a few billionths of a metre (nanometres) across and

placing it inside a hollow carbon nanotube. They attached electrodes to

either end of the tube. By applying a current, they were able to shuttle

the particle back and forth. This provides a mechanism to create the “1”

and “0” required for digital representation: if the particle is at one end

it counts as a “1”, and at the other end it is a “0”.


The next challenge was to read this electronic information. The

researchers found that when electrons flowed through the tube, they

scattered when they came close to the particle. The particle’s position

thus altered the nanotube’s electrical resistance on a local scale.

Although they were unable to discover exactly how this happens, they were

able to use the effect to read the stored information.

What makes the technique so durable is that the particle’s repeated

movement does not damage the walls of the tube. That is not only because

the lining of the tube is so hard; it is also because friction is almost

negligible when working at such small scales.

Theoretical studies suggest that the system should retain information for

a long time. To switch spontaneously from a “1” to a “0” would entail the

particle moving some 200 nanometres along the tube using thermal energy.

At room temperature, the odds of that happening are once in a billion

years. In tests, the stored digital information was found to be remarkably

stable. Yet the distance between the ends of the tube remains small enough

to allow for speedy reading and writing of the memory cell when it is in

use.
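The article does not show the underlying estimate, but the standard thermal-activation picture gives a feel for the numbers (an illustrative calculation, not the Berkeley team's own). The rate of spontaneous flips is roughly

\[ \Gamma \approx \nu_{0}\, e^{-\Delta E / k_{B}T} \]

where ν0 is an attempt frequency and ΔE the energy barrier to moving the particle. At room temperature kBT is about 0.025 eV, and keeping Γ below one event per billion years (roughly 3×10^16 seconds) requires a barrier of around 60 times kBT, or some 1.5 eV, for attempt frequencies in the 10^9 to 10^13 Hz range.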

The next challenge will be to create an electronic memory that has

millions of cells instead of just one. But if Dr Zettl succeeds in

commercialising this technology, digital decay itself could become a thing

of the past.

--
Only humans allowed
Sep 3rd 2009
From The Economist print edition

Computing: Can online puzzles that force internet users to prove that they

really are human be kept secure from attackers?

Illustration by Belle Mellor

ON THE internet, goes the old joke, nobody knows you’re a dog. This is

untrue, of course. There are many situations where internet users are

required to prove that they are human—not because they might be dogs, but

because they might be nefarious pieces of software trying to gain access

to things. That is why, when you try to post a message on a blog, sign up

with a new website or make a purchase online, you will often be asked to

examine an image of mangled text and type the letters into a box. Because

humans are much better at pattern recognition than software, these online

puzzles—called CAPTCHAs—can help prevent spammers from using software to

automate the creation of large numbers of bogus e-mail accounts, for

example.

Unlike a user login, which proves a specific identity, CAPTCHAs merely

show that “there’s really a human on the other end”, says Luis von Ahn, a

computer scientist at Carnegie Mellon University and one of the people

responsible for the ubiquity of these puzzles. Together with Manuel Blum,

Nicholas J. Hopper and John Langford, Dr von Ahn coined the term CAPTCHA

(which stands for “completely automated public Turing test to tell

computers and humans apart”) in a paper published in 2000.


But how secure are CAPTCHAs? Spammers stepped up their efforts to automate

the solving of CAPTCHAs last year, and in recent months a series of cracks

have prompted both Microsoft and Google to tweak the CAPTCHA systems that

protect their web-based mail services. “We modify our CAPTCHAs when we

detect new abuse trends,” says Macduff Hughes, engineering director at

Google. Jeff Yan, a computer scientist at Newcastle University, is one of

many researchers interested in cracking CAPTCHAs. Since the bad guys are

already doing it, he told a spam-fighting conference in Amsterdam in June,

the good guys should do it too, in order to develop more secure designs.

That CAPTCHAs work at all illuminates a failing in artificial-intelligence

research, says Henry Baird, a computer scientist at Lehigh University in

Pennsylvania and an expert in the design of text-recognition systems.

Reading mangled text is an everyday skill for most people, yet machines

still find it difficult.

The human ability to recognise text as it becomes more and more distorted

is remarkably resilient, says Gordon Legge at the University of Minnesota.

He is a researcher in the field of psychophysics—the study of the

perception of stimuli. But there is a limit. Just try reading small text

in poor light, or flicking through an early issue of Wired. “You hit a

point quite close to your acuity limit and suddenly your performance

crashes,” says Dr Legge. This means designers of CAPTCHAs cannot simply

increase the amount of distortion to foil attackers. Instead they must

mangle text in new ways when attackers figure out how to cope with

existing distortions.

Mr Hughes, along with many others in the field, thinks the lifespan of

text-based CAPTCHAs is limited. Dr von Ahn thinks it will be possible for

software to break text CAPTCHAs most of the time within five years. A new

way to verify that internet users are indeed human will then be needed.

But if CAPTCHAs are broken it might not be a bad thing, because it would

signal a breakthrough in machine vision that would, for example, make

automated book-scanners far more accurate.

CAPTCHA me if you can
Looking at things the other way around, a CAPTCHA system based on words

that machines cannot read ought to be uncrackable. And that does indeed

seem to be the case for ReCAPTCHA, a system launched by Dr von Ahn and his

colleagues two years ago. It derives its source materials from the

scanning in of old books and newspapers, many of them from the 19th

century. The scanners regularly encounter difficult words (those for which

two different character-recognition algorithms produce different

transliterations). Such words are used to generate a CAPTCHA by combining

them with a known word, skewing the image and adding extra lines to make

the words harder to read. The image is then presented as a CAPTCHA in the

usual way.

If the known word is entered correctly, the unknown word is also assumed

to have been typed in correctly, and access is granted. Each unknown word

is presented as a CAPTCHA several times, to different users, to ensure

that it has been read correctly. As a result, people solving CAPTCHA

puzzles help with the digitisation of books and newspapers.
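The validation and voting logic just described is simple enough to sketch. The Python below is an invented illustration of that logic, not the actual ReCAPTCHA code; the threshold of three agreeing answers and all names are assumptions.

# Minimal sketch of the validation and voting logic described above. All
# names, thresholds and data structures are invented; the real service is
# far more elaborate.
from collections import Counter, defaultdict

VOTES_NEEDED = 3               # agreeing answers before a word is deemed digitised
votes = defaultdict(Counter)   # unknown-word id -> tally of submitted answers
digitised = {}                 # unknown-word id -> accepted transcription

def check_answer(known_word, unknown_id, typed_known, typed_unknown):
    """Grant access if the control word is right; record a vote for the
    unknown word and accept its transcription once enough users agree."""
    if typed_known.strip().lower() != known_word.lower():
        return False                      # control word wrong: deny access

    answer = typed_unknown.strip().lower()
    votes[unknown_id][answer] += 1
    best, count = votes[unknown_id].most_common(1)[0]
    if count >= VOTES_NEEDED and unknown_id not in digitised:
        digitised[unknown_id] = best      # the crowd has read the word
    return True                           # probably human: grant access

# Four users solve the same challenge; access depends only on the control word.
for typed in ("societie", "societies", "societies", "societies"):
    print(check_answer("upon", "scan-0042", "upon", typed))
print(digitised)   # {'scan-0042': 'societies'} once three answers agree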

Even better, the system has proved to be far better at resisting attacks

than other types of CAPTCHA. “ReCAPTCHA is virtually immune by design,

since it selects words that have resisted the best text-recognition

algorithms available,” says John Douceur, a member of a team at Microsoft

that has built a CAPTCHA-like system called Asirra. The ReCAPTCHA team has

a member whose sole job is to break the system, says Dr von Ahn, and so

far he has been unsuccessful. Whenever the in-house attacker appears to be

making progress, the team responds by adding new distortions to the

puzzles.

Even so, researchers are already looking beyond text-based CAPTCHAs. Dr

von Ahn’s team has devised two image-based schemes, called SQUIGL-PIX and

ESP-PIX, which rely on the human ability to recognise particular elements

of images. Microsoft’s Asirra system presents users with images of several

dogs and cats and asks them to identify just the dogs or cats. Google has

a scheme in which the user must rotate an image of an object (a teapot,

say) to make it the right way up. This is easy for a human, but not for a

computer.

The biggest flaw with all CAPTCHA systems is that they are, by definition,

susceptible to attack by humans who are paid to solve them. Teams of

people based in developing countries can be hired online for $3 per 1,000

CAPTCHAs solved. Several forums exist both to offer such services and

parcel out jobs. But not all attackers are willing to pay even this small

sum; whether it is worth doing so depends on how much revenue their

activities bring in. “If the benefit a spammer is getting from obtaining

an e-mail account is less than $3 per 1,000, then CAPTCHA is doing a

perfect job,” says Dr von Ahn.
--

The road ahead
Sep 3rd 2009
From The Economist print edition

Consumer electronics: Your next satellite-navigation device will be less

bossy and more understanding of your driving preferences

Illustration by Allan Sanders

DO YOU get a quiet sense of satisfaction in deviating from the route

recommended by your satellite-navigation device and ignoring its bossy

voice as it demands that you “make a U-turn” or “turn around when

possible”? A satnav’s encyclopedic knowledge of the road network may

justify its hectoring tone most of the time, but sometimes you really do

know better. The motorway might look like the fastest way but it can be a

nightmare at this time of the day; taking a country lane or a nifty

shortcut can avoid a nasty turn into heavy traffic; or sometimes the

chosen route is simply too boring.

Fortunately your next satnav will be more understanding, because it will

allow a greater level of personalisation. It may well, for example, try to

learn your motoring foibles, such as your favourite route into town. This

is just one of the features being readied for inclusion in the next

generation of devices. If you want them to, they will help you drive more

economically by offering the route that requires the least fuel, or

provide tips on how to adjust your driving style to be more frugal. Access

to real-time traffic information will also become more widespread.

Avoiding hold-ups is the most effective way a satnav can help a driver

save both time and fuel, and devices are getting better at doing this. By

taking data from special FM radio signals or via a built-in cellular-data

connection, satnavs can factor current traffic conditions into their route calculations. The actual traffic data can come from a variety of

sources including traffic sensors, the anonymous monitoring of mobile

phones moving along stretches of road and information collected (also

anonymously) from satnavs in other vehicles. Access to real-time data will

generally mean paying for a subscription, but it turns a navigation device

into a live information system. This makes it useful not just when you do

not know where you are going but also on familiar journeys, when you want

to know which of several possible routes you should take.

The classic motorway dilemma provides an example. An overhead sign gives

warning of an accident ahead. You could turn off now but you might then

get stuck in a busy town because so many other drivers are following the

same alternative route. Or you could stay on the motorway in the hope that

the tailback will soon clear—only to find that it has got worse. A satnav

that knows the average speeds on particular roads at different times of

the day, as many now do, does a good job of predicting which route is the

fastest under normal circumstances. But one that can also use real-time

data would be able to tell that the traffic on the alternative route, say,

is moving at a snail’s pace while vehicles near the site of the accident

are beginning to pick up speed, suggesting that the emergency services

have started clearing the road. So it could then advise you to stay on the

motorway.
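A connected satnav's decision in this example boils down to comparing expected travel times computed from live segment speeds rather than historical averages. The Python below is a toy illustration with invented figures, not any vendor's actual routing algorithm:

# Toy illustration of the motorway dilemma: pick the route with the lowest
# expected travel time, using live segment speeds where available and
# falling back to historical averages. All figures are invented.

routes = {
    "stay on motorway": [
        # (length in km, historical km/h, live km/h or None if no data)
        (3.0, 110, 15),    # tailback near the accident, but clearing
        (12.0, 110, 95),   # traffic picking up speed beyond it
    ],
    "leave at next exit": [
        (2.0, 60, 50),
        (8.0, 50, 12),     # alternative route jammed with diverted drivers
        (4.0, 60, None),   # no live data: use the historical average
    ],
}

def eta_minutes(segments):
    total_hours = 0.0
    for length_km, historical_kmh, live_kmh in segments:
        speed = live_kmh if live_kmh is not None else historical_kmh
        total_hours += length_km / speed
    return total_hours * 60

for name, segments in routes.items():
    print(f"{name}: {eta_minutes(segments):.0f} minutes")
print("advice:", min(routes, key=lambda r: eta_minutes(routes[r])))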


Keep on going
Journey planning using a satnav usually allows for a limited choice: you

can pick the fastest route, the shortest, the one that avoids motorways or

a route that passes through or avoids a particular point. Future devices

will learn about a driver’s preferences and adjust accordingly. MyDrive,

for example, is a piece of software developed by Journey Dynamics, a

British company, for satnav providers. It analyses the behaviour of an

individual driver on different types of road. Some people always prefer

motorways and drive quickly, others would much rather drive on local roads

and some like to keep moving even if that means a long detour around a

traffic jam. Understanding a driver’s foibles can ensure that the right

sort of route is chosen, and can also double the accuracy of the predicted

time of arrival, says John Holland, the company’s chief executive.
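Journey Dynamics has not disclosed how MyDrive works, but the general idea of biasing route choice towards a driver's observed habits can be sketched as follows (everything here, from the weights to the road classes, is invented for illustration):

# Invented sketch of preference-weighted routing: learn how much of a
# driver's past mileage was on each road class, then bias route choice
# towards familiar road types as well as predicted time. Not MyDrive's
# actual method, which is not public.
from collections import Counter

def learn_preferences(trip_log):
    """trip_log: list of (road_class, km driven). Returns each road class's
    share of total mileage, used as a preference weight."""
    km = Counter()
    for road_class, distance in trip_log:
        km[road_class] += distance
    total = sum(km.values())
    return {cls: dist / total for cls, dist in km.items()}

def route_score(route, prefs, familiarity_weight=20.0):
    """Lower is better: predicted minutes minus a bonus for familiar roads."""
    minutes, road_mix = route     # road_mix: {road_class: share of distance}
    familiarity = sum(prefs.get(cls, 0.0) * share for cls, share in road_mix.items())
    return minutes - familiarity_weight * familiarity

history = [("local", 6), ("local", 4), ("motorway", 10), ("local", 8)]
prefs = learn_preferences(history)    # roughly {'local': 0.64, 'motorway': 0.36}

candidates = {
    "via motorway": (22, {"motorway": 0.8, "local": 0.2}),
    "via town":     (26, {"local": 1.0}),
}
best = min(candidates, key=lambda name: route_score(candidates[name], prefs))
print(prefs, "->", best)   # this driver's habits tip the choice towards town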

Satnavs with built-in data connections are also becoming more widespread,

making other new things possible. TomTom, which is based in the

Netherlands, lets users of its systems update maps and add points of

interest. With two-way communication, satnavs no longer have to be taken

out of the car and plugged into a computer to update their maps. “The

screen becomes a connected computer in the car,” says Mark Gretton,

TomTom’s chief technology officer. He expects other companies to develop

software that can be downloaded by satnavs, just as small programs, or

apps, can be added to mobile phones.

Another trend is towards greater integration between the satnav and the

car’s other systems. Bosch, a German car-component company, is working on

a satnav that can give warning of a sharp bend ahead, for example. If the

car is being driven too fast, it can prepare the brakes to slow the

vehicle swiftly when the driver realises—or pretension the seat belts if

he does not.

But such features are only possible with built-in satnav systems. These

can be far more convenient than portable units, but they also tend to be

much more expensive. Portable devices cost less and are easier to update,

but they often get stolen from cars. The distinction may be starting to

blur, however. Portable satnavs that plug into vehicle-information systems

are starting to appear. And TomTom has done a deal that allows its devices

to be specified as the built-in satnav in Renault cars.

All these innovations should give drivers more choice and flexibility.

There is still plenty of scope, it seems, for satnavs to learn new tricks.

--
Reality, improved
Sep 3rd 2009
From The Economist print edition

Computing: Thanks to mobile phones, augmented reality could be far more

accessible—and useful—than virtual reality

Nokia

VIRTUAL reality never quite lived up to the hype. In the 1990s films such

as “Lawnmower Man” and “The Matrix” depicted computer-generated worlds in

which people could completely immerse themselves. In some respects this

technology has become widespread: think of all those video-game consoles

capable of depicting vivid, photorealistic environments, for example. What

is missing, however, is a convincing sense of immersion. Virtual reality

(VR) doesn’t feel like reality.

One way to address this is to use fancy peripherals—gloves, helmets and so

forth—to make immersion in a virtual world seem more realistic. But there

is another approach: that taken by VR’s sibling, augmented reality (AR).

Rather than trying to create an entirely simulated environment, AR starts

with reality itself and then augments it. “In augmented reality you are

overlaying digital information on top of the real world,” says Jyri

Huopaniemi, director of the Nokia Research Centre in Tampere, Finland.

Using a display, such as the screen of a mobile phone, you see a live view

of the world around you—but with digital annotations, graphics and other

information superimposed upon it.


The data can be as simple as the names of the mountains visible from a

high peak, or the names of the buildings visible on a city skyline. At a

historical site, AR could superimpose images showing how buildings used to

look. On a busy street, AR could help you choose a restaurant: wave your

phone around and read the reviews that pop up. In essence, AR provides a

way to blend the wealth of data available online with the physical world—

or, as Dr Huopaniemi puts it, to build a bridge between the real and the

virtual.

AR, me hearties
It all sounds rather distant and futuristic. The idea of AR has, in fact,

been around for a few years without making much progress. But the field

has recently been energised by the ability to implement AR using advanced

mobile handsets, rather than expensive, specialist equipment. Several AR

applications are already available. Wikitude, an AR travel-guide

application developed for Google’s Android G1 handset, has already been

downloaded by 125,000 people. Layar is a general-purpose AR browser that

also runs on Android-powered phones. Nearest Tube, an AR application for

Apple’s iPhone 3GS handset, can direct you in London to the nearest

Underground station. Nokia’s “mobile augmented reality applications”

(MARA) software is being tested by staff at the world’s largest handset-

maker, with a public launch imminent.

What has made all this possible is the emergence of mobile phones equipped

with satellite-positioning (GPS) functions, tilt sensors, cameras, fast

internet connectivity and, crucially, a digital compass. This last item is

vital, and until recently it was the one bit of hardware that was missing

from the iPhone, says Philipp Breuss-Schneeweis of Mobilizy, the Austrian

software house which developed Wikitude. (A compass is standard on the

Android G1 handset.) But the launch of the compass-equipped iPhone 3GS

handset in June is expected to trigger a deluge of AR apps.

The combination of GPS, tilt sensors and a compass enables a handset to

determine where it is, its orientation relative to the ground, and which

direction it is being pointed in. The camera allows it to see the world,

and the wireless-internet link allows it to retrieve information relating

to its surroundings, which is combined with the live view from the camera

and displayed on the screen. All this is actually quite simple, says Mr

Breuss-Schneeweis. In the case of Wikitude, the AR software works out the

longitudes and latitudes of objects in the camera’s field of view so that

they can be tagged accordingly, he says.
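Mobilizy has not published Wikitude's code, but the underlying geometry is simple: compute the compass bearing from the phone to each point of interest, compare it with the handset's heading, and place the label within the camera's field of view. A minimal Python sketch with invented coordinates and screen parameters:

# Illustrative sketch of the basic AR geometry: given the phone's position,
# compass heading and camera field of view, decide whether a point of
# interest is in view and where (horizontally) to draw its label. The
# details of Wikitude's real implementation are not public.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Compass bearing from the phone to the point of interest, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def label_x(phone_lat, phone_lon, heading_deg, poi_lat, poi_lon,
            fov_deg=60.0, screen_width_px=480):
    """Return the horizontal pixel position for the POI's label, or None if
    it lies outside the camera's field of view."""
    offset = (bearing_deg(phone_lat, phone_lon, poi_lat, poi_lon)
              - heading_deg + 180) % 360 - 180      # signed angle, -180..180
    if abs(offset) > fov_deg / 2:
        return None
    return round((offset / fov_deg + 0.5) * screen_width_px)

# Phone in central Paris, pointing roughly west-north-west towards the
# Eiffel Tower; prints a pixel column on the right-hand half of the screen.
print(label_x(48.8570, 2.3000, 280.0, 48.8584, 2.2945))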

Precisely which items in the real world are labelled varies from one AR

application to another. Wikitude, as its name implies, draws information

from Wikipedia, the online encyclopedia, by scouring it for entries that

list a longitude and latitude—which includes everything from the Lincoln

Memorial to the Louvre. Using the application a tourist can stroll through

the streets of a city and view the names of the landmarks in the vicinity.

The full Wikipedia entry on any landmark can then be summoned with a

click. There are 600,000 Wikipedia entries that include longitude and

latitude co-ordinates, says Mr Breuss-Schneeweis, and the number is

increasing all the time.

Another way to identify nearby landmarks is to draw upon existing

databases, such as those used in satellite navigation systems. That is how

Nokia’s MARA system works. It is doubly clever because harvesting local

points of interest from the NAVTEQ software built into many Nokia phones

means no wireless-internet connection is needed to look them up.

However it is done, the result of both approaches is to present detailed

information about the user’s surroundings. That said, the precision of the

tagging can vary somewhat, because satellite-positioning technology is

only accurate to within a few metres at best. This can cause problems when

standing very close to a landmark. “The farther you are away from the

buildings the more accurate it seems to be,” says Mr Breuss-Schneeweis.

But there is a way to improve the accuracy of AR tagging at close

quarters. Total Immersion, a firm based in Paris, is one of several

companies using object recognition. By looking for a known object in the

camera’s field of view, and then analysing that object’s position and

orientation, it can seamlessly overlay graphics so that they appear in the

appropriate position relative to the object in question.

Together with Alcatel-Lucent, a telecoms-equipment firm, Total Immersion

is developing a mobile-phone service that allows users to point their

phone’s camera at an object, such as the Mona Lisa. The software

recognises the object and automatically retrieves related information,

such as a video about Leonardo da Vinci. The same approach will also allow

advertisements in newspapers and on billboards to be augmented, too. Point

your camera at a poster of a car, for example, and you might see a 3-D

rendering of the vehicle floating in space, which can be viewed from any

angle by moving around.

Recognise this
The simplest way to make all this work, says Greg Davis of Total

Immersion, is to put 2-D bar-codes on posters and advertisements, which

are detected and used to retrieve content which is then superimposed on

the device’s screen. But the trend is towards “markerless” tracking, where

image recognition is used to identify targets. Putting a 2-D bar-code on

the Mona Lisa, after all, is not an option.

Nokia’s Point-and-Find software uses the markerless approach. It is a

mobile-phone application, currently in development, that lets you point

your phone at a film poster in order to call up local viewing times and

book tickets. In theory this approach should also be able to recognise

buildings and landmarks, such as the Eiffel Tower, although recognising 3-D objects is much more difficult than identifying static 2-D images, says

Mr Davis. The way forward may be to combine image-recognition with

satellite-positioning, to narrow down the possibilities when trying to

identify a nearby building. The advantage of the image-recognition

approach, says Mr Davis, is that graphics can be overlaid on something no

matter where it is, or how many times it gets moved.

One category of moving objects that should be easy to track is people, or

at least those carrying mobile phones. Information from social networks,

such as Facebook, can then be overlaid on the real world. Clearly there

are privacy concerns, but Latitude, a social-networking feature of Google

Maps, has tested the water by letting people share their locations with

their friends, on an opt-in basis. The next step is to let people hold up

their handsets to see the locations and statuses of their friends, says Dr Huopaniemi, who adds that Nokia is working on this very idea.

As well as being able to see what your friends are up to now, it can be

useful to see into the past. Nokia has developed an AR system called Image

Space which lets users record messages, photos and videos and tag them

with both place and time. When someone else goes to a particular location,

they can then scroll back through the messages that people have left in

the vicinity. More practically, Wikitude can also link virtual messages to

real places by overlaying user-generated reviews of bars, hotels and

restaurants from a website called Qype onto the establishments in

question.

T Mobile

Time for some strawberries, then
Other obvious uses for AR are turn-by-turn navigation, in which the route

to a particular destination is painted onto the world; house-hunting,

using AR to indicate which houses are for sale in a particular street; and

providing additional information at sporting events, such as biographies

of individual players and on-the-spot instant replays. Some of those

attending this year’s Wimbledon tennis tournament got a taste of things to

come with a special version of Wikitude, called Seer, developed for the

Android G1 handset in conjunction with IBM and Ogilvy, an advertising

agency. It could direct users to courts, restaurants and loos, provide

live updates from matches, and even show if there was a queue in the bar

or at the taxi rank.

These sorts of application really are just the beginning, says Dr

Huopaniemi. Virtual reality never really died, he says—it just divided

itself in two, with AR enhancing the real world by overlaying information

from the virtual realm, and VR becoming what he calls “augmented

virtuality”, in which real information is overlaid onto virtual worlds,

such as players’ names in video games. AR may be a relatively recent

arrival, but its potential is huge, he suggests. “It’s a very natural way

of exploring what’s around you.” But trying to imagine how it will be used

is like trying to forecast the future of the web in 1994. The building-

blocks of the technology have arrived and are starting to become more

widely available. Now it is up to programmers and users to decide how to

use them.
--
Attack of the drones
Sep 3rd 2009
From The Economist print edition

Military technology: Smaller and smarter unmanned aircraft are

transforming spying and redefining the idea of air power

Reuters

FIVE years ago, in the mountainous Afghan province of Baghlan, NATO

officials mounted a show of force for the local governor, Faqir Mamozai,

to emphasise their commitment to the region. As the governor and his

officials looked on, Jan van Hoof, a Dutch commander, called in a group of

F-16 fighter jets, which swooped over the city of Baghlan, their

thunderous afterburners engaged. This display of air power was, says Mr

van Hoof, an effective way to garner the respect of the local people. But

fighter jets are a limited and expensive resource. And in conflicts like

that in Afghanistan, they are no longer the most widespread form of air

power. The nature of air power, and the notion of air superiority, have

been transformed in the past few years by the rise of remote-controlled

drone aircraft, known in military jargon as “unmanned aerial vehicles”

(UAVs).

Drones are much less expensive to operate than manned warplanes. The cost

per flight-hour of Israel’s drone fleet, for example, is less than 5% of the cost of its fighter jets, says Antan Israeli, the commander of an Israeli

drone squadron. In the past two years the Israeli Defence Forces’ fleet of

UAVs has tripled in size. Mr Israeli says that “almost all” IDF ground

operations now have drone support.


Of course, small and comparatively slow UAVs are no match for fighter jets

when it comes to inspiring awe with roaring flyovers—or shooting down

enemy warplanes. Some drones, such as America’s Predator and Reaper, carry

missiles or bombs, though most do not. (Countries with “hunter-killer”

drones include America, Britain and Israel.) But drones have other

strengths that can be just as valuable. In particular, they are

unparalleled spies. Operating discreetly, they can intercept radio and

mobile-phone communications, and gather intelligence using video, radar,

thermal-imaging and other sensors. The data they gather can then be sent

instantly via wireless and satellite links to an operations room halfway

around the world—or to the hand-held devices of soldiers below. In

military jargon, troops without UAV support are “disadvantaged”.

The technology has been adopted at extraordinary speed. In 2003, the year

the American-led coalition defeated Saddam Hussein’s armed forces,

America’s military logged a total of roughly 35,000 UAV flight-hours in

Iraq and Afghanistan. Last year the tally reached 800,000 hours. And even

that figure is an underestimate, because it does not include the flights

of small drones, which have proliferated rapidly in recent years. (America

alone is thought to have over 5,000 of them.) These robots, typically

launched by foot soldiers with a catapult, slingshot or hand toss, far

outnumber their larger kin, which are the size of piloted aeroplanes.

Global sales of UAVs this year are expected to increase by more than 10%

over last year to exceed $4.7 billion, according to Visiongain, a market-

research firm based in London. It estimates that America will spend about

60% of the total. For its part, America’s Department of Defence says it

will spend more than $22 billion to develop, buy and operate drones

between 2007 and 2013. Following the United States, Israel ranks second in

the development and possession of drones, according to those in the

industry. The European leaders, trailing Israel, are roughly matched:

Britain, France, Germany and Italy. Russia and Spain are not far behind,

and nor, say some experts, is China. (But the head of an American navy

research-laboratory in Europe says this is an underestimate cultivated by

secretive Beijing, and that China’s drone fleet is actually much larger.)

In total, more than three dozen countries operate UAVs, including Belarus,

Colombia, Sri Lanka and Georgia. Some analysts say Georgian armed forces,

equipped with Israeli drones, outperformed Russia in aerial intelligence

during their brief war in August 2008. (Russia also buys Israeli drones.)

Iran builds drones, one of which was shot down over Iraq by American

forces in February. The model in question can reportedly collect ground

intelligence from an altitude of 4,000 metres as far as 140km from its

base. This year Iranian officials said they had developed a new drone with

a range of more than 1,900km. Iran has supplied Hizbullah militants in

Lebanon with a small fleet of drones, though their usefulness is limited:

Hizbullah uses lobbed rather than guided rockets, and it is unlikely to

muster a ground attack that would benefit from drone intelligence. But

ownership of UAVs enhances Hizbullah’s prestige in the eyes of its

supporters, says Amal Ghorayeb, a Beirut academic who is an expert on the

group.

Eyes wide open
How effective are UAVs? In Iraq, the significant drop in American

casualties over the past year and a half is partly attributable to the

“persistent stare” of drone operators hunting for insurgents’ roadside

bombs and remotely fired rockets, says Christopher Oliver, a colonel in

the American army who was stationed in Baghdad until recently. “We stepped

it up,” he says, adding that drone missions will continue to increase, in

part to compensate for the withdrawal of troops. In Afghanistan and Iraq,

almost all big convoys of Western equipment or personnel are preceded by a

scout drone, according to Mike Kulinski of Enerdyne Technologies, a

developer of military-communications software based in California. Such

drones can stream video back to drivers and transmit electromagnetic

jamming signals that disable the electronic triggers of some roadside

bombs.

In military parlance, drones do work that would be “dull, dirty and

dangerous” for soldiers. Some of them can loiter in the air for long

periods. The Eagle-1, for example, developed by Israel Aerospace

Industries and EADS, Europe’s aviation giant, can stay aloft for more than

50 hours at a time. (France deployed several of these aircraft this year

in Afghanistan.) Such long flights help operators, assisted by object-recognition software, to determine normal (and suspicious) patterns of

movement for people and vehicles by tracking suspects for two wake-and-

sleep cycles.

Drones are acquiring new abilities. New sensors that are now entering

service can make out the “electrical signature” of ground vehicles by

picking up signals produced by engine spark-plugs, alternators, and other

electronics. A Pakistani UAV called the Tornado, made in Karachi by a

company called Integrated Dynamics, emits radar signals that mimic a

fighter jet to fool enemies.

UAVs are hard to shoot down. Today’s heat-seeking shoulder-launched

missiles do not work above 3,000 metres or so, though the next generation

will be able to go higher, says Carlo Siardi of Selex Galileo, a

subsidiary of Finmeccanica in Ronchi dei Legionari, Italy. Moreover, drone

engines are smaller—and therefore cooler—than those powering heavier,

manned aircraft. In some of them the propeller is situated behind the

exhaust source to disperse hot air, reducing the heat signature. And

soldiers who shoot at aircraft risk revealing their position.

But drones do have an Achilles’ heel. If a UAV loses the data connection

to its operator—by flying out of range, for example—it may well crash.

Engineers have failed to solve this problem, says Dan Isaac, a drone

expert at Spain’s Centre for the Development of Industrial Technology, a

government research agency in Madrid. The solution, he and others say, is

to build systems which enable an operator to reconnect with a lost drone

by transmitting data via a “bridge” aircraft nearby.

Getty Images

Eyes in the sky, pilots on the ground
In late June America’s Northrop Grumman unveiled the first of a new

generation of its Global Hawk aircraft, thought to be the world’s fastest

drone. It can gather data on objects reportedly as small as a shoebox,

through clouds, day or night, for 32 hours from 18,000 metres—almost twice

the cruising altitude of passenger jets. After North Korea detonated a

test nuclear device in May, America said it would begin replacing its

manned U-2 spy planes in South Korea with Global Hawks, which are roughly

the size of a corporate jet.

Big drones are, however, hugely expensive. With their elaborate sensors,

some cost as much as $60m apiece. Fewer than 30 Global Hawks have been

bought. And it is not just the hardware that is costly: each Global Hawk

requires a support team of 20-30 people. As the biggest UAVs get bigger,

they are also becoming more expensive. Future American UAVs may cost a

third as much as the F-35 fighter jet (each of which costs around $83m,

without weapons). The Neuron, a jet-engine stealth drone developed by

France’s Dassault Aviation and partners including Italy’s Alenia, will be

about the size of the French manned Mirage fighter.

Small drones, by contrast, cost just tens of thousands of dollars. With

electric motors, they are quiet enough for low-altitude spying. But

batteries and fuel cells have only recently become light enough to open up

a large market. A fuel cell developed by AMI Adaptive Materials, based in

Ann Arbor, Michigan, exemplifies the progress made. Three years ago AMI

sold a 25-watt fuel cell weighing two kilograms. Today its fuel cell is

25% lighter and provides eight times as much power. This won AMI a

$500,000 prize from the Department of Defence. Its fuel cells, costing

about $12,000 each, now propel small drones.
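On the figures given, the gain in power per kilogram is easy to work out (a back-of-the-envelope calculation, not AMI's own):

\[ \frac{25\,\mathrm{W}}{2\,\mathrm{kg}} = 12.5\,\mathrm{W/kg} \quad\longrightarrow\quad \frac{8\times 25\,\mathrm{W}}{0.75\times 2\,\mathrm{kg}} \approx 133\,\mathrm{W/kg}, \]

a more than tenfold improvement in specific power in three years.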

Most small drones are launched without airstrips and are controlled in the

field using a small computer. They can be recovered with nets, parachutes,

vertically strung cords that snag a wingtip hook or a simple drop on the

ground after a stall a metre or two in the air. Their airframes break

apart to absorb the impact; users simply snap them back together.

With some systems, a ground unit can launch a drone for a quick bird’s-eye

look around with very little effort. Working with financing from Italy’s

defence ministry, Oto Melara, an Italian firm, has built prototypes of a

short-range drone launched from a vehicle-mounted pneumatic cannon. The

aircraft’s wings unfold upon leaving the tube. It streams back video while

flying any number of preset round-trip patterns. Crucially, operators do

not need to worry about fiddling with controls; the drone flies itself.

Send in the drones
Indeed, as UAVs become more technologically complex, there is also a clear

trend towards making their control systems easier to use, according to a

succession of experts speaking at a conference in La Spezia, Italy, held

in April by the Association for Unmanned Vehicle Systems International

(AUVSI), an industry association. For example, instead of manoeuvring

aircraft, operators typically touch (or click on) electronic maps to

specify points along a desired route. Software determines the best flight

altitudes, speeds and search patterns for each mission—say, locating a

well near a hilltop within sniping range of a road.

Eyevine

This is most certainly not a computer game
Next year Lockheed Martin, an American defence contractor, begins final

testing of software to make flying drones easier for troops with little

training. Called ECCHO, it allows soldiers to control aircraft and view

the resulting intelligence on a standard hand-held device such as an

iPhone, BlackBerry or Palm Pre. It incorporates Google Earth mapping

software for much the same reason it runs on such familiar hand-helds: most

recruits are already proficient users.

What’s next? A diplomat from Djibouti, a country in the Horn of Africa,

provides a clue. He says private companies in Europe are now offering to

operate spy drones for his government, which has none. (Djibouti has

declined.) But purchasing UAV services, instead of owning fleets, is

becoming a “strong trend”, says Kyle Snyder, head of surveillance

technology at AUVSI. About 20 companies, he estimates, fly spy drones for

clients.

One of them, a division of Boeing called Insitu, sees a lucrative untapped

market in Afghanistan, where the intelligence needs of some smaller NATO

countries are not being met by larger allies. (Armed forces are often

reluctant to share their intelligence for tactical reasons.) Alejandro

Pita, Insitu’s head of sales, declines to name customers, but says his

firm’s flights cost roughly $2,000 an hour for 300 or so hours a month.

The drones-for-hire market is also expanding into non-military fields.

Services include inspecting tall buildings, monitoring traffic and

maintaining security at large facilities.

X marks the spot
Drone sales and research budgets will continue to grow. Raytheon, an

American company, has launched a drone from a submerged submarine. Mini

helicopter drones for reconnaissance inside buildings are not far off. In

China, which is likely to be a big market in the future, senior officials

have recently talked of reducing troop numbers and spending more money

developing “informationised warfare” capabilities, including unmanned

aircraft.

There is a troubling side to all this. Operators can now safely manipulate

battlefield weapons from control rooms half a world away, as if they are

playing a video game. Drones also enable a government to avoid the

political risk of putting combat boots on foreign soil. This makes it

easier to start a war, says P.W. Singer, the American author of “Wired for

War”, a recent bestseller about robotic warfare. But like them or not,

drones are here to stay. Armed forces that master them are not just

securing their hold on air superiority—they are also dramatically

increasing its value.

--

Hacking goes squishy
Sep 3rd 2009
From The Economist print edition

Biotechnology: The falling cost of equipment capable of manipulating DNA

is opening up a new field of “biohacking” to enthusiasts

MANY of the world’s great innovators started out as hackers—people who

like to tinker with technology—and some of the largest technology

companies started in garages. Thomas Edison built General Electric on the

foundation of an improved way to transmit messages down telegraph wires,

which he cooked up himself. Hewlett-Packard was founded in a garage in

California (now a national landmark), as was Google, many years later.

And, in addition to computer hardware and software, garage hackers and

home-build enthusiasts are now merrily cooking up electric cars, drone

aircraft and rockets. But what about biology? Might biohacking—tinkering

with the DNA of existing organisms to create new ones—lead to innovations

of a biological nature?


The potential is certainly there. The cost of sequencing DNA has fallen

from about $1 per base pair in the mid-1990s to a tenth of a cent today,

and the cost of synthesising the molecule has also fallen. Rob Carlson,

the founder of a firm called Biodesic, started tracking the price of

synthesis a decade ago. He found a remarkably steady decline, from over

$10 per base pair to, lately, well under $1 (see chart). This decline

recalls Moore’s law, which, when promulgated in 1965, predicted the

exponential rise of computing power. Someday history may remember this

decline in the cost of DNA synthesis as the Carlson curve.
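
The implied pace can be worked out from the figures above. Taking the "mid-1990s" to be roughly 14 years before this article, and Dr Carlson's synthesis data to span a decade (both assumptions made for the sake of arithmetic), the compound annual declines look like this:

```python
# Implied annual rates of decline, using only the costs quoted above and
# assumed time spans (14 years for sequencing, 10 for synthesis).
import math

def annual_decline(start_cost, end_cost, years):
    """Compound annual change and the implied time to halve the cost."""
    rate = (end_cost / start_cost) ** (1.0 / years) - 1.0   # negative = falling
    halving_years = math.log(2) / -math.log(1.0 + rate)
    return rate, halving_years

seq_rate, seq_halving = annual_decline(1.00, 0.001, 14)  # $/base pair, sequencing
syn_rate, syn_halving = annual_decline(10.00, 1.00, 10)  # $/base pair, synthesis

print(f"Sequencing: {seq_rate:+.0%} a year, cost halves every {seq_halving:.1f} years")
print(f"Synthesis:  {syn_rate:+.0%} a year, cost halves every {syn_halving:.1f} years")
```

On those rough numbers, sequencing costs have been halving slightly faster than the 18-month doubling period of Moore's law, which is why the comparison is so often made.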

A growing culture
And as the price falls, amateurs are wasting little time getting started.

Several groups are already hard at work finding ways to duplicate at home

the techniques used by government laboratories and large corporations. One

place for them to learn about biohacking is DIYbio, a group that holds

meetings in America and Britain and has about 800 people signed up for its

newsletter. DIYbio plans to perform experiments such as sending out its

members in different cities to swab public objects. The DNA thus collected

could be used to make a map showing the spread of micro-organisms.


Strictly, that is not really biohacking. But attempts to construct micro-

organisms that make biofuels efficiently certainly are—though it will be

impressive if a group of amateurs can succeed in cracking a problem that

is confounding many established companies. Amateur innovation,

nevertheless, is happening. When a science blog called io9 ran a

competition for biohackers, it received entries for modified

microorganisms that, among other things, help rice plants process nitrogen

fertiliser more efficiently, measure the alcohol content of a person’s

breath and respond to commands from a computer.

The template for biohacking’s future may be the International Genetically

Engineered Machine (iGem) competition, held annually at the Massachusetts

Institute of Technology. This challenges undergraduates to spend a summer

building an organism from a “kit” provided by a gene bank called the

Registry of Standard Biological Parts. Their work is possible because the

kit is made up of standardised chunks of DNA called BioBricks.

As Jason Kelly, the co-founder of a gene-synthesis firm called Ginkgo

BioWorks, observes, there is no equivalent of an electrical engineer’s

diagram to help unravel what is going on in a cell. As he puts it, “what

the professionals can do in terms of engineering an organism is really

rudimentary. It’s really a tinkering art more than a predictable

engineering system.” BioBricks are, nevertheless, an attempt to provide

the equivalent of electronic components with known properties to the

field—and using them is part of Ginkgo’s business plan. Information on

BioBricks is kept public, helping the students understand which work

together best.
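
The analogy with electronic components can be made concrete in software terms. The toy Python model below is only an illustration of the idea of composable, catalogued parts; the part names and sequences are invented, and real BioBrick assembly involves specific restriction sites and leaves "scar" sequences that this sketch ignores.

```python
# Toy model of a parts registry: each part is a named sequence with a role,
# and composite devices are built by concatenation. Purely illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Part:
    name: str
    role: str       # e.g. "promoter", "rbs", "cds", "terminator"
    sequence: str   # invented placeholder DNA, not real registry entries

    def __add__(self, other: "Part") -> "Part":
        return Part(f"{self.name}+{other.name}", "composite",
                    self.sequence + other.sequence)

registry = {
    "demo_promoter": Part("demo_promoter", "promoter", "TTGACATATAAT"),
    "demo_rbs":      Part("demo_rbs", "rbs", "AGGAGG"),
    "demo_cds":      Part("demo_cds", "cds", "ATG" + "GGT" * 5 + "TAA"),
}

device = registry["demo_promoter"] + registry["demo_rbs"] + registry["demo_cds"]
print(device.name, f"({len(device.sequence)} bp)")
```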

What the students actually create, however, is left to their imaginations.

And the results are often unexpected. A team from National Yang-Ming

University in Taiwan conceived a bacterium that can do the work of a

failed kidney; another, from Imperial College, London, worked on a

“biofabricator” capable of building other biological materials.

From relatively simple beginnings in 2003, iGem has grown to a competition

involving 84 teams and 1,200 participants, most of whom leave with enough

knowledge to do work at home. They are limited mainly by the novelty of

the pursuit. Although there are no laws banning the sale of DNA, reagents

or equipment, such items are usually priced for sale to large

institutions. Indeed, it is this problem of finding ways to manage without

expensive equipment, rather than a desire to work on “wetware”, or living

organisms, that motivates many biohackers.

Tito Jankowski, now a member of DIYbio, became interested in toolmaking

for biohackers after taking part in iGem with a team from Brown University

that had set itself the goal of modifying bacteria to detect lead in

water. After graduating, Mr Jankowski was interested in doing more, but

found his access to equipment restricted. He decided to create a cheaper

version of the gel-electrophoresis box, a basic tool used in a wide range

of experiments. Despite its simple construction, which can be as spare as

a few panes of coloured plastic over a heating element, a gel box can sell

for over $1,000. But according to Mr Jankowski, “this equipment is only

expensive because it has never been used for personal stuff before.”

Mr Jankowski likens the current state of biohacking to the years in which

amateurs first began working with personal computers, a metaphor that Dr

Kelly also uses. Computers were once both expensive and arcane. Today,

they are built mostly from off-the-shelf components, and even a relatively

non-technical person can assemble one. If hobbyists like Mr Jankowski can

help reduce the cost of equipment, say, tenfold, while BioBricks or

something similar become cheaper and more predictable, then the stage will

be set for a bioscience version of Apple or Google to be born in a

dormitory room or garage.

But what about viruses?
The computer metaphor, though, is a reminder that there is no shortage of

fools and criminals ready to construct viruses and other harmful computer

programs. If such people got interested in the biological world, the

consequences might be even more serious—because in biology, there is no

rebooting the machine.

More than any other detail of biohacking, this is the one that laymen

grasp. And the resulting fear can have unpleasant effects, as Steve Kurtz,

a professor of art at the State University of New York in Buffalo who

works with biological material, found out. In May 2004 he awoke to find

that his wife, Hope, was not breathing. The police who accompanied

paramedics to his home found Petri dishes used in his art displays, and

notified the Federal Bureau of Investigation (FBI), which brought in the

Department of Homeland Security and charged him with bioterrorism. The

authorities claimed the body of his wife, who had died of congenital heart

failure, for examination. This took place over the protestations of Mr

Kurtz, his colleagues and the local commissioner of public health, all of

whom insisted that nothing in the exhibit could be harmful.

The initial reaction of the local police was hardly surprising. The

motives of the FBI, which has experts capable of examining Mr Kurtz’s art

scientifically, are harder to decode. After a grand jury refused to indict

Mr Kurtz, the bureau then pursued him with a mail-fraud charge carrying a

sentence of up to 20 years, which a judge dismissed this year. Mr Kurtz,

known for his anti-establishment art, may simply have become the target of

harassment for his views. But the FBI may genuinely be wary of biohackers;

rumour suggests it has followed up the case by discreetly instructing

reagent suppliers not to sell to individuals, despite the lack of any law

against their doing so.

So far legislators have shown little interest in regulating individuals.

When they choose to do so, it will not be easy. If groups such as DIYbio

are successful, the basic tools of biohacking will be both cheap to buy

and easy to construct at home. Many DNA sequences, including those for

harmful diseases, are already widely published, and can hardly be

retracted. The falling cost of DNA synthesis suggests that there will be

automated “printers” for the molecule before long. There are some

substances that can be controlled, like the reagents used to modify DNA.

But a strict government policy regulating the chemical components of

biohacking might have much the same effect as laws banning gun ownership—

ordinary citizens will be discouraged, while criminals will still find

what they want on black markets.

In all likelihood, the right way to regulate biohacking will not become

apparent for some time. But some people think that any regulation at all

could be harmful. Dr Carlson, who has a book on biohacking coming out

later this year, is a proponent of light regulation at most. “If you look

at our ability to respond to infectious diseases at this point in time,

we’re essentially helpless,” he says. “The quandary we face is that we

need the garage hackers, because that’s where innovation comes from.”

Freeman Dyson, a venerable and polymathic physicist who has been thinking

about the problem, is also a believer in biological innovation. He has

written about a variety of futuristic possibilities, including modified

trees that are better than natural ones at absorbing carbon dioxide, and

termites that can eat old cars. If regulation of biohacking is too tight,

such innovations—or, at least, things like them—might never come to pass.
--

3-D: It's nearly there
Sep 3rd 2009
From The Economist print edition

Three-dimensional imaging: New technologies that display 3-D visuals are

on the verge of spreading from cinemas into the wider world

BRIGHT and crisp high-definition (HD) images, a luxury not so long ago,

are fast becoming standard in consumer electronics. HD technology is now

well entrenched in the marketplace in the form of televisions, video

cameras, Blu-ray players, games consoles and projectors. There seems

little scope to improve the display of two-dimensional images, which

provide about as much detail as the human eye can appreciate. So attention

is shifting to the next frontier in display technology: three-dimensional

(3-D) images.

In recent years 3-D cinema projection has made a dramatic comeback,

shaking off its image as a gimmick and replacing the cheesy old red-and-

blue glasses with new technologies that are easier to use and produce more

lifelike results. Studios love 3-D because it is immune to piracy. Cinemas

love 3-D because it allows them to offer something that even the most

elaborate home cinema cannot match, and charge more for it. Now 3-D seems

to be on the verge of moving out of the cinema and into a wider range of

products.


Would you look at that
Better and cheaper 3-D display technologies for home and office use are

“ready for prime time”, says a senior executive at Wistron, a Taiwanese

firm that manufactures computers for many leading brands. By the end of

this year the first mass-market laptops capable of displaying 3-D images

will be on sale, he says, and by the end of 2010 all of the world’s top

ten computer-makers will include 3-D displays in their product line-ups.

At the Consumer Electronics Show held in Las Vegas in January, prototype

3-D televisions and other products were unveiled by JVC, LG, Panasonic,

Samsung, Sony and others.

Such prototypes have been around for a few years, but they have recently

made rapid progress, and the industry is now stumbling towards agreement on

the necessary standards. Even without such standards, several firms plan

to launch 3-D products and services next year anyway. Beyond that, even

more elaborate technologies are under development that use holograms to

display 3-D images.

Creating images that appear to burst forth from a screen and invite you to

reach out and touch them is not easy. One way of doing so is to use

“stereoscopic” optical technologies, in which scenes are filmed from two

angles. When displayed, special eyewear then ensures that one perspective

is beamed exclusively to the right eye and the other to the left eye,

fooling the brain into thinking that it is looking at a 3-D scene. So-

called “autostereoscopic” 3-D systems do not require glasses. One approach

uses tiny lenses on the front of the display to direct images for the left

and right eyes in several different directions. Provided your head is in

the right place, and you keep it still, a 3-D image appears.
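
The illusion rests on simple geometry, which the article does not spell out: for eyes a distance e apart viewing a screen at distance D, a point drawn with a horizontal offset (disparity) d between the left-eye and right-eye images appears at a distance of roughly D x e / (e - d). Zero disparity puts the point on the screen, crossed (negative) disparity pulls it out in front, and disparity approaching the eye separation pushes it towards infinity. A minimal sketch of that relation:

```python
# Standard stereoscopic geometry, not specific to any product named here:
# perceived distance of a point with on-screen disparity d (metres), for a
# viewer with eye separation eye_sep sitting screen_dist from the display.
def perceived_distance(d, eye_sep=0.065, screen_dist=2.0):
    return screen_dist * eye_sep / (eye_sep - d)

for d in (-0.02, 0.0, 0.03, 0.06):
    print(f"disparity {d * 100:+.0f} cm -> appears at {perceived_distance(d):.1f} m")
```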

But building a 3-D display is only one piece of the puzzle: there must

also be 3-D content to show on it. A games console can be programmed to

produce separate images for left and right eyes relatively easily, but

most films and television programmes are not shot in 3-D. Now, however, it

is possible to convert existing video into 3-D automatically. DDD Group,

based in Santa Monica, California, makes a conversion chip, called TriDef,

that uses object-recognition software to analyse colours and shapes and

determine distances, inferring that, for example, the muzzle of a gun is

closer to the viewer than the shooter’s face. When the software is unsure

it does not add depth, says Chris Yewdall, DDD’s boss.
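
A heavily simplified sketch of that kind of conversion is shown below. It is not DDD's TriDef algorithm: the depth cue (lower in the frame means closer), the confidence test and the disparity range are all stand-ins chosen for illustration. It does, however, mirror the behaviour described above of leaving pixels flat where the estimate is uncertain.

```python
# Toy 2-D-to-3-D conversion: estimate a crude depth map, then synthesise a
# left/right pair by shifting pixels in proportion to depth. Illustrative only.
import numpy as np

def estimate_depth(gray):
    """Crude cue: lower rows are assumed closer; values in [0, 1], 1 = nearest."""
    h, w = gray.shape
    return np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))

def confident(gray):
    """Treat low-contrast pixels as 'unsure' (placeholder heuristic)."""
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    return gx > 0.05

def stereo_pair(gray, max_disparity=8):
    disparity = np.where(confident(gray),
                         (estimate_depth(gray) * max_disparity).astype(int),
                         0)                      # unsure pixels stay flat
    h, w = gray.shape
    left, right = np.zeros_like(gray), np.zeros_like(gray)
    cols = np.arange(w)
    for y in range(h):
        left[y, np.clip(cols - disparity[y] // 2, 0, w - 1)] = gray[y]
        right[y, np.clip(cols + disparity[y] // 2, 0, w - 1)] = gray[y]
    return left, right

frame = np.random.rand(120, 160)   # stand-in for a video frame
L, R = stereo_pair(frame)
print(L.shape, R.shape)
```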

One of DDD’s customers is Samsung, a South Korean electronics giant, which

plans to launch 3-D television sets next year. DDD and its main

competitors—JVC in Japan and NVIDIA in California—are also developing 3-D

conversion technologies for computers. Acer, a Taiwanese manufacturer, is

expected to launch a laptop equipped with a 3-D conversion chip made by

DDD, in October. (Its display will require users to wear special glasses.)

An alternative approach to creating 3-D images is based on holography. A

hologram is a special interference pattern created in a photosensitive

medium (which can be as simple as a traditional photographic film). Light

striking this pattern is scattered as though it were actually striking the

object encoded by the interference pattern. The pattern is usually created

by combining two laser beams, one of which has been bounced off the object

being displayed.
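
In the textbook account (standard optics, not detail taken from the article), the film records the intensity of the combined reference beam R and object beam O:

```latex
I = |R + O|^2 = |R|^2 + |O|^2 + R^{*}O + R\,O^{*}
```

The cross terms capture both the amplitude and the phase of the object beam. Re-illuminating the developed pattern with the reference beam alone then produces, among other terms, one proportional to |R|^2 O, a copy of the object wave, which is why the scattered light appears to come from the original object.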

Holograms have many advantages over stereoscopic images. Not only is no

special eyewear needed, but also the images do not distort when observers

move. But producing a fixed hologram of a static object is tricky enough;

making a holographic display, or something that functions like one, is

even more difficult. One approach involves firing carefully orchestrated

pulses from an array of lasers at a sheet of glass scored with tiny

grooves; another, demonstrated by researchers at the University of

Southern California Graphics Lab, involves projecting high-speed video

onto a rapidly spinning mirror, so that the appropriate views of an object

are reflected in different directions. Such technology is still embryonic,

but several industries are interested in it.

Reach out and touch
Kolpi, a French company based in Sophia Antipolis, has devised a 3-D

display that will allow oil-exploration companies to direct their remotely

operated submarines. Video and sonar data from the submarine are displayed

as a volleyball-sized hologram. An operator can direct the robot by moving

a cursor around inside the hologram. The display is expected to cost

$140,000 when it goes on sale next year.

Almost close enough to touch: 3-D displays from Actuality Medical (top of

article), SeeReal (second image) and the University of Southern California

Graphics Lab (above)
Actuality Medical, based in Bedford, Massachusetts, hopes to improve

radiotherapy with a different type of 3-D display. At the moment doctors

“hope the patient doesn’t move” as they zap cancerous tissue with a beam

of radiation, says Gregg Favalora, the firm’s founder. Working with

Philips, a Dutch electronics company, Actuality Medical has built an early

version of a system that could limit damage to healthy tissue. Called

Perspecta, it graphically depicts a simulated beam of radiation shooting

through a hologram-like image of body tissue. This could eventually help

doctors redirect radiation as body parts move slightly during treatment.

The 3-D image is created by projecting about 6,000 images a second onto a

nearly transparent spinning disc some 25 centimetres (10 inches) across,

which forms a basketball-sized sphere.

Creating actual holograms—or images that resemble them, as Perspecta does—

requires enormous amounts of processing power. So far this has kept

images small: they are rarely bigger than a shoebox. To make them larger a

company called SeeReal, based in Luxembourg, has built systems that use

two eye-tracking cameras above a large 3-D display to follow the viewer’s

eyes. It is then necessary to generate only the parts of the hologram that

are relevant to the viewer’s position and direction of gaze, greatly

reducing the amount of processing required.
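
The arithmetic behind that saving is straightforward, even though SeeReal's own figures are not given here. The numbers below are invented purely to show the shape of the argument:

```python
# Illustrative only: if tracked eyes mean that just a small window of the
# hologram plane matters per frame, the recomputed fraction shrinks sharply.
# Both resolutions below are arbitrary assumptions, not SeeReal's figures.
full_plane = 8192 * 4096        # hypothetical hologram samples
per_eye_window = 512 * 512      # hypothetical viewing window per eye

per_frame = 2 * per_eye_window  # one window for each tracked eye
print(f"Recompute {per_frame / full_plane:.1%} of the plane per frame "
      f"instead of all {full_plane:,} samples.")
```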

SeeReal reckons that the information needed to construct small holograms

can be carried over existing telecoms networks. That would allow

scientists working in different locations to examine the same object, for

example. Drugs companies, which are keen to improve co-operation between

researchers in different laboratories, could represent a lucrative market

for the technology within two years, SeeReal predicts.

Another obvious use for 3-D displays is videoconferencing. Accenture, a

consultancy and research firm, has equipped two non-adjacent rooms at its

research centre in Sophia Antipolis, France, with cameras so that a wall-

mounted screen in each one serves as a window into the other. It is now

using 3-D displays to allow people to “share” objects and data between the

two rooms. The result, says head researcher Kelly Dempski, is an

“extension” of each room into the other. As hologram and data-transmission

technologies improve over the next decade, the rooms will increasingly

meld together, he says.

Room with a view
Holografika, a company based in Budapest, hopes to realise this vision

even sooner. One of its products, HoloVizio, displays 3-D images that

“practically surround” users, says Peter Kovacs, the firm’s software

chief. Its customers include carmakers and oil-exploration companies.

Working with 13 companies and research institutions in America, Europe and

Japan, Holografika is developing a system that will use holographic laser

arrays, driven by data from about 100 video cameras, to replicate the

contents of one room in another. It is expected to cost about $500,000.

Another 3-D extension of videoconferencing is the Eyeliner holographic

projection system devised by Musion, a company based in London. It does

not actually use holograms, but projects high-definition video onto nearly

transparent screens made of very thin foil, in a modern updating of the

old “Pepper’s ghost” stage illusion. The effect, for viewers a few metres

away, is a lifelike, full-sized 3-D moving image of a person that appears

to float in space, without any visible screen.

Musion’s technology has been used by Al Gore, Bill Gates, Prince Charles

and many other celebrities to appear on stage at conferences without being

physically present. From televisions and laptop screens to operating

theatres and conference halls, 3-D in all its forms is suddenly being

taken much more seriously than it was just a few years ago.

--

Paranoid survivor
Sep 3rd 2009
From The Economist print edition

Andrew Grove, the former boss of Intel, believes other fields can learn

from the chipmaking industry that he helped bring into being

EARLIER this year Andrew Grove taught a class at Stanford Business School.

As a living legend in Silicon Valley and a former boss of Intel, the

world’s leading chipmaker, Dr Grove could have simply used the opportunity

to blow his own trumpet. Instead he started by displaying a headline from

the Wall Street Journal heralding the recent takeover of General Motors by

the American government as the start of “a new era”. He gave a potted

history of his own industry’s spectacular rise, pointing out that plenty

of venerable firms—with names like Digital, Wang and IBM—were nearly or

completely wiped out along the way.

Then, to put a sting in his Schumpeterian tale, he displayed a fabricated

headline from that same newspaper, this one supposedly drawn from a couple

of decades ago: “Presidential Action Saves Computer Industry”. A fake

article beneath it describes government intervention to prop up the ailing

mainframe industry. It sounds ridiculous, of course. Computer firms come

and go all the time, such is the pace of innovation in the industry. Yet

for some reason this healthy attitude towards creative destruction is not

shared by other industries. This is just one of the ways in which Dr Grove

believes that his business can teach other industries a thing or two. He

thinks fields such as energy and health care could be transformed if they

were run more like the computer industry—and made greater use of its

products.

Dr Grove may be 73 and coping with Parkinson’s disease, but his wit is

still barbed and his desire to provoke remains as strong as ever. Rather

than slipping off to a gilded retirement of golf or gallivanting, as many

other accomplished men of his age do, he is still spoiling for a fight.

His achievements mean that his provocations are worth paying attention to.

He has arguably done as much as anyone to usher in the age of cheap,

cheerful and ubiquitous personal computing. In part, he did this through

technological prowess. He graduated at the top of his engineering class at

New York’s City College (one of the few options available to him as a poor

Jewish refugee from Communist-controlled Hungary). He then went on to earn

a doctorate at the University of California at Berkeley, and wrote a book

on semiconductors that remains a standard text.


He joined Fairchild Semiconductor, once a pioneering electronics firm,

where he caught the eye of Robert Noyce and Gordon Moore. The former was a

co-inventor of the integrated circuit, while the latter coined Moore’s law

(which decrees, roughly, that the amount of computing power available at a

given price doubles every 18 months). When the two left Fairchild to found

Intel in 1968—initially to make memory chips, not microprocessors—they

took the young Dr Grove with them. He eventually ended up in charge of the

company, becoming chief executive in 1987. He continued in that role until

1998, when he became chairman, holding that post until 2004.

Though his scientific credentials are solid, he will probably be best

remembered as a daring and successful businessman. Richard Tedlow, a

historian at Harvard Business School, calls him “one of the master

managers in the history of American business”. One reason is market

success: under his tenure, Intel came to dominate the microprocessor

industry and its market capitalisation rocketed (making it, at one point,

the world’s most valuable company). A bigger reason, though, lies in how

exactly he managed to steer Intel to such spectacular success.

Intelligence inside
Two particularly risky decisions he took are revealing. In “Only the

Paranoid Survive”, Dr Grove’s bestselling book, he argues that every

company will face a confluence of internal and external forces, often

unanticipated, that will conspire to make an existing business strategy

unviable. In Intel’s case, such a “strategic inflection point” arose

because its memory-chip business came under heavy assault from new

Japanese rivals willing to undercut any price Intel offered.

What could he do? The firm’s roots and most of its profits lay in making

memory chips; Intel’s microprocessor group was just a small niche. The

firm’s two founders and much of its engineering staff were too emotionally

wedded to its past successes to make a break. But Dr Grove decided to bet

the future of the company on microprocessors, a move that saved his

company and transformed the industry.

The second big decision was Dr Grove’s radical announcement that Intel

would market its microchips directly to consumers. Previously, chipmakers

had regarded computer-makers such as Dell and Compaq as their customers,

and had not bothered with fancy advertising campaigns to end users. But Dr

Grove believed that such a relationship allowed these assembly and

marketing firms, which did little original research of their own, to

capture too much of the value created by his firm’s innovation.

So he launched the “Intel Inside” campaign, which marketed microprocessor

chips directly to consumers, starting in 1991. This incensed his rivals

and his immediate customers, the computer-makers, but the strong demand

for Intel’s new Pentium chip showed that the strategy had worked. True,

the firm stumbled when a minor flaw was discovered in the Pentium that

affected some mathematical calculations. Rather than rush to correct the

problem, Intel tried to downplay it—a strategy that quickly turned into a

public-relations disaster. The firm was forced to offer a replacement for

all affected chips, at a cost of nearly half a billion dollars.

Painful though that was, Dr Grove now thinks this episode actually

benefited the firm in two ways. First, it proved to internal sceptics that

Intel really had become a consumer brand. Second, he reckons that it

bolstered his efforts to improve the shoddy quality of manufacturing, to

protect the firm from future fiascos. In hindsight, his risky decision to

turn Intel from a component-maker into a consumer brand was a

masterstroke.

An American success story
Some observers have suggested that it was his family’s escape from the

Nazis, and his own experience of the abuses of communism, that shaped Dr

Grove’s strict management style. On this view, his demanding but

meritocratic approach, rewarding ideas and knowledge over power, was a

rejection of the injustices of communism.

Dr Grove, however, insists that it was his experience at City College,

where talent and hard work were rewarded and where students challenged

their professors without concern for rank, that impressed upon him the

value of meritocracy. By contrast, he recalls an elitist, back-stabbing

and lax corporate culture at Fairchild. Senior executives would stroll

into the office or into meetings as late as they pleased, but blue-collar

workers were penalised or even fired if they committed similar offences.

When he took control of Intel Dr Grove imposed a strict arrival time of

8am, with latecomers forced to sign a sheet. He also refused to go along

with popular management trends such as flexi-time and teleworking. He was

known as a blunt and demanding manager, but he also gained a reputation as

a fair-minded boss who rewarded good ideas, no matter where they came

from.

Asked today if he regrets imposing his disciplinarian personality on his

company, he makes a confession: “You don’t understand—I was never that

disciplined myself, and I’m not even a morning person!” He was determined

to impose discipline on Intel, he says, for two reasons that ultimately

worked to the firm’s advantage. First, he wanted to avoid the outrageous

double standards he had experienced at Fairchild. The meritocratic culture

he created at Intel then helped it attract the best talent in the

industry. Second, he knew that strong discipline would also be necessary

to improve his firm’s shoddy manufacturing.

At the time the microchip business was producing such unreliable products

that customers insisted that companies like Intel always license new

products to a secondary supplier to ensure reliability of supply. His

efforts to tighten up quality control led to a commercial coup. When his

firm introduced its widely anticipated 386 processor, he stunned the

industry by declaring that Intel would not license any secondary

manufacturers. This was a huge risk for computer-makers, but such was

their appetite for the new chip that they bought it anyway. Intel’s

ability to deliver good enough chips in large numbers meant profits no

longer had to be shared with secondary manufacturers.

With his reputation for ruthlessness in the marketplace and rigorous

discipline inside his firm, Dr Grove has much in common with another

American business leader: Lee Raymond, the formidable former chairman of

Exxon Mobil. Both men were feared by rivals and by many of their own

employees. Dr Grove once even spearheaded a sales campaign against a

superior chip made by Motorola in an effort dubbed “Operation Crush”. When

asked about such bully-boy tactics, Dr Grove remains unrepentant. He even

likes the comparison with the unloved oilman: “I never knew Lee Raymond,

but he did take Exxon to the top of the Fortune 500—and that’s OK with

me.”

Personal admiration aside, however, Dr Grove is convinced that Exxon and

its Big Oil brethren are in a sunset industry. He has written and lectured

widely on energy and environmental topics in recent years, arguing that

oil and cars are heading for a divorce. He regards electricity as the most

promising replacement fuel, and thinks battery technology has the

potential to produce an Intel-like giant as the industry develops.

Another business he believes to be ripe for disruption is health care. He

complains that the industry seems to innovate much too slowly. The lack of

proper electronic medical records and smart “clinical decision systems”

bothers him, as does the slow-moving, bureaucratic nature of clinical

trials. He thinks pharmaceutical firms should study the fast “knowledge

turns” achieved by chipmakers, so that the cycles of learning and

innovation are accelerated. (A knowledge turn, a term coined by Dr Grove,

is the time it takes for an experiment to proceed from hypothesis to

results, and then to a new hypothesis—around 18 months in chipmaking, but

10-20 years in medicine.)

And what of chipmaking—is it, too, a sunset industry ripe for disruption?

Dr Grove still believes in Moore’s law (with the caveat that it will get

ever pricier for chipmakers to uphold) but he has a grave concern. At a

recent ceremony honouring his achievements, he shocked the gathered

bigwigs by declaring that the industry’s approach to hoarding patents was

an abuse of intellectual-property rights and risked undermining its

future. Asked to defend that claim, which upset even his own family

members, he does not backtrack. He insists that firms must use their

patents or lose them: “You can’t just sit on your ass and give everyone

the finger.” Even though Dr Grove is no longer running Intel, it seems

that his desire to shake things up is undimmed.