A factory on your desk
Sep 3rd 2009
From The Economist print edition
Manufacturing: Producing solid objects, even quite complex ones, with 3-D
printers is gradually becoming easier and cheaper. Might such devices some
day become as widespread as document printers?
JUST before going on holiday you decide to buy a new pair of trainers. The
usual procedure would be to pop down to the shops, select a style and try
on a pair to make sure they are comfortable. Instead, imagine doing this:
designing shoes exactly the right size in the style and colour you want on
a computer, or downloading a design from the web and customising it. Then
press print and go off to have lunch while a device on your desk
manufactures them for you. On your return, your trainers are ready. But
they are not quite right. So after another fiddle on the computer you
print a second pair. Perfect.
The technology to print a pair of trainers, or at least to do so in one go
rather than in parts that have to be glued together, is not yet available.
But it is getting close. An increasing number of things, from mock-ups of
new consumer products to jewellery and aerospace components, are being
produced by machines that build objects layer by layer, just like printing
in three dimensions. The general term the industry uses for this is
“additive manufacturing”, but the most widely used devices are called 3-D
printers. Some of these printers are becoming small enough to be desktop
devices. They are making their way not just into workshops and factories,
but also into the offices of designers, architects and researchers, and
are being embraced by entrepreneurs who are using them to invent entirely
new businesses.
The 3-D printers currently available use a variety of technologies, each
of which is suited to different applications. They range in price from
under $10,000 to more than $1m for a high-end device capable of making
sophisticated production parts. Depending on the size of the object, the
material it is made from and the level of detail required, the printing
process takes around an hour for a relatively small, simple object that
would fit into the palm of your hand, and up to a day for a bigger, more
sophisticated part. The latest machines can produce objects to an accuracy
of slightly less than 0.1mm.
Terry Wohlers, a consultant based in Colorado who monitors the industry,
reckons the global market for additive manufacturing was worth $1.2
billion in 2008 and that it could double in size by 2015. He estimates
that 3-D printers of various sorts account for about 75% of sales, and
high-performance industrial machines the remainder. He expects lower-cost
3-D printers to account for as much as 90% of the market as prices fall
and performance improves. Model-making and rapid prototyping remain the
most popular uses, but all types of machines are increasingly being used
for direct manufacturing of parts for finished products, rather than just
prototypes.
Although powerful design software allows the virtual creation of 3-D
objects on a computer screen, many designers and their clients prefer to
examine, touch and hold a physical object before committing to huge
investments in manufacturing or construction. Models help take some of the
guesswork out of the process. They are traditionally crafted by hand from
materials such as clay, wood or metal. It is a slow and costly business.
Even making a non-working model of what might seem to be a relatively
simple thing, like a new sole for a shoe, is in fact a complex process. It
used to take Timberland, an American firm, a week to turn the design of a
new sole into a model, at a cost of around $1,200. Using a 3-D printer
made by Z Corporation, based in Burlington, Massachusetts, it has cut the
time to 90 minutes and the cost to $35.
The ability of 3-D printers to speed up the design process will have a big
impact on industry. “Now engineers can think of an idea, print it, hold it
in their hand, share it with other people, change it and go back and print
another one,” says David Reis, the chief executive of Objet Geometries, an
Israeli firm that makes 3-D printers. “Suddenly design becomes much more
innovative and creative.” Objet’s machines can produce not only solid
things out of plastic-type materials, but complex ones with moving parts
too, such as a working model of a bicycle chain or a small gearbox. And
they can print objects in multiple materials, such as a plastic remote-
control unit with rubbery buttons.
Little by little
The first step in all 3-D printing processes is for software to take
cross-sections through the part to be created and calculate how each layer
needs to be constructed. Different machines then take different
approaches. Most processes can trace their roots back to the earliest form
of 3-D printing: stereolithography. It was pioneered by 3D Systems, based
in South Carolina, which made the first commercially available
stereolithography machine in 1986.
Such machines build up objects, a layer at a time, by dispensing a thin
layer of liquid resin and using an ultraviolet laser, under computer
control, to make it harden in the required pattern of the cross-section.
The build tray then descends, a new liquid surface is applied and the
process is repeated. At the end, the excess soft resin is cleaned away
using a chemical bath. A related approach, which also dates back to the
1980s, is selective laser-sintering, in which a high-temperature laser is
used to melt and fuse together powdered ceramics, metal or glass, one
layer at a time, to produce the desired 3-D shape.
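Whatever the hardware, the slicing computation comes first. Here is a minimal sketch in Python (hypothetical code, not any vendor's slicer): it intersects each triangle of a model's surface mesh with a horizontal plane to find the outline segments of one layer, assuming a 0.1mm layer height in line with the accuracy figure quoted above.
```python
# A toy slicer: the geometry and the layer height are illustrative.
from typing import List, Tuple

Point = Tuple[float, float, float]
Triangle = Tuple[Point, Point, Point]

def slice_triangle(tri: Triangle, z: float) -> List[Tuple[float, float]]:
    """Return the 2-D points where a horizontal plane at height z
    crosses the edges of one triangle (degenerate cases ignored)."""
    crossings = []
    for a, b in ((tri[0], tri[1]), (tri[1], tri[2]), (tri[2], tri[0])):
        if (a[2] - z) * (b[2] - z) < 0:        # edge straddles the plane
            t = (z - a[2]) / (b[2] - a[2])     # interpolate along the edge
            crossings.append((a[0] + t * (b[0] - a[0]),
                              a[1] + t * (b[1] - a[1])))
    return crossings

def slice_mesh(mesh: List[Triangle], layer_height: float = 0.1):
    """Cut the mesh into layers; each layer is the list of outline
    segments the machine must solidify at that height."""
    z_min = min(p[2] for tri in mesh for p in tri)
    z_max = max(p[2] for tri in mesh for p in tri)
    layers, z = [], z_min + layer_height / 2
    while z < z_max:
        segments = [pts for tri in mesh
                    if len(pts := slice_triangle(tri, z)) == 2]
        layers.append((z, segments))
        z += layer_height
    return layers

# The four sloping faces of a pyramid, sliced into quarter-unit layers.
pyramid = [((0, 0, 0), (1, 0, 0), (0.5, 0.5, 1)),
           ((1, 0, 0), (1, 1, 0), (0.5, 0.5, 1)),
           ((1, 1, 0), (0, 1, 0), (0.5, 0.5, 1)),
           ((0, 1, 0), (0, 0, 0), (0.5, 0.5, 1))]
for z, segments in slice_mesh(pyramid, layer_height=0.25):
    print(f"z={z:.3f}: {len(segments)} outline segments")
```
A real slicer would then join these segments into closed contours and turn them into machine instructions for whichever process (laser, inkjet head or extrusion nozzle) builds the layer.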
Both Z Corporation and Objet, by contrast, use modified forms of inkjet
printing. Z Corporation uses the printing heads in its machine to squirt a
liquid binder onto a bed of white powder, but only in the areas where the
layer needs to be solid. Colour is applied at the same time, allowing
multicoloured objects to be created. The bed is lowered by a fraction of a
millimetre and a new layer of powder is spread and rolled. The print head
then repeats the process to create the next layer. When the process is
complete and the material is set, the loose powder is blown away with an
air jet to reveal the completed structure. The powder can be one of
several substances including plastic, a special material that can be
treated to become flexible like rubber, and casting materials suitable for
making moulds. Each layer takes 15-30 seconds to output.
Objet’s machines have print heads that slide back and forth depositing
extremely thin layers of two types of liquid photopolymer. One type is
printed where the cross-section is required to be solid, and the other
where there are cavities, overhangs and other features with spaces. After
each layer is printed, an ultraviolet light-source in the print head
hardens the polymer in the areas that need to be solid, and causes the
second polymer to assume a gel-like state to provide structural support.
The build tray then moves down and the process is repeated for the next
layer. At the end, a jet of water washes away the gel-like support
material. The machine is capable of making objects out of multiple kinds
of solid photopolymer, each with different colours or properties.
Another form of 3-D printing is “fused deposition modelling”. Stratasys,
based in Minneapolis, is the market leader in this field. This approach
involves unwinding a filament of thermoplastic material from a spool and
feeding it through a moving extrusion nozzle, heating the material to melt
it and deposit it in the desired pattern on the build tray. The material
then hardens to form the solid parts required in each layer. As subsequent
layers are added the molten thermoplastic fuses to the layers below. In
areas such as overhangs, physical supports can be added and removed later,
or water-soluble materials can be deposited and then washed away.
Fred Fischer of Stratasys sees the market developing in two directions. On
one hand there will be more demand for cheaper and simpler 3-D printers
capable of quickly turning out concept models, which are likely to sit on
the desks of engineers and designers. On the other hand there will also be
demand for more elaborate machines with added features and higher
performance, the most elaborate of which will provide a cost-effective way
to manufacture thousands, and perhaps even tens of thousands, of
components. Today’s rapid prototyping, in other words, will shade into
tomorrow’s rapid manufacturing. Mr Fischer draws an analogy with the
development of document printers, which range from small, cheap devices
for home use to industrial printing presses capable of producing high-
quality glossy magazines.
Today’s largest and most expensive 3-D printing machines, capable of
directly producing complex plastic, metal and alloy components using
selective laser-sintering, are becoming increasingly popular in the
consumer-electronics, aerospace and carmaking industries. It is not just
their ability to make a small number of parts, without having to spread
the massive tool-up costs of traditional manufacturing across thousands of
items, that makes these machines useful. They can also be used to build
things in different ways, such as producing the aerodynamic ducting on a
jet-fighter as a single component, rather than assembling it from dozens
of different components, each of which has to be machined and tested.
Some 3-D printers can already be found in the workshops of artists and
enthusiasts. Jay Leno, an American television celebrity, bought a
Stratasys machine to help keep his large collection of old cars on the
road. He can scan a broken part that is no longer available into a
computer, or design a missing one from scratch, and then print out a copy
made of plastic. This can be fitted to a vehicle to check that the design
is correct. After any adjustments, a final plastic copy can either be used
by a machinist to make an exact copy from metal, or the model’s numerical
data can be fed directly into a computer-controlled milling machine. Mr
Leno’s 1907 White steam-driven car is now back on the road thanks to his
3-D printer.
Where now?
Many in the industry believe that low-cost 3-D printers for the consumer
market will eventually appear. 3D Systems launched a new model costing
less than $10,000 in May. That may sound a lot, but it is what laser
printers cost in the early 1980s, and they can now be had for less than
$100. Desktop Factory, a start-up based in Pasadena, California, hopes to
launch a 3-D printer for $4,995 that is around the same size as an early
laser printer.
Objet believes the way to the mass market is via inkjet technology, just
as it has been with 2-D printers. The ability to print different materials
with inkjet heads greatly increases not just model-making abilities but
production possibilities, too. The firm thinks it is getting close to
being able to print with engineering-quality plastics through inkjet
heads. “When we reach that point, it would allow us to go to short-term
manufacturing,” says Amit Shvartz, Objet’s head of marketing.
One of Z Corporation’s printers and (below) a finished model of a camcorder
As with 2-D printing, many individuals and small firms may not need
sophisticated machines, especially if they can use 3-D printing bureaus to
produce their more demanding digital creations. Some of these make-to-
order services are starting to appear. Z Corporation’s machines are being
used by companies to let players of video games, including “World of
Warcraft”, “Spore” and “Rock Band”, produce colourful, 3-D models of their
in-game characters, for example. “We are at that point where people are
looking at this technology and saying ‘We can make a business out of
that’,” says Scott Harmon, head of business development at Z Corporation.
Shapeways, a firm based in the Netherlands, lets users upload designs,
choose a construction material and get a production quote. It then turns
the design into an object with a 3-D printer and ships it to the customer.
3D Systems recently set up a joint venture called MQast, which is an
online provider of aluminium and stainless-steel parts produced using its
machines. And iKix, based in Chennai, India, has equipped itself with Z
Corporation machines and set up a chain of online service-bureaus to
produce architectural models, for delivery anywhere on earth.
Mr Wohlers thinks medical applications of 3-D printing also have a lot of
potential. It is already possible to print 3-D models from the digital
slices produced by computed-tomography scans. These can be used for
training, to explain procedures to patients and to help surgeons plan
complex operations. Some hospitals have started using 3-D printing to
produce custom-made metallic and plastic parts to be used as artificial
implants and in reconstructive surgery. “It is possible to deposit living
cells through inkjet printers onto a biodegradable scaffold,” adds Mr
Wohlers. “There are a lot of problems to overcome, like the creation of
blood vessels, but eventually I think we will see replacement body parts
being printed too.”
Meanwhile, what about making those trainers? A 3-D printer cheap enough to
do that at home is probably many years away. But customising a
standardised product by changing its outward appearance, like re-skinning
a mobile phone, would be easier. “You can do that pretty much with
existing technology,” says Mr Harmon. You could also make other simple but
useful things, like a missing piece for a broken toy. And you might even
make your own 3-D printer. The RepRap project, an open-source group based
at the University of Bath in England, has produced designs for a 3-D
printer which can be built for around $700, including royalty-free designs
that can be fed into the machine to produce the plastic parts needed to
create another RepRap machine. This could be fun for the mechanically
minded. Others might want to wait until the local hardware store buys a 3-D printer and begins to offer one-off manufacturing services on demand.
Keeping pirates at bay
Sep 3rd 2009
From The Economist print edition
Policing the internet: The music industry has concluded that lawsuits
alone are not the way to discourage online piracy
THREE big court cases this year—one in Europe and two in America—have
pitted music-industry lawyers against people accused of online piracy. The
industry prevailed in each case. But the three trials may mark the end of
its efforts to use the courts to stop piracy, for they highlighted the
limits of this approach.
The European case concerned the Pirate Bay, one of the world’s largest and
most notorious file-sharing hubs. The website does not actually store
music, video and other files, but acts as a central directory that helps
users locate particular files on BitTorrent, a popular file-sharing
network. Swedish police began investigating the Pirate Bay in 2003, and
charges were filed against four men involved in running it in 2008. When
the trial began in February 2009, they claimed the site was merely a
search engine, like Google, which also returns links to illegal material
in some cases. One defendant, Peter Sunde, said a guilty verdict would “be
a huge mistake for the future of the internet…it’s quite obvious which
side is the good side.”
The court agreed that it was obvious and found the four men guilty, fining
them a combined SKr30m ($3.6m) and sentencing them each to a year in jail.
Despite tough talk from the defendants, they appear to have tired of legal
entanglements: in June another firm said it would buy the Pirate Bay’s
internet address for SKr60m and open a legal music site.
The Pirate Bay is the latest in a long list of file-sharing services, from
Napster to Grokster to KaZaA, to have come under assault from the media
giants. If it closes, some other site will emerge to take its place; the
music industry’s victories, in short, are never final. Cases like this
also provoke a backlash against the music industry, though in Sweden it
took an unusual form. In the European elections in June, the Pirate Party
won 7.1% of the Swedish vote, making it the fifth-largest party in the
country and earning it a seat in the European Parliament. “All non-
commercial copying and use should be completely free,” says its manifesto.
So much for that plan
The Recording Industry Association of America (RIAA) has pursued another
legal avenue against online piracy, which is to pursue individual users of
file-sharing hubs. Over the years it has accused 18,000 American internet users of engaging in illegal file-sharing and demanded settlements of $4,000 on average. Facing the scary prospect of a federal copyright-
infringement lawsuit, nearly everyone settled; but two cases have
proceeded to trial. The first involved Jammie Thomas-Rasset, a single
mother from Minnesota who was accused of sharing 24 songs using KaZaA in
2005. After a trial in 2007, a jury ruled against her and awarded the
record companies almost $10,000 per song in statutory damages.
Critics of the RIAA’s campaign pointed out that if Ms Thomas-Rasset had
stolen a handful of CDs from Wal-Mart, she would not have faced such
severe penalties. The judge threw out the verdict, saying that he had
erred by agreeing to a particular “jury instruction” (guidance to the jury
on how they should decide a case) that had been backed by the RIAA. He
then went further, calling the damages “wholly disproportionate” and
asking Congress to change the law, on the basis that Ms Thomas-Rasset was
an individual who had not sought to profit from piracy.
But at a second trial, which concluded in June 2009, Ms Thomas-Rasset was
found guilty again. To gasps from the defendant and from other observers,
the jury awarded even higher damages of $80,000 per song, or $1.92m in
total. One record label’s lawyer admitted that even he was shocked. In
July, in a separate case brought against Joel Tenenbaum, a student at
Boston University, a jury ordered him to pay damages of $675,000 for
sharing 30 songs.
According to Steven Marks, general counsel for the RIAA, the main point of
pursuing these sorts of cases is to make other internet users aware that
file-sharing of copyrighted material is illegal. Mr Marks admits that the
legal campaign has not done much to reduce file-sharing, but how much
worse might things be, he wonders, if the industry had done nothing? This
year’s cases, and other examples (such as the RIAA’s attempt in 2005 to
sue a grandmother, who had just died, for file-sharing), certainly
generate headlines—but those headlines can also make the industry look
bad, even to people who agree that piracy is wrong.
That helps explain why, in late 2008, the RIAA abandoned the idea of suing
individuals for file-sharing. Instead it is now backing another approach
that seems to be gaining traction around the world, called “graduated
response”. This is an effort to get internet service-providers to play a
greater role in the fight against piracy. As its name indicates, it
involves ratcheting up the pressure on users of file-sharing software by
sending them warnings by e-mail and letter and then restricting their
internet access. In its strictest form, proposed in France, those accused
three times of piracy would have their internet access cut off and their
names placed on a national blacklist to prevent them signing up with
another service provider. Other versions of the scheme propose throttling
broadband-connection speeds.
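The escalation logic itself is simple enough to express as a sketch (illustrative Python; the levels and their order are assumptions, not any country's actual scheme):
```python
# A toy "graduated response" ladder: each fresh accusation against a
# subscriber moves them one rung up, capping at the harshest sanction.
RESPONSES = ["e-mail warning", "letter warning",
             "throttle connection", "suspend access"]

def next_sanction(accusations: int) -> str:
    """Map a subscriber's accusation count to the sanction applied."""
    rung = min(accusations, len(RESPONSES)) - 1
    return RESPONSES[rung]

for count in range(1, 5):
    print(f"accusation {count}: {next_sanction(count)}")
```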
All this would be much quicker and cheaper than going to court and does
not involve absurd awards of damages and their attendant bad publicity. A
British study found that most file-sharers will stop after receiving a
warning—but only if it is backed up by the threat of sanctions.
It sounds promising, from the industry’s perspective, but graduated
response has drawbacks of its own. In New Zealand the government scrapped
the idea before implementation, and in Britain the idea of cutting off
access has been ruled out. In France the first draft of the law was
savaged by the Constitutional Council over concerns that internet users
would be presumed guilty rather than innocent. Internet service-providers
are opposed to being forced to act as copyright police. Even the European
Parliament has weighed in, criticising any sanctions imposed without
judicial oversight. But the industry is optimistic that the scheme will be
implemented in some form. It does not need to make piracy impossible—just
less convenient than the legal alternatives.
But many existing sources of legal music have not offered what file-
sharers want. “In my view, growing internet piracy is a vote of no
confidence in existing business models,” said Viviane Reding, the European
commissioner for the information society, in July.
The industry is desperately searching for better business models, and is
offering its catalogue at low rates to upstarts that could never have
acquired such rights a decade ago. Services such as Pandora, Spotify and
we7 that stream free music, supported by advertising, are becoming
popular. Most innovative are the plans to offer unlimited downloads for a
flat fee. British internet providers are keen to offer such a service, the
cost of which would be rolled into the monthly bill. Similarly, Nokia’s
“Comes With Music” scheme includes a year’s downloads in the price of a
mobile phone. The music industry will not abandon legal measures against
piracy altogether. But solving the problem will require carrots as well as
sticks.
Tilting in the breeze
Sep 3rd 2009
From The Economist print edition
Energy: A novel design for a floating wind-turbine, which could reduce the
cost of offshore wind-power, has been connected to the electricity grid
Floating a new idea
FAR out to sea, the wind blows faster than it does near the coast. A
turbine placed there would thus generate more power than its inshore or
onshore cousins. But attempts to build power plants in such places have
foundered because the water is generally too deep to attach a traditional
turbine’s tower to the seabed.
One way round this would be to put the turbine on a floating platform,
tethered with cables to the seabed. And that is what StatoilHydro, a
Norwegian energy company, and Siemens, a German engineering firm, have
done. The first of their floating offshore turbines has just started a
two-year test period, generating about 1 megawatt of electricity—enough to
supply 1,600 households.
The Hywind is the first large turbine to be deployed in water more than 30 metres deep. The depth at the prototype’s location, 10 kilometres (six miles) south-west of Karmoy, is 220 metres. But the turbine is designed to operate in water up to 700 metres deep, meaning it could be put anywhere in the North Sea. Three cables running to the seabed prevent it from floating away.
It is an impressive sight. Its three blades have a total span of 82 metres and, together with the tower that supports them, weigh 234 tonnes. That makes the Hywind about the same size as a large traditional offshore turbine.
Even though it is tethered, and sits on a conical steel buoy, the motion of the sea causes the tower to sway slowly from side to side. This swaying places stress on the structure, and that has to be compensated for by a computer system that tweaks the pitch of the rotor blades to keep them facing in the right direction as the tower rocks and rolls to the rhythm of the waves. That both improves power production and minimises the strain on the blades and the tower. The software which controls this process is able to measure the success of previous changes to the rotor angle and use that information to fine-tune future attempts to dampen wave-induced movement.
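To make that control loop concrete, here is a minimal sketch (illustrative Python, not StatoilHydro's actual software): the controller pitches the blades against the measured sway and adjusts its gain according to how well the previous correction worked.
```python
# A toy adaptive controller for wave-induced sway: all numbers are
# illustrative. Positive sway means the tower leans one way; the
# correction pitches the rotor blades against the motion.
def pitch_controller(sway_readings, gain=1.0, learning_rate=0.1):
    """Yield a blade-pitch correction (degrees) for each tower-sway
    reading (degrees from vertical), tuning the gain as it goes."""
    previous_sway = None
    for sway in sway_readings:
        if previous_sway is not None:
            # If the sway grew despite the last correction, raise the
            # gain; if it shrank, relax it to avoid over-correcting.
            gain += learning_rate * (abs(sway) - abs(previous_sway))
        previous_sway = sway
        yield -gain * sway

for correction in pitch_controller([2.0, 1.5, -1.0, -0.4]):
    print(f"pitch correction: {correction:+.2f} degrees")
```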
If all works well, the potential is huge. Henrik Stiesdal of Siemens’s wind-power business unit reckons the whole of Europe could be powered using offshore wind, but that competition for space near the coast will make this difficult to achieve if only inshore sites are available. Siting turbines within view of coastlines causes conflicts with shipping, the armed forces, fishermen and conservationists. But floating turbines moored far out to sea could avoid such problems. That, plus the higher wind speeds that allow a deep-water turbine to generate much more power than a shallow-water one, makes the sort of technology the Hywind is pioneering an attractive idea.
One obvious drawback is that connecting deep-water turbines to the electrical grid will be expensive. But the biggest expense—the one that will make or break far-offshore wind power—will probably be maintenance. In deep seas, it will not be possible to use repair vessels that can jack themselves up on the seabed for stability, like the machines that repair shallow-water turbines. Instead maintenance will be possible only in good weather. If the Hywind turbine turns out to need frequent repairs, the cost of leaving it idle while waiting for fair weather, and of ferrying the necessary people and equipment to and fro, will outweigh the gains from generating more power. But if all goes according to plan, and the new turbine does not need such ministrations, it would put wind in the sails of far-offshore power generation.
Span of control
Sep 3rd 2009
From The Economist print edition
Engineering: A new generation of “smart” bridges uses sensors to detect structural problems and warn of impending danger
WHEN an eight-lane steel-truss-arch bridge across the Mississippi River in Minneapolis collapsed during the evening rush hour on August 1st 2007, 13 people were killed and 145 were injured. There had been no warning. The bridge was 40 years old but had a life expectancy of 50 years. The central span suddenly gave way after the gusset plates that connected the steel beams buckled and fractured, dropping the bridge into the river.
In the wake of the catastrophe, there were calls to harness technology to avoid similar mishaps. The St Anthony Falls bridge, which opened on September 18th 2008 and replaces the collapsed structure, should do just that. It has an embedded early-warning system made of hundreds of sensors. They include wire and fibre-optic strain and displacement gauges, accelerometers, potentiometers and corrosion sensors that have been built into the span to monitor it for structural weaknesses, such as corroded concrete and overly strained joints.
On top of this, temperature sensors embedded in the tarmac activate a system that sprays antifreeze on the road when it gets too cold, and a traffic-monitoring system alerts the Minnesota Department of Transportation to divert traffic in the event of an accident or overcrowding. The cost of all this technology was around $1m, less than 1% of the $234m it cost to build the bridge.
The new Minneapolis bridge joins a handful of “smart” bridges that have built-in sensors to monitor their health. Another example is the six-lane Charilaos Trikoupis bridge in Greece, which spans the Gulf of Corinth, linking the town of Rio on the Peloponnese peninsula to Antirrio on the mainland. This 3km-long bridge, which was opened in 2004, has roughly 300 sensors that alert its operators if an earthquake or high winds warrant it being shut to traffic, as well as monitoring its overall health. These sensors have already detected some abnormal vibrations in the cables holding the bridge, which led engineers to install additional weights as dampeners.
The next generation of sensors to monitor bridge health will be even more sophisticated. For one thing, they will be wireless, which will make installing them a lot cheaper.
Jerome Lynch of the University of Michigan, Ann Arbor, is the chief researcher on a project intended to help design the next generation of monitoring systems for bridges. He and his colleagues are looking at how to make a cement-based sensing skin that can detect excessive strain in bridges. Individual sensors, says Dr Lynch, are not ideal because the initial cracks in a bridge may not occur at the point the sensor is placed. A continuous skin would solve this problem. He is also exploring a paint-like substance made of carbon nanotubes that can be painted onto bridges to detect corrosion and cracks. Since carbon nanotubes conduct electricity, sending a current through the paint would help engineers to detect structural weakness through changes in the paint’s electrical properties.
The researchers are also developing sensors that could be placed on vehicles that regularly cross a bridge, such as city buses and police cars. These could measure how the bridge responds to the vehicle moving across it, and report any suspicious changes.
Some civil engineers are sceptical about whether such instrumentation is warranted. Emin Aktan, director of the Intelligent Infrastructure and Transport Safety Institute at Drexel University in Philadelphia, points out that although the sensors generate a huge amount of data, civil engineers simply do not know what happened in the weeks and days before a given bridge failed. It will take a couple of decades to arrive at a point when bridge operators can use such data intelligently, he predicts.
Meanwhile, the Obama administration’s stimulus plan has earmarked $27 billion for building and repairing roads and bridges. Just 1% of that would pay for a lot of sensors.
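As a rough illustration of how the conductive nanotube paint described above might be used in practice, consider this sketch (hypothetical Python; the baseline resistances and the 5% alarm threshold are assumptions):
```python
# A toy damage monitor: cracks or corrosion change the coating's
# electrical resistance, so a big drift from the healthy baseline
# flags a segment of the bridge for inspection.
def needs_inspection(resistance_ohms: float, baseline_ohms: float,
                     threshold: float = 0.05) -> bool:
    """Flag a segment whose coating resistance has drifted more than
    `threshold` (as a fraction) from its healthy baseline."""
    drift = abs(resistance_ohms - baseline_ohms) / baseline_ohms
    return drift > threshold

readings = {"north approach": (1002.0, 1000.0),   # (now, baseline), ohms
            "mid-span": (1093.0, 1000.0)}
for segment, (now, baseline) in readings.items():
    if needs_inspection(now, baseline):
        print(f"inspect {segment}: resistance has drifted from baseline")
```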
Keeping a grip
Sep 3rd 2009
From The Economist print edition
Transport: A new type of tyre, equipped with built-in sensors, can help
avoid a skid—and could also improve fuel-efficiency
FEW sensations of helplessness match that of driving a car that
unexpectedly skids. In a modern, well-equipped (and often expensive) car,
electronic systems such as stability and traction control, along with
anti-lock braking, will kick in to help the driver avoid an accident. Now
a new tyre could detect when a car is about to skid and switch on safety
systems in time to prevent it. It could also improve the fuel-efficiency
of cars to which it is fitted.
The Cyber Tyre, developed by Pirelli, an Italian tyremaker, contains a
small device called an accelerometer which uses tiny sensors to measure
the acceleration and deceleration along three axes at the point of contact
with the road. A transmitter in the device sends those readings to a unit
that is linked to the braking and other control systems.
The accelerometers in the Cyber Tyre contain two tiny structures, the
distance between which changes during acceleration, altering the
electrical capacitance of the device, which is measured and converted into
a voltage. Powered by energy scavengers that exploit the vibration of the
tyre, the device encapsulating the accelerometers and the transmitter is
about 2.5 centimetres in diameter and about the thickness of a coin.
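The principle can be sketched with a few lines of code and the standard parallel-plate formula C = ε₀A/d (illustrative Python; the geometry, compliance and charge figures are assumptions, not Pirelli's):
```python
# A toy capacitive accelerometer: acceleration narrows the gap between
# two plates, raising capacitance; at fixed charge the voltage V = Q/C
# falls by a measurable amount.
EPSILON_0 = 8.854e-12   # permittivity of free space, farads per metre

def capacitance(area_m2: float, gap_m: float) -> float:
    """Parallel-plate capacitance, C = epsilon_0 * A / d."""
    return EPSILON_0 * area_m2 / gap_m

def voltage_at(accel_ms2: float, charge_c: float = 1e-12,
               area_m2: float = 1e-6, rest_gap_m: float = 2e-6,
               compliance: float = 1e-9) -> float:
    """Voltage across the plates when acceleration flexes the gap;
    `compliance` is metres of deflection per m/s^2 (illustrative)."""
    gap = rest_gap_m - compliance * accel_ms2
    return charge_c / capacitance(area_m2, gap)

for g in (0.0, 1.0, 5.0):
    print(f"{g:.0f} g -> {voltage_at(9.81 * g):.4f} V")
```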
Constantly monitoring the forces that tyres are subjected to as they grip
the road could help reduce fuel consumption by optimising braking and
suspension. Moreover, it could promote the greater use of tyres with a low
rolling-resistance, which are often fitted to hybrid vehicles. These save
fuel by reducing the resistance between the tyre and the road but, to do
so, they have a reduced grip, especially in the wet. If fitted with
sensors, such tyres could be more closely monitored and controlled in
slippery conditions.
Pirelli believes its new tyre could be fitted to cars in 2012 or 2013, but
this will depend on getting carmakers to incorporate the necessary
monitoring and control systems into their vehicles. As with most
innovations, these are expected to be available in upmarket models first,
and cheaper cars later. But if the introduction in 1973 of Pirelli’s
steel-belted Cinturato radial tyre is any guide, devices that make cars
safer will be adopted rapidly.
Trappings of waste
Sep 3rd 2009
From The Economist print edition
Materials science: Plastic beads may provide a way to mop up radiation in
nuclear power-stations and reduce the amount of radioactive waste
They want us to drop beads into the cooling system?
NUCLEAR power does not emit greenhouse gases, but the technology does have
another rather nasty by-product: radioactive waste. One big source of
low-level waste is the water used to cool the core in the most common form
of reactor, the pressurised-water reactor. A team of researchers led by
Börje Sellergren of the University of Dortmund in Germany, and Sevilimedu
Narasimhan of the Bhabha Atomic Research Centre in Kalpakkam, India, think
they have found a new way to deal with it. Their solution is to mop up the
radioactivity in the water with plastic.
In a pressurised-water reactor, hot water circulates at high pressure
through steel piping, dissolving metal ions from the walls of the pipes.
When the water is pumped through the reactor’s core, these ions are
bombarded by neutrons and some of them become radioactive. The ions then
either settle back into the walls of the pipes, making the pipes
themselves radioactive, or continue to circulate, making the water
radioactive. Either way, a waste-disposal problem is created.
Because the pipes are steel, most of the ions are iron. When the commonest isotope of iron (⁵⁶Fe) absorbs a neutron, the result is not radioactive. The steel used in the pipes, however, is usually alloyed with cobalt to make it stronger. When common cobalt (⁵⁹Co) absorbs a neutron the result is ⁶⁰Co, which is radioactive and has a half-life of more than five years.
At present, nuclear engineers clean cobalt from the system by trapping it
in what are known as ion-exchange resins. These swap bits of themselves
for ions in the water flowing over them. Unfortunately, the ion-exchange
technique traps many more non-radioactive iron ions than radioactive
cobalt ones.
To overcome that problem Drs Sellergren and Narasimhan have developed a
polymer that binds to cobalt while ignoring iron. They made the material
using a technique called molecular imprinting, which involves making the
polymer in the presence of cobalt ions, and then extracting those ions by
dissolving them in hydrochloric acid. The resulting cobalt-sized holes
tend to trap any cobalt ions that blunder into them, with the result that
a small amount of the polymer can mop up a lot of radioactive cobalt.
The team is now forming the new polymer into small beads that can pass
through the cooling systems of nuclear power-stations. Concentrating
radioactivity into such beads for disposal would be cheaper than trying to
get rid of large volumes of low-level radioactive waste, according to Dr Narasimhan. He thinks that the new polymer could also be used to
decontaminate decommissioned nuclear power-stations where residual
radioactive cobalt in pipes remains a problem.
Nuclear power is undergoing a renaissance. Some 40 new nuclear power-
stations are being built around the world. The International Atomic Energy
Agency estimates that a further 70 will be built over the next 15 years,
most of them in Asia. That is in addition to the 439 reactors which are
already operating. So there will be plenty of work for the plastic beads,
if Drs Sellergren and Narasimhan can industrialise their process.
Air power
Sep 3rd 2009
From The Economist print edition
Energy: Batteries that draw oxygen from the air could provide a cheaper,
lighter and longer-lasting alternative to existing designs
MOBILE phones looked like bricks in the 1980s. That was largely because
the batteries needed to power them were so hefty. When lithium-ion
batteries were invented, mobile phones became small enough to be slipped
into a pocket. Now a new design of battery, which uses oxygen from ambient
air to power devices, could provide an even smaller and lighter source of power. Not only that, such batteries would be cheaper and would run for
longer between charges.
Lithium-ion batteries have two electrodes immersed in an electrically
conductive solution, called an electrolyte. One of the electrodes, the
cathode, is made of lithium cobalt oxide; the other, the anode, is
composed of carbon. When the battery is being charged, positively charged
lithium ions break away from the cathode and travel in the electrolyte to
the anode, where they meet electrons brought there by a charging device.
When electricity is needed, the anode releases the lithium ions, which
rapidly move back to the cathode. As they do so, the electrons that were
paired with them in the anode during the charging process are released.
These electrons power an external circuit.
Peter Bruce and his colleagues at the University of St Andrews in Scotland
came up with the idea of replacing the lithium cobalt oxide electrode with
a cheaper and lighter alternative. They designed an electrode made from
porous carbon and lithium oxide. They knew that lithium oxide forms
naturally from lithium ions, electrons and oxygen, but, to their surprise,
they found that it could also be made to separate easily when an electric
current passed through it. They exposed one side of their porous carbon
electrode to an electrolyte rich in lithium ions and put a mesh window on
the other side of the electrode through which air could be drawn. Oxygen
from the air took the place of the cobalt oxide.
When they charged their battery, the lithium ions migrated to the anode
where they combined with electrons from the charging device. When they
discharged it, lithium ions and electrons were released from the anode.
The ions crossed the electrolyte and the electrons travelled round the
external circuit. The ions and electrons met at the cathode, and combined
with the oxygen to form lithium oxide that filled the pores in the carbon.
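In simplified form, the discharge step just described balances as 4Li⁺ + 4e⁻ + O₂ → 2Li₂O. (This is a sketch of the implied overall reaction; practical lithium-air cells can also form lithium peroxide, Li₂O₂.)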
Because the oxygen being used by the battery comes from the surrounding
air, the device that Dr Bruce’s team has designed can be a mere one-eighth
to one-tenth the size and weight of modern batteries, while still carrying
the same charge. Making such a battery is also expected to be cheaper.
Lithium cobalt oxide accounts for 30% of the cost of a lithium-ion
battery. Air, however, is free.
The taxonomy of tumours
Sep 3rd 2009
From The Economist print edition
Medicine: A new technique aims to measure the activity of a tumour, and
could also help provide a new way to classify cancers
ONCOLOGISTS would like to be able to classify cancers not by whereabouts
in the body they occur, but by their molecular origin. They know that
certain molecules become active in tumours found in certain parts of the
body. Both head-and-neck cancers and breast cancers, for example, have an
abundance of molecules called epidermal growth-factor receptors (EGFRs).
Now a team from Cancer Research UK’s London Research Institute has taken a
step towards this goal. Their technique can already identify how advanced
a person’s cancer is, and thus how likely it is to return after treatment.
At present, pathologists assess how advanced a cancer is by taking a
sample, known as a biopsy, and examining the concentration within it of
specific receptors, such as EGFRs, that are known to help cancers spread.
Peter Parker had the idea of employing a technique called fluorescence
resonance-energy transfer (FRET), which is used to study interactions
between individual protein molecules, to see if he could find out not only
how many receptors there are in a biopsy, but also how active they are.
The technique uses two types of antibody, each attached to a fluorescent
dye molecule. Each of the two types is selected to bind to a different
part of an EGFR molecule, but one will do so only when the receptor has
become active.
Pointing a laser at the sample causes the first dye to become excited and
emit energy. With an activated receptor, the second dye will be attached
nearby and so will absorb some of the energy given off by the first.
Measuring how much energy is transferred between the two dyes indicates
the activity of the receptors.
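Turning those measurements into a number is straightforward; a common definition of FRET efficiency compares the first dye's output with and without the second dye nearby. A minimal sketch (illustrative Python with made-up intensities):
```python
# FRET efficiency by donor quenching: E = 1 - (I_DA / I_D), where I_DA
# is the first dye's intensity with the second dye present and I_D its
# intensity alone. Higher E implies more active, paired-up receptors.
def fret_efficiency(donor_with_acceptor: float, donor_alone: float) -> float:
    """Fraction of the first dye's energy captured by the second."""
    return 1.0 - donor_with_acceptor / donor_alone

biopsies = {"sample A": (420.0, 980.0),    # (I_DA, I_D), arbitrary units
            "sample B": (910.0, 1000.0)}
for name, (i_da, i_d) in biopsies.items():
    print(f"{name}: FRET efficiency {fret_efficiency(i_da, i_d):.2f}")
```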
Dr Parker’s idea was implemented by his colleague Banafshe Larijani. She
and her colleagues used FRET to measure the activity of receptors in 122
head-and-neck cancers. They found that the higher the activity of the
receptors they examined, the more likely it was the cancers would return
quickly following treatment. The technique was found to be a better
prognostic tool than conventional visual analysis of receptor density.
To speed things up, engineers in the same group have now created an
instrument that automates the analysis. Tumour biopsies are placed on a
microscope slide and stained with antibodies. The system then points the
laser at the samples, records images of the resulting energy transfer and
interprets those images to provide FRET scores. Results are available in
as little as an hour, compared with four or five days using standard
methods.
Having established the principle with head-and-neck cancer, the team hopes
to extend it. They are beginning a large-scale trial to see whether FRET
can accurately “hindcast” the clinical outcomes associated with 2,000
breast-cancer biopsies. Moreover, if patterns of receptor-activation for
other types of cancers can be characterised, the technique could be
applied to all solid tumours (ie, cancers other than leukaemias and
lymphomas).
If they succeed, it will be good news for researchers who want to switch
from classifying cancers anatomically to classifying them biochemically.
Most cancer specialists think that patients with tumours in different
parts of the body that are triggered by the same genetic mutations may
have more in common than those whose tumours are in the same organ, but
have been caused by different mutations. The new approach could help make
such classification routine. That could, in turn, create a new generation
of therapies and help doctors decide which patients should receive them,
and in which combinations and doses.
The digital geographers
Sep 3rd 2009
From The Economist print edition
The internet: Detailed digital maps of the world are in widespread use.
They are compiled using both high-tech and low-tech methods
IT IS a damp, overcast Monday morning in Watford, an undistinguished town
north of London that seems to offer little to the casual visitor. But one
man is eagerly snapping photographs. In fact, he is working with six
high-resolution cameras, all of which are attached to the roof of the car
in which he is being driven. He sits in the passenger seat with a keyboard
on his lap, tapping occasionally and muttering into a microphone. A
computer screen built into the dashboard shows the car’s progress as a
luminous dot travelling across a map of the town. The man is a geographic
analyst for NAVTEQ, one of a small group of companies that are creating
new, digital maps of the world.
Each keystroke he makes denotes a feature in the outside world that is
added to the map displayed on the screen. New details are also recorded in
audio form. Once the journey is finished, the analyst can also pick out
new details while watching a video playback. All this information is
transferred from a server in the car’s boot to NAVTEQ’s database.
Companies such as NAVTEQ and its rivals, which include Tele Atlas and
Microsoft, always start a new map by going to trusted sources such as
local governments or mapping organisations. This information can be
corroborated using aerial or satellite photography. Only when these
sources are exhausted do they switch to the more expensive process of
gathering data themselves. The digital maps they create are used mostly by
motorists in rich countries. But the same companies are now creating maps
of the developing world, which is requiring them to do things in somewhat
different ways.
A geographic analyst in India would probably have deserted his vehicle,
finding it impractical to manoeuvre on the country’s crowded urban
streets. Instead, he would go on foot and use a pen to annotate a map
printed on paper, a technique abandoned by his Western counterparts a
decade ago. Official mapmaking in some poor countries is far from
comprehensive, leaving the likes of NAVTEQ or Tele Atlas to generate the
most accurate maps available.
The type of data that must be gathered also varies. Navigation in wealthy
Western markets generally requires gathering the information that is of
most interest to motorists. But lower levels of car ownership in poor
countries make such information less relevant. Instead, the proliferation
of mobile phones in countries such as China or India, many of which
incorporate satellite-positioning chips, may make pedestrian navigation
more relevant for local customers. Mapmakers are more likely to spend time
hanging around bus stations collecting timetables, or finding the quickest
route, which is not always the most direct one, from a city’s railway
station to its main shopping street. All this information has to be
constantly refreshed, sometimes several times a year.
To reduce the cost of sending staff on such reconnaissance trips, mapping
companies are asking their customers to do more of the work. Tele Atlas,
for example, gathers data from users of satellite-navigation systems made
by TomTom, a firm based in the Netherlands. Drivers can report errors and
suggest new features, or can agree to submit data passively: the TomTom
device automatically logs their vehicle’s position, leaving a trail where
it has travelled. It is then possible to calculate the vehicle’s direction
and speed, which can help identify the class of road on which it is
travelling. Altitude measurements mean the road’s gradient can be
determined. Other information can also be deduced. If a lot of cars all
seem to be driving across what was thought to be a ploughed field, for
example, then it is likely that a new road has been built. Such detective
work keeps the company’s mapping database up to date.
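The passive-probe arithmetic is simple. A sketch (illustrative Python; the flat-earth approximation and the speed thresholds for road classes are assumptions):
```python
# From two timestamped GPS fixes, derive heading and speed, then make
# a crude guess at the class of road being driven.
import math

def heading_and_speed(lat1, lon1, t1_s, lat2, lon2, t2_s):
    """Approximate heading (degrees clockwise from north) and speed
    (km/h) between fixes; good enough over a few hundred metres."""
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = (lon2 - lon1) * 111.32 * math.cos(mean_lat)   # km east
    dy = (lat2 - lat1) * 110.57                        # km north
    heading = math.degrees(math.atan2(dx, dy)) % 360
    speed = math.hypot(dx, dy) / ((t2_s - t1_s) / 3600.0)
    return heading, speed

def guess_road_class(speed_kmh: float) -> str:
    if speed_kmh > 90:
        return "motorway"
    if speed_kmh > 50:
        return "rural road"
    return "urban street"

h, s = heading_and_speed(52.37, 4.89, 0, 52.38, 4.90, 60)
print(f"heading {h:.0f} deg, {s:.0f} km/h -> {guess_road_class(s)}")
```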
In some parts of the world, however, mapmaking relies heavily on voluntary
contributions. Google’s Map Maker service, for example, makes up for the
lack of map data for much of the world by asking volunteers to provide it.
Among its contributors is Tim Akinbo, a Nigerian software developer who
got involved with the project last year. He has mapped recognisable
features in Lagos, where he lives, as well as his home town of Jos.
Churches, banks, office buildings and cinemas all feature on his map.
His working method is relatively simple. His mobile phone does not have
satellite positioning, but he can use it to call up Google Maps, see what
is on the map in a particular area and make a note of things to add. He
then goes online when he gets home to add new features.
Why should people freely give up their time to improve local maps? Mr
Akinbo explains that local businesses could use Map Maker to alert
potential customers to their existence. “They will be contributing to a
tool from which other people can benefit, as well as themselves,” he
explains. With enough volunteers a useful map can be created without the
need for fancy camera-toting cars.
Washing without water
Sep 3rd 2009
From The Economist print edition
Environment: A washing machine uses thousands of nylon beads, and just a
cup of water, to provide a greener way to do the laundry
Water? Who needs it?
SYNTHETIC fibres tend to make low-quality clothing. But one of the properties that make nylon a poor choice of fabric for a shirt, namely
its ability to attract and retain dirt and stains, is being exploited by a
company that has developed a new laundry system. Its machine uses no more
than a cup of water to wash each load of fabrics and uses much less energy
than conventional devices.
The system developed by Xeros, a spin-off from the University of Leeds, in
England, uses thousands of tiny nylon beads each measuring a few
millimetres across. These are placed inside the smaller of two concentric
drums along with the dirty laundry, a squirt of detergent and a little
water. As the drums rotate, the water wets the clothes and the detergent
gets to work loosening the dirt. Then the nylon beads mop it up.
The crystalline structure of the beads endows the surface of each with an
electrical charge that attracts dirt. When the beads are heated in humid
conditions to the temperature at which they switch from a crystalline to
an amorphous structure, the dirt is drawn into the core of the bead, where
it remains locked in place.
The inner drum, containing the clothes and the beads, has a small slot in
it. At the end of the washing cycle, the outer drum is halted and the
beads fall through the slot; some 99.95% of them are collected.
Because so little water is used and the warm beads help dry the laundry,
less tumble drying is needed. An environmental consultancy commissioned by
Xeros to test its system reckoned that its carbon footprint was 40% smaller than that of the most efficient existing systems for washing and drying
laundry.
The first machines to be built by Xeros will be aimed at commercial
cleaners and designed to take loads of up to 20 kilograms. Customers will
still be able to use the same stain treatments, bleaches and fragrances
that they use with traditional laundry systems. Nylon may be nasty to
wear, but it scrubs up well inside a washing machine.
Hard act to follow
Sep 3rd 2009
From The Economist print edition
Environment: Making softwoods more durable could reduce the demand for
unsustainably logged tropical hardwoods
Kebony’s product is furfuryl
ONE of the reasons tropical forests are being cut down so rapidly is
demand for the hardwoods, such as teak, that grow there. Hardwoods, as
their name suggests, tend to be denser and more durable than softwoods.
But unsustainable logging of hardwoods destroys not only forests but also
local creatures and the future prospects of the people who live there.
It would be better to use softwood, which grows in cooler climes in
sustainably managed forests. Softwoods are fast-growing coniferous species
that account for 80% of the world’s timber. But the stuff is not durable
enough to be used outdoors without being treated with toxic preservatives
to protect it against fungi and insect pests. These chemicals eventually
wash out into streams and rivers, and the wood must be retreated.
Moreover, at the end of its life, wood that has been treated with
preservatives in this way needs to be disposed of carefully.
One way out of this problem would be an environmentally friendly way of
making softwood harder and more durable—something that a Norwegian company
called Kebony has now achieved. It opened its first factory in January.
Kebony stops wood from rotting by placing it in a vat containing a
substance called furfuryl alcohol, which is made from the waste left over
when sugarcane is processed. The vat is then pressurised, forcing the
liquid into the wood. Next the wood is dried and heated to 110°C. The heat
transforms the liquid into a resin, which makes the cell walls of the wood
thicker and stronger.
The approach is similar to that of a firm based in the Netherlands called
Titan Wood. Timber swells when it is damp and shrinks when it is dry
because it contains groups of atoms called hydroxyl groups, which absorb
and release water. Titan Wood has developed a technique for converting
hydroxyl groups into acetyl groups (a different combination of atoms) by
first drying the wood in a kiln and then treating it with a chemical
called acetic anhydride. The result is a wood that retains its shape in
the presence of water, and is no longer recognised as wood by grubs that
would otherwise attack it. It is thus extremely durable.
The processes used by both companies are environmentally friendly and yield wood that is completely recyclable and actually harder than most tropical hardwoods. The strengthened softwoods can be used in
everything from window frames to spas to garden furniture. Treated maple
is also being adopted for decking on yachts. The cost is similar to that
of teak, but the maple is more durable and easier to keep clean.
Obviously treating wood makes it more expensive. But because it does not
need to receive further treatments—a shed made from treated wood would not
need regular applications of creosote, for example—it should prove
economical over its lifetime. Kebony reckons that its pine cladding, for
example, would cost a third less than conventionally treated pine cladding
over the course of 40 years. Saving money, then, need not be at the
expense of helping save the planet.
Memories are made of this
Sep 3rd 2009
From The Economist print edition
Computing: Memory chips based on nanotubes and iron particles might be
capable of storing data for a billion years
FEW human records survive for long, the 16,000-year-old Paleolithic cave
paintings at Lascaux, France, being one exception. Now researchers led by
Alex Zettl of the University of California, Berkeley, have devised a
method that will, they reckon, let people store information electronically
for a billion years.
Dr Zettl and his colleagues constructed their memory cell by taking a
particle of iron just a few billionths of a metre (nanometres) across and
placing it inside a hollow carbon nanotube. They attached electrodes to
either end of the tube. By applying a current, they were able to shuttle
the particle back and forth. This provides a mechanism to create the “1”
and “0” required for digital representation: if the particle is at one end
it counts as a “1”, and at the other end it is a “0”.
The next challenge was to read this electronic information. The
researchers found that when electrons flowed through the tube, they
scattered when they came close to the particle. The particle’s position
thus altered the nanotube’s electrical resistance on a local scale.
Although they were unable to discover exactly how this happens, they were
able to use the effect to read the stored information.
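The scheme can be caricatured in a few lines of code (an illustrative model, not the Berkeley group's apparatus; the resistance numbers are invented):
```python
# A toy model of the shuttle-memory cell: the bit is the iron
# particle's position, written by an applied current and read back
# from the extra resistance the particle causes at its end of the tube.
class NanotubeCell:
    LENGTH_NM = 200.0            # particle travel between "0" and "1"

    def __init__(self):
        self.position_nm = 0.0   # start at the "0" end

    def write(self, bit: int) -> None:
        """Drive the particle to one end of the tube with a current."""
        self.position_nm = self.LENGTH_NM if bit else 0.0

    def read(self) -> int:
        """Electrons scatter near the particle, so resistance measured
        at the "1" end rises as the particle approaches it."""
        relative_resistance = 1.0 + self.position_nm / self.LENGTH_NM
        return 1 if relative_resistance > 1.5 else 0

cell = NanotubeCell()
cell.write(1)
assert cell.read() == 1
cell.write(0)
assert cell.read() == 0
```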
What makes the technique so durable is that the particle’s repeated
movement does not damage the walls of the tube. That is not only because
the lining of the tube is so hard; it is also because friction is almost
negligible when working at such small scales.
Theoretical studies suggest that the system should retain information for
a long time. To switch spontaneously from a “1” to a “0” would entail the
particle moving some 200 nanometres along the tube using thermal energy.
At room temperature, that is expected to happen only about once in a billion years. In tests, the stored digital information was found to be remarkably
stable. Yet the distance between the ends of the tube remains small enough
to allow for speedy reading and writing of the memory cell when it is in
use.
The next challenge will be to create an electronic memory that has
millions of cells instead of just one. But if Dr Zettl succeeds in
commercialising this technology, digital decay itself could become a thing
of the past.
Only humans allowed
Sep 3rd 2009
From The Economist print edition
Computing: Can online puzzles that force internet users to prove that they
really are human be kept secure from attackers?
ON THE internet, goes the old joke, nobody knows you’re a dog. This is
untrue, of course. There are many situations where internet users are
required to prove that they are human—not because they might be dogs, but
because they might be nefarious pieces of software trying to gain access
to things. That is why, when you try to post a message on a blog, sign up
with a new website or make a purchase online, you will often be asked to
examine an image of mangled text and type the letters into a box. Because
humans are much better at pattern recognition than software, these online
puzzles—called CAPTCHAs—can help prevent spammers from using software to
automate the creation of large numbers of bogus e-mail accounts, for
example.
Unlike a user login, which proves a specific identity, CAPTCHAs merely
show that “there’s really a human on the other end”, says Luis von Ahn, a
computer scientist at Carnegie Mellon University and one of the people
responsible for the ubiquity of these puzzles. Together with Manuel Blum,
Nicholas J. Hopper and John Langford, Dr von Ahn coined the term CAPTCHA
(which stands for “completely automated public Turing test to tell
computers and humans apart”) in a paper published in 2000.
But how secure are CAPTCHAs? Spammers stepped up their efforts to automate
the solving of CAPTCHAs last year, and in recent months a series of cracks
has prompted both Microsoft and Google to tweak the CAPTCHA systems that
protect their web-based mail services. “We modify our CAPTCHAs when we
detect new abuse trends,” says Macduff Hughes, engineering director at
Google. Jeff Yan, a computer scientist at Newcastle University, is one of
many researchers interested in cracking CAPTCHAs. Since the bad guys are
already doing it, he told a spam-fighting conference in Amsterdam in June,
the good guys should do it too, in order to develop more secure designs.
That CAPTCHAs work at all illuminates a failing in artificial-intelligence
research, says Henry Baird, a computer scientist at Lehigh University in
Pennsylvania and an expert in the design of text-recognition systems.
Reading mangled text is an everyday skill for most people, yet machines
still find it difficult.
The human ability to recognise text as it becomes more and more distorted
is remarkably resilient, says Gordon Legge at the University of Minnesota.
He is a researcher in the field of psychophysics—the study of the
perception of stimuli. But there is a limit. Just try reading small text
in poor light, or flicking through an early issue of Wired. “You hit a
point quite close to your acuity limit and suddenly your performance
crashes,” says Dr Legge. This means designers of CAPTCHAs cannot simply
increase the amount of distortion to foil attackers. Instead they must
mangle text in new ways when attackers figure out how to cope with
existing distortions.
Mr Hughes, along with many others in the field, thinks the lifespan of
text-based CAPTCHAs is limited. Dr von Ahn thinks it will be possible for
software to break text CAPTCHAs most of the time within five years. A new
way to verify that internet users are indeed human will then be needed.
But if CAPTCHAs are broken it might not be a bad thing, because it would
signal a breakthrough in machine vision that would, for example, make
automated book-scanners far more accurate.
CAPTCHA me if you can
Looking at things the other way around, a CAPTCHA system based on words
that machines cannot read ought to be uncrackable. And that does indeed
seem to be the case for ReCAPTCHA, a system launched by Dr von Ahn and his
colleagues two years ago. It derives its source materials from the
scanning in of old books and newspapers, many of them from the 19th
century. The scanners regularly encounter difficult words (those for which
two different character-recognition algorithms produce different
transcriptions). Such words are used to generate a CAPTCHA by combining
them with a known word, skewing the image and adding extra lines to make
the words harder to read. The image is then presented as a CAPTCHA in the
usual way.
If the known word is entered correctly, the unknown word is also assumed
to have been typed in correctly, and access is granted. Each unknown word
is presented as a CAPTCHA several times, to different users, to ensure
that it has been read correctly. As a result, people solving CAPTCHA
puzzles help with the digitisation of books and newspapers.
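The bookkeeping behind this scheme is simple enough to sketch. The
Python below is a minimal reconstruction of the logic as described here,
not ReCAPTCHA’s actual code; the three-vote consensus threshold is our
assumption.

    from collections import Counter

    votes = {}      # unknown-word image -> Counter of users' readings
    CONSENSUS = 3   # assumed number of agreeing answers required

    def check_answer(known_word, typed_known, unknown_id, typed_unknown):
        """Grant access on the known word; count the other as a vote."""
        if typed_known.strip().lower() != known_word.lower():
            return False                    # failed the control word
        readings = votes.setdefault(unknown_id, Counter())
        readings[typed_unknown.strip().lower()] += 1
        return True                         # access granted

    def digitised_word(unknown_id):
        """Return a transcription once enough users agree, else None."""
        readings = votes.get(unknown_id, Counter())
        if readings:
            word, count = readings.most_common(1)[0]
            if count >= CONSENSUS:
                return word
        return None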
Better still, the system has proved far more resistant to attacks
than other types of CAPTCHA. “ReCAPTCHA is virtually immune by design,
since it selects words that have resisted the best text-recognition
algorithms available,” says John Douceur, a member of a team at Microsoft
that has built a CAPTCHA-like system called Asirra. The ReCAPTCHA team has
a member whose sole job is to break the system, says Dr von Ahn, and so
far he has been unsuccessful. Whenever the in-house attacker appears to be
making progress, the team responds by adding new distortions to the
puzzles.
Even so, researchers are already looking beyond text-based CAPTCHAs. Dr
von Ahn’s team has devised two image-based schemes, called SQUIGL-PIX and
ESP-PIX, which rely on the human ability to recognise particular elements
of images. Microsoft’s Asirra system presents users with images of several
dogs and cats and asks them to identify just the dogs or cats. Google has
a scheme in which the user must rotate an image of an object (a teapot,
say) to make it the right way up. This is easy for a human, but not for a
computer.
The biggest flaw with all CAPTCHA systems is that they are, by definition,
susceptible to attack by humans who are paid to solve them. Teams of
people based in developing countries can be hired online for $3 per 1,000
CAPTCHAs solved. Several forums exist both to offer such services and to
parcel out jobs. But not all attackers are willing to pay even this small
sum; whether it is worth doing so depends on how much revenue their
activities bring in. “If the benefit a spammer is getting from obtaining
an e-mail account is less than $3 per 1,000, then CAPTCHA is doing a
perfect job,” says Dr von Ahn.
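That arithmetic can be restated in a couple of lines; the revenue
figures passed in below are purely illustrative.

    def worth_hiring_solvers(revenue_per_account, price_per_1000=3.0):
        # A spammer profits only if each bogus account brings in more
        # than the roughly $0.003 a human solver charges per CAPTCHA.
        return revenue_per_account > price_per_1000 / 1000.0

    print(worth_hiring_solvers(0.002))  # False: the CAPTCHA is working
    print(worth_hiring_solvers(0.05))   # True: paying humans pays off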
--
The road ahead
Sep 3rd 2009
From The Economist print edition
Consumer electronics: Your next satellite-navigation device will be less
bossy and more understanding of your driving preferences
DO YOU get a quiet sense of satisfaction in deviating from the route
recommended by your satellite-navigation device and ignoring its bossy
voice as it demands that you “make a U-turn” or “turn around when
possible”? A satnav’s encyclopedic knowledge of the road network may
justify its hectoring tone most of the time, but sometimes you really do
know better. The motorway might look like the fastest way but it can be a
nightmare at this time of the day; taking a country lane or a nifty
shortcut can avoid a nasty turn into heavy traffic; or sometimes the
chosen route is simply too boring.
Fortunately your next satnav will be more understanding, because it will
allow a greater level of personalisation. It may well, for example, try to
learn your motoring foibles, such as your favourite route into town. This
is just one of the features being readied for inclusion in the next
generation of devices. If you want them to, they will help you drive more
economically by offering the route that requires the least fuel, or
provide tips on how to adjust your driving style to be more frugal. Access
to real-time traffic information will also become more widespread.
Avoiding hold-ups is the most effective way a satnav can help a driver
save both time and fuel, and devices are getting better at doing this. By
taking data from special FM radio signals or via a built-in cellular-data
connection, satnavs can factor current traffic conditions into
route calculations. The actual traffic data can come from a variety of
sources including traffic sensors, the anonymous monitoring of mobile
phones moving along stretches of road and information collected (also
anonymously) from satnavs in other vehicles. Access to real-time data will
generally mean paying for a subscription, but it turns a navigation device
into a live information system. This makes it useful not just when you do
not know where you are going but also on familiar journeys, when you want
to know which of several possible routes you should take.
The classic motorway dilemma provides an example. An overhead sign gives
warning of an accident ahead. You could turn off now but you might then
get stuck in a busy town because so many other drivers are following the
same alternative route. Or you could stay on the motorway in the hope that
the tailback will soon clear—only to find that it has got worse. A satnav
that knows the average speeds on particular roads at different times of
the day, as many now do, does a good job of predicting which route is the
fastest under normal circumstances. But one that can also use real-time
data would be able to tell that the traffic on the alternative route, say,
is moving at a snail’s pace while vehicles near the site of the accident
are beginning to pick up speed, suggesting that the emergency services
have started clearing the road. So it could then advise you to stay on the
motorway.
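The comparison such a device makes can be sketched as follows. This is
our own illustration, not any vendor’s algorithm: each candidate route
is scored by summing segment travel times, using a live speed where the
traffic service supplies one and the historical average for that hour
otherwise.

    def travel_time_min(route, live_speeds, historical_speeds, hour):
        """route: a list of (segment_id, length_km); speeds in km/h."""
        hours = 0.0
        for segment, length_km in route:
            speed = live_speeds.get(segment)  # real-time reading, if any
            if speed is None:                 # otherwise the average
                speed = historical_speeds[segment][hour]  # for this hour
            hours += length_km / speed
        return 60.0 * hours

    def best_route(routes, live, hist, hour):
        return min(routes,
                   key=lambda r: travel_time_min(r, live, hist, hour))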
Keep on going
Journey planning using a satnav usually allows for a limited choice: you
can pick the fastest route, the shortest, the one that avoids motorways or
a route that passes through or avoids a particular point. Future devices
will learn about a driver’s preferences and adjust accordingly. MyDrive,
for example, is a piece of software developed by Journey Dynamics, a
British company, for satnav providers. It analyses the behaviour of an
individual driver on different types of road. Some people always prefer
motorways and drive quickly, others would much rather drive on local roads
and some like to keep moving even if that means a long detour around a
traffic jam. Understanding a driver’s foibles can ensure that the right
sort of route is chosen, and can also double the accuracy of the predicted
time of arrival, says John Holland, the company’s chief executive.
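MyDrive’s method is proprietary, but the general idea is easy to
sketch: learn, for each class of road, the ratio of the driver’s actual
speeds to the speeds the map assumes, then scale the predicted time of
arrival accordingly. Everything below is illustrative guesswork, not
Journey Dynamics’ code.

    from collections import defaultdict

    ratios = defaultdict(list)   # road class -> observed speed ratios

    def record_observation(road_class, actual_kmh, assumed_kmh):
        ratios[road_class].append(actual_kmh / assumed_kmh)

    def personal_factor(road_class):
        r = ratios[road_class]
        return sum(r) / len(r) if r else 1.0   # 1.0 until data arrives

    def personalised_eta_min(route):
        """route: a list of (road_class, length_km, assumed_kmh)."""
        hours = sum(km / (kmh * personal_factor(cls))
                    for cls, km, kmh in route)
        return 60.0 * hours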
Satnavs with built-in data connections are also becoming more widespread,
making other new things possible. TomTom, which is based in the
Netherlands, lets users of its systems update maps and add points of
interest. With two-way communication, satnavs no longer have to be taken
out of the car and plugged into a computer to update their maps. “The
screen becomes a connected computer in the car,” says Mark Gretton,
TomTom’s chief technology officer. He expects other companies to develop
software that can be downloaded by satnavs, just as small programs, or
apps, can be added to mobile phones.
Another trend is towards greater integration between the satnav and the
car’s other systems. Bosch, a German car-component company, is working on
a satnav that can give warning of a sharp bend ahead, for example. If the
car is being driven too fast, it can prepare the brakes to slow the
vehicle swiftly when the driver realises—or pretension the seat belts if
he does not.
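The trigger condition is schoolbook physics: cornering at speed v round
a bend of radius r demands a lateral acceleration of v²/r. The sketch
below flags a bend that is being approached too fast; the comfort limit
of 0.4g is our assumption, not Bosch’s.

    import math

    MAX_LATERAL = 0.4 * 9.81   # assumed comfortable limit, m/s^2

    def bend_too_fast(speed_kmh, bend_radius_m):
        """True if the car should slow (and the brakes be pre-charged)."""
        v = speed_kmh / 3.6                        # convert to m/s
        safe_v = math.sqrt(MAX_LATERAL * bend_radius_m)
        return v > safe_v

    print(bend_too_fast(100, 150))   # True: too fast for a 150m bend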
But such features are only possible with built-in satnav systems. These
can be far more convenient than portable units, but they also tend to be
much more expensive. Portable devices cost less and are easier to update,
but they often get stolen from cars. The distinction may be starting to
blur, however. Portable satnavs that plug into vehicle-information systems
are starting to appear. And TomTom has done a deal that allows its devices
to be specified as the built-in satnav in Renault cars.
All these innovations should give drivers more choice and flexibility.
There is still plenty of scope, it seems, for satnavs to learn new tricks.
--
Reality, improved
Sep 3rd 2009
From The Economist print edition
Computing: Thanks to mobile phones, augmented reality could be far more
accessible—and useful—than virtual reality
VIRTUAL reality never quite lived up to the hype. In the 1990s films such
as “The Lawnmower Man” and “The Matrix” depicted computer-generated worlds in
which people could completely immerse themselves. In some respects this
technology has become widespread: think of all those video-game consoles
capable of depicting vivid, photorealistic environments, for example. What
is missing, however, is a convincing sense of immersion. Virtual reality
(VR) doesn’t feel like reality.
One way to address this is to use fancy peripherals—gloves, helmets and so
forth—to make immersion in a virtual world seem more realistic. But there
is another approach: that taken by VR’s sibling, augmented reality (AR).
Rather than trying to create an entirely simulated environment, AR starts
with reality itself and then augments it. “In augmented reality you are
overlaying digital information on top of the real world,” says Jyri
Huopaniemi, director of the Nokia Research Centre in Tampere, Finland.
Using a display, such as the screen of a mobile phone, you see a live view
of the world around you—but with digital annotations, graphics and other
information superimposed upon it.
The data can be as simple as the names of the mountains visible from a
high peak, or the names of the buildings visible on a city skyline. At a
historical site, AR could superimpose images showing how buildings used to
look. On a busy street, AR could help you choose a restaurant: wave your
phone around and read the reviews that pop up. In essence, AR provides a
way to blend the wealth of data available online with the physical world—
or, as Dr Huopaniemi puts it, to build a bridge between the real and the
virtual.
AR, me hearties
It all sounds rather distant and futuristic. The idea of AR has, in fact,
been around for a few years without making much progress. But the field
has recently been energised by the ability to implement AR using advanced
mobile handsets, rather than expensive, specialist equipment. Several AR
applications are already available. Wikitude, an AR travel-guide
application developed for Google’s Android G1 handset, has already been
downloaded by 125,000 people. Layar is a general-purpose AR browser that
also runs on Android-powered phones. Nearest Tube, an AR application for
Apple’s iPhone 3GS handset, can direct you in London to the nearest
Underground station. Nokia’s “mobile augmented reality applications”
(MARA) software is being tested by staff at the world’s largest handset-
maker, with a public launch imminent.
What has made all this possible is the emergence of mobile phones equipped
with satellite-positioning (GPS) functions, tilt sensors, cameras, fast
internet connectivity and, crucially, a digital compass. This last item is
vital, and until recently it was the one bit of hardware that was missing
from the iPhone, says Philipp Breuss-Schneeweis of Mobilizy, the Austrian
software house which developed Wikitude. (A compass is standard on the
Android G1 handset.) But the launch of the compass-equipped iPhone 3GS
handset in June is expected to trigger a deluge of AR apps.
The combination of GPS, tilt sensors and a compass enables a handset to
determine where it is, its orientation relative to the ground, and which
direction it is being pointed in. The camera allows it to see the world,
and the wireless-internet link allows it to retrieve information relating
to its surroundings, which is combined with the live view from the camera
and displayed on the screen. All this is actually quite simple, says Mr
Breuss-Schneeweis. In the case of Wikitude, the AR software works out the
longitudes and latitudes of objects in the camera’s field of view so that
they can be tagged accordingly, he says.
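In essence the handset solves a small geometry problem: compute the
compass bearing from its own position to each landmark, then test
whether that bearing falls within the camera’s field of view. The sketch
below uses the standard great-circle bearing formula; the 55-degree
field of view is an assumed figure, not Wikitude’s.

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial compass bearing from the user to a landmark."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(p2)
        x = (math.cos(p1) * math.sin(p2)
             - math.sin(p1) * math.cos(p2) * math.cos(dlon))
        return math.degrees(math.atan2(y, x)) % 360.0

    def in_view(user_lat, user_lon, heading, lm_lat, lm_lon, fov=55.0):
        """True if a landmark lies within the camera's field of view."""
        b = bearing_deg(user_lat, user_lon, lm_lat, lm_lon)
        offset = (b - heading + 180.0) % 360.0 - 180.0   # signed offset
        return abs(offset) <= fov / 2.0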
Precisely which items in the real world are labelled varies from one AR
application to another. Wikitude, as its name implies, draws information
from Wikipedia, the online encyclopedia, by scouring it for entries that
list a longitude and latitude—a category that includes everything from the Lincoln
Memorial to the Louvre. Using the application a tourist can stroll through
the streets of a city and view the names of the landmarks in the vicinity.
The full Wikipedia entry on any landmark can then be summoned with a
click. There are 600,000 Wikipedia entries that include longitude and
latitude co-ordinates, says Mr Breuss-Schneeweis, and the number is
increasing all the time.
Another way to identify nearby landmarks is to draw upon existing
databases, such as those used in satellite navigation systems. That is how
Nokia’s MARA system works. It is doubly clever because it harvests local
points of interest from the NAVTEQ mapping software built into many Nokia
phones, so no wireless-internet connection is needed to look them up.
However it is done, the result of both approaches is to present detailed
information about the user’s surroundings. That said, the precision of the
tagging can vary somewhat, because satellite-positioning technology is
only accurate to within a few metres at best. This can cause problems when
standing very close to a landmark. “The farther you are away from the
buildings the more accurate it seems to be,” says Mr Breuss-Schneeweis.
But there is a way to improve the accuracy of AR tagging at close
quarters. Total Immersion, a firm based in Paris, is one of several
companies using object recognition. By looking for a known object in the
camera’s field of view, and then analysing that object’s position and
orientation, it can seamlessly overlay graphics so that they appear in the
appropriate position relative to the object in question.
Together with Alcatel-Lucent, a telecoms-equipment firm, Total Immersion
is developing a mobile-phone service that allows users to point their
phone’s camera at an object, such as the Mona Lisa. The software
recognises the object and automatically retrieves related information,
such as a video about Leonardo da Vinci. The same approach will also allow
advertisements in newspapers and on billboards to be augmented, too. Point
your camera at a poster of a car, for example, and you might see a 3-D
rendering of the vehicle floating in space, which can be viewed from any
angle by moving around.
Recognise this
The simplest way to make all this work, says Greg Davis of Total
Immersion, is to put 2-D bar-codes on posters and advertisements, which
are detected and used to retrieve content which is then superimposed on
the device’s screen. But the trend is towards “markerless” tracking, where
image recognition is used to identify targets. Putting a 2-D bar-code on
the Mona Lisa, after all, is not an option.
Nokia’s Point-and-Find software uses the markerless approach. It is a
mobile-phone application, currently in development, that lets you point
your phone at a film poster in order to call up local viewing times and
book tickets. In theory this approach should also be able to recognise
buildings and landmarks, such as the Eiffel Tower, although recognising
3-D objects is much more difficult than identifying static 2-D images, says
Mr Davis. The way forward may be to combine image-recognition with
satellite-positioning, to narrow down the possibilities when trying to
identify a nearby building. The advantage of the image-recognition
approach, says Mr Davis, is that graphics can be overlaid on something no
matter where it is, or how many times it gets moved.
One category of moving objects that should be easy to track is people, or
at least those carrying mobile phones. Information from social networks,
such as Facebook, can then be overlaid on the real world. Clearly there
are privacy concerns, but Latitude, a social-networking feature of Google
Maps, has tested the water by letting people share their locations with
their friends, on an opt-in basis. The next step is to let people hold up
their handsets to see the locations and statuses of their friends, says Dr
Huopaniemi, who says Nokia is working on this very idea.
As well as being able to see what your friends are up to now, it can be
useful to see into the past. Nokia has developed an AR system called Image
Space which lets users record messages, photos and videos and tag them
with both place and time. When someone else goes to a particular location,
they can then scroll back through the messages that people have left in
the vicinity. More practically, Wikitude can also link virtual messages to
real places by overlaying user-generated reviews of bars, hotels and
restaurants from a website called Qype onto the establishments in
question.
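Underneath, such systems amount to a geo- and time-tagged message
store. A toy version, which assumes nothing about Nokia’s or Wikitude’s
implementations, might look like this: each message carries a position
and a timestamp, and a query returns those left nearby, newest first.

    import math, time

    messages = []   # each entry: (lat, lon, unix_time, text)

    def leave_message(lat, lon, text):
        messages.append((lat, lon, time.time(), text))

    def messages_near(lat, lon, radius_m=100.0):
        """Messages left within radius_m of a spot, newest first."""
        def dist_m(la, lo):
            # Equirectangular approximation: fine over tens of metres.
            dx = math.radians(lo - lon) * math.cos(math.radians(lat))
            dy = math.radians(la - lat)
            return 6371000.0 * math.hypot(dx, dy)
        nearby = [m for m in messages if dist_m(m[0], m[1]) <= radius_m]
        return sorted(nearby, key=lambda m: m[2], reverse=True)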
Time for some strawberries, then
Other obvious uses for AR are turn-by-turn navigation, in which the route
to a particular destination is painted onto the world; house-hunting,
using AR to indicate which houses are for sale in a particular street; and
providing additional information at sporting events, such as biographies
of individual players and on-the-spot instant replays. Some of those
attending this year’s Wimbledon tennis tournament got a taste of things to
come with a special version of Wikitude, called Seer, developed for the
Android G1 handset in conjunction with IBM and Ogilvy, an advertising
agency. It could direct users to courts, restaurants and loos, provide
live updates from matches, and even show if there was a queue in the bar
or at the taxi rank.
These sorts of application really are just the beginning, says Dr
Huopaniemi. Virtual reality never really died, he says—it just divided
itself in two, with AR enhancing the real world by overlaying information
from the virtual realm, and VR becoming what he calls “augmented
virtuality”, in which real information is overlaid onto virtual worlds,
such as players’ names in video games. AR may be a relatively recent
arrival, but its potential is huge, he suggests. “It’s a very natural way
of exploring what’s around you.” But trying to imagine how it will be used
is like trying to forecast the future of the web in 1994. The building-
blocks of the technology have arrived and are starting to become more
widely available. Now it is up to programmers and users to decide how to
use them.
--
Attack of the drones
Sep 3rd 2009
From The Economist print edition
Military technology: Smaller and smarter unmanned aircraft are
transforming spying and redefining the idea of air power
FIVE years ago, in the mountainous Afghan province of Baghlan, NATO
officials mounted a show of force for the local governor, Faqir Mamozai,
to emphasise their commitment to the region. As the governor and his
officials looked on, Jan van Hoof, a Dutch commander, called in a group of
F-16 fighter jets, which swooped over the city of Baghlan, their
thunderous afterburners engaged. This display of air power was, says Mr
van Hoof, an effective way to garner the respect of the local people. But
fighter jets are a limited and expensive resource. And in conflicts like
that in Afghanistan, they are no longer the most widespread form of air
power. The nature of air power, and the notion of air superiority, have
been transformed in the past few years by the rise of remote-controlled
drone aircraft, known in military jargon as “unmanned aerial vehicles”
(UAVs).
Drones are much less expensive to operate than manned warplanes. The cost
per flight-hour of Israel’s drone fleet, for example, is less than 5% of the
cost of its fighter jets, says Antan Israeli, the commander of an Israeli
drone squadron. In the past two years the Israeli Defence Forces’ fleet of
UAVs has tripled in size. Mr Israeli says that “almost all” IDF ground
operations now have drone support.
Of course, small and comparatively slow UAVs are no match for fighter jets
when it comes to inspiring awe with roaring flyovers—or shooting down
enemy warplanes. Some drones, such as America’s Predator and Reaper, carry
missiles or bombs, though most do not. (Countries with “hunter-killer”
drones include America, Britain and Israel.) But drones have other
strengths that can be just as valuable. In particular, they are
unparalleled spies. Operating discreetly, they can intercept radio and
mobile-phone communications, and gather intelligence using video, radar,
thermal-imaging and other sensors. The data they gather can then be sent
instantly via wireless and satellite links to an operations room halfway
around the world—or to the hand-held devices of soldiers below. In
military jargon, troops without UAV support are “disadvantaged”.
The technology has been adopted at extraordinary speed. In 2003, the year
the American-led coalition defeated Saddam Hussein’s armed forces,
America’s military logged a total of roughly 35,000 UAV flight-hours in
Iraq and Afghanistan. Last year the tally reached 800,000 hours. And even
that figure is an underestimate, because it does not include the flights
of small drones, which have proliferated rapidly in recent years. (America
alone is thought to have over 5,000 of them.) These robots, typically
launched by foot soldiers with a catapult, slingshot or hand toss, far
outnumber their larger kin, which are the size of piloted aeroplanes.
Global sales of UAVs this year are expected to increase by more than 10%
over last year to exceed $4.7 billion, according to Visiongain, a market-
research firm based in London. It estimates that America will spend about
60% of the total. For its part, America’s Department of Defence says it
will spend more than $22 billion to develop, buy and operate drones
between 2007 and 2013. Following the United States, Israel ranks second in
the development and possession of drones, according to those in the
industry. The European leaders, trailing Israel, are roughly matched:
Britain, France, Germany and Italy. Russia and Spain are not far behind,
and nor, say some experts, is China. (But the head of an American navy
research laboratory in Europe says this is an underestimate cultivated by
secretive Beijing, and that China’s drone fleet is actually much larger.)
In total, more than three dozen countries operate UAVs, including Belarus,
Colombia, Sri Lanka and Georgia. Some analysts say Georgian armed forces,
equipped with Israeli drones, outperformed Russia in aerial intelligence
during their brief war in August 2008. (Russia also buys Israeli drones.)
Iran builds drones, one of which was shot down over Iraq by American
forces in February. The model in question can reportedly collect ground
intelligence from an altitude of 4,000 metres as far as 140km from its
base. This year Iranian officials said they had developed a new drone with
a range of more than 1,900km. Iran has supplied Hizbullah militants in
Lebanon with a small fleet of drones, though their usefulness is limited:
Hizbullah uses lobbed rather than guided rockets, and it is unlikely to
muster a ground attack that would benefit from drone intelligence. But
ownership of UAVs enhances Hizbullah’s prestige in the eyes of its
supporters, says Amal Ghorayeb, a Beirut academic who is an expert on the
group.
Eyes wide open
How effective are UAVs? In Iraq, the significant drop in American
casualties over the past year and a half is partly attributable to the
“persistent stare” of drone operators hunting for insurgents’ roadside
bombs and remotely fired rockets, says Christopher Oliver, a colonel in
the American army who was stationed in Baghdad until recently. “We stepped
it up,” he says, adding that drone missions will continue to increase, in
part to compensate for the withdrawal of troops. In Afghanistan and Iraq,
almost all big convoys of Western equipment or personnel are preceded by a
scout drone, according to Mike Kulinski of Enerdyne Technologies, a
developer of military-communications software based in California. Such
drones can stream video back to drivers and transmit electromagnetic
jamming signals that disable the electronic triggers of some roadside
bombs.
In military parlance, drones do work that would be “dull, dirty and
dangerous” for soldiers. Some of them can loiter in the air for long
periods. The Eagle-1, for example, developed by Israel Aerospace
Industries and EADS, Europe’s aviation giant, can stay aloft for more than
50 hours at a time. (France deployed several of these aircraft this year
in Afghanistan.) Such long flights help operators, assisted with object-
recognition software, to determine normal (and suspicious) patterns of
movement for people and vehicles by tracking suspects for two wake-and-
sleep cycles.
Drones are acquiring new abilities. New sensors that are now entering
service can make out the “electrical signature” of ground vehicles by
picking up signals produced by engine spark-plugs, alternators, and other
electronics. A Pakistani UAV called the Tornado, made in Karachi by a
company called Integrated Dynamics, emits radar signals that mimic a
fighter jet to fool enemies.
UAVs are hard to shoot down. Today’s heat-seeking shoulder-launched
missiles do not work above 3,000 metres or so, though the next generation
will be able to go higher, says Carlo Siardi of Selex Galileo, a
subsidiary of Finmeccanica in Ronchi dei Legionari, Italy. Moreover, drone
engines are smaller—and therefore cooler—than those powering heavier,
manned aircraft. In some of them the propeller is situated behind the
exhaust source to disperse hot air, reducing the heat signature. And
soldiers who shoot at aircraft risk revealing their position.
But drones do have an Achilles’ heel. If a UAV loses the data connection
to its operator—by flying out of range, for example—it may well crash.
Engineers have failed to solve this problem, says Dan Isaac, a drone
expert at Spain’s Centre for the Development of Industrial Technology, a
government research agency in Madrid. The solution, he and others say, is
to build systems which enable an operator to reconnect with a lost drone
by transmitting data via a “bridge” aircraft nearby.
Eyes in the sky, pilots on the ground
In late June America’s Northrop Grumman unveiled the first of a new
generation of its Global Hawk aircraft, thought to be the world’s fastest
drone. It can gather data on objects reportedly as small as a shoebox,
through clouds, day or night, for 32 hours from 18,000 metres—almost twice
the cruising altitude of passenger jets. After North Korea detonated a
test nuclear device in May, America said it would begin replacing its
manned U-2 spy planes in South Korea with Global Hawks, which are roughly
the size of a corporate jet.
Big drones are, however, hugely expensive. With their elaborate sensors,
some cost as much as $60m apiece. Fewer than 30 Global Hawks have been
bought. And it is not just the hardware that is costly: each Global Hawk
requires a support team of 20-30 people. As the biggest UAVs get bigger,
they are also becoming more expensive. Future American UAVs may cost a
third as much as the F-35 fighter jet (each of which costs around $83m,
without weapons). The Neuron, a jet-engine stealth drone developed by
France’s Dassault Aviation and partners including Italy’s Alenia, will be
about the size of the French manned Mirage fighter.
Small drones, by contrast, cost just tens of thousands of dollars. With
electric motors, they are quiet enough for low-altitude spying. But
batteries and fuel cells have only recently become light enough to open up
a large market. A fuel cell developed by AMI Adaptive Materials, based in
Ann Arbor, Michigan, exemplifies the progress made. Three years ago AMI
sold a 25-watt fuel cell weighing two kilograms. Today its fuel cell is
25% lighter and provides eight times as much power. This won AMI a
$500,000 prize from the Department of Defence. Its fuel cells, costing
about $12,000 each, now propel small drones.
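Expressed as specific power, the gain is easier to see; the sketch below
simply restates the article’s own numbers.

    old_w, old_kg = 25.0, 2.0              # three years ago: 25W at 2kg
    new_w, new_kg = 25.0 * 8, 2.0 * 0.75   # 8x the power, 25% lighter

    old_density = old_w / old_kg           # 12.5 W/kg
    new_density = new_w / new_kg           # about 133 W/kg
    print(f"{new_density / old_density:.1f}x more watts per kilogram")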
Most small drones are launched without airstrips and are controlled in the
field using a small computer. They can be recovered with nets, parachutes
or vertically strung cords that snag a wingtip hook, or just dropped onto
the ground after stalling a metre or two in the air. Their airframes break
apart to absorb the impact; users simply snap them back together.
With some systems, a ground unit can launch a drone for a quick bird’s-eye
look around with very little effort. Working with financing from Italy’s
defence ministry, Oto Melara, an Italian firm, has built prototypes of a
short-range drone launched from a vehicle-mounted pneumatic cannon. The
aircraft’s wings unfold upon leaving the tube. It streams back video while
flying any number of preset round-trip patterns. Crucially, operators do
not need to worry about fiddling with controls; the drone flies itself.
Send in the drones
Indeed, as UAVs become more technologically complex, there is also a clear
trend towards making their control systems easier to use, according to a
succession of experts speaking at a conference in La Spezia, Italy, held
in April by the Association for Unmanned Vehicle Systems International
(AUVSI), an industry association. For example, instead of manoeuvring
aircraft, operators typically touch (or click on) electronic maps to
specify points along a desired route. Software determines the best flight
altitudes, speeds and search patterns for each mission—say, locating a
well near a hilltop within sniping range of a road.
This is most certainly not a computer game
Next year Lockheed Martin, an American defence contractor, begins final
testing of software to make flying drones easier for troops with little
training. Called ECCHO, it allows soldiers to control aircraft and view
the resulting intelligence on a standard hand-held device such as an
iPhone, BlackBerry or Palm Pre. It incorporates Google Earth mapping
software, largely for the same reason: most recruits are already
proficient users.
What’s next? A diplomat from Djibouti, a country in the Horn of Africa,
provides a clue. He says private companies in Europe are now offering to
operate spy drones for his government, which has none. (Djibouti has
declined.) But purchasing UAV services, instead of owning fleets, is
becoming a “strong trend”, says Kyle Snyder, head of surveillance
technology at AUVSI. About 20 companies, he estimates, fly spy drones for
clients.
One of them, a division of Boeing called Insitu, sees a lucrative untapped
market in Afghanistan, where the intelligence needs of some smaller NATO
countries are not being met by larger allies. (Armed forces are often
reluctant to share their intelligence for tactical reasons.) Alejandro
Pita, Insitu’s head of sales, declines to name customers, but says his
firm’s flights cost roughly $2,000 an hour for 300 or so hours a month.
The drones-for-hire market is also expanding into non-military fields.
Services include inspecting tall buildings, monitoring traffic and
maintaining security at large facilities.
X marks the spot
Drone sales and research budgets will continue to grow. Raytheon, an
American company, has launched a drone from a submerged submarine. Mini
helicopter drones for reconnaissance inside buildings are not far off. In
China, which is likely to be a big market in the future, senior officials
have recently talked of reducing troop numbers and spending more money
developing “informationised warfare” capabilities, including unmanned
aircraft.
There is a troubling side to all this. Operators can now safely manipulate
battlefield weapons from control rooms half a world away, as if they were
playing a video game. Drones also enable a government to avoid the
political risk of putting combat boots on foreign soil. This makes it
easier to start a war, says P.W. Singer, the American author of “Wired for
War”, a recent bestseller about robotic warfare. But like them or not,
drones are here to stay. Armed forces that master them are not just
securing their hold on air superiority—they are also dramatically
increasing its value.
--
Hacking goes squishy
Sep 3rd 2009
From The Economist print edition
Biotechnology: The falling cost of equipment capable of manipulating DNA
is opening up a new field of “biohacking” to enthusiasts
MANY of the world’s great innovators started out as hackers—people who
like to tinker with technology—and some of the largest technology
companies started in garages. Thomas Edison built General Electric on the
foundation of an improved way to transmit messages down telegraph wires,
which he cooked up himself. Hewlett-Packard was founded in a garage in
California (now a national landmark), as was Google, many years later.
And, in addition to computer hardware and software, garage hackers and
home-build enthusiasts are now merrily cooking up electric cars, drone
aircraft and rockets. But what about biology? Might biohacking—tinkering
with the DNA of existing organisms to create new ones—lead to innovations
of a biological nature?
The potential is certainly there. The cost of sequencing DNA has fallen
from about $1 per base pair in the mid-1990s to a tenth of a cent today,
and the cost of synthesising the molecule has also fallen. Rob Carlson,
the founder of a firm called Biodesic, started tracking the price of
synthesis a decade ago. He found a remarkably steady decline, from over
$10 per base pair to, lately, well under $1. This decline
recalls Moore’s law, which, when promulgated in 1965, predicted the
exponential rise of computing power. Someday history may remember drops in
the cost of DNA synthesis as Carlson’s curve.
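Taken at face value, those figures imply steep exponential declines.
Assuming, roughly, a 14-year span for the sequencing numbers and a
ten-year span for synthesis (the article gives only approximate dates),
the implied halving times work out as follows.

    import math

    def halving_time_years(price_then, price_now, years):
        """Halving time implied by an exponential price decline."""
        return years * math.log(2) / math.log(price_then / price_now)

    # Sequencing: ~$1 per base pair in the mid-1990s to ~$0.001 today.
    print(halving_time_years(1.0, 0.001, 14))   # about 1.4 years
    # Synthesis: over $10 per base pair to under $1 in about a decade.
    print(halving_time_years(10.0, 1.0, 10))    # about 3 years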
A growing culture
And as the price falls, amateurs are wasting little time getting started.
Several groups are already hard at work finding ways to duplicate at home
the techniques used by government laboratories and large corporations. One
place for them to learn about biohacking is DIYbio, a group that holds
meetings in America and Britain and has about 800 people signed up for its
newsletter. DIYbio plans to perform experiments such as sending out its
members in different cities to swab public objects. The DNA thus collected
could be used to make a map showing the spread of micro-organisms.
Strictly, that is not really biohacking. But attempts to construct micro-
organisms that make biofuels efficiently certainly are—though it will be
impressive if a group of amateurs can succeed in cracking a problem that
is confounding many established companies. Amateur innovation,
nevertheless, is happening. When a science blog called io9 ran a
competition for biohackers, it received entries for modified
micro-organisms that, among other things, help rice plants process nitrogen
fertiliser more efficiently, measure the alcohol content of a person’s
breath and respond to commands from a computer.
The template for biohacking’s future may be the International Genetically
Engineered Machine (iGem) competition, held annually at the Massachusetts
Institute of Technology. This challenges undergraduates to spend a summer
building an organism from a “kit” provided by a gene bank called the
Registry of Standard Biological Parts. Their work is possible because the
kit is made up of standardised chunks of DNA called BioBricks.
As Jason Kelly, the co-founder of a gene-synthesis firm called Ginkgo
BioWorks, observes, there is no equivalent of an electrical engineer’s
diagram to help unravel what is going on in a cell. As he puts it, “what
the professionals can do in terms of engineering an organism is really
rudimentary. It’s really a tinkering art more than a predictable
engineering system.” BioBricks are, nevertheless, an attempt to provide
the equivalent of electronic components with known properties to the
field—and using them is part of Ginkgo’s business plan. Information on
BioBricks is kept public, helping the students understand which work
together best.
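The component analogy is concrete enough to code. The sketch below
treats BioBricks the way an engineer treats catalogued parts: typed
components with published sequences, composed in a prescribed order. It
illustrates the idea of standardisation only; real assembly is wet
chemistry, and the identifiers and sequences here are invented.

    from dataclasses import dataclass

    @dataclass
    class BioBrick:
        part_id: str    # registry-style identifier (invented here)
        kind: str       # "promoter", "rbs", "coding" or "terminator"
        sequence: str   # DNA bases, 5' to 3'

    def assemble(parts):
        """Join parts into one construct, enforcing a sensible order."""
        expected = ["promoter", "rbs", "coding", "terminator"]
        kinds = [p.kind for p in parts]
        if kinds != expected:
            raise ValueError(f"expected {expected}, got {kinds}")
        return "".join(p.sequence for p in parts)

    device = assemble([
        BioBrick("P1", "promoter",   "ttgaca"),
        BioBrick("R1", "rbs",        "aggagg"),
        BioBrick("C1", "coding",     "atggct"),
        BioBrick("T1", "terminator", "tttttt"),
    ])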
What the students actually create, however, is left to their imaginations.
And the results are often unexpected. A team from National Yang-Ming
University in Taiwan conceived a bacterium that can do the work of a
failed kidney; another, from Imperial College, London, worked on a
“biofabricator” capable of building other biological materials.
From relatively simple beginnings in 2003, iGem has grown to a competition
involving 84 teams and 1,200 participants, most of whom leave with enough
knowledge to do work at home. They are limited mainly by the novelty of
the pursuit. Although there are no laws banning the sale of DNA, reagents
or equipment, such items are usually priced for sale to large
institutions. Indeed, it is this problem of finding ways to manage without
expensive equipment, rather than a desire to work on “wetware”, or living
organisms, that motivates many biohackers.
Tito Jankowski, now a member of DIYbio, became interested in toolmaking
for biohackers after taking part in iGem with a team from Brown University
that had set itself the goal of modifying bacteria to detect lead in
water. After graduating, Mr Jankowski was interested in doing more, but
found his access to equipment restricted. He decided to create a cheaper
version of the gel-electrophoresis box, a basic tool used in a wide range
of experiments. Despite its simple construction, which can be as spare as
a few panes of coloured plastic over a heating element, a gel box can sell
for over $1,000. But according to Mr Jankowski, “this equipment is only
expensive because it has never been used for personal stuff before.”
Mr Jankowski likens the current state of biohacking to the years in which
amateurs first began working with personal computers, a metaphor that Dr
Kelly also uses. Computers were once both expensive and arcane. Today,
they are built mostly from off-the-shelf components, and even a relatively
non-technical person can assemble one. If hobbyists like Mr Jankowski can
help reduce the cost of equipment, say, tenfold, while BioBricks or
something similar become cheaper and more predictable, then the stage will
be set for a bioscience version of Apple or Google to be born in a
dormitory room or garage.
But what about viruses?
The computer metaphor, though, is a reminder that there is no shortage of
fools and criminals ready to construct viruses and other harmful computer
programs. If such people got interested in the biological world, the
consequences might be even more serious—because in biology, there is no
rebooting the machine.
More than any other detail of biohacking, this is the one that laymen
grasp. And the resulting fear can have unpleasant effects, as Steve Kurtz,
a professor of art at the State University of New York in Buffalo who
works with biological material, found out. In May 2004 he awoke to find
that his wife, Hope, was not breathing. The police who accompanied
paramedics to his home found Petri dishes used in his art displays, and
notified the Federal Bureau of Investigation (FBI), which brought in the
Department of Homeland Security and investigated him for bioterrorism. The
authorities claimed the body of his wife, who had died of congenital heart
failure, for examination. This took place over the protestations of Mr
Kurtz, his colleagues and the local commissioner of public health, all of
whom insisted that nothing in the exhibit could be harmful.
The initial reaction of the local police was hardly surprising. The
motives of the FBI, which has experts capable of examining Mr Kurtz’s art
scientifically, are harder to decode. After a grand jury refused to indict
Mr Kurtz, the bureau then pursued him with a mail-fraud charge carrying a
sentence of up to 20 years, which a judge dismissed this year. Mr Kurtz,
known for his anti-establishment art, may simply have become the target of
harassment for his views. But the FBI may genuinely be wary of biohackers;
rumour suggests it has followed up the case by discreetly instructing
reagent suppliers not to sell to individuals, despite the lack of any law
against their doing so.
So far legislators have shown little interest in regulating individuals.
When they choose to do so, it will not be easy. If groups such as DIYbio
are successful, the basic tools of biohacking will be both cheap to buy
and easy to construct at home. Many DNA sequences, including those for
harmful diseases, are already widely published, and can hardly be
retracted. The falling cost of DNA synthesis suggests that there will be
automated “printers” for the molecule before long. There are some
substances that can be controlled, like the reagents used to modify DNA.
But a strict government policy regulating the chemical components of
biohacking might have much the same effect as laws banning gun ownership—
ordinary citizens will be discouraged, while criminals will still find
what they want on black markets.
In all likelihood, the right way to regulate biohacking will not become
apparent for some time. But some people think that any regulation at all
could be harmful. Dr Carlson, who has a book on biohacking coming out
later this year, is a proponent of light regulation at most. “If you look
at our ability to respond to infectious diseases at this point in time,
we’re essentially helpless,” he says. “The quandary we face is that we
need the garage hackers, because that’s where innovation comes from.”
Freeman Dyson, a venerable and polymathic physicist who has been thinking
about the problem, is also a believer in biological innovation. He has
written about a variety of futuristic possibilities, including modified
trees that are better than natural ones at absorbing carbon dioxide, and
termites that can eat old cars. If regulation of biohacking is too tight,
such innovations—or, at least, things like them—might never come to pass.
--
3-D: It's nearly there
Sep 3rd 2009
From The Economist print edition
Three-dimensional imaging: New technologies that display 3-D visuals are
on the verge of spreading from cinemas into the wider world
BRIGHT and crisp high-definition (HD) images, a luxury not so long ago,
are fast becoming standard in consumer electronics. HD technology is now
well entrenched in the marketplace in the form of televisions, video
cameras, Blu-ray players, games consoles and projectors. There seems
little scope to improve the display of two-dimensional images, which
provide about as much detail as the human eye can appreciate. So attention
is shifting to the next frontier in display technology: three-dimensional
(3-D) images.
In recent years 3-D cinema projection has made a dramatic comeback,
shaking off its image as a gimmick and replacing the cheesy old red-and-
blue glasses with new technologies that are easier to use and produce more
lifelike results. Studios love 3-D because it is far harder to pirate. Cinemas
love 3-D because it allows them to offer something that even the most
elaborate home cinema cannot match, and charge more for it. Now 3-D seems
to be on the verge of moving out of the cinema and into a wider range of
products.
Would you look at that
Better and cheaper 3-D display technologies for home and office use are
“ready for prime time”, says a senior executive at Wistron, a Taiwanese
firm that manufactures computers for many leading brands. By the end of
this year the first mass-market laptops capable of displaying 3-D images
will be on sale, he says, and by the end of 2010 all of the world’s top
ten computer-makers will include 3-D displays in their product line-ups.
At the Consumer Electronics Show held in Las Vegas in January, prototype
3-D televisions and other products were unveiled by JVC, LG, Panasonic,
Samsung, Sony and others.
Such prototypes have been around for a few years, but they have recently
made rapid progress, and the industry is now stumbling towards agreement on
the necessary standards. Even without such standards, several firms plan
to launch 3-D products and services next year anyway. Beyond that, even
more elaborate technologies are under development that use holograms to
display 3-D images.
Creating images that appear to burst forth from a screen and invite you to
reach out and touch them is not easy. One way of doing so is to use
“stereoscopic” optical technologies, in which scenes are filmed from two
angles. When displayed, special eyewear then ensures that one perspective
is beamed exclusively to the right eye and the other to the left eye,
fooling the brain into thinking that it is looking at a 3-D scene. So-
called “autostereoscopic” 3-D systems do not require glasses. One approach
uses tiny lenses on the front of the display to direct images for the left
and right eyes in several different directions. Provided your head is in
the right place, and you keep it still, a 3-D image appears.
But building a 3-D display is only one piece of the puzzle: there must
also be 3-D content to show on it. A games console can be programmed to
produce separate images for left and right eyes relatively easily, but
most films and television programmes are not shot in 3-D. Now, however, it
is possible to convert existing video into 3-D automatically. DDD Group,
based in Santa Monica, California, makes a conversion chip, called TriDef,
that uses object-recognition software to analyse colours and shapes and
determine distances, inferring that, for example, the muzzle of a gun is
closer to the viewer than the shooter’s face. When the software is unsure
it does not add depth, says Chris Yewdall, DDD’s boss.
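Whatever DDD’s proprietary details, the final step of any such converter
is some form of depth-image-based rendering: shift each pixel sideways in
proportion to its inferred depth to synthesise the second eye’s view. A
one-scanline sketch, with an invented disparity budget:

    def synthesise_right_view(row, depths, max_disparity_px=8):
        """row: pixel values; depths: 0..1 per pixel, 1.0 = nearest."""
        out = [None] * len(row)
        for x, (pixel, depth) in enumerate(zip(row, depths)):
            shift = int(depth * max_disparity_px)   # nearer shifts more
            if 0 <= x - shift < len(out):
                out[x - shift] = pixel
        # Crude hole-filling: copy a neighbour into any gap left behind.
        for x in range(len(out)):
            if out[x] is None:
                out[x] = out[x - 1] if x > 0 else row[0]
        return out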
One of DDD’s customers is Samsung, a South Korean electronics giant, which
plans to launch 3-D television sets next year. DDD and its main
competitors—JVC in Japan and NVIDIA in California—are also developing 3-D
conversion technologies for computers. Acer, a Taiwanese manufacturer, is
expected to launch a laptop equipped with a 3-D conversion chip made by
DDD in October. (Its display will require users to wear special glasses.)
An alternative approach to creating 3-D images is based on holography. A
hologram is a special interference pattern created in a photosensitive
medium (which can be as simple as a traditional photographic film). Light
striking this pattern is scattered as though it were actually striking the
object encoded by the interference pattern. The pattern is usually created
by combining two laser beams, one of which has been bounced off the object
being displayed.
Holograms have many advantages over stereoscopic images. Not only is no
special eyewear needed, but also the images do not distort when observers
move. But producing a fixed hologram of a static object is tricky enough;
making a holographic display, or something that functions like one, is
even more difficult. One approach involves firing carefully orchestrated
pulses from an array of lasers at a sheet of glass scored with tiny
grooves; another, demonstrated by researchers at the University of
Southern California Graphics Lab, involves projecting high-speed video
onto a rapidly spinning mirror, so that the appropriate views of an object
are reflected in different directions. Such technology is still embryonic,
but several industries are interested in it.
Reach out and touch
Kolpi, a French company based in Sophia Antipolis, has devised a 3-D
display that will allow oil-exploration companies to direct their remotely
operated submarines. Video and sonar data from the submarine are displayed
as a volleyball-sized hologram. An operator can direct the robot by moving
a cursor around inside the hologram. The display is expected to cost
$140,000 when it goes on sale next year.
Almost close enough to touch: 3-D displays from Actuality Medical, SeeReal
and the University of Southern California Graphics Lab
Actuality Medical, based in Bedford, Massachusetts, hopes to improve
radiotherapy with a different type of 3-D display. At the moment doctors
“hope the patient doesn’t move” as they zap cancerous tissue with a beam
of radiation, says Gregg Favalora, the firm’s founder. Working with
Philips, a Dutch electronics company, Actuality Medical has built an early
version of a system that could limit damage to healthy tissue. Called
Perspecta, it graphically depicts a simulated beam of radiation shooting
through a hologram-like image of body tissue. This could eventually help
doctors redirect radiation as body parts move slightly during treatment.
The 3-D image is created by projecting about 6,000 images a second onto a
nearly transparent spinning disc some 25 centimetres (10 inches) across,
which forms a basketball-sized sphere.
Creating actual holograms—or images that resemble them, as Perspecta
does—requires enormous amounts of processing power. So far this has kept
images small: they are rarely bigger than a shoebox. To make them larger a
company called SeeReal, based in Luxembourg, has built systems that use
two eye-tracking cameras above a large 3-D display to follow the viewer’s
eyes. It is then necessary to generate only the parts of the hologram that
are relevant to the viewer’s position and direction of gaze, greatly
reducing the amount of processing required.
SeeReal reckons that the information needed to construct small holograms
can be carried over existing telecoms networks. That would allow
scientists working in different locations to examine the same object, for
example. Drugs companies, which are keen to improve co-operation between
researchers in different laboratories, could represent a lucrative market
for the technology within two years, SeeReal predicts.
Another obvious use for 3-D displays is videoconferencing. Accenture, a
consultancy and research firm, has equipped two non-adjacent rooms at its
research centre in Sophia Antipolis, France, with cameras so that a wall-
mounted screen in each one serves as a window into the other. It is now
using 3-D displays to allow people to “share” objects and data between the
two rooms. The result, says head researcher Kelly Dempski, is an
“extension” of each room into the other. As hologram and data-transmission
technologies improve over the next decade, the rooms will increasingly
meld together, he says.
Room with a view
Holografika, a company based in Budapest, hopes to realise this vision
even sooner. One of its products, HoloVizio, displays 3-D images that
“practically surround” users, says Peter Kovacs, the firm’s software
chief. Its customers include carmakers and oil-exploration companies.
Working with 13 companies and research institutions in America, Europe and
Japan, Holografika is developing a system that will use holographic laser
arrays, driven by data from about 100 video cameras, to replicate the
contents of one room in another. It is expected to cost about $500,000.
Another 3-D extension of videoconferencing is the Eyeliner holographic
projection system devised by Musion, a company based in London. It does
not actually use holograms, but projects high-definition video onto nearly
transparent screens made of very thin foil, in a modern updating of the
old “Pepper’s ghost” stage illusion. The effect, for viewers a few metres
away, is a lifelike, full-sized 3-D moving image of a person that appears
to float in space, without any visible screen.
Musion’s technology has been used by Al Gore, Bill Gates, Prince Charles
and many other celebrities to appear on stage at conferences without being
physically present. From televisions and laptop screens to operating
theatres and conference halls, 3-D in all its forms is suddenly being
taken much more seriously than it was just a few years ago.
--
Paranoid survivor
Sep 3rd 2009
From The Economist print edition
Andrew Grove, the former boss of Intel, believes other fields can learn
from the chipmaking industry that he helped bring into being
EARLIER this year Andrew Grove taught a class at Stanford Business School.
As a living legend in Silicon Valley and a former boss of Intel, the
world’s leading chipmaker, Dr Grove could have simply used the opportunity
to blow his own trumpet. Instead he started by displaying a headline from
the Wall Street Journal heralding the recent takeover of General Motors by
the American government as the start of “a new era”. He gave a potted
history of his own industry’s spectacular rise, pointing out that plenty
of venerable firms—with names like Digital, Wang and IBM—were nearly or
completely wiped out along the way.
Then, to put a sting in his Schumpeterian tale, he displayed a fabricated
headline from that same newspaper, this one supposedly drawn from a couple
of decades ago: “Presidential Action Saves Computer Industry”. A fake
article beneath it describes government intervention to prop up the ailing
mainframe industry. It sounds ridiculous, of course. Computer firms come
and go all the time, such is the pace of innovation in the industry. Yet
for some reason this healthy attitude towards creative destruction is not
shared by other industries. This is just one of the ways in which Dr Grove
believes that his business can teach other industries a thing or two. He
thinks fields such as energy and health care could be transformed if they
were run more like the computer industry—and made greater use of its
products.
Dr Grove may be 73 and coping with Parkinson’s disease, but his wit is
still barbed and his desire to provoke remains as strong as ever. Rather
than slipping off to a gilded retirement of golf or gallivanting, as many
other accomplished men of his age do, he is still spoiling for a fight.
His achievements mean that his provocations are worth paying attention to.
He has arguably done as much as anyone to usher in the age of cheap,
cheerful and ubiquitous personal computing. In part, he did this through
technological prowess. He graduated at the top of his engineering class at
New York’s City College (one of the few options available to him as a poor
Jewish refugee from Communist-controlled Hungary). He then went on to earn
a doctorate at the University of California at Berkeley, and wrote a book
on semiconductors that remains a standard text.
He joined Fairchild Semiconductor, once a pioneering electronics firm,
where he caught the eye of Robert Noyce and Gordon Moore. The former was a
co-inventor of the integrated circuit, while the latter coined Moore’s law
(which decrees, roughly, that the amount of computing power available at a
given price doubles every 18 months). When the two left Fairchild to found
Intel in 1968—initially to make memory chips, not microprocessors—they
took the young Dr Grove with them. He eventually ended up in charge of the
company, becoming chief executive in 1987. He continued in that role until
1998, when he became chairman, holding that post until 2004.
Though his scientific credentials are solid, he will probably be best
remembered as a daring and successful businessman. Richard Tedlow, a
historian at Harvard Business School, calls him “one of the master
managers in the history of American business”. One reason is market
success: during his tenure, Intel came to dominate the microprocessor
industry and its market capitalisation rocketed (making it, at one point,
the world’s most valuable company). A bigger reason, though, lies in how
exactly he managed to steer Intel to such spectacular success.
Intelligence inside
Two particularly risky decisions he took are revealing. In “Only the
Paranoid Survive”, Dr Grove’s bestselling book, he argues that every
company will face a confluence of internal and external forces, often
unanticipated, that will conspire to make an existing business strategy
unviable. In Intel’s case, such a “strategic inflection point” arose
because its memory-chip business came under heavy assault from new
Japanese rivals willing to undercut any price Intel offered.
What could he do? The firm’s roots and most of its profits lay in making
memory chips; Intel’s microprocessor group was just a small niche. The
firm’s two founders and much of its engineering staff were too emotionally
wedded to its past successes to make a break. But Dr Grove decided to bet
the future of the company on microprocessors, a move that saved his
company and transformed the industry.
The second big decision was Dr Grove’s radical announcement that Intel
would market its microchips directly to consumers. Previously, chipmakers
had regarded computer-makers such as Dell and Compaq as their customers,
and had not bothered with fancy advertising campaigns to end users. But Dr
Grove believed that such a relationship allowed these assembly and
marketing firms, which did little original research of their own, to
capture too much of the value created by his firm’s innovation.
So he launched the “Intel Inside” campaign, which marketed microprocessor
chips directly to consumers, starting in 1991. This incensed his rivals
and his immediate customers, the computer-makers, but the strong demand
for Intel’s new Pentium chip showed that the strategy had worked. True,
the firm stumbled when a minor flaw was discovered in the Pentium that
affected some mathematical calculations. Rather than rush to correct the
problem, Intel tried to downplay it—a strategy that quickly turned into a
public-relations disaster. The firm was forced to offer a replacement for
all affected chips, at a cost of nearly half a billion dollars.
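The flaw in question was the Pentium's famous FDIV bug of 1994, in which
the chip's floating-point divider returned slightly wrong answers for
certain pairs of operands. As an illustration (the operands below are the
widely reported test case, not something from the article), the
discrepancy could be exposed with a one-line check:

    # Widely reported Pentium FDIV test case (illustrative). On a flawed
    # chip, 4195835/3145727 came back as about 1.33373907 rather than the
    # correct 1.33382045, so this expression evaluated to roughly 256.
    x, y = 4195835.0, 3145727.0
    print(x - (x / y) * y)  # 0.0 (or negligibly small) on a correct FPU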
Painful though that was, Dr Grove now thinks this episode actually
benefited the firm in two ways. First, it proved to internal sceptics that
Intel really had become a consumer brand. Second, he reckons that it
bolstered his efforts to improve the shoddy quality of manufacturing, to
protect the firm from future fiascos. In hindsight, his risky decision to
turn Intel from a component-maker into a consumer brand was a
masterstroke.
An American success story
Some observers have suggested that it was his family’s escape from the
Nazis, and his own experience of the abuses of communism, that shaped Dr
Grove’s strict management style. On this view, his demanding but
meritocratic approach, rewarding ideas and knowledge over power, was a
rejection of the injustices of communism.
Dr Grove, however, insists that it was his experience at City College,
where talent and hard work were rewarded and where students challenged
their professors without concern for rank, that impressed upon him the
value of meritocracy. By contrast, he recalls an elitist, back-stabbing
and lax corporate culture at Fairchild. Senior executives would stroll
into the office or into meetings as late as they pleased, but blue-collar
workers were penalised or even fired if they committed similar offences.
When he took control of Intel Dr Grove imposed a strict arrival time of
8am, with latecomers forced to sign a sheet. He also refused to go along
with popular management trends such as flexi-time and teleworking. He was
known as a blunt and demanding manager, but he also gained a reputation as
a fair-minded boss who rewarded good ideas, no matter where they came
from.
Asked today if he regrets imposing his disciplinarian personality on his
company, he makes a confession: “You don’t understand—I was never that
disciplined myself, and I’m not even a morning person!” He was determined
to impose discipline on Intel, he says, for two reasons that ultimately
worked to the firm’s advantage. First, he wanted to avoid the outrageous
double standards he had experienced at Fairchild. The meritocratic culture
he created at Intel then helped it attract the best talent in the
industry. Second, he knew that strong discipline would also be necessary
to improve his firm’s shoddy manufacturing.
At the time the microchip business was producing such unreliable products
that customers insisted that companies like Intel always license new
products to a secondary supplier to ensure reliability of supply. His
efforts to tighten up quality control led to a commercial coup. When his
firm introduced its widely anticipated 386 processor, he stunned the
industry by declaring that Intel would not license any secondary
manufacturers. This was a huge risk for computer-makers, but such was
their appetite for the new chip that they bought it anyway. Intel’s
ability to deliver good enough chips in large numbers meant profits no
longer had to be shared with secondary manufacturers.
With his reputation for ruthlessness in the marketplace and rigorous
discipline inside his firm, Dr Grove has much in common with another
American business leader: Lee Raymond, the formidable former chairman of
Exxon Mobil. Both men were feared by rivals and by many of their own
employees. Dr Grove once even spearheaded a sales campaign against a
superior chip made by Motorola in an effort dubbed “Operation Crush”. When
asked about such bully-boy tactics, Dr Grove remains unrepentant. He even
likes the comparison with the unloved oilman: “I never knew Lee Raymond,
but he did take Exxon to the top of the Fortune 500—and that’s OK with
me.”
Personal admiration aside, however, Dr Grove is convinced that Exxon and
its Big Oil brethren are in a sunset industry. He has written and lectured
widely on energy and environmental topics in recent years, arguing that
oil and cars are heading for a divorce. He regards electricity as the most
promising replacement fuel, and thinks battery technology has the
potential to produce an Intel-like giant as the industry develops.
Another business he believes to be ripe for disruption is health care. He
complains that the industry seems to innovate much too slowly. The lack of
proper electronic medical records and smart “clinical decision systems”
bothers him, as does the slow-moving, bureaucratic nature of clinical
trials. He thinks pharmaceutical firms should study the fast “knowledge
turns” achieved by chipmakers, so that the cycles of learning and
innovation are accelerated. (A knowledge turn, a term coined by Dr Grove,
is the time it takes for an experiment to proceed from hypothesis to
results, and then to a new hypothesis—around 18 months in chipmaking, but
10-20 years in medicine.)
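To make that arithmetic concrete (our illustration, using the figures Dr
Grove cites): a 30-year career in chipmaking allows around 20 complete
learning cycles, while the same career in medicine allows only two or
three. A minimal sketch:

    # Illustrative only: complete hypothesis-to-results cycles that fit
    # into a career, given the knowledge-turn times cited above.
    def knowledge_turns(career_years, turn_years):
        """Number of complete learning cycles in a career."""
        return career_years / turn_years

    print(knowledge_turns(30, 1.5))   # chipmaking: ~20 cycles
    print(knowledge_turns(30, 15.0))  # medicine (mid-range): 2 cycles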
And what of chipmaking—is it, too, a sunset industry ripe for disruption?
Dr Grove still believes in Moore’s law (with the caveat that it will get
ever pricier for chipmakers to uphold) but he has a grave concern. At a
recent ceremony honouring his achievements, he shocked the gathered
bigwigs by declaring that the industry’s approach to hoarding patents was
an abuse of intellectual-property rights and risked undermining its
future. Asked to defend that claim, which upset even his own family
members, he does not backtrack. He insists that firms must use their
patents or lose them: “You can’t just sit on your ass and give everyone
the finger.” Even though Dr Grove is no longer running Intel, it seems
that his desire to shake things up is undimmed.