nettime's smart reader on Thu, 31 Oct 2013 13:26:20 +0100 (CET)



<nettime> Anthony Townsend: What if the smart cities of the future are chock full of bugs?


Original to:
http://places.designobserver.com/feature/smart-cities-buggy-and-brittle/38111/
(with smart pics)

[bwo WaiWai, with thanks.]

Anthony Townsend
What if the smart cities of the future are chock full of bugs?


Calafia Café in Palo Alto is one of the smartest eateries in the world.
With Google's former executive chef Charlie Ayers at the helm, the food
here isn't just for sustenance. This is California -- eating is also a
path to self-improvement. Each dish is carefully crafted with
ingredients that not only keep you slim, but make you smarter and more
energized too. A half-dozen venture capitalists pick at their dandelion
salads. A sleepy suburb at night, by day Palo Alto becomes the beating
heart of Silicon Valley, the monied epicenter of the greatest gathering
of scientific and engineering talent in the history of human
civilization. To the west, across the street, lies Stanford University.
The Googleplex sprawls a few miles to the east. In the surrounding
region, some half-million engineers live and work. A tech tycoon or two
wouldn't be out of place here. Steve Jobs was a regular.

Excusing myself to the men's room, however, I discover that Calafia
Café has a major technology problem. Despite the pedigree of its
clientele, the smart toilet doesn't work. As I stare hopefully at the
stainless steel throne, a red light peering out from the small black
plastic box that contains the bowl's "brains" blinks at me fruitlessly.
Just above, a sign directs an escape path. "If sensor does not work,"
it reads, "use manual flush button." And so I bail out, sidestepping
fifty years of progress in computer science and industrial engineering
in the blink of an eye.

Back at my table, I try to reverse-engineer the model of human-waste
production encoded in the toilet's CPU. I imagine a lab somewhere in
Japan. Technicians in white lab coats wield stopwatches as they
methodically clock an army of immodest volunteers seated upon row after
row of smart johns. The complexity of the problem becomes clear. Is it
supposed to flush as soon as you stand up? Or when you turn around? Or
pause for a fixed amount of time? But how long? Can it tell if you need
another flush? It's not quite as challenging an engineering task as
putting a man on the moon, or calculating driving directions to the
airport. Somehow, though, that stuff works every time.
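
To make the guessing game concrete, here is a toy sketch, in Python, of
the kind of state machine such a sensor might encode. Every threshold
is invented for illustration; the real firmware is proprietary.

    # Toy auto-flush logic: one presence sample per second
    # (True = seat occupied). Thresholds are made up.
    def should_flush(samples, min_occupied_s=10, clear_s=3):
        """Flush only after the seat was occupied long enough and has
        then read clear for several consecutive seconds -- a debounce
        against someone merely leaning forward."""
        occupied = 0  # seconds the sensor has read "occupied"
        clear = 0     # consecutive "clear" seconds since then
        for sample in samples:
            if sample:
                occupied += 1
                clear = 0
            elif occupied >= min_occupied_s:
                clear += 1
                if clear >= clear_s:
                    return True
        return False

    # Sitting 15 s and stepping away flushes; leaning forward does not.
    print(should_flush([True] * 15 + [False] * 5))               # True
    print(should_flush([True] * 15 + [False] * 2 + [True] * 5))  # False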

My bewilderment quickly yields to a growing sense of dread. How is it
that even in the heart of Silicon Valley it's completely acceptable for
smart technology to be buggy, erratic, or totally dysfunctional?
Someone probably just cured cancer in the biotechnology lab across the
street and is here celebrating over lunch. Yet that same genius will
press the manual flush button just as I did, and never think twice
about how consistently this new world of smart technology is letting us
down. We are weaving these technologies into our homes, our
communities, even our very bodies -- but even experts have become
disturbingly complacent about their shortcomings. The rest of us rarely
question them at all.

I know I should stop worrying, and learn to love the smart john. But
what if it's a harbinger of bigger problems? What if the seeds of smart
cities' own destruction are already built into their DNA? I've argued
that smart cities are a solution to the challenges of 21st-century
urbanization, that despite potential pitfalls, the benefits outweigh
the risks, especially if we are aggressive about confronting the
unintended consequences of our choices. But in reality we've only
scratched the surface.

What if the smart cities of the future are buggy and brittle? What are we
getting ourselves into?

A few weeks later, I found myself wandering around the MIT campus in
Cambridge, Massachusetts, with nary a thought about uncooperative toilets
in mind. Strolling west from Kenmore Square, a few minutes later I came
across the new home of the Broad Institute, a monolith of glass and steel
that houses a billion-dollar center for research in genomic medicine. The
street wall was tricked out with an enormous array of displays showing in
real time the endless sequences of DNA base pairs being mapped by the
machinery upstairs.

And then, out of the corner of my eye, I saw it. The Blue Screen of
Death, as the alert displayed by Microsoft Windows following an
operating-system crash is colloquially known. Forlorn, I looked through
the glass at the lone panel. Instead of the stream of genetic
discoveries, a meaningless string of hexadecimals stared back,
indicating precisely where, deep in the core of some CPU, a lone
miscomputation had occurred. Just where I had hoped to find a historic
fusion of human and machine intelligence, I'd found yet another bug.

The term "bug," derived from the old Welsh bwg (pronounced "boog"), has
long been used as slang for insects. But the appropriation of the term
to describe technical failings dates to the dawn of the
telecommunications age. The first telegraphs invented in the 1840s used
two wires, one to send and one to receive. In the 1870s, duplex
telegraphs were developed, permitting messages to be sent
simultaneously in both directions over a single wire. But sometimes
stray signals would come down the line, which were said to be "bugs" or
"buggy." [1] Thomas Edison himself used the expression in an 1878
letter to Puskás Tivadar, the Hungarian inventor who came up with the
idea of a telephone exchange that allowed individual lines to be
connected into a network for the first time. [2] According to an early
history of Edison's own quadruplex, an improved telegraph that could
send two signals in each direction, by 1890 the word had become common
industry parlance. [3]

The first documented computer bug, however, was an actual insect. In
September 1947, Navy researchers working with professors at Harvard
University were running the Mark II Aiken Relay Calculator through its
paces when it suddenly began to miscalculate. Tearing open the
primitive electromechanical computer, they found a moth trapped in one
of its relays. On a website maintained by Navy historians, you can
still see a photograph of the page from the lab notebook where someone
carefully taped the moth down, methodically adding an annotation:
"First actual case of bug being found." [4] As legend has it, that
person was Grace Hopper, a programmer who would go on to become an
important leader in computer science. (Hopper's biographer, however,
disputes this was the first time "bug" was used to describe a
malfunction in the early development of computers, arguing "it was
clear the term was already in use.") [5]

Since that day, bugs have become endemic in our digital world, the result
of the enormous complexity and ruthless pace of modern engineering. But
how will we experience bugs in the smart city? They could be as isolated
as that faulty toilet or a crashed public screen. In 2007 a Washington
Metro rail car caught fire after a power surge went unnoticed by buggy
software designed to detect it. [6] Temporarily downgrading back to the
older, more reliable code took just 20 minutes per car while engineers
methodically began testing and debugging.

But some bugs in city-scale systems will ripple across networks with
potentially catastrophic consequences. A year before the DC Metro fire,
a bug in the control software of San Francisco's BART system forced a
system-wide shutdown not just once, but three times over a 72-hour
period. More disconcerting is the fact that initial attempts to fix the
faulty code actually made things worse. As an official investigation
later found, "BART staff began immediately working to configure a
backup system that would enable a faster recovery from any future
software failure." But two days after the first failure, "work on that
backup system inadvertently contributed to the failure of a piece of
hardware that, in turn, created the longest delay." [7] Thankfully, no
one was injured by these subway shutdowns, but their economic impact
was likely enormous -- the economic toll of the two-and-a-half-day
shutdown of New York's subways during a 2005 strike was estimated at $1
billion. [8]

The troubles of automation in transit systems are a precursor to the
kinds of problems we're likely to see as we buy into smart cities. As
disconcerting as today's failures are, however, they are actually a
benchmark for reliability. Current smart systems are painstakingly
designed and extensively tested. They have multiple layers of
fail-safes. With the urgency of urban problems increasing and the
resources and will to deal with them in doubt, in the future many smart
technologies will be thrown together under tight schedules and even
tighter budgets. They will struggle to match today's gold standard of
reliability -- only a few short-lived, sporadic glitches each year.

The sheer size of city-scale smart systems comes with its own set of
problems. Cities and their infrastructure are already the most complex
structures humankind has ever created. Interweaving them with equally
complex information processing can only multiply the opportunities for
bugs and unanticipated interactions. As Kenneth Duda, a
high-performance networking expert, told the New York Times, "the great
enemy is complexity, measured in lines of code, or interactions." [9]
Ellen Ullman, a writer and former software developer, argues, "it is
impossible to fully test any computer system. To think otherwise is to
misunderstand what constitutes such a system. It is not a single body
of code created entirely by one company. Rather, it is a collection of
'modules' plugged into one another. The resulting system is a tangle of
black boxes wired together that communicate through dimly explained
'interfaces.' A programmer on one side of an interface can only hope
that the programmer on the other side has gotten it right." [10]

In his landmark 1984 study of technological disasters, Normal
Accidents, sociologist Charles Perrow argued that in highly complex
systems with many tightly linked elements, accidents are inevitable.
What's worse is that traditional approaches to reducing risk, such as
warnings and alerts (or the installation of the backup recovery system
in the BART incident), may actually introduce more complexity into
systems and thereby increase risks. The Chernobyl nuclear disaster, for
instance, was caused by an irreversible chain of events triggered
during tests of a new reactor safety system. Perrow's conclusion: "Most
high-risk systems have some special characteristics, beyond their toxic
or explosive or genetic dangers, that make accidents in them
inevitable, even 'normal.'" [11]

Normal accidents will be ever-present in smart cities. Just as the
rapid pace of urbanization has revealed shoddy construction practices,
most notably in China's notorious "tofu buildings," hastily put
together smart cities will have technological flaws created by
designers' and builders' shortcuts. These hasty hacks threaten to make
earlier design shortcuts like the Y2K bug seem small in comparison.
Stemming from a trick commonly used to save memory in the early days of
computing -- recording dates using only the last two digits of the year
-- Y2K was the biggest bug in history, prompting a worldwide effort to
rewrite millions of lines of code in the late 1990s. Over the decades,
there were plenty of opportunities to undo Y2K, but thousands of
organizations chose to postpone the fix, which ended up costing over
$300 billion worldwide when they finally got around to it. [12] Bugs in
the smart city will be more insidious, living inside lots of critical,
interconnected systems. Sometimes there may be no way to anticipate the
interdependencies. Who could have foreseen the massive traffic jam
caused on U.S. Interstate 80 when a bug in the system used to manage
juror pools by Placer County, California, erroneously summoned twelve
hundred people to report for duty on the same day in 2012? [13]

The pervasiveness of bugs in smart cities is disconcerting. We don't
yet have a clear grasp of where the biggest risks lie, when and how
they will cause systems to fail, or what the chain-reaction
consequences will be. Who is responsible when a smart city crashes? And
how will citizens help debug the city? Today, we routinely send
anonymous bug reports to software companies when our desktop crashes.
Is this a model that's portable to the world of embedded and ubiquitous
computing?

Counterintuitively, buggy smart cities might strengthen and increase
pressure for democracy. Wade Roush, who studied the way citizens
respond to large-scale technological disasters like blackouts and
nuclear accidents, concluded that "control breakdowns in large
technological systems have educated and radicalized many lay citizens,
enabling them to challenge both existing technological plans and the
expertise and authority of the people who carry them out." This public
reaction to disasters of our own making, he argues, has spurred the
development of "a new cultural undercurrent of 'technological
citizenship' characterized by greater knowledge of, and skepticism
toward, the complex systems that permeate modern societies." [14] If
the first generation of smart cities does truly prove fatally flawed,
from their ashes may grow the seeds of more resilient, democratic
designs.

In a smart city filled with bugs, will our new heroes be the
adventurous few who can dive into the ductwork and flush them out?
Leaving the Broad Institute's Blue Screen of Death behind, I headed
back in the rain to my hotel, reminded of Brazil, the 1985 film by
Monty Python troupe member Terry Gilliam, which foretold an autocratic
smart city gone haywire. Arriving at my room, I opened my laptop and
started up a Netflix stream of the film. In an early scene, the
protagonist, Sam Lowry, played by Jonathan Pryce, squats sweating by an
open refrigerator. Suddenly the phone rings, and Harry Tuttle, played
by Robert De Niro, enters. "Are you from Central Services?" asks Lowry,
referring to the uncaring bureaucracy that runs the city's
infrastructure. "They're a little overworked these days," Tuttle
replies. "Luckily I intercepted your call." Tuttle is a guerrilla
repairman, a smart-city hacker valiantly trying to keep residents'
basic utilities up and running. "This whole system of yours could be on
fire, and I couldn't even turn on a kitchen tap without filling out a
twenty-seven-B-stroke-six."

Brittle

Creation myths rely on faith as much as fact. The Internet's is no
different. Today, netizens everywhere believe that the Internet began
as a military effort to design a communications network that could
survive a nuclear attack.

The fable begins in the early 1960s with the publication of "On
Distributed Communications" by Paul Baran, a researcher at the RAND
think tank. At the time, Baran had been tasked with developing a scheme
for an indestructible telecommunications network for the U.S. Air
Force. Cold War planners feared that the hub-and-spoke structure of the
telephone system was vulnerable to a preemptive Soviet first strike.
Without a working communications network, the United States would not
be able to coordinate a counterattack, and the strategic balance of
"mutually assured destruction" between the superpowers would be upset.
What Baran proposed, according to Harvard University science historian
Peter Galison, "was a plan to remove, completely, critical nodes from
the telephone system." [15] In "On Distributed Communications" and a
series of pamphlets that followed, he demonstrated mathematically how a
less centralized latticework of network hubs, interconnected by
redundant links, could sustain heavy damage without becoming split into
isolated sections. [16] The idea was picked up by the Pentagon's
Advanced Research Projects Agency (ARPA), a group set up to fast-track
R&D after the embarrassment of the Soviet space program's Sputnik
launch in 1957. ARPANET, the Internet's predecessor, was rolled out in
the early 1970s.

So legend has it.

The real story is more prosaic. There were indeed real concerns about
the survivability of military communications networks. But RAND was
just one of several research groups that were broadly rethinking
communications networks at the time -- parallel efforts on distributed
communications were being led by Lawrence Roberts at MIT and by Donald
Davies and Roger Scantlebury at the United Kingdom's National Physical
Laboratory. The three efforts remained unaware of one another until a
1967 conference organized by the Association for Computing Machinery in
Gatlinburg, Tennessee, where Roberts met Scantlebury, who by then had
learned of Baran's earlier work. [17] And ARPANET wasn't a military
command network for America's nuclear arsenal, or any arsenal for that
matter. It wasn't even classified. It was a research network. As Robert
Taylor, who oversaw the ARPANET project for the Pentagon, explained in
2004 in a widely forwarded e-mail, "The creation of the ARPA net was
not motivated by considerations of war. The ARPA net was created to
enable folks with common interests to connect to one another through
interactive computing even when widely separated by geography." [18]

We also like to think that the Internet is still widely distributed as
Baran envisioned, when in fact it's perhaps the most centralized
communications network ever built. In the beginning, ARPANET did indeed
hew closely to that distributed ideal. A 1977 map of the growing
network shows at least four redundant transcontinental routes, run over
phone lines leased from AT&T, linking up the major computing clusters
in Boston, Washington, Silicon Valley, and Los Angeles. Metropolitan
loops created redundancy within those regions as well. [19] If the link
to your neighbor went down, you could still reach them by sending
packets around in the other direction. This approach is still commonly
used today.
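
The resilience of that early lattice -- and the fragility of the
hub-and-spoke design described below -- is easy to demonstrate. A small
sketch; the four-site loop is schematic, not the actual 1977 topology:

    from collections import deque

    def reachable(links, start, goal):
        """Breadth-first search over an undirected list of links."""
        graph = {}
        for a, b in links:
            graph.setdefault(a, set()).add(b)
            graph.setdefault(b, set()).add(a)
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            if node == goal:
                return True
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    # A loop of four sites survives any single cut...
    ring = [("Boston", "DC"), ("DC", "LA"), ("LA", "SV"),
            ("SV", "Boston")]
    cut = [link for link in ring if link != ("SV", "Boston")]
    print(reachable(cut, "Boston", "SV"))  # True: the long way round

    # ...while a hub-and-spoke network dies with its hub.
    spokes = [("Hub", s) for s in ("Boston", "DC", "LA", "SV")]
    no_hub = [link for link in spokes if "Hub" not in link]
    print(reachable(no_hub, "Boston", "SV"))  # False: no other route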

By 1987, the Pentagon was ready to pull the plug on what it had always
considered an experiment. But the research community was hooked, so plans
were made to hand over control to the National Science Foundation, which
merged the civilian portion of the ARPANET with its own research network,
NSFNET, launched a year earlier. In July 1988, NSFNET turned on a new
national backbone network that dropped the redundant and distributed grid
of ARPANET in favor of a more efficient and economical hub-and-spoke
arrangement. [20] Much like the air-transportation network today,
consortia of universities pooled their resources to deploy their own
regional feeder networks (often with significant NSF funding), which
linked up into the backbone at several hubs scattered strategically around
the country.

Just seven years later, in April 1995, the National Science Foundation
handed over management of the backbone to the private sector. The move
would lead to even greater centralization, by designating just four major
interconnection points through which bits would flow across the country.
Located outside San Francisco, Washington, Philadelphia, and Chicago,
these hubs were the center not just of America's Internet, but the
world's. At the time, an e-mail from Europe to Asia would almost
certainly transit through Virginia and California. Since then, things
have centralized even more. One of those hubs, in Ashburn, Virginia, is
home to what is arguably the world's largest concentration of data
centers, some forty buildings boasting the collective footprint of 22
Walmart Supercenters. [21] Elsewhere, Internet infrastructure has
coalesced around preexisting hubs of commerce. Today, you could knock
out a handful of buildings in Manhattan where the world's big network
providers connect to each other -- 60 Hudson Street, 111 Eighth Avenue,
25 Broadway -- and cut off a good chunk of transatlantic Internet
capacity. (Fiber isn't the first technology to link 25 Broadway to
Europe. The elegant 1921 edifice served as headquarters and main ticket
office for the great ocean-crossing steamships of the Cunard Line until
the 1960s.)

Despite the existence of many chokepoints, the Internet's nuke-proof
design creation myth has only been strengthened by the fact that the
few times it has actually been bombed, it has proven surprisingly
resilient. During the spring 1999 aerial bombardment of Serbia by NATO,
which explicitly targeted telecommunications facilities along with the
power grid, many of the country's Internet Protocol networks were able
to stay connected to the outside world. [22] And the Internet survived
9/11 largely unscathed. Some 3 million telephone lines were knocked out
in lower Manhattan alone -- a grid the size of Switzerland's -- from
damage to a single phone-company building near the World Trade Center.
Broadcast radio and TV stations were crippled by the destruction of the
north tower, whose rooftop bristled with antennas of every size, shape,
and purpose. Panic-dialing across the nation brought the phone system
to a standstill. [23] But the Internet hardly blinked.

Yet while the Internet manages to maintain its messy integrity, the
infrastructure of smart cities is far more brittle. As we layer ever
more fragile networks and single points of failure on top of the
Internet's still-resilient core, major disruptions in service are
likely to be common. And with an increasing array of critical economic,
social, and government services running over these channels, the risks
are compounded.

The greatest cause for concern is our growing dependence on untethered
networks, which puts us at the mercy of a fragile last wireless hop
between our devices and the tower. Cellular networks have none of the
resilience of the Internet. They are the fainting ladies of the network
world -- when the heat is on, they're the first to go down and make the
biggest fuss as they do so.

Cellular networks fail in all kinds of ugly ways during crises: damage
to towers (15 were destroyed around the World Trade Center on 9/11
alone), destruction of the "backhaul" fiber-optic line that links the
tower into the grid (many more), and power loss (most towers have just
four hours of battery backup). In 2012, flooding caused by Hurricane
Sandy cut backhaul to over 2000 cell sites in eight counties in and
around New York City and its upstate suburbs (not including New Jersey
and Connecticut), and power to nearly 1500 others. [24] Hurricane
Katrina downed over a thousand cell towers in Louisiana and Mississippi
in August 2005, severely hindering relief efforts because the public
phone network was the only common radio system among the many
responding government agencies. In the areas of Japan north of Tokyo
annihilated by the 2011 tsunami, the widespread destruction of
mobile-phone towers rolled the clock back on history, forcing people to
resort to radios, newspapers, and even human messengers to communicate.
"When cellphones went down, there was paralysis and panic," the head of
emergency communications in the city of Miyako told the New York Times.
[25]

The biggest threat to cellular networks in cities, however, is
population density. Because wireless carriers try to maximize the
profit-making potential of their expensive spectrum licenses, they
typically only build out enough infrastructure to connect a fraction of
their customers in a given place at the same time. "Oversubscribing,"
as this carefully calibrated scheme is known in the business, works
fine under normal conditions, when even the heaviest users rarely chat
for more than a few hours a day. But during a disaster, when everyone
starts to panic, call volumes surge and the capacity is quickly
exhausted. On the morning of September 11, for instance, fewer than 1
in 20 mobile calls were connected in New York City. [26] A decade
later, little has changed. During a scary but not very destructive
earthquake on the U.S. East Coast in the summer of 2011, cell networks
were again overwhelmed. Yet media reports barely noted it. Cellular
outages during crises have become so commonplace in modern urban life
that we no longer question why they happen or how the problem can be
fixed.
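
The arithmetic of oversubscription is unforgiving. A back-of-envelope
sketch using the standard Erlang B blocking formula -- the cell-site
numbers here are invented for illustration:

    def erlang_b(offered_erlangs, channels):
        """Erlang B: probability that a call attempt is blocked when
        'offered_erlangs' of traffic is offered to 'channels' circuits."""
        b = 1.0
        for m in range(1, channels + 1):
            b = offered_erlangs * b / (m + offered_erlangs * b)
        return b

    CHANNELS = 60          # hypothetical cell site
    SUBSCRIBERS = 2000
    # Normal day: about a minute of calling per hour each (0.02 erlang).
    print(f"{erlang_b(SUBSCRIBERS * 0.02, CHANNELS):.4f}")
    # well under 1% of calls blocked

    # Disaster: everyone on the phone half the time (0.5 erlang each).
    print(f"{erlang_b(SUBSCRIBERS * 0.5, CHANNELS):.2f}")
    # ~0.94 -- roughly 19 in 20 calls blocked, the 9/11 figure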

Disruptions in public cloud-computing infrastructure highlight the
vulnerabilities of dependence on network apps. Amazon Web Services, the
800-pound gorilla of public clouds that powers thousands of popular
websites, experienced a major disruption in April 2011, lasting three
days. According to a detailed report on the incident posted to the
company's website, the outage appears to have been a normal accident,
to use Perrow's term. A botched configuration change in the data
center's internal network, which had been intended to upgrade its
capacity, shunted the entire facility's traffic onto a lower-capacity
backup network. Under the severe stress, "a previously unencountered
bug" reared its head, preventing operators from restoring the system
without risk of data loss. [27] Later, in July 2012, a massive
electrical storm cut power to the company's Ashburn data center,
shutting down two of the most popular Internet services -- Netflix and
Instagram. [28] "Amazon Cloud Hit by Real Clouds," quipped a PCWorld
headline. [29]

The cloud is far less reliable than most of us realize, and its
fallibility may be starting to take a real economic toll. Google, which
prides itself on high-quality data-center engineering, suffered a
half-dozen outages in 2008 lasting up to 30 hours. [30] Amazon promises
its cloud customers 99.5 percent annual uptime, while Google pledges 99.9
percent for its premium apps service. That sounds impressive until you
realize that even after years of increasing outages, even in the most
blackout-prone region (the Northeast), the much-maligned American electric
power industry averages 99.96 percent uptime. [31] Yet even that tiny gap
between reality and perfection carries a huge cost. According to Massoud
Amin of the University of Minnesota, power outages and power quality
disturbances cost the U.S. economy between $80 billion and $188 billion a
year. [32] A back-of-the-envelope calculation published by the
International Working Group on Cloud Computing Resiliency tagged the
economic cost of cloud outages between 2007 and mid-2012 at just $70
million (not including the July 2012 Amazon outage). [33] But as more
and more of the vital
functions of smart cities migrate to a handful of big, vulnerable data
centers, this number is sure to swell in coming years.
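
Those percentages are easier to feel as hours. A quick conversion,
using the figures cited above:

    HOURS_PER_YEAR = 365 * 24   # 8,760

    for label, uptime_pct in [("cloud at 99.5%", 99.5),
                              ("premium apps at 99.9%", 99.9),
                              ("electric grid at 99.96%", 99.96)]:
        down = HOURS_PER_YEAR * (1 - uptime_pct / 100)
        print(f"{label}: {down:.1f} hours of downtime a year")

    # cloud at 99.5%: 43.8 hours of downtime a year
    # premium apps at 99.9%: 8.8 hours of downtime a year
    # electric grid at 99.96%: 3.5 hours of downtime a year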

Cloud-computing outages could turn smart cities into zombies. Biometric
authentication, for instance, which senses our unique physical
characteristics to identify individuals, will increasingly determine
our rights and privileges as we move through the city -- granting
physical access to buildings and rooms, personalizing environments, and
enabling digital services and content. But biometric authentication is
a complex task that will demand access to remote data and computation.
The keyless entry system at your office might send a scan of your
retina to a remote data center to match against your personnel record
before admitting you. Continuous authentication, a technique that uses
always-on biometrics -- your appearance, gestures, or typing style --
will constantly verify your identity, potentially eliminating the need
for passwords. [34] Such systems will rely heavily on cloud computing,
and will break down when it does. It's one thing for your e-mail to go
down for a few hours, but it's another thing when everyone in your
neighborhood gets locked out of their homes.
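
Whether an outage locks the doors is a design choice. A hypothetical
door-controller sketch -- the endpoint and names are invented --
showing the difference between failing closed and degrading to a local
cache:

    import urllib.request

    AUTH_SERVICE = "https://auth.example.com/match"  # hypothetical

    def admit(retina_scan: bytes, badge_in_local_cache: bool) -> bool:
        """Ask the cloud matcher; if it is unreachable, fall back to a
        last-known-good local badge list instead of locking everyone
        out of the building."""
        try:
            req = urllib.request.Request(AUTH_SERVICE, data=retina_scan,
                                         method="POST")
            with urllib.request.urlopen(req, timeout=2) as resp:
                return resp.status == 200
        except OSError:
            # Cloud outage: degrade gracefully rather than fail closed.
            return badge_in_local_cache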

Another "cloud" literally floating in the sky above us, the Global
Positioning System satellite network, is perhaps the greatest single
point of failure for smart cities. Without it, many of the things on
the Internet will struggle to ascertain where they are. America's
rivals have long worried about their dependence on the network of 24
satellites owned by the U.S. Defense Department. But now even America's
closest allies worry that GPS might be cut off not by military fiat but
by neglect. With a much-needed modernization program for the
decades-old system way behind schedule, in 2009 the Government
Accountability Office lambasted the Air Force for delays and cost
overruns that threatened to interrupt service. [35] And the stakes of a
GPS outage are rising fast, as navigational intelligence permeates the
industrial and consumer economy. In 2011 the United Kingdom's Royal
Academy of Engineering concluded that "a surprising number of different
systems already have GPS as a shared dependency, so a failure of the
GPS signal could cause the simultaneous failure of many services that
are probably expected to be independent of each other." [36] For
instance, GPS is extensively used for tracking suspected criminals and
land surveying. Disruptions in GPS service would require rapidly
reintroducing older methods and technologies for these tasks. While
alternatives such as Russia's GLONASS already exist, and the European
Union's Galileo and China's Compass systems will provide more
alternatives in the future, GPS seems likely to spawn its own nasty
collection of normal accidents. "No one has a complete picture,"
concluded Martyn Thomas, the lead investigator on the UK study, "of the
many ways in which we have become dependent on weak signals 12,000
miles above us." [37]
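
"Weak" is quantifiable. A back-of-envelope free-space path-loss
calculation, in round numbers; the published minimum received power for
the civilian L1 signal is in the same ballpark:

    import math

    d_km, f_ghz = 20200, 1.57542   # orbit altitude, L1 carrier
    # Free-space path loss in dB, for d in km and f in GHz:
    fspl = 20 * math.log10(d_km) + 20 * math.log10(f_ghz) + 92.45
    print(f"path loss: {fspl:.0f} dB")             # ~183 dB

    eirp_dbw = 27                                  # rough satellite EIRP
    print(f"received: {eirp_dbw - fspl:.0f} dBW")  # ~ -156 dBW, ~ -126 dBm
    # A fraction of a femtowatt -- below the receiver's own thermal
    # noise floor, recoverable only by spread-spectrum processing, and
    # easily drowned out by a cheap jammer or a neglected satellite.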

Centralization of smart-city infrastructure is risky, but
decentralization doesn't always increase resilience. Uncoordinated
management can create its own brittle structures, such as the
Internet's "bufferbloat" problem. Buffering, which serves as a kind of
transmission gearbox to sync fast-flowing and congested parts of the
Internet, is a key tool for smoothing out surges of data and reducing
errors. But in 2010 Jim Gettys, a veteran Internet engineer, noticed
that manufacturers of network devices had taken advantage of rapidly
falling memory prices to beef up buffers far beyond what the Internet's
original congestion-management scheme was designed for. "Manufacturers
have reflexively acted to prevent any and all packet loss and, by doing
so, have inadvertently defeated a critical TCP congestion-detection
mechanism," concluded the editors of ACM Queue, a leading computer
networking journal, referring to the Internet's traffic cop, the
Transmission Control Protocol. The result of bufferbloat was increasing
congestion and sporadic slowdowns. [38] What's most frightening about
bufferbloat is that it was hiding in plain view. Gettys concluded: "the
issues that create delay are not new, but their collective impact has
not been widely understood ... buffering problems have been
accumulating for more than a decade." [39]
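
The core of the problem fits in three lines of arithmetic. A sketch
with illustrative numbers, not measurements from Gettys's work:

    # How cheap memory turns into latency: an oversized buffer ahead
    # of a slow uplink must drain through that uplink.
    buffer_bytes = 1_000_000     # 1 MB of RAM in a home router
    uplink_bps = 2_000_000       # 2 Mbit/s residential uplink

    delay_s = buffer_bytes * 8 / uplink_bps
    print(f"a full buffer adds {delay_s:.1f} s of queuing delay")  # 4.0

    # TCP infers congestion from dropped packets; a buffer this deep
    # absorbs the drops, so the sender keeps pushing and the queue --
    # and the delay -- stay full. That is the bufferbloat spiral.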

What a laundry list of accidental ways smart cities might be brittle by
design or oversight! But what if someone deliberately tried to bring
one to its knees? The threat of cyber-sabotage on civil infrastructure
is only just beginning to capture policymakers' attention. Stuxnet, the
virus that attacked Iran's nuclear weapons plant at Natanz in 2010, was
just the beginning. Widely believed to be the product of a joint
Israeli-American operation, Stuxnet was a clever piece of malicious
software, or malware, that infected computers involved with monitoring
and controlling industrial machinery and infrastructure, known by the
acronym SCADA (supervisory control and data acquisition). At Natanz
some 6000 centrifuges were being used to enrich uranium to bomb-grade
purity. Security experts believe Stuxnet, carried in on a USB thumb
drive, infected and took over the SCADA systems controlling the plant's
equipment. Working stealthily to knock the centrifuges off balance even
as it reported to operators that all was normal, Stuxnet is believed to
have put over a thousand machines out of commission, significantly
slowing the refinement process, and the Iranian weapons program. [40]

The wide spread of Stuxnet was shocking. Unlike the laser-guided,
bunker-busting smart bombs that would have been used in a conventional
strike on the Natanz plant, Stuxnet attacked with all the precision of
carpet bombing. By the time Ralph Langner, a German computer-security
expert who specialized in SCADA systems, finally deduced the purpose of
the unknown virus, it had been found on similar machinery not only in Iran
but as far away as Pakistan, India, Indonesia, and even the United States.
By August 2010, over 90,000 Stuxnet infections were reported in 115
countries. [41]

Stuxnet was the first documented attack on SCADA systems, but it is not
likely to be the last. A year later, in an interview with CNET, Langner
bristled at the media's focus on attributing the attack to a specific
nation. "Could this also be a threat against other installations, U.S.
critical infrastructure?" he asked. "Unfortunately, the answer is yes
because it can be copied easily. That's more important than the
question of who did it." He warned of Stuxnet copycat attacks, and
criticized governments and companies for their widespread complacency.
"Most people think this was to attack a uranium enrichment plant and if
I don't operate that I'm not at risk," he said. "This is completely
wrong. The attack is executed on Siemens controllers and they are
general-purpose products. So you will find the same products in a power
plant, even in elevators." [42]

Skeptics argue that the threat of Stuxnet is overblown. Stuxnet's
payload was highly targeted. It was programmed to only attack the
Natanz centrifuges, and do so in a very specific way. Most importantly,
it expended a highly valuable arsenal of "zero-day" attacks,
undocumented vulnerabilities that can only be exploited once, after
which a simple update will be issued by the software's supplier. In its
report on the virus, security software firm Symantec wrote,
"Incredibly, Stuxnet exploits four zero-day vulnerabilities, which is
unprecedented." [43]

Stuxnet's unique attributes aside, most embedded systems aren't located
in bunkers, and they are increasingly vulnerable to much simpler
attacks on their human operators. Little more than a year after Stuxnet
was uncovered, a lone hacker known only as "pr0f" attacked the water
utility of South Houston, a small town of 17,000 people just outside
Texas's most populous city. Enraged by the U.S. government's
downplaying of a similar incident reported in Springfield, Illinois,
pr0f homed in on the utility's Siemens SIMATIC software, a web-based
dashboard for remote access to the waterworks' SCADA systems. While the
Springfield attack turned out to be a false alarm -- federal officials
eventually reported finding "no evidence of a cyber intrusion" -- pr0f
was already on the move, and the hacker didn't even need to write any
code. [44] It turned out that the plant's operators had chosen a
shockingly weak three-letter password. While pr0f's attack on South
Houston could have easily been prevented, SIMATIC is widely used and
full of more fundamental vulnerabilities that hackers can exploit. That
summer Dillon Beresford, a security researcher at the coincidentally
Houston-based network security outfit NSS Labs, had demonstrated
several flaws in SIMATIC and ways to exploit them. Siemens managed to
dodge the collateral damage of Stuxnet, but the holes in SIMATIC are
indicative of far more serious risks it must address.
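
The weakness of a three-letter password needs no exploit at all, just
counting. A quick illustration, assuming lowercase letters:

    import itertools, string

    # Every lowercase three-letter password there is:
    keyspace = [''.join(p) for p in
                itertools.product(string.ascii_lowercase, repeat=3)]
    print(len(keyspace))          # 17576 candidates
    print(keyspace[:3])           # ['aaa', 'aab', 'aac']

    # Even throttled to 10 guesses per second over the web, trying
    # them all takes about half an hour.
    print(f"{len(keyspace) / 10 / 60:.0f} minutes")   # ~29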

Another troubling development is the growing number of "forever day"
vulnerabilities being discovered in older control systems. Unlike
zero-day exploits, for which vendors and security firms can quickly
deploy countermeasures and patches, forever-day exploits target holes
in legacy embedded systems that manufacturers no longer support -- and
therefore will never be patched. The problem affects industrial-control
equipment sold in the past by both Siemens and GE, as well as a host of
smaller firms. [45] It has drawn increased interest from the U.S.
Computer Emergency Readiness Team (US-CERT), the government body that
coordinates American cyber-security efforts.

One obvious solution for securing smart-city infrastructure is to stop
connecting it to the Internet. But "air-gapping," as this technique is
known, is only a stopgap measure at best. Stuxnet and Agent.btz, the
virus that infected the Defense Department's global computer network in
2008, were likely both walked into secure facilities on USB sticks.
[46] Insecure wireless networks are everywhere, even emanating from
inside our own bodies. Researchers at the security firm McAfee have
successfully hijacked insulin pumps, ordering the test devices to
release a lethal dose of insulin, and a group of computer scientists at
the University of Washington and University of Massachusetts have
disabled heart-defibrillator implants using wireless signals. [47]

These vulnerabilities are calling the entire open design of the
Internet into question. No one in those early days of ARPANET ever
imagined the degree to which we would embed digital networks in the
support systems of our society, the carelessness with which we would do
so, and the threat that malevolent forces would present. Assuring that
the building blocks of smart cities are reliable will require new
standards and probably new regulation. Colin Harrison, IBM's
smarter-cities master engineer, argues that in the future, "if you want
to connect a computer system to a piece of critical national
infrastructure it's going to have to be certified in various ways."
[48] We'll also have to take stronger measures to harden smart cities
against direct assault. South Korea has already seen attacks on its
civil infrastructure by North Korean cyber-warriors. One strike is
believed to have shut down air traffic control in the country for over
an hour. [49]

Nothing short of a crisis will force us to confront the risk of smart
cities' brittle infrastructure. The first mayor who has to deal with
the breakdown of a city-scale smart system will be in new territory,
but who will take the blame? The city? The military? Homeland security?
The technology firms that built it? Consider the accountability
challenge Stuxnet poses -- we'd likely never have known about it were
it not for its own bug. Carried out of Natanz by some unsuspecting
Iranian engineer, the worm failed to detect that it had escaped into
the open, and instead of deactivating its own reproductive mechanisms,
like a real virus it proliferated across the globe. [50]

A New Civics

If the history of city building in the last century tells us anything,
it is that the unintended consequences of new technologies often dwarf
their intended design. Motorization promised to save city dwellers from
the piles of horse manure that clogged 19th-century streets and deliver
us from a shroud of factory smoke back to nature. Instead, it scarred
the countryside with sprawl and rendered us sedentary and obese. If we
don't think critically now about the technology we put in place for the
next century of cities, we can only look forward to all the unpleasant
surprises they hold in store for us.

Smart cities are almost guaranteed to be chock full of bugs, from smart
toilets and faucets that won't operate to public screens sporting
Microsoft's ominous Blue Screen of Death. But even when their code is
clean, the innards of smart cities will be so complex that so-called
normal accidents will be inevitable. The only questions will be when
smart cities fail, and how much damage they cause when they crash.
Layered atop the fragile power grid, already prone to overload during
crises and open to sabotage, the communications networks that patch the
smart city together are as brittle an infrastructure as we've ever had.

But that's only if we continue doing business as usual. We can stack
the deck and improve the odds, but we need to completely rethink our
approach to the opportunities and challenges of building smart cities.
We need to question the confidence of tech-industry giants, and
organize the local innovation that's blossoming at the grassroots into
a truly global movement. We need to push our civic leaders to think
more about long-term survival and less about short-term gain, more
about cooperation than competition. Most importantly, we need to take
the wheel back from the engineers, and let people and communities
decide where we should steer.

People often ask me, "What is a smart city?" It's a hard question to
answer. "Smart" is a problematic word that has come to mean a million
things. Soon, it may take its place alongside the handful of
international cognates -- vaguely evocative terms like "sustainability"
and "globalization" -- that no one bothers to translate because there's
no consensus about what they actually mean. When people talk about
smart cities, they often cast a wide net that pulls in every new
public-service innovation from bike sharing to pop-up parks. The broad
view is important, since cities must be viewed holistically. Simply
installing some new technology, no matter how elegant or powerful,
cannot solve a city's problems in isolation. But there really is
something going on here -- information technology is clearly going to
be a big part of the solution. It deserves treatment on its own. I take
a more focused view and define smart cities as places where information
technology is combined with infrastructure, architecture, everyday
objects, and even our bodies to address social, economic, and
environmental problems.

I think the more important and interesting question is, "What do you
want a smart city to be?" We need to focus on how we shape the
technology we employ in future cities. There are many different visions
of what the opportunity is. Ask an IBM engineer and he will tell you
about the potential for efficiency and optimization. Ask an app
developer and she will paint a vision of novel social interactions and
experiences in public places. Ask a mayor and it's all about
participation and democracy. In truth, smart cities should strive for
all of these things.

There are trade-offs between these competing goals for smart cities.
The urgent challenge is weaving together solutions that integrate these
aims and mitigate conflicts. Smart cities need to be efficient but also
preserve opportunities for spontaneity, serendipity, and sociability.
If we program all of the randomness out, we'll have turned them from
rich, living organisms into dull mechanical automatons. They need to be
secure, but not at the risk of becoming surveillance chambers. They
need to be open and participatory, but provide enough support structure
for those who lack the resources to self-organize. More than anything
else, they need to be inclusive. In her most influential book, The
Death and Life of Great American Cities, the acclaimed urbanist Jane
Jacobs argued that "cities have the capability of providing something
for everybody, only because, and only when, they are created by
everybody." [51] Yet over fifty years later, as we set out to create
the smart cities of the 21st century, we seem to have again forgotten
this hard-learned truth.

But there is hope that a new civic order will arise in smart cities,
and pull every last one of us into the effort to make them better
places. Cities used to be full of strangers and chance encounters.
Today we can mine the social graph in an instant by simply taking a
photo. Algorithms churn in the cloud, telling the little things in our
pocket where we should eat and whom we should date. It's a jarring
transformation. But even as old norms fade into the past, we're
learning new ways to thrive on mass connectedness. A sharing economy
has mushroomed overnight, as people swap everything from spare bedrooms
to cars, in a synergistic exploitation of new technology and more
earth-friendly consumption. Online social networks are leaking back
into the thriving urban habitats where they were born in countless
promising ways.

For the last 15 years, I've watched the struggle over how to build
smart cities evolve from the trenches. I've studied and critiqued these
efforts, designed parts of them myself, and cheered others along. I've
written forecasts for big companies as they sized up the market, worked
with start-ups and civic hackers toiling away at the grass roots, and
advised politicians and policy wonks trying to push reluctant
governments into a new era. I understand and share much of their
agendas.

But I've also seen my share of gaps, shortfalls, and misguided
assumptions in the visions and initiatives that have been carried forth
under the banner of smart cities. And so I'm going to play the roles of
myth buster, whistle-blower, and skeptic in one. The technology
industry is asking us to rebuild the world around its vision of
efficient, safe, convenient living. It is spending hundreds of millions
of dollars to convince us to pay for it. But we've seen this movie
before. As essayist Walter Lippmann wrote of the 1939 World's Fair,
"General Motors has spent a small fortune to convince the American
public that if it wishes to enjoy the full benefit of private
enterprise in motor manufacturing, it will have to rebuild its cities
and its highways by public enterprise." [52] Today the computer guys
are singing the same song.

I believe there is a better way to build smart cities than to simply
call in the engineers. We need to lift up the civic leaders who would
show us a different way. We need to empower ourselves to build future
cities organically, from the bottom up, and do it in time to save
ourselves from climate change. If that seems an insurmountable goal,
don't forget that at the end of the day the smartest city in the world
is the one you live in. If that's not worth fighting for, I don't know
what is.



Editors' Note

"Smart Cities" is adapted from Smart Cities: Big Data, Civic Hackers, and
the Quest for a New Utopia, copyright © Anthony M. Townsend, published
this month by W.W. Norton & Company. It appears with the permission of the
author and publisher.

Notes

1. J. Casale, "The Origin of the Word 'Bug,'" The OTB (Antique Wireless
Association), February 2004.

2. Thomas P. Hughes, American Genesis: A History of the American Genius
for Invention (New York: Penguin Books, 1989), 75.

3. William Maver Jr. and Minor M. Davis, The Quadruplex (New York: W. J.
Johnston, 1890), 84.

4. Naval History and Heritage Command archives, Photo 96566-KN.

5. Kathleen Broome Williams, Grace Hopper: Admiral of the Cyber Sea
(Annapolis, MD: Naval Institute Press, 2004), 54.

6. "Surge Caused Fire in Rail Car," Washington Times, April 12, 2007.

7. "About recent service interruptions, what we're doing to prevent
similar problems in the future," Bay Area Rapid Transit District.

8. "The Economic Impact of Interrupted Service," 2010 U.S. Transportation
Construction Industry Profile (Washington, DC: American Road &
Transportation Builders Association, 2010).

9. Quentin Hardy, "Internet Experts Warn of Risks in Ultrafast Networks,"
New York Times, November 13, 2011, B3.

10. Ellen Ullman, "Op-Ed: Errant Code? It's Not Just a Bug," New York
Times, August 8, 2012.

11. Charles Perrow, Normal Accidents: Living with High-Risk Technologies
(Princeton, NJ: Princeton University Press, 1999), 4.

12. Robert L. Mitchell, "Y2K: The good, the bad and the ugly,"
Computerworld, December 28, 2009.

13. David Green, "Computer Glitch Summons Too Many Jurors," National
Public Radio, May 3, 2012.

14. Wade Roush, "Catastrophe and Control: How Technological Disasters
Enhance Democracy," PhD dissertation, Program in Science, Technology
and Society, Massachusetts Institute of Technology, 1994.

15. Peter Galison, "War Against the Center," Grey Room, no. 4 (2001): 26.

16. Paul Baran, On Distributed Communications (Santa Monica, CA: RAND,
1964), document no. RM-3420-PR.

17. Barry M. Leiner et al., "Brief History of the Internet," n.d.,
accessed August 29, 2012. It was the First ACM Symposium on Operating
Systems Principles.

18. Bob Taylor, October 6, 2004, e-mail to Dave Farber reposted to
INTERESTINGPEOPLE listserv.

19. 1977 geographical map of ARPANET, originally published in F. Heart,
A. McKenzie, J. McQuillan, and D. Walden, ARPANET Completion Report
(Burlington, MA: Bolt, Beranek and Newman), January 4, 1978.

20. Suzanne Harris and Amy Hansen, "The Internet: Changing the Way We
Communicate," America's Investment in the Future, National Science
Foundation, n.d.

21. Marjorie Censer, "After Dramatic Growth, Ashburn Expects Even More
Data Centers," Washington Post, August 27, 2011.

22. Steven Branigan and Bill Cheswick, "The effects of war on the
Yugoslavian Network," 1999.

23. William J. Mitchell and Anthony M. Townsend, "Cyborg Agonistes," in
The Resilient City: How Modern Cities Recover From Disaster, edited by
Lawrence J. Vale and Thomas J. Campanella (New York: Oxford University
Press, 2005), 320-21.

24. New York State Public Service Commission, unpublished documents
provided to the author.

25. Martin Fackler, "Quake Area Residents Turn to Old Means of
Communication to Keep Informed," New York Times, March 28, 2011, A11.

26. National Research Council, Computer Science and Telecommunications
Board, The Internet Under Crisis Conditions: Learning From September 11
(Washington, DC: National Academies Press, 2003).

27. Amazon Web Services, "Summary of the Amazon EC2 and Amazon RDS
Service Disruption," April 29, 2011.

28. Chloe Albanesius, "Amazon Blames Power, Generator Failure for Outage,"
PCMag.com, July 3, 2012.

29. Christina DesMarais, "Amazon Cloud Hit by Real Clouds, Downing
Netflix, Instagram, Other Sites," PCWorld, June 30, 2012.

30. J. R. Raphael, "Gmail Outage Marks Sixth Downtime in Eight Months,"
PCWorld, February 24, 2009.

31. Author's calculation based on statistics reported in Massoud Amin,
"U.S. Electrical Grid Gets Less Reliable," IEEE Spectrum, January 2011.

32. Massoud Amin, "The Rising Tide of Power Outages and the Need for a
Stronger and Smarter Grid," Security Technology, Technological Leadership
Institute, University of Minnesota, October 8, 2010.

33. Maurice Gagnaire et al., "Downtime statistics of current cloud
solutions," International Working Group on Cloud Computing Resiliency,
June 2012.

34. Kathleen Hickey, "DARPA: Dump Passwords for Always-on Biometrics,"
Government Computer News, March 21, 2012.

35. Global Positioning System: Significant Challenges in Sustaining and
Upgrading Widely Used Capabilities (Washington, DC: U.S. Government
Accountability Office), GAO-09-670T, May 7, 2009.

36. Global Navigation Space Systems: Reliance and Vulnerabilities (London:
Royal Academy of Engineering, 2011), 3.

37. "Scientists Warn of 'Dangerous Over-reliance on GPS,'" The Raw Story,
March 8, 2011.

38. "BufferBloat: What's Wrong with the Internet?" ACM Queue, December 7,
2011.

39. Jim Gettys and Kathleen Nichols, "Bufferbloat: Dark Buffers in the
Internet," ACM Queue, November 29, 2011.

40. Ellen Nakashima and Joby Warrick, "Stuxnet was work of U.S. and
Israeli experts, officials say," Washington Post, June 1, 2012.

41. Vivian Yeo, "Stuxnet infections spread to 115 countries," ZDNet,
August 9, 2010.

42. Elinor Mills, "Ralph Langner on Stuxnet, copycat threats (Q&A)," CNET
News, May 22, 2011.

43. Symantec Corporation, "W32.Stuxnet," Security Responses.

44. Dan Goodin, "FBI: No evidence of water system hack destroying pump,"
The Register, November 23, 2011.

45. Goodin, "Rise of 'forever day' bugs in industrial systems threatens
critical infrastructure," Ars Technica, April 9, 2012.

46. Ellen Nakashima, "Cyber-intruder sparks massive federal response --
and debate over dealing with threats," Washington Post, December 8, 2011.

47. Mark Ward, "Warning Over Medical Implant Attacks," BBC News, April
10, 2012; Daniel Halperin et al., "Pacemakers and Implantable Cardiac
Defibrillators: Software Radio Attacks and Zero-Power Defenses,"
proceedings of the 2008 IEEE Symposium on Security and Privacy.

48. Colin Harrison, interview by author, May 9, 2011.

49. Chul-jae Lee and Gwang-li Moon, "Incheon Airport cyberattack traced
to Pyongyang," Korea JoongAng Daily, June 5, 2012.

50. David E. Sanger, "Obama Order Sped Up Wave of Cyberattacks Against
Iran," New York Times, June 1, 2012, A1.

51. Jane Jacobs, The Death and Life of Great American Cities (New York:
Random House, 1961), 238.

52. Walter Lippmann, New York Herald Tribune, June 6, 1939, quoted in
Robert W. Rydell, World of Fairs: The Century-of-Progress Expositions
(Chicago: University of Chicago Press, 1993), 115.




#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org