"We have
created a Star Wars
civilization, with Stone Age
emotions, medieval institutions,
and godlike technology."
Edward O. Wilson, The Social
Conquest of Earth, 2012
"One cannot live outside the
machine for more perhaps than
half an hour."
Virginia
Woolf, The Waves, 1931
The late writer and futurist Arthur
C. Clarke once remarked that the
best proof for intelligent life in
the universe is that it hasn't come
here. He was also fond of
paraphrasing the polymath J. B. S.
Haldane by reminding his audience
that the universe is not only
stranger than we imagine, but also
stranger than we can imagine.
Presumably the inventor and futurist
Ray Kurzweil, who is currently
director of engineering at Google,
would accept the latter of Clarke's
propositions, but not necessarily
the former. Not only is he working
to apply his theory of intelligence
to Google's search engine to render
its current algorithms obsolete, but
he also believes the time is
approaching when humans will create
artificial intelligence with vastly
greater intelligence than our own,
with aims beyond mere mortal power
to understand.
The point at which this will
occur—Kurzweil predicts by 2045—has
been popularized as the Singularity:
"The moment when technological
change becomes so rapid and
profound, it represents a rupture in
the fabric of human history"
(Grossman). At this unprecedented
turn in our evolution, we will pass
the baton to post-organic
beings—"transhumans" (transitional
humans)—who will be engineered
through the confluence of artificial
intelligence, nanotechnology,
robotics, and genetic engineering to
have greatly enhanced capacities and
a dramatically extended life span.
Transhumans will eventually give way
to "posthumans," beings "in whom a
trace of the human may remain, but
we may not be able to recognize it"
(Zimmerman 31).
The ravings of a deluded futurist?
Hardly. Some of the planet's
brightest and most respected inventors,
scientists, entrepreneurs, artists
and others are investing their time,
talent and money to bring this
vision to fruition. Not everyone
shares Kurzweil's optimistic launch
date of 2045, but this zealous band
of "Singularitarians" shares a
belief that the Singularity is not
only possible but desirable, if guided
prudently. Technology, they
maintain, can shape and transform
history. Humans are one step on the
evolutionary ladder, not the end of
it. Our technological progeny will
colonize the universe. It is
their—and our—destiny. Kurzweil
describes it in almost messianic
tones: "Once we saturate the matter
and energy in the universe with
intelligence, it will 'wake up', be
conscious and sublimely intelligent.
That's about as close to God as I
can imagine" (Singularity, 375).
Whether we ought to pursue the
Singularity—to say nothing of
whether it is even feasible—is a
question of some dispute. To see
why, we need to take a closer look
at humanity's relation to technology
itself and, more importantly, at
what we take to be the meaning
of "who we are" in the first
place, a question Kurzweil himself
believes will be the primary
political and philosophical issue of
the twenty-first century (Age of
Spiritual Machines, 229). Before we
transfer ourselves without remainder
into this brave new world, we should
be clear on where it is we think we
are going, what we are seeking, and
what we may be leaving behind.
Humanity's
Relation to Technology
Technology, let us say, is the
purposeful and rational development
and application of means to achieve
a given end. Whether it is a tool, a
machine, a method, tangible or
intangible, simple or complex,
technology is driven by the highest
degree of efficiency and rationality
possible at any given stage of
development and by ever-improving
degrees of order and control.
This definition, which extends
beyond the common-sense notion of
technology as material tools, brings
into focus the instrumental or use
value of technology. It can even be
said to encompass the immateriality
of human language (which we are now
racing to materialize as human
language technology employed in
machines), arguably the most glorious
and useful of all technologies
developed over the course of human
history.
According to hard-core technological
determinists, technology follows its
own internal logic (efficiency,
rationality, order, control) and,
once set in motion, is the primary
determinant of social, cultural and
economic change. The pervasiveness
of technology in our so-called
postmodern age, its growing
dominance in all aspects of our
lives, feeds this sense that
technology is the cause of which we
are the effect, and even promotes an
attendant abject helplessness and
resignation, as in the
"we-don't-control-the-machine,
the-machine-controls-us" lament.
Free will is out the window in this
scenario, although some "soft"
technological determinists believe
that while technology may be
omnipresent, it is not entirely
omnipotent. As we become
increasingly aware that technology
can create as many problems as it
solves, humans still have a chance
to change the game plan.
In
that vein, perhaps a more accurate,
though considerably messier and more
complex, approach to viewing the
relation of humans to technology is
social constructivism: investigating
how technology both shapes, and is
shaped by, social, cultural and
economic change. For example, the
documented impact of fossil fuels on
global climate change has
precipitated research and
development in an array of new
energy technologies to reduce
dependence on coal, oil and natural
gas. What makes the development of
these new technologies more complex,
however, is the interplay between
existing economic interests and
infrastructure—the social and
economic relationships that prop up
entire nations and the vested
interests of powerful people and
organizations—and a growing
awareness that unless we speed up
development of new energy
technologies, humanity, like the
proverbial frog, will eventually be
unable to climb out of the global
pot of water set on slow boil.
Technology is not developed in a
vacuum. It arises out of
relationships between social,
cultural and economic forces on a
finite planet, where opportunities
for some—developing housing and
commercial tracts, using industrial
techniques to raise animals for
human consumption, increasing the
efficiency and profitability of all
manner of enterprise—can create
problems for others, and not just
humans: countless species driven to extinction,
loss of resilient and diverse
natural habitats, global warming
and, of course, loss of jobs through
the mechanization of work. The sheer
scale and pervasiveness of
technological and economic
development, particularly since the
Industrial Revolution in the late
eighteenth century, and its
attendant impact on the biosphere,
the realm of life, have led some to
proclaim that we are now living in
the Anthropocene, "an informal
geologic chronological term that
marks the evidence and extent of
human activities that have had a
significant global impact on the
Earth's ecosystems"
("Anthropocene"). Humanity, it
seems, not only lives in nature, but
is also increasingly a force of
nature by virtue of the application
of technologies to alter the
environment.
Today, the so-called technosphere
(that is, the developed world, those
parts of the world substantially
altered by human technology) is
viewed by some as an ecosystem in
its own right, and is evolving much
more rapidly—exponentially,
according to Singularitarians—than
the biosphere, which is unable to
react to the technosphere fast
enough to maintain equilibrium. The
technosphere now occupies most of
the land area of the planet. With
its relentless focus on efficiency,
order and control, who is to say it
will not expand to occupy the entire
world?
Technological optimists frame the
problem of how humans will continue
to thrive in a depleted biosphere as
an opportunity to develop and apply
new technologies to speed up
biological evolution by introducing
genetically engineered organisms. In
one scenario, life itself will move from
a biological to a technical
substrate. So will humans. According
to recent surveys, nearly half of
the world's artificial intelligence
(AI) experts expect human-level
machine intelligence to be achieved
by 2040, and 90 percent say it will
arrive by 2075 (Cookson). This, the
optimists argue, will lead to
superintelligence far beyond the
human level, which will be applied to
enhancing, then transforming, human
evolution itself.
But in the
Singularity's desired trajectory,
what is the difference between
machines becoming human, and humans
becoming machines? And where in the
transition from human to transhuman
to posthuman do we put the contrast
between the inhuman and the humane?
What Does It Mean
to Be Human?
To
explore that question, we will
briefly consider the history of
humanism, which is first and
foremost concerned with human
beings—their needs, desires, and
experiences. Humanism is less a
particular philosophical system or
set of beliefs than it is an
attitude or perspective, which in
turn informs subsequent philosophies
and systems of belief, all with
human beings at the center.
Without getting sidelined by a
discussion of the many varieties of
humanism, we might posit some common
threads or ideas. First, humanists
hold that while human beings are an
inherent part of nature and subject
to its laws, they hold exceptional
status by virtue of their
rationality and sociability, and can
overcome, even remake, the
constraints placed on them by
nature. Second, humanists believe in
the unity of humankind, holding that
all humans possess something in
common, what is often described as
"human nature" (Malik). Third, many
humanists derive from this shared
nature the idea that we all possess
a human "essence" that transcends
differences in skin color, beauty
and intelligence and gives each of
us inherent value. Finally,
underlying all humanisms is a belief
in human emancipation: the idea that
humankind can transform society
through the agency of its own
efforts. This comes down to an
accompanying belief in human
rationality and capacity for social
progress, a belief that has driven
much of the western political
structure since the Enlightenment.
Humanism, of course, is hardly the
last word on what it means to be
human. To flesh out the
picture, we would need to take a
tour of the views of Burke,
Nietzsche, Heidegger and other
"antihumanists" who rejected
Enlightenment rationalism and
notions of social progress because
they viewed the masses of humans as
essentially irrational, atavistic
and a threat to civilized society.
We would also need to consider the
religious, those who place God, not
Man, at the center of the universe,
and who find the deepest sense of
what it means to be human in serving
God and living by His principles.
Nevertheless, by leaving God out of
the picture for the moment and
tracing western intellectual history
from the Enlightenment on, we can
get a sense of what humans took to
be the "essence" of their humanity
in developing western democracies
over the past 400 years. An
excellent summary is found in Terry
Eagleton's Culture and the Death
of God. We are meaning-seeking
animals, Eagleton points out. And if
we can no longer believe in God, we
will find other things to believe
in. The Enlightenment found it in
reason, the Idealists in the human
spirit, the Romantics in nature and
culture, the Marxists in historical
materialism and revolution, and
Nietzsche in the Übermensch. Others
found it in the nation, state, art,
the sublime, humanity, society,
science, the life force and personal
relationships. None of these was
entirely satisfactory, and none
proved self-sustaining.
The end result was postmodernism, or
the systematic subversion of meaning
altogether. Eagleton describes it as
"depthless, anti-tragic, non-linear,
anti-numinous, non-foundational and
anti-universalist, suspicious of
absolutes and averse to interiority"
(188). The central problem, as
Eagleton frames it, is that the West
no longer has a set of coherent
beliefs that would justify its
commitment to freedom and democracy.
Our mixture of "pragmatism,
culturalism, hedonism, relativism,
and anti-foundationalism" is an
inadequate defense against other
humans who believe in "absolute
truths, coherent identities and
solid foundations," and for whom
freedom and democracy are not values
to be pursued (198). If all we are
left with in the West is "Man the
Eternal Consumer" (190), we are left
with little at all.
Repositioning the
Singularity and Humanism
The "strong" Singularitarian
position has a response to this grim
assessment: a resurgence of the core
Enlightenment pursuit of human
rationality and scientific and
social progress. This is a
transcendent philosophy with Man as
the maker and creator at the center.
In some ways, it functions as a
religion among certain segments of
the Silicon Valley set, where all of
humanity's persistent
contradictions, pain and suffering
are ameliorated, banished and then
transformed into a glorious trans-
and posthuman end state, a process
some have caricatured as "rapture
for nerds." From this vantage point,
disease, growing old, and dying are
insults. They are not accepted as
part of the natural biological order
of things, but are instead problems
to be solved by the application of
biotechnology and other emerging
techniques. And why not? The human
race is a sorry mess, with our
stubborn diseases, physical
limitations, short lives,
jealousies, violence and anxieties.
If it were technologically possible,
why wouldn't we want to transcend
our current species? Why shouldn't
we?
The "weak" Singularitarian position
– a potential "hell" to the strong
"heaven" position – supports the
pursuit of transforming our lives
through these new and powerful
technologies, but warns of the
potential pitfalls: environmental
disasters, a widening divide between
the transhumans who can afford to
enhance themselves and the billions
of people who cannot, and
the emergence of a race of sentient
machines that may see no reason not
to wipe inferior humans off the face
of the earth. In light of the
inherent dangers involved with the
application of these new
technologies, we ought to be
undertaking studies of risk and
reward before we embark on any
ambitious technological projects.
One should not assume from this
brief overview of the Singularity
and its passionate proponents that
Singularitarianism is a full-fledged technological
and social movement, complete with a
set of guiding principles and
a strategic plan. It is primarily the
concern of a loose-knit, wealthy,
and talented group of
entrepreneurial white males. While
their rhetoric is idealistic and
soaring, it can cynically be argued
that their more prosaic concerns
revolve around making money. All the
same, the Singularity's focus on an
alleged watershed moment in human
history when we become transhuman
and, ultimately, posthuman brings
into stark relief the questions
posed at the beginning of this
discussion: what are human beings
becoming, and what are we leaving
behind?
One response is that in the West at
least, we are becoming instrumental
objects and a shifting series of
temporary selves in a vast,
all-encompassing and media-dominated
social and economic network where
identity and worth are determined by
instrumental (exchange) value and
not by any abiding intrinsic value.
This is what Virginia Woolf referred
to as "living in the machine." She
thought you could live outside the
machine for perhaps half an hour;
over eighty years later, there are
apparently millions of people who
can't live outside the machine for
more than a minute or two. The logic
of the Singularity and the emerging
posthuman is entirely consistent
with this. Technology, after all, is
pure instrumental intelligence.
Kurzweil, for example, defines
intelligence as the "ability to use
optimally limited resources to
achieve goals" (Age of Spiritual
Machines 67). If the resources were
not optimally limited, the process
would not be efficient, and therefore
would not be
intelligent. Kurzweil calls these
predicted creations "spiritual"
machines, but they are machines all
the same. This is the human becoming
machine. It is not the machine
becoming human.
What we are leaving behind in the
core humanistic tradition is the
notion of the human self, the
conscious subject, who has intrinsic
value and worth, and cannot be
reduced to, or explained away as, an
ensemble of instrumental means or a
deconstruction of signs, symbols,
social relations and structures on
the cutting room floor. The
postmodernists may deride the notion
of the autonomous subject as "false
consciousness," but it is a
consciousness whose coming into
being neither they nor (so far)
anyone else, including the
Singularitarians, can satisfactorily
explain. Consciousness is ultimately
a computational problem, say some of
the strong AI theorists. We'll
eventually figure it out and
simulate it in machines. Nonsense,
say others. It's a mystery. May it
remain so.
As for the future of humanism, its
current varieties of formulation
(and even its antihumanist critique)
seem inadequate to cope with, or
serve as a counterweight to, the
economic and technological forces
sweeping our vulnerable planet. If
we don't want to cede territory to
those who believe there are two
divisions in humanity—the redeemed
and the infidels—and if we still
value the principles of freedom,
justice and democracy, perhaps it is
time to reformulate humanism
(Braidotti). We might start by taking
the longer view and beginning to
think of ourselves as a species, and
not necessarily a privileged species
at that. Perhaps it is time to think
of ourselves as part of the planet
Earth, as enmeshed in the whole of
the life force and not just a
superior part of it. Finally,
perhaps it is time to think of
ourselves as becoming—yes—machines
with extended capabilities, but
which nevertheless remain sentient,
aware of the condition of other
species and life on Earth, and able
to recognize and apply positive
human values and morals to address
inhuman and irresponsible behavior
wherever it occurs. Put another way,
humans as humane machines.
The traditional
Enlightenment version of humanism
has been a powerful organizing
principle in the West for the past
200-300 years. Nevertheless, it may
no longer be sufficient for human
emancipation in the age of machines,
whether the Singularity arrives or
not. It is time for a more powerful
and encompassing formulation of what
it means to be human today. The
search must be joined.
Works Cited
"Anthropocene."
Wikipedia, www.wikipedia.com.
Braidotti, Rosi. The Posthuman.
Cambridge: Polity Press, 2013.
Cookson, Clive. Review of Superintelligence:
Paths, Dangers, Strategies, by
Nick Bostrom. Financial Times, July
13, 2014.
Eagleton, Terry. Culture and the
Death of God. New Haven: Yale
UP, 2014.
Grossman, Lev. "2045: The Year
Man Becomes Immortal." Time,
Feb. 21, 2011.
Kurzweil, Ray. The Age of
Spiritual Machines: When Computers
Exceed Human Intelligence. NY:
Viking, 1999.
____. The Singularity Is Near:
When Humans Transcend Biology.
NY: Viking, 2005.
Malik, Kenan. The Meaning of Race:
Race, History, and Culture in
Western Society. NY: Macmillan,
1996.
Wilson, Edward O. The Social
Conquest of Earth. NY: Norton,
2012.
Woolf, Virginia. The Waves.
1931. NY: Harvest Books, 1978.