“Embrace, Don't Relinquish the Future”
Max More 2006

 
 

01. During our last email exchange, you underlined the fact that now was a good time to comment on the latest developments of transhumanist ideas and theories. What made you think this? And why is it such a good time?

Back in 1994, an article appeared in Wired, titled “Meet the Extropians”. One of the readers, obviously hostile to transhumanism, declared that extropians and other transhumanists were just a fad, soon to be forgotten. Similarly, in her 1999 book How We Became Posthuman, literary critic Katherine Hayles thought she had disposed of transhumanism. But, in a 2008 article, Hayles noted: “Transhumanism has exponentially more adherents today than it did a decade ago… and its influence is clearly growing rather than diminishing.”

A great many of the ideas transhumanists wrote about back in the 1980s and 1990s in places like Extropy magazine, the Extro conferences, and to some extent Foresight Institute and Alcor Foundation gatherings, are now discussed in a myriad of publications, TV shows, and web forums. We regularly hear news items on developments in synthetic biology, on the success of Singularity University, and on continual advances in the technologies informing transhumanist goals and hopes.

Awareness and discussion of transhumanist ideas continues to heat up as we continue to see promising technological developments that support our ideas. These include recent leaps forward in synthetic biology, artificial intelligence, neuroscience and neural-computer interfaces, and the use of increasingly sophisticated social intelligence networks. Along with the heightened interest, we’ve seen more and more criticism of transhumanist thinking and goals. At the same time, the rapid international growth of the transhumanist movement means that many people (even those heading up some efforts such as conferences and magazines) lack a good sense of the history or of the full context of transhumanist thinking.

 

02. Let’s go back to the beginnings. You were born on the Old Continent and educated at Oxford. What were your first contacts with the « transhuman » movement? Were you influenced at first by particular readings or intellectuals, by science-fiction novels or comics? I remember, for example, Kevin Warwick answering Michael Crichton’s The Terminal Man and the movie Terminator to a similar question…

 Transhumanist ideas were certainly not part of my immediate environment as I grew up in the southwest of England. My parents and half-brothers showed no interest in such ideas, nor even in technological progress more generally. Nor was I encouraged in transhumanist directions by the school I attended from the ages of 10 to 16. That school, QEH, was founded in 1586 and was quite traditional, at that time even requiring boarders to wear the old bluecoat uniform.

All my transhumanist-related influences came from reading and from television shows. At least as early as 5 years old (when I watched the Apollo 11 moon landing), I was fascinated by space and space travel. That spurred me to read science fiction. Aside from the commonly known TV shows like Doctor Who, I was quite devoted to The Tomorrow People (“homo superior” children with special powers and an advanced AI called “Tim”), and Timeslip, a show that actually dealt briefly with cryonic suspension, intelligence augmentation, and other transhumanist themes. Soon after that, I read large amounts of SF, especially Robert Heinlein, Philip K. Dick, and Robert Silverberg, but also Asimov, Clarke, and others. I was especially interested in SF with a vision or interesting philosophical and psychological speculation. Comics were a major hobby of mine from around 10 to 17 years old. These provided models of mutants and a kind of fantasy posthuman, as well as technologically enhanced people such as Tony Stark/Iron Man. They fed my sense of physical possibilities and, to some extent, intellectual ones (although superintelligent characters are hard for writers of ordinary intelligence to portray convincingly).

Clearly, from an early age I was always fascinated by the general idea of enhancing human capacities, physically, intellectually, as well as emotionally and ethically. For several years from the age of 10 or 11, this took the form of an interest in the occult and psychic phenomena. Besides reading about it, I tried out various groups and practices, starting with transcendental meditation (TM) at 11 (perhaps 12) years old, after a lecture by my Latin teacher at school. I went on to try out the Rosicrucians with their odd mixture of Cartesian dualism and Egyptian style, and the International Order of Kabbalists. By my mid-teens, I had developed more critical thinking ability (and a stronger foundation of scientific understanding) and came to reject religious and occult thinking. I spent less time reading fiction and more reading psychology, economics, philosophy, and books such as The Mind’s I by Hofstadter and Dennett.

So, those were many of my early influences. None of my influences at that time came from people I had met in person. Back in the 1970s and early 1980s, there really wasn’t a transhumanist movement, and I didn’t meet any other transhumanists or proto-transhumanists until 1982. My first contact with such people (in person and through newsletters) came through mutual interests in life extension, space colonization, and intelligence augmentation. Around 1982, I read Pearson and Shaw’s flawed but impressive book, Life Extension: A Practical Scientific Approach, and started meeting with several like-minded people at Imperial College in London to discuss these ideas.

This led to my trip to California in 1986, where I spent six weeks learning firsthand about cryonics. Back in England, I co-founded an organization that is now known as Alcor-UK—the first real European cryonics organization. Our little organization put out a newsletter/magazine, Biostasis, and we attracted plenty of interest from television, radio, newspapers, and magazines—probably bringing these ideas to a large new audience for the first time in England. Other publications that were around in the years shortly before and after I moved from England to the United States (in 1987) were Omni and its companion Future Life, Claustrophobia (a newsletter covering life extension, space colonization, and intelligence augmentation), and Reality Hackers and its successor Mondo 2000.

These influences and experiences came together during my second year in the USA, when my friend Tom and I started Extropy magazine (1988) and then, with a few others, Extropy Institute. This led to the beginnings of real, modern transhumanism, with the ideas explicitly codified and presented in “The Extropian Principles” and in articles such as my “Transhumanism: A Futurist Philosophy”.

 

03. The general tendency in Europe is clearly oriented towards dystopia, as if the media on the Old Continent, not finding pleasant stories anymore, were sinking into a particularly sombre pessimism. How do you analyze this lack of dynamism and the sometimes regressive aspect of European societies?

No simple answer can adequately explain the tendency of Europe toward pessimism. I suggested several likely factors back in 1997 in a talk at The Big Fatigue conference in Munich. This pessimistic, dystopian attitude has always had influence in the USA as well, and more strongly now than ever (except perhaps during the 1970s). People seem to have an addiction to claims of disaster, catastrophe, and crisis. We can see the appeal of extreme, catastrophic scenarios in the cases of climate change, the Y2K apocalypse, mad cow disease, SARS, vaccines and autism, swine flu, cell phone tumors, DDT and cancer, population growth and famine, and many other largely manufactured scares.

But it does seem to be true that pessimism and dystopian thinking are stronger in Europe than in the USA. For instance, European opposition to genetically modified foods is stronger than in the USA. During the Middle Ages and at other times and places, the Christian religion has been a drag on both social and technological progress. It tends to separate the holy spiritual world from the degraded, corrupt, “fallen” real world, denigrating material progress as antithetical to spiritual salvation. Yet, despite the comparative strength of Christianity in the USA, it seems to be Europe that has swallowed whole the idea that material success means spiritual impoverishment. I think this shows how tricky it is to convincingly account for cultural pessimism. In this case, the USA may have attracted the more material-progress-friendly strands of Christianity (as embodied in the “Protestant work ethic”).

One of the big causes of the sense of fatigue in Europe, I suspect, is the legacy of statism. By this I mean the belief in big, centralized government as the solution to all social problems and challenges. Big government has failed to adequately solve almost every problem it has tackled, and historically government has played a larger role in Europe than in the USA (although that has been changing). Most Europeans (and many Americans) would prefer security and guaranteed income to the healthy chaos of the free market, which destroys industries and builds new ones and challenges people to retrain, to learn, and to be more dynamic.

Another factor is the lack of frontiers. People of high energy, of creativity, those who are unhappy with the status quo tend to move, and they have usually moved west. Although immigration does bring some challenges and tensions, I have no doubt that the USA has massively benefited economically and culturally from immigration. The reduced dynamism and enthusiasm in Europe partly results from the relative lack of immigration by the energetic and optimistic.

 

04. Besides your academic career, what motivated you to settle in the U.S.?

The relative pessimism of England and Europe that we just discussed is a major part of the answer to this question. A big chunk of my formative years fell in the 1970s—an especially gloomy decade. My last years in England, 1984 to 1987, were spent at Oxford, where the dominant mentality was one of protest, complaint, and opposition, rather than being constructive, entrepreneurial, or hopeful. I yearned for more positive, constructive attitudes and expected to find them in America, especially in California. And, to a large extent, I did. Things were very far from perfect in California, of course, but the place had an almost mythical attraction, fed by Hollywood. It was also home to Silicon Valley, the hotbed of technological innovation.

I knew that moving to California would enable me to meet many more people with an interest in creating a better future. Indeed, I met people like Dr. Roy Walford, the futurist FM-2030, and (at the home of Timothy Leary) Natasha Vita-More, whom I would later marry.

 

05. Sci-fi author Norman Spinrad wrote a text titled « The crisis of transformation » in which he develops the following idea: as a species, we are living through a crucial time, especially from an energy point of view, facing two prospects: either our extinction or the development of a new human civilization, destined to explore the universe and perpetuate itself for millions of years. Do you share this vision of a crucial moment for the future of humankind?

It does make for a dramatic scenario. It’s rather like the singularity idea, which says that we are close to a point where our world will be suddenly and incomprehensibly transformed by super-intelligent machines. Confirmation bias makes it easy for us to support the idea that we are now at some turning point—an unprecedented time in the history of the human race and the planet. But millenarian thinking has been around for, well, millennia. It has taken religious forms for centuries and, more recently, you can see the same approach in both fiction and non-fiction, such as in the 1930s film Things to Come: “All the universe or nothing. Which shall it be, Passworthy?” In the title of his 1969 book, Buckminster Fuller posed the question: Utopia or Oblivion?

On the other hand, the “paradise or apocalypse” theme may be more plausible today than at any point in the past. That’s because technological progress has both given us more means to damage and destroy ourselves and more ways to improve and advance ourselves. But I think that will be even more true ten years from now, and 20 years, and 50 years, and so on. We do need to develop new energy sources fairly briskly, but I see no need to panic, nor is the situation unprecedented. In the Industrial Revolution, the British were running out of wood as they rapidly burned it for energy. They made the transition; so will we.

We do need to take action and plan well to make the transition, but I see no sound reason to strongly doubt that we will. What worries me most is that many of the people crying loudest that we are dooming ourselves are those who most vigorously oppose the technological progress needed to move ahead successfully. They are the ones who oppose nuclear power, genetically modified crops, and life extension technologies, for instance.

I agree that the complexity of our technosocial systems and the decisions related to them has never been greater. Their complexity threatens to overwhelm our decision-making capabilities. Again, this need not happen, but the threat is real so long as we continue failing to make use of the best methods for creative and critical thinking. That urgent and deep need is precisely why I have been focusing my energy on a tool for the future: what I call the Proactionary Principle.

 

06. Could you set out the « Proactionary Principle » and its ten principles for our readers? If my reading is right, you have formulated it in opposition to the « Principle of precaution » which you don’t seem to like very much.

The pervasive cultural pessimism discussed in previous answers has manifested itself in the form of a principle. This “precautionary principle” has even found its way into the European constitution. The precautionary principle is being widely used by regulators, negotiators, and activists, especially in Europe but also in the USA, explicitly or implicitly, to control new technologies and to limit productive activities. A simple way to state that principle is: “Don’t do anything for the first time, and if someone else is already doing it, stop them at once.” The principle takes many specific forms, but its essential imperative is: When there is the possibility of risks to human health or the environment, precautionary measures should be taken even when there is a lack of scientific certainty.

Precautionary measures typically mean prohibitions or severe restrictions. As such, the precautionary principle is a bullet aimed at the heart of our ability to innovate and progress technologically. If you think about what the precautionary principle requires, taken literally and consistently, it would mean an end to technological progress (and the social progress it can support). If we had pursued caution in such an extreme and zealous manner throughout history, practically no technology would ever have been allowed. The principle would clearly prohibit fire, the airplane, aspirin, chlorine, the contraceptive pill, DDT, all medical drugs with any side effects, electrification, energy production, knives, and penicillin.

These technologies have brought enormous benefits, despite some undesirable side effects. This leads me to the paradox of the precautionary principle: The principle endangers us by trying too hard to safeguard us. It tries “too hard” by being obsessively preoccupied with a single value—safety. By focusing us on safety to an excessive degree, the principle distracts policymakers and the public from other dangers. Among those other dangers, of course, are natural risks—those that are not the result of human activity but of the natural environment. The precautionary principle is asymmetrical in that it inherently favors nature and the status quo over humanity and progress, while routinely ignoring the potential benefits of technology and innovation.

You can easily find many other severe flaws in the precautionary principle. (It’s stunning and deeply disturbing that this principle is being so widely used.) The principle lacks objectivity and typically assumes worst-case scenarios. It is vague and unclear, making it vulnerable to misuse and its application vulnerable to corruption. It fails to require comprehensive thinking about an issue. It assumes that the effects of regulation and restriction are all positive or neutral, never negative. It embodies an inappropriate burden of proof (illegitimately shifting the burden of proof and unfavorably positioning the proponent of the activity). It fails to accommodate tradeoffs. And it is ultra-conservative, protecting the position of existing technologies and methods by repelling innovations.

Because of the great potential for damage by the precautionary principle, and because of its widespread and often uncritical use, I set out to develop an alternative, wiser and more balanced principle. This is what I call the Proactionary Principle (or ProP for short). The Proactionary Principle grew out of discussions at Extropy Institute’s Vital Progress Summit in 2004. Because the real world is complex, the ProP has to be more complex than the precautionary principle. Originally, the Proactionary Principle was composed of ten component principles. I have since reduced those to five.

The Proactionary Principle recognizes that the freedom to innovate technologically and to engage in new forms of productive activity is valuable to humanity and essential to our future. The burden of proof therefore belongs to those who propose measures to restrict new technologies. At the same time, technology can be managed more or less wisely. Stated most briefly, the ProP says that:

Progress should not bow to fear, but should proceed with eyes wide open.

Or:

Protect the freedom to innovate and progress while thinking and planning intelligently for collateral effects.

Expanded a little, it can be formulated like this:

Encourage innovation that is bold and proactive; manage innovation for maximum human benefit; think about innovation comprehensively, objectively, and with balance.

I have broken down this overall principle into five component principles (or “Pro-Actions”) that make it easier to apply. I’ll state these here more briefly than in my book on the Principle. The first component principle is: Be Objective and Comprehensive. Big, complex decisions deserve to be tackled using a process that is objective, structured, comprehensive, and explicit. This means evaluating risks and generating alternatives and forecasts according to available science, not emotionally shaped perceptions, using the most well validated and effective methods available. This also means we should consider all reasonable alternative actions, including no action. We should estimate the opportunities lost by abandoning a technology, and take into account the costs and risks of substituting other credible options.

The second component principle is: Prioritize Natural and Human Risks. Avoiding all risks is not possible. They must be assessed and compared. The fact that a risk or threat is “natural” should not give it any special status. Technological risks should be treated the same way as natural risks. Avoid underweighting natural risks and overweighting human-technological risks. Inaction can bring harm as well as action. Actions to reduce risks always incur costs and come at the expense of tackling other risks. Therefore, give priority to: reducing immediate threats over remote threats; addressing known and proven threats to human health and environmental quality over hypothetical risks; more certain over less certain threats; irreversible or persistent impacts over transient impacts; proposals that are more likely to be accomplished with the available resources; and measures with the greatest payoff for resources invested.

The third component principle is: Embrace Diverse Input. Take into account the interests of all potentially affected parties, and keep the process open to input from those parties or their legitimate representatives. Recognize and respect the diversity of values among people, as well as the different weights they place on shared values. Whenever feasible, enable people to make reasonable, informed tradeoffs according to their own values.

The fourth component principle is: Proportionate Response and Restitution. Consider restrictive protective measures only if the potential negative impact of an activity has both significant probability and severity. In such cases, if the activity also generates benefits, discount the impacts according to the feasibility of adapting to the adverse effects. If measures to limit technologies do appear justified, ensure that the extent of those measures is proportionate to the extent of the probable effects, and that the measures are applied as narrowly as possible while being effective. Those responsible for harm should make restitution swiftly.

The fifth component principle is: Revisit and Revise. We only learn from our decisions if we return to them later and check them against actual outcomes. To ensure that decisions are revisited and revised as necessary, decision makers should create a trigger to remind them. It should be set far enough in the future that conditions may have changed significantly, but soon enough to take effective and affordable corrective action. In some cases, this kind of assessment can be done continuously, improving the gains made in “learning by doing”.

 

07. During an interview with RU Sirius for NeoFiles, you maintained that « we need to dip ourselves into chaos, uncertainty, and challenge every so often ». An opinion that I share, but one that is somewhat difficult to explain to middle classes already worried about the future of their children, whether European or American. Does this mean that transhumanists are part of an avant-garde, inevitably a minority, which explores possible futures at the margins of the masses? And if so, what would be the answer to critics calling this posture elitist, or even eugenic, as I have already heard?

Everyone should be able to appreciate the need for their children—if not themselves—to maintain their flexibility, to make it part of their nature to periodically challenge and stretch themselves. As our life expectancy continues to grow, and as jobs seem to last for shorter periods of time, adaptability and flexibility become more important than before.

In that interview, I wasn’t talking only about willingness to explore radical future possibilities. People can be radically transhumanist in terms of their expectations of the future while lacking the habit (or virtue) of frequently challenging themselves—and vice versa. So I wouldn’t describe transhumanists as an elite. Although the distinction isn’t sharp, you can think of the kind of transhumanist dynamism I’m talking about as having an intellectual and a personal or practical aspect.

So, if you want to talk about an elite or an avant-garde, you have to realize that there are at least two elites, with some people belonging to both, others to only one, and many moving in and out of them over time. People can maintain their intellectual dynamism even as they settle into static lives that lack much change or challenge in any other respect. That’s why I don’t think it’s helpful or accurate to talk in terms of a defined elite.

Certainly, the fact that people identify themselves (or are identified) as transhumanist, doesn’t make them automatically better, more advanced, or smarter than those who do not. Transhumanists are a varied bunch. It’s probably true that, in addition to having thought more creatively and critically about the future than almost all non-transhumanists, on average we are more dedicated to rationality. But it’s certainly not true that all transhumanists are more rational than non-transhumanists, or that we live and behave more wisely. Like the rest of our species, transhumanists can and often do fail to fully live up to their ideals, even where they agree their ideals are relevant to their current lives.

 

08. How do you explain the violence of some of these reactions faced with the perspective of a post-humanity? What could prevent the transition from a humanism, losing momentum nowadays, to a form of post-humanism - which would sustain our evolution from a philosophical point of view, respecting essential notions of freedom (what we call “free will”), tolerance, independence, openness and curiosity?

Several powerful factors create resistance to the idea of improving upon the human condition. Transhumanists sometimes find these factors hard to understand or fully appreciate. To us, it’s obvious that the human condition evolved from natural causes that had no concern for our well-being. It’s obvious that aging and permanent, involuntary death are bad things. It’s obvious that human capacities for reasoning, feeling, and virtuous behavior fall badly short of what is possible. If we are to make progress in improving on the human condition—in moving beyond being human while retaining whatever is truly valuable about it—we must first fully appreciate the sources of resistance.

One major source of resistance to the transhumanist project is a fear of losing one’s species-identity. Over centuries, many noble ideals have been built into the notion of being human. Even when the idea of humanity is portrayed negatively (as in Christian notions of the Fall and inherent sinfulness), we are held to be unique and special. When people have no clear image of what could come after humanity, they fear the loss of that humanity. They think instead of all the ways of being sub-human.

It doesn’t help that the typical image of technology-augmented humanity is that of the cyborg. Cyborgs (as usually portrayed, especially onscreen) have greater than human strength and sometimes senses, but are emotionally subhuman, with a more limited and controlled set of values and desires. That is the opposite of the transhumanist desire for refined emotions, a wider and brighter range of emotions, and more noble, rationally-sculpted, and improved emotions and motivations.

Then there are a range of philosophical errors that lead many people to misunderstand or reject the transhumanist vision. Among these I would include dualism (the idea that the mind or soul is a substance separable from and independent of the body), essentialism about human nature, and a view of personal identity that makes it hard to see how an individual can survive the kinds of transformations that transhumanists talk about.

Another major factor may be a fear of new and excessive choices. We are in a period where the number of options open to us in most areas of our lives keeps expanding, but our ability to make those choices sometimes falls behind. That leaves us in a state of anxiety. It takes a while for our individual and social capacities for choosing among so many alternatives to catch up. But they do catch up. For instance, online recommendation systems and advice in social networks now enable us to make consumer choices from among a much vaster array of options than we had just a few decades ago.

The transhumanist project of creating new options for human biology, cognition, and emotion clearly opens up a whole new vista of deep, existential choices. This may cause some people to react: “Oh, no! Not more choices!” You see this kind of concern, stated in philosophical terms, in writers such as Michael Sandel (he talks of the new choices as “hyperagency”) and Leon Kass (who speaks of an “explosion of responsibility”).

I’m concerned that some transhumanists are contributing to another source of resistance by over-emphasizing the idea of catastrophic or extinction (or “existential”) risks. By focusing attention heavily on the small chance that new technologies could destroy all of us, they may be feeding into our apocalyptic culture. This is a culture that is thrilled by end-of-the-world stories, whether in the Terminator movies or in extreme views of global warming. Extinction risks do need to be considered carefully and planned for, of course. That is part of proactive thinking. But too much public talk of imagined catastrophic risks surely builds resistance to advancing technologies that could, in fact, save us.

More opposition to transhumanist ideas and goals originates from the same factors that have made the precautionary principle into a popular ideology and cultural value. Related to that, I think there is also a strand of hatred of our own species in many parts of the world and in many cultures. We have long seen that in some elements and versions of religion, and now see it most obviously embodied in green ideology.

 

09. Behind the statistics illustrating the great disparity between the minority of ultra-rich people and the vast majority of poor and middle-class people, aren’t we heading towards a division of humanity into distinct branches, driven by unequal access to certain categories of healthcare? I’m thinking, of course, about nanotechnologies or gene therapies that could in the long run assure incomparable well-being and extreme longevity to their beneficiaries, but wouldn’t necessarily be accessible to most people because of their high costs. And what would be the probability of a peaceful cohabitation between these different forms of humanity and post-humanity?

I would be careful about projecting short-term trends into the farther future. It is certainly true that some medical technologies have, in recent years, become increasingly expensive. This may or may not continue to be true over the next two or three decades. Much of that depends less on the technologies themselves than on regulations, laws, and business models. For instance, excessively heavy regulation (an instance of precautionary restrictions) raises the cost of developing new drugs and medical devices. We might lower the costs dramatically by loosening these restrictions (while maintaining liability for poor testing of potentially dangerous treatments) while also significantly reducing the length of patent protections. We don’t yet know what kind of treatment will be required once we understand how to greatly extend human life span. It may turn out to be complex and expensive, or it may be simple and cheap.

What I would emphasize is that strong, coercive efforts to close the gap between the very well off and the poorer generally only result in slowing growth and making everyone worse off. The best way to reduce the gap (to the extent that it can be reduced) is to remove barriers to growth, education, opportunity, and trade.

I would also stress that it’s more important to narrow the gap between the poor present and the rich future. The disparities among people today are tiny compared to the gap between all of us today and where we can and should be in the decades ahead. Just think about what we have today that was not available to the richest people of a century or two ago. If we allow technology to progress rapidly, the gap between what the richest of us have today and what the poorest or average person has a century from now could be even larger, perhaps vastly larger if progress does accelerate.

There’s another point that, although quite basic, seems not to be widely enough appreciated: advanced technologies, especially medical technologies, may have high costs initially—like most other new technologies and major products. The wealthier part of the global population, who can pay high prices, essentially enables the market for these to develop. These technologies and products then become cheaper and spread to the less well off. This beneficial process can be stifled or slowed by regulation, trade barriers, and excessive protection of intellectual property. The spread of the benefits of new technologies is, in itself, a natural process. Without interference from bad policies, it would be inevitable. Our efforts should be focused on lowering or removing the barriers.

 

10. On a more optimistic note, what hopes do you have for the 21st century? In your opinion, what would be the most credible scenarios for a general evolution of humankind, in answer to the current ecological, economic, and demographic crises?

First of all, I have to say that I see most of the urgent and important issues facing us not as “crises” but as challenges. True crises (such as the recent financial crisis and the current oil spill) tend to be relatively short term. Major issues, including Europe’s and Japan’s demographic difficulties and other issues such as climate change, seem to me to be challenges that only appear to be crises because of the exaggerated way in which they are typically presented. That is, I reserve “crisis” for truly emergency events, rather than longer-term problems that we can solve over time in reasonably clear ways.

I am quite hopeful about our abilities to overcome both challenges and acute crises, in part because of continued and accelerated development of “The wisdom of the crowd”. By that, I mean mechanisms for collective or distributed smart decision making. Emerging technologies and social experiments are enabling new forms of collective intelligence. Collective intelligence allows us to better tackle cognition problems, coordination problems, and cooperation problems. As James Surowiecki explained in his book on the topic, for collective or distributed intelligence to work, the “crowd” must be characterized by diversity of opinion, independence of members from one another, and a specific kind of decentralization, and there needs to be a good method for aggregating opinions.

In addition to the wide range of technology-enabled applications for the wisdom of crowds, I hope we will see more widespread adoption of better methods of making decisions and forecasts. The Proactionary Principle directs decision makers toward evidence-based methods for better creative and critical thinking and for forecasting. I am starting to build an alliance of friends of progress to encourage the widespread adoption of the Proactionary Principle among decision makers in large institutions.

Technology might help us here, if we can develop artificial intelligences that assist our decision procedures. However, AI technology alone will help little if not accompanied by improved social dynamics and decision processes. Without that improved context, greater intelligence may just lead to more refined rationalization rather than to true reasoning.

On a material and economic level, I expect current worries about energy supplies to be resolved without any crash of civilization. While I do see a place for “renewable” sources such as solar, wind, and wave power, a large part of the solution should come from increased use of nuclear power. France has long stood out as a shining example of this. It’s a great shame that other countries have failed to make as much use of nuclear.

Other ways in which technology could help to improve our future dramatically: more people are seeing the possibility of cheap matter printers. These devices could take raw materials and produce just about any physical product you want. This could transform economies, removing most historically familiar scarcity, especially if combined with open-source plans for using the devices. Another major optimistic possibility is the discovery of an effective means of stopping and reversing the aging process. Once people believed that life-extension treatments worked, the ramifications would be profound. Such treatments would be rapidly adopted, despite the current widespread opposition in principle.

 

11. Considering that the choice between evolution and stagnation belongs to each of us, what practical advice would you give to La Spirale’s readers who wish to improve their well-being, cognitive abilities, and longevity, and also to be actively involved in the evolutionary scenarios evoked above?

I would advise readers to:

 

12. And finally to conclude, would you be so kind as to give some good reasons to smile to those of our readers still anxious and pessimistic about the future?

My answers to the question about my hopes for this century partly answer this one too. In contrast to the usual alarmist, pessimistic, and defeatist voices eager to be heard and to sell their books, I believe that we are improving the world over the long run. Humans make many mistakes, sometimes horrible ones, but our technological and social development has gradually and vastly improved the lot of the human race. These many positive long-term trends are obscured and distorted by the media and by the agendas of activist groups and even scientists.

I’ve already written a lot here, so I’ll just recommend reading some work on the positive side. I especially recommend Julian Simon’s book The Ultimate Resource, the more recent It’s Getting Better All the Time: 100 Greatest Trends of the Last 100 Years by Stephen Moore and Julian L. Simon, or the even more recent book by Indur M. Goklany, The Improving State of the World: Why We’re Living Longer, Healthier, More Comfortable Lives on a Cleaner Planet.

In the long run, I also hope and expect that the technologically-enabled ability to reengineer and resculpt our human nature and instincts will allow us to improve our behavior and morality and to become wiser.