What is the best environment for innovation? Many will think of Silicon Valley and similar high-tech ecosystems. Although such clusters are perfectly suited to nourishing creative destruction, there is another way of nurturing innovation: the imaginary laboratory of the science fiction artist.
Change Is Inevitable, Progress Is Not
To use mathematical language, the larger the set of capital, skilled labor, and entrepreneurs concentrated geographically (e.g. in Silicon Valley) or accumulated over a short period of time, the larger the number of mutations, combinations, and permutations of business ideas. This naturally leads to a higher number of novel products and services. Consequently, many countries and regions now create technological clusters in the hope of replicating the San Francisco Bay Area’s economic success. Informed by the imperative of economic growth, national and regional governments, as well as international organizations, have adopted the narrative of entrepreneurial innovation. The presumption, in simplified form, is that by supporting new startups, investing in university technology centers, and teaching people to code, one can generate more economic value and address rising unemployment. This trend of publicly backed Schumpeterian creative destruction, in which entrepreneurs (in small teams) and intrapreneurs (within larger structures) constantly redefine economic structures by introducing new goods, markets, and production methods, has intensified in light of the protracted economic crisis. The language of innovation is omnipresent, and you would be hard-pressed to find a politician or a policy strategy that does not speak it.
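The combinatorial logic behind this claim can be made concrete with a small sketch. The pool sizes below are illustrative assumptions, not data from the text:

```python
from math import comb, perm

# Illustrative sketch: if each "business idea" can be recombined with any
# other, the number of possible pairings grows far faster than the pool
# itself -- quadratically for pairs, and faster still for larger groupings.
for pool_size in (10, 100, 1000):
    pairs = comb(pool_size, 2)    # unordered combinations of two ideas
    ordered = perm(pool_size, 2)  # ordered permutations (who builds on whom)
    print(f"{pool_size:>5} ideas -> {pairs:>7} pairs, {ordered:>7} ordered pairs")
```

A cluster that doubles its pool of people, capital, and ideas thus roughly quadruples the number of possible pairwise recombinations, which is one way to read the advantage of geographic concentration.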
Looked at from a technological angle, creative destruction can be perceived as an autopoietic system in which “the collective of technology produces technology from itself” (Arthur 2012). Why is there so much fuss about innovation now, if this process is as old as humanity? Erik Brynjolfsson and Andrew McAfee give a thorough answer in their book The Second Machine Age. Population growth (more people means more ideas) and globalization (easier and quicker exchange), both developed through the first machine age of the industrial revolution, are now coupled with the continual replacement of the human brain by the ever-expanding analytical and cognitive capacities of digital technologies. This leads to an exponential increase in new goods, solutions, …and problems, a pessimist would add.
No wonder, then, that this technological outbreak is reflected in official documents. How do governments approach innovation? Most policies focus either on financial instruments such as subsidies and loans for innovative enterprises (the easiest approach) or on creating a legal and administrative context in which these companies can thrive (a more demanding approach). Although these measures are understandable and necessary in the short term, they are merely a reaction to a fast-changing world. Governments and politicians should look beyond the event horizon.
A Special Envoy for Science Fiction
Technological innovation always precedes social transformation. When the pace of development was slower, societies and individuals had more time to adapt their organizational, economic, and value systems. The current tempo of innovation and data accumulation (whether unprocessed or organized into knowledge) does not allow for that. Nowadays the world economy doubles several times within a single lifetime, and the trend is still accelerating. This is not to say that it can continue indefinitely; nevertheless, the fact stands that exponential growth makes us readjust our behavior far more frequently than in the past.
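The arithmetic of doubling can be sketched in a few lines. The growth rates below are illustrative assumptions, not figures from the text:

```python
from math import log

def doubling_time(annual_growth_rate: float) -> float:
    """Years needed to double at a constant compound growth rate."""
    return log(2) / log(1 + annual_growth_rate)

# At a steady 3% annual growth the economy doubles in about 23 years,
# i.e. roughly three doublings within an 80-year lifetime.
for rate in (0.01, 0.03, 0.05):
    print(f"{rate:.0%} growth -> doubling every {doubling_time(rate):.1f} years")
```

The point of the sketch is the sensitivity: a shift from one to three percent annual growth cuts the doubling time from about seventy years to about twenty-three, which is what turns adaptation from a generational problem into a within-career one.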
In practical terms, the speed of change may in the short term cause higher labor turnover and structural unemployment, due to the substitution of automated processes for human labor and the quicker obsolescence of skills, not to mention its psychological and cultural consequences. Since these are complex problems, governments are not eager to address them, adhering instead to politically painless measures like wealth redistribution, or to changing the entrepreneurial context by reforming intellectual property rights or tax systems. I am not arguing against public support for innovation. States and other political agents should nevertheless engage more than they do now in foresight activities and in adjusting contemporary policies (welfare, education, health). Some do try novel solutions, like Finland and the Netherlands, where experiments with universal basic income will soon be conducted. But this is still too small an effort given the scope of the challenges that await humanity.
Everything from processes and behaviors to structures and agents is gradually being converted into biochemical and electronic algorithms. Combined with advanced technologies and materials, these bring about completely new phenomena, e.g. transhumanism (the enhancement of human capacities by technology) or machine superintelligence (intellects that greatly exceed the cognitive performance of humans; Bostrom). Unless some unexpected intervention interrupts the ongoing technological development like a deus ex machina, some form of human-level machine intelligence is predicted to arrive by the end of this century (Bostrom). Even if this prognosis fails to materialize, artificial intelligence and transhumanism are already realities, and soon we will have to deal with their effects on societies, individuals, and humankind in general. It is also governments’ responsibility to do so.
First, protection and impact assessment are among a government’s core functions. In 1965, Irving John Good articulated this challenge in a simplified but blunt way: “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.” This quote is not meant to spread panic or to plead for governmental control of science, but to make us realize the scale of possible problems once certain processes are set in motion. By creating artificial intelligence capable of learning, we assist in the creation of “structures, patterns, and properties [that arise] during the process of self-organization in complex systems” (Goldstein). This process, known as emergence, is ubiquitous, ranging from the physical world (water crystals, weather) through biology (animal swarming) to organizational life (city development, traffic jams). Similarly, by putting together elements of artificial intelligence we generate systems able to self-organize and become independent. Although the emergent behavior of such systems is hard to predict, it is the responsibility of political leadership to engage more in the debate on responsible innovation and on what such novelties can mean for humanity. Second, embracing the positive potential of digital transformation is also a political task. Most governments (and companies alike) have so far treated digital technologies as a fancy add-on, something that can facilitate old-fashioned processes.
Instead, digital technologies should be put at the core of how governments organize their activities, interact with citizens, and provide services.
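Emergence of the kind described above can be demonstrated in a few lines. The sketch below uses Conway’s Game of Life, a standard toy model not mentioned in the text: three local rules, applied uniformly, produce a “glider,” a pattern that travels across the grid even though no rule mentions movement.

```python
from collections import Counter

def step(live: set) -> set:
    """One generation of Life: a cell is alive next step if it has exactly
    three live neighbors, or two live neighbors and is already alive."""
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: five cells whose collective behavior (diagonal
# motion) exists nowhere in the rules applied to any single cell.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# After four generations the glider reappears, shifted diagonally by (1, 1).
print(state == {(x + 1, y + 1) for x, y in glider})  # True
```

The analogy is loose but instructive: the designers of the rules did not design the glider, just as the builders of learning systems do not design the behaviors that emerge from them.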
Here, finally, comes the role of science fiction. For many years, sci-fi books, films, and other artistic genres have examined the implications of new technologies and provided inspiration for new ones. These two qualities can prove invaluable in realizing the potential and dealing with the consequences of digital transformation. Unrestrained by political or economic interests, sci-fi lets us experiment safely and freely, pointing to solutions and dangers many of us cannot see from within our professional or imaginative boxes. How can a government function as a platform? What does the arrival of the Internet of Everything mean? Are we on the brink of a new religion, dataism, as recently suggested by the historian Yuval Harari? Will superintelligence pose a threat to liberal democracy? How can communities at various levels of digitalization coexist? And how can we work with machines rather than against them? We need to start answering such questions now, and politicians have to abandon their myopic attitude in favor of more interdisciplinarity. So, next time a government (or a corporation) revises its innovation strategy, it should think of employing at least one sci-fi advisor.
Arthur W. B. (2012). What Is Technology and How Does It Evolve? The New York Academy of Sciences Magazine, Winter 2010.
Autio E., Kenney M., Mustar Ph., Siegel D., Wright M. (2014). Entrepreneurial innovation: The importance of context. Research Policy 43: 1097–1108. Elsevier B.V.
Baumol W. (2002). The Free-Market Innovation Machine: Analyzing the Growth Miracle of Capitalism. Princeton University Press.
Bostrom N. (2014). Superintelligence. Paths, Dangers, Strategies.
Oxford University Press, Oxford.
Brynjolfsson E., McAfee A. (2014). The Second Machine Age,
W. W. Norton & Company.
Goldstein J. (1999). “Emergence as a Construct: History and
Issues”. In: Emergence: Complexity and Organization, 1 (1): 49–72.
Good I. J. (1965). “Speculations Concerning the First Ultraintelligent Machine.” In Advances in Computers, vol. 6, edited by Franz L. Alt and Morris Rubinoff, 31–88.
Goodwin T. (3 September 2016). We’re at peak complexity — and it sucks. Retrieved from https://techcrunch.com/2016/09/03/
Johnson S. (2002). Emergence: The Connected Lives of Ants, Brains, Cities, and Software. Penguin Books.
Harari Y. N. (2016). Homo Deus: A Brief History of Tomorrow.
Saltelli A., Dragomirescu-Gaina C. (2014). New Narratives for
the European Project, European Commission, Joint Research
Centre (JRC), Unit of Econometrics and Applied Statistics,
June 21, 2014.
Schomberg, R. von (2013). A vision of Responsible Research and
Innovation. In Responsible Innovation, ed. Owen, R., Heintz, M.
& Bessant, J. London: John Wiley: 51-74.
Schumpeter J. A. (1994). Capitalism, Socialism and Democracy. London: Routledge.
Arthur W. B. (2009). The Nature of Technology: What It Is and How It Evolves. New York: Free Press.