The world is not what we think it is.
The astonishing pattern recognition engine that is the human brain has leveraged billions of years of genetic evolution, thousands of years of social evolution, and (depending on your age) decades of observation to create the most amazing predictive model of the world ever constructed. And yet, our model still is not, nor will it ever be, a true representation of the real world.
We are creatures of perception, living within an evolving map.
While the breach between the real world and our model of it holds true even in our most dispassionate pursuit of natural scientific knowledge, nowhere is it more true than in the study of human systems. In particular, our understanding of something as complex as the mass of human interaction that makes up the economy is nothing more than a convenient story reflecting the current conditions of the world. We create economic fairy tales to break a complex system into easily digested chunks of metaphor.
We believe in the economic legends not due to willful ignorance or nefarious means but because we see faces in the Martian landscape. The human mind abhors a perceptual vacuum, and we will reach out to fill any gap of understanding with explanation. To create a pattern where none may exist. We create stories because we must.
Over the course of the 20th century, as the industrial revolution took hold and transformed human life, we developed a deep belief in many such economic fairy tales, which fall all over the political spectrum. While these beliefs might have had pragmatic value in their time, changing conditions of the world mean that we must now reexamine some of our deeply held beliefs.
The Belief: Economic growth leads to job creation.
The Reality: While this may have held true for previous epochs of economic growth, there seems to be a diminishing connection between growth in economic output and the employment of people. Continual growth in automation, robotics, and artificial intelligence will mean that more can get done with less human involvement. With the population continuing to grow, this is a fundamental problem for a society built around the job as the sole legitimate mechanism of wealth redistribution. Job growth should no longer be the political or economic goal.
The Belief: Economic growth will always raise the living standard.
The Reality: The view that economic growth will make everyone’s lives better is central to the belief in the economic system as an overall good, sometimes simplified as “economic growth raises all boats”. Given the staggering disparity in wealth going to the top 1% or 0.1% of people, there is no reason to think that economic gains could not be so disproportionately distributed as to actually make some people worse off. The economy is not a zero-sum game, in that there could be more for everyone every year that we see growth, but there is no law that dictates that all those within the social hierarchy gain equally, or gain at all. This is of particular concern if we realize that the growth of capital could potentially become decoupled from the real world, growing of its own volition, independent of actual growth in servicing human need. We do not need absolute equality, but we should aim for equality of betterment.
The Belief: Economic growth means destruction of the environment.
The Reality: Some on the left of the political spectrum tend to be totally dedicated to the belief that the economy is an absolute bad when it comes to the environment. A properly regulated economic system could be the best friend of the environment, innovating new means to improve efficiency and decrease environmental impact. Economic benefit means environmental destruction only as long as we allow it. The bottom line: We can have economic prosperity and environmental sustainability, but unless we have both we are going to end up with neither.
The Belief: Technological progress will allow people to live happier, healthier lives.
The Reality: There is no belief more near and dear to the optimistic futurist than the view that, despite ourselves, technology will allow us to live happier and healthier lives than ever before. While this is one possible outcome of technological progress, and the one we strive for as our technology works to improve our lives, technology can just as easily be turned towards human destruction. Maybe the robots can save us, but it’s up to us to ask them to. The fact is, technology will serve to magnify both our successes and our sins. It is up to us to choose which.
While the use of such explanatory models to aid in our understanding of the world is absolutely inescapable, we risk becoming fundamentalist if we cling overzealously to any one of these beliefs. We must be willing to let go of our understanding if the changing conditions of the world demand it. If we refuse to change our beliefs and the intricate systems of politics, justice, and economics which we have built on top of them, we face nothing short of oblivion.
We must adapt or die.
In light of our adaptation, and our technological innovation, there is a heated debate about whether or not we are really experiencing accelerating innovation and accelerating returns on technological progress. The nay side of the debate was best articulated by Robert J. Gordon, who compares modern growth and innovation to the massive improvements that modernization brought to the Western world over the course of the 20th century. The mechanization of agriculture brought untold bounties of food, sanitation put toilets in every home, life expectancy increased greatly, and people went from the speed of the horse to the speed of sound in the span of just the first half of that century.
And what do we have to show for one and a half decades of the 21st century? The internet went from a novelty to the center of modern commerce, entertainment, social interaction, and thought. Similarly, the smartphone went from non-existent, to an expensive luxury, to almost completely ubiquitous in the space of only a few years. Still, Gordon argues that although these recent innovations seem miraculous to us, they really pale in comparison to the innovations of the 20th century. Gordon sums up his argument with a question: would you trade your toilet for a cell phone? Personally, if I had to keep just one, I would probably choose the cell phone. Nonetheless, if, for the sake of argument, we accept Gordon’s view that innovation is actually slowing over time, there are several reasons why this might be true.
There is the low-hanging fruit argument. Basically, as we innovated in the spaces of medicine, agriculture, and transportation, we were able to achieve large gains in the early years because it took relatively little effort to realize those gains. The first inventions are the most powerful because they solve the biggest problems with the simplest technology.
In the case of transportation, the amount of energy that it takes to go from horse speed to the speed of a train, and then to the speed of a jetliner, follows a more or less linear relationship within each of two paradigms: ground travel and air travel. However, once you get beyond the speed of a jetliner (roughly 90% of the speed of sound), the incremental cost of increasing your speed becomes too high. It starts to take more and more fuel to realize smaller and smaller gains. You have entered the era of diminishing returns for technological innovation.
There is little real argument that individual technological paradigms are ruled by this dynamic, where initial gains are much more profitable but are eventually followed by an era where exponential increases in investment are necessary to realize smaller and smaller gains. Suffice it to say that the fact that exponential gains in single, isolated technologies cannot go on forever could explain why we might be seeing a slowdown in innovation in the 21st century.
Another intriguing explanation for Gordon’s hypothesized innovation slowdown could be that the major innovations of the 21st century are actually meta-innovations. Perhaps inventions like the internet and the smartphone are most important not in their direct ability to change human lives, but rather in their ability to empower innovation itself.
In the 20th century, we saw great advancement in concrete metrics of human progress. Innovation delivered more food, more cars, more speed, more health, etc., but the way that these innovations were realized remained much the same. Schools (especially universities) looked about the same in 1900 as they did in 2000. Similarly, academic research was performed and published in much the same way over the entire 20th century. This is absolutely not the case in the 21st century. Although the changes started in the mid to late 1990s, one can almost draw a line through the year 2000 as the start of the internet era of scientific research. In the world of laboratory research, we often wax poetic about how much time researchers used to spend in libraries doing their research.
A typical conversation might go something like this: “So you used to have to actually physically go to a library to research your topic of interest? You would have to browse through entire journals with no Ctrl+F? Even then you would be looking at work that was months to years old with no updates to… well, that just sounds awful.”
Compare this instead to the world we scientifically grew up in. We can use Google Scholar to get daily updates on the most recent work in any given field. We also benefit from other forms of instant communication for scientific gain, like email. And as maligned as PowerPoint often is, it is still a revolutionary tool for communicating the most recent research to colleagues on a more frequent basis than peer-reviewed publication allows. The speed of scientific research in the 21st century is not comparable to what existed before the year 2000.
As important as it has been for academic research, the internet has been just as revolutionary for every other aspect of life too. The internet is the archetype of meta-innovation.
This tendency towards increasingly transformative meta-innovation does not seem to be decreasing either. Technologies like the massive-open online course, 3D-printing, cryptocurrencies, and machine learning all stand to be as meta-revolutionary as the internet. To return to my favorite example of late, the proliferation of forms of cryptocurrency (think: Bitcoin) might seem to the cynic to be driven mostly by people trying to profit on hype, but I see something much deeper at work. Bitcoin and its compatriots have created an entirely new space for ideas about currencies, value, exchange, and trust. Cryptocurrencies allow us to ask fundamental questions about what is value, and how an ideal market should operate.
Cryptocurrencies (just like the internet and general-purpose computers) are a meta-innovation.
The ultimate meta-innovation will come in the form of a computer which can program itself. It has been said that an artificial intelligence which innovates to improve itself will be the final invention of humankind. In the world of IBM Watson, the self-improving computer is not a far-off dream of science fiction.
We are now living in an age where the importance of new technologies lies not only in their ability to change our lives, but in their ability to change change itself. So if Gordon is right and we really are seeing a slowdown in the rate of innovation in the 21st century, then maybe it is not only because all of the easy inventions have already been invented.
Perhaps we are simply living in the middle-ages of innovation, a time where we are investing our innovation capacity in the future of the future.
Perhaps we are living in the age of meta-innovation.
As humans, we look to see patterns in everything we do. Our economic models and technological models are coming together. One cannot evolve without the other. And the ever-evolving map that we live in, where our perceptions grow and our understanding of the universe widens, is changing once again.