I find current futurist predictions very intriguing, extremely exciting, and a bit worrisome; probably in that order.
Beyond the obvious ramifications for the future of our civilization, predictions by our world's leading thinkers (living and dead) bring forth a full range of emotive arguments on all sides of the fence. Maybe a fence is not the best analogy for this though, as it connotes a duality where you are either on one side or the other.
I like to think that maybe we are more like goldfish looking out of a fishbowl, arguing about what may or may not exist outside the confines of the glass. The goldfish has limited personal resources and cognitive capability, but that does not stop it from reacting to its environment and doing its best to swim out of the bowl.
The seemingly inevitable trajectory of current technological progress should lead us to believe that profound change is just around the corner for our species. But is it?
Personally, I like the theories put forth by futurists like Ray Kurzweil, who describes current trends in technological advancement as exponential: the pace of development becomes so rapid that it exceeds any expectation based on historical observation.
So in essence, we can't really predict what the future will be like by looking back at the past and assuming that progress will continue at roughly the same speed. Seen in this light, maybe we are in the midst of the near-vertical ascent of the exponential curve of technological development and hardly even realize it, because it's happening so fast and, critically… so in parallel. Developments occur not just more frequently because of technology, but simultaneously, in different fields all over the planet.
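To make that intuition concrete, here's a tiny sketch. The growth rates are invented for illustration (they are not Kurzweil's actual figures); the point is only how quickly a compounding trend escapes a straight-line extrapolation:

```python
# Illustrative only: a made-up "capability" index projected two ways.
# Linear assumes steady additive gains; exponential assumes compounding gains.
# Both starting values and rates are arbitrary assumptions.

for t in range(0, 31, 5):              # look 30 years out, in 5-year steps
    linear = 1 + 0.5 * t               # +0.5 units per year
    exponential = 1.5 ** t             # +50% per year, compounding
    print(f"year {t:2d}: linear = {linear:5.1f}, exponential = {exponential:10.1f}")
```

By year 30 the linear projection has barely moved while the exponential one is five orders of magnitude larger, which is exactly why extrapolating from the historical pace can mislead us.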
Take, for example, this article in Wired, which describes the essential elements coming together right now to make Artificial Intelligence (AI) possible: cheap parallel computing, big data, and better algorithms. These pieces of the puzzle, which might seem to most people like disparate projects catalyzed only by capitalism and the appetite for newer, shinier gadgets, may actually be part of the next logical stage of our human evolution: giving birth to AI (Joe Rogan, among others, has discussed this concept many times on his podcast).
This in turn leads to a TED talk I listened to recently by Nick Bostrom. I've also started reading his book Superintelligence: Paths, Dangers, Strategies. I think he sums it up quite well in his talk when he says that it is imperative that we (the collective "we" of humanity, I'm assuming) identify and address the potential issues that will arise from the creation of a super-intelligent AI. He says that "the more of the control problem that we solve in advance, the better the odds that the transition to the machine intelligence era will go well."
Beyond the plain common sense of that last statement, my paranoid, sci-fi-indoctrinated TrueJom self thinks humanity is far too human to compete with, let alone control, a superior intelligence unburdened by biophysical constraints. Hello World… Would you like to play a game? My name is Skynet.