Human beings aren’t just getting greedier – they’re getting stupider too.
That’s according to Professor Stephen Hawking, and, really, it doesn’t seem like a particularly shocking claim.
Just simple observation seems to indicate a rapid stupidification process going on all over the place, for a whole host of reasons and manifesting in a whole bunch of different ways.
You see it in society. You see it in people. You see it in politicians and political discourse. You see it all across social media. You see it in the entertainment industries. You see it in the White House. I’m not even entirely sure when it started to happen – but we appear to be heading fast towards the comedy-dystopian future envisioned in the cult movie Idiocracy.
For anyone who’s never seen Idiocracy, it depicts a future in which mankind has become so stupid that a time-displaced person from the present day, stuck there, immediately becomes the most intelligent person on the planet – a saviour figure who ends up the source of all decisions and guidance for a human society that has lost its intellectual capacity.
Professor Hawking may have been somewhat tongue-in-cheek with his observations – he did have a wry sense of humour, as evidenced by his cult appearances in the likes of Futurama, The Simpsons, The Big Bang Theory and Star Trek: The Next Generation. But there’s also no doubt that he was making a serious point.
In an interview with Larry King on the terrific Larry King Now talk show last year (which airs on RT in the UK), Professor Hawking identified increasing greed and stupidity as the biggest threats to humanity’s survival, arguing that human beings are becoming stupider and greedier by the day.
And that this is going to push humanity towards extinction-level crises earlier than once predicted.
Some of his commentary related to the presidency of Donald Trump and in particular to the US withdrawal from the Paris Climate Agreement, but he was also focused on more pervasive things like the levels of air pollution that most people are regularly exposed to.
Hawking had in recent years, particularly the last year, become something of a prophet of doom – to the extent that he was even starting to be lightly mocked for his frequent warnings of catastrophe. Those warnings had also drawn argument and criticism from other experts in the relevant fields.
It is possible that, as he advanced in age and perhaps became even more conscious of his mortality, Professor Hawking felt an increasing sense of urgency in speaking about possible or likely existential dangers to the human race or the planet.
Certainly, given some of the less cautious and more gung-ho elements of various scientific fields (particularly in relation to the rapid progression of AI, as well as areas like cyber warfare), there is something to be said for Professor Hawking’s more cautious commentary in an era that could be described as being a dangerous crossroads for the human race.
No doubt, Stephen Hawking wasn’t infallible – meaning he could always have been wrong, or simply overreacting.
But science needs high-profile voices providing warning.
Or even, sometimes, objection. Just as the likes of Albert Einstein, Eugene Wigner and Leó Szilárd were all early opponents of nuclear weapons: indeed, even Robert Oppenheimer, the ‘father of the bomb’, almost immediately regretted his contribution to war and destructive capability, and – within months – was calling for international control of nuclear weapons.
And yet we’ve been trying, without success, to pressure world powers to get rid of nuclear weapons ever since – but it appears there’s no going back. That’s why it’s important for warnings and concerns to be registered ahead of time and not merely after the fact.
Hawking argued that sheer human greed would prevent us from dealing with global warming and other environmental problems, while the rapid advance of technology and the digitisation of human activity would create new existential threats to add to the existing ones – nuclear weapons, over-population, resource scarcity and so on.
More recently, Hawking suggested that human beings may have as little as 100 years left on the planet. He cited climate change, epidemics and population growth as major contributors to this revised doomsday clock, but also pointed to extra-terrestrial threats such as asteroid strikes (for which, he said, we are overdue).
In 2006, Hawking asked “In a world that is in chaos politically, socially and environmentally, how can the human race sustain another 100 years?”
Again, some balk at Hawking’s commentary, particularly the 100 years prediction.
But his view of the human race’s situation had evidently become so grim that he was advocating escape from the planet Earth – that mankind should begin moving into space as soon as possible, to preserve the option of a degree of human survival even if Earth-based civilisation itself doesn’t survive.
He warned that if humans don’t grow into an inter-planetary race soon and settle on other worlds, our species could die out within the next century.
Last year, Professor Hawking was very forthright with his warnings about the development of artificial intelligence and robots, cautioning that AI is going to quickly reach a point at which it will become a new form of life, entirely capable of outgrowing and outperforming human beings – with the likelihood that it might one day seek to replace us entirely.
“I fear that AI may replace humans altogether,” he said. “If people design computer viruses, someone will design AI that improves and replicates itself.”
Whether the professor was envisioning a scenario like the one in The Matrix isn’t clear – though that kind of eventuality is certainly one way to interpret his commentary. In 2014, he warned that dismissing the risks of advanced AI could prove to be our “worst mistake in history”.
As in The Matrix movie mythology, the idea is that advanced AI could completely outsmart us before we’ve even figured out what’s going on. “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders,” he said, “and developing weapons we cannot even understand…”
He also put his name to a letter by the Future of Life Institute calling for a prohibition on the development of autonomous weapons that are “beyond meaningful human control”. The fear, shared by numerous scientists and experts, is that we’re not far away at all from the deployment of autonomous systems (militarised AI or robotic warfare) in battlefield scenarios – a development described by some as the ‘Third Revolution in Warfare’, the first two being the invention of gunpowder and the invention of nuclear weapons.
In the last couple of years, Hawking also drew a lot of attention for his warnings about the dangers of contact with extra-terrestrial intelligences, arguing that we – as a species – should be wary of seeking out alien races, who could easily be hostile. More than that, he argued that intelligent or advanced alien civilisations would not think much of us, since we would be a primitive people to them. “If aliens visit us, I think the outcome would be much as when Columbus landed in America,” he said during the Into the Universe series on the Discovery Channel, “which didn’t turn out well for the Native Americans.”
He noted elsewhere, “If you look at history, contact between humans and less intelligent organisms have often been disastrous from their point of view, and encounters between civilisations with advanced versus primitive technologies have gone badly for the less advanced.”
Some of Professor Hawking’s recent statements and warnings had not been well received by various commentators and researchers, and in some cases he was accused of fearmongering, aping clichéd science-fiction ideas, or even attention-seeking.
However, it could just as easily be argued that he was providing a counter-balance – after all, Stephen Hawking was hardly a Luddite or an anti-scientific mind, so his warnings were not motivated by the same things that motivate most of the anti-science trends currently proliferating, particularly in conspiracy-based online commentary.
Science needs dissent and argument – without that, it descends into fixed dogmas and becomes more like religion.
I have also noticed that Hawking was singled out by conspiracy enthusiasts as somehow being an ‘agent of the New World Order’ (but, really, who isn’t?). Their argument rests on some of the things Hawking was coming out with in recent years: talking about climate change, talking about over-population, advocating mankind’s movement into space, or even seemingly advocating a ‘one-world government’.
I don’t buy any of that, however. Over-population is a real thing – there’s no point in labelling anyone who mentions it an ‘agent of the New World Order’. And I personally think a one-world government is probably inevitable, and might even some day be necessary: I wouldn’t want or trust our current, dodgy-as-fuck networks of political organisations and globalist bodies to create or oversee that move, but the emergence – one day in the future – of some form of global government seems inevitable.
It’s just a question of whether it’s going to be an oppressive, unhealthy one or something better; and, obviously, a question of who is controlling its establishment and whether they’re doing it in a way that is likely to benefit all of common humanity or a way that will benefit only a select section of the population.
In other words, I’m saying it isn’t the principle of one-world government that’s a problem – it’s the issue of what the reality of it will be that’s the problem.
However, while it’s impossible to know whether he was right or wrong in specific predictions (we won’t be able to tell for a while yet), some of his general warnings are hard to argue with.
That we need to be highly circumspect and cautious about advancing artificial intelligence should be obvious – but a number of AI enthusiasts were distinctly unimpressed by Hawking’s sentiments.
I would suggest that the combination of increasingly dumbed-down human societies and increasingly advanced AI would make it seem almost inevitable that humans – flawed, emotional, greedy, fat, temperamental – will eventually cede more and more agency and responsibility to the more efficient intelligence of AI.
If that ends up being the trajectory, then we would conceivably end up with our entire fate resting entirely in the hands of AI. From that point on, The Matrix scenario could become much more likely.
It isn’t much of a leap – just think about how dependent we already are on the Internet, computers and mobile phones. And then think how dependent we might one day be on much more advanced and all-encompassing AI.
It isn’t even necessarily about AI turning malevolent or anything of the sort: it could be seen purely in terms of evolution, where human beings become redundant. Shit, AI might even decide it is better for the health of the planet than human beings are – and that might even end up being the case, particularly if the environmental problems continue or escalate.
In terms of our own survival as the dominant species of the planet, the key would have to be finding – and carefully maintaining – some kind of agreed-upon equilibrium between retaining human agency and developing AI, where we avoid reaching the point where we are entirely dependent on AI. I think that’s the gist of what Hawking was warning about: the problem is, if our total dependency on the Internet (in the space of a mere 15 years or so) is anything to go by, that’s precisely what is destined to go wrong – we WILL end up entirely dependent on AI.
On Hawking’s view of the dangers of alien contact, I have always thought it incredibly dangerous to court potential extra-terrestrial powers or races – because any ET power capable of interstellar travel is, by necessity, technologically superior to us and would view us as primitive. And, as Hawking himself pointed out, if the history of human societies and interactions is anything to go by, advanced societies have a habit of abusing less-advanced ones.
His Native-American analogy seems apt: I would also suggest that a truly advanced ET power might view us the way the British Empire viewed India.
In fact, I wonder if Hawking’s warnings about ET contact were targeted not just at the mainstream scientific community or organisations like SETI, but at those engaged in the more occult side of things too, as well as the massive community of Ancient Astronaut Theory enthusiasts. One of the things that has always stuck out like a sore thumb to me regarding the centrepiece of Ancient Astronaut theory – specifically the mysteries of the ‘Anunnaki’ at Sumer (ancient Iraq) – is that the Sumerian testimony seems to suggest that human beings were envisioned as worker-drones (or slaves) for the allegedly extra-terrestrial race that established the city-states in Iraq.
I don’t at all dismiss the Ancient Astronaut theories (particularly in regard to Sumer, for which the evidence seems very solid): but I find people’s enthusiasm for the ‘space gods’ very odd, given that the Sumerian record seems to suggest a less-than-idyllic state of affairs.
That’s not what Professor Hawking was referring to in his warnings (I assume); but given some of the contemporary obsession with the ‘return of the space gods’, it’s all related.
In terms of establishing contact between mankind and some as-yet-unknown alien civilisation, it is essentially a coin-toss as to whether that civilisation would be benevolent or malevolent. This image here, by the way, is from the 1960s Star Trek episode ‘The Corbomite Maneuver’ – that alien’s face always scared the crap out of me.
In terms of Professor Hawking advocating our migration from the planet and our becoming a space-faring civilisation, that seems like it has to be a later stage in our evolution anyway (whether or not it has anything to do with the planet becoming untenable for us). There is even a well-founded suspicion, at least in some quarters, that we may already be a space-faring civilisation – and that a secret space-programme has been in operation for some time.
When I ponder the possibility, however, of mankind expanding into space and getting off-planet, there are causes for concern. It might be tempting to immediately envision it as some idyllic Star Trek situation where a peaceful, unified human civilisation ventures forth into space: but, again, given our track record, it might just as easily end up looking more like the movie Elysium – in which the wealthy elites live in luxury out in space, while the vast mass of lower-class humanity is left to slug it out and fight for scraps in abject conditions on the Earth.
But Hawking wasn’t wrong to think that people seem to be getting more stupid with each passing day – and getting conspicuously close to the world depicted in Idiocracy. How much of that is by engineering (as in the belief held by some that we are being deliberately dumbed down via what we eat, what we watch on TV and how we’re goaded into viewing the world, etc) and how much of it is just our own fault is hard to say.
There have actually been a couple of studies published in recent years purporting to demonstrate a diminishing level of general intelligence, one theory being that clever people are generally having fewer babies while less-clever people are generally having more – meaning that, over time, the results are going to be problematic for societies.
Although, with notions of cleverness and stupidity, it gets very tricky and very subjective: that kind of argument doesn’t account for different types of intelligence, and it also seems to assume that intelligence is inherited or genetic, which I suspect doesn’t entirely hold true.
This wasn’t really what Hawking was talking about though: I think he was talking more in cultural terms – as in nurture and not nature.
In other words, I would suggest the problem of intelligence and intellect not being nurtured in broad society. When we get to a point where our societies don’t encourage us to think intelligently or to value intellect – or in fact encourage us to be dumb – our collective intelligence level is inevitably going to go down the toilet.
You even end up in a situation where intellectuals are viewed with suspicion – and, arguably, this is even something actively sought by some politicians, institutions, corporations and media organisations, because the dumber we are, the easier we are to manipulate or lie to.
How much of Professor Hawking’s fears for the future end up being validated remains to be seen. I’m sure he would rather be remembered more for his published works and his rich contribution to scientific thinking, debate and understanding.
I’m a bit embarrassed to say that I never got through A Brief History of Time in its entirety (I found Asimov’s and Carl Sagan’s books more accessible): to be fair to myself, I was a teenager when I tried to read it – and I probably need to try again with a (slightly) more grown-up mind.
However, one also hopes that some of his recent warnings or reservations might echo in the minds of institutions, innovators and legislators, as our human and technological journey continues to accelerate at such a rapid pace.