He believes in a good backup plan.
For unknown reasons, the Earth’s ionosphere has weakened dramatically during the course of the last century, resulting in the collapse of the entire ecosystem. Earth has become an increasingly hostile and uninhabitable place and with no shield to protect it, it is at the full mercy of meteors.
All animal and plant species perished decades ago. All that remains is a small group of humans who resist the hostility and harshness of the external environment from SUMER, the last hive city in the world, designed specifically to keep its population alive through oxygen-supply systems.
“Billionaire Elon Musk has a really compelling reason to ramp up NASA’s budget: We need to become a multi-planet species to ensure the survival of the human race, and we need NASA’s help to do it.”
Someone tell Congress.
Although it was made in 1968, to many people the renegade HAL 9000 computer in the film 2001: A Space Odyssey still represents the potential danger of real-life artificial intelligence. However, according to mathematician, computer scientist, and author Dr. John MacCormick, the scenario of computers run amok depicted in the film – and in just about every other genre of science fiction – will never happen.
“Right from the start of computing, people realized these things were not just going to be crunching numbers, but could solve other types of problems,” MacCormick said during a recent interview with TechEmergence. “They quickly discovered computers couldn’t do things as easily as they thought.”
While MacCormick is quick to acknowledge modern advances in artificial intelligence, he’s also very conscious of its ongoing limitations, specifically replicating human vision. “The sub-field where we try to emulate the human visual system turned out to be one of the toughest nuts to crack in the whole field of AI,” he said. “Object recognition systems today are phenomenally good compared to what they were 20 years ago, but they’re still far, far inferior to the capabilities of a human.”
To compensate for these limitations, MacCormick notes, other technologies have been developed that, while many consider them artificially intelligent, don’t rely on AI vision. As an example, he pointed to Google’s self-driving car. “If you look at the Google self-driving car, the AI vision systems are there, but they don’t rely on them,” MacCormick said. “In terms of recognizing lane markings on the road or obstructions, they’re going to rely on other sensors that are more reliable, such as GPS, to get an exact location.”
Although it may not rely primarily on AI, MacCormick still believes that, with new and improved algorithms emerging all the time, self-driving cars will eventually become a very real part of daily life. And the incremental gains being made in real AI systems won’t be limited to self-driving cars. “One of the areas where we’re seeing pretty consistent improvement is translation of human languages,” he said. “I believe we’re going to continue to see high quality translations between human languages emerging. I’m not going to give a number in years, but I think it’s doable in the middle term.”
Ultimately, the uses and applications of artificial intelligence will still remain in the hands of their creators, according to MacCormick. “I’m an unapologetic optimist. I don’t think AIs are going to get out of control of humans and start doing things on their own,” he said. “As we get closer to systems that rival humans, they will still be systems that we have designed and are capable of controlling.”
That optimistic outlook would seemingly put MacCormick at odds with the warnings about the potential dangers of AI voiced recently by the likes of Elon Musk, Stephen Hawking, and Bill Gates. However, MacCormick says he agrees with their point that the ethical ramifications of artificial intelligence should be considered and guidance protocols developed.
“Everyone needs to be thinking about it and cooperating to be sure that we’re moving in the right direction,” MacCormick said. “At some point, all sorts of people need to be thinking about this, from philosophers and social scientists to technologists and computer scientists.”
MacCormick didn’t mince words when citing the area of AI research where those protocols are most needed: military robotics. “As we become capable of building systems that are somewhat autonomous and can be used for lethal force in military conflicts, then the entire ethics of what should and should not be done really changes,” he said. “We need to be thinking about this and try to formulate the correct way of using autonomous systems.”
In the end, MacCormick’s optimistic view of the future, and the positive potentials of artificial intelligence, beams through clouds of uncertainty. “I like to take the optimistic view that we’ll be able to continue building these things and making them into useful tools that aren’t the same as humans, but have extraordinary capabilities,” MacCormick said. “And we can guide them and control them and use them for positive benefit.”
When humanity needs to make use of a facility known lovingly as the “doomsday seed vault,” you know things have gone off the rails. After four years of civil war in Syria, the region’s main source of important seeds has been damaged, and researchers from the International Center for Agricultural Research in the Dry Areas (ICARDA) are asking to make a withdrawal from the seed bank. This will be the first time humanity has had to draw on this resource.
The Svalbard Global Seed Vault was officially opened in 2008 and contains more than 860,000 samples of seeds from nearly every country on Earth. Its goal is to preserve important agricultural crops like beans, wheat, and rice so they will be available in the event of war or natural disaster.
To do this, the vault is built into the side of a mountain in the remote northern reaches of Norway, on the Svalbard archipelago. It’s only about 800 miles (1,300 km) from the North Pole, which allows researchers to keep the seeds at a frosty 0 degrees Fahrenheit (−18 °C). Even if everyone left Svalbard and the power went offline, the vault would remain frozen and intact for at least a few centuries.
Neil deGrasse Tyson and Edward Snowden recently discussed the idea that encrypted communication between advanced extraterrestrial species (or between humans) could theoretically be indistinguishable from cosmic background radiation. Since a species may broadcast openly to the stars for only a brief window of its development (owing to the sluggish and primitive nature of radio broadcasts), this could prevent us (or other species) from ever making contact with one another.
With the Drake Equation suggesting a large number of communicative extraterrestrial civilizations, and the contrasting Fermi Paradox citing the lack of evidence for any, the question arises whether other factors are at work. In my opinion, the Drake Equation rings true in the sense that hundreds of billions of stars exist in our galaxy alone (many with their own diverse planetary systems), setting the stage for extraterrestrial life to be far more than idle speculation. Unlike the evidence-only stance inherent in the Fermi Paradox, I believe that in this case a conclusion based on inductive reasoning holds more water.
Keeping in mind the discussion in The Guardian article, a flaw in the Fermi Paradox’s evidence-based perspective should become apparent: secure, encrypted communication (cloaked by design) would render the existence of extraterrestrial intelligence invisible to the prying ear. If intentional, there could be many reasons for a species to withhold its whereabouts. Even an abstract theory from science fiction may hold a degree of truth. One example is the video game series ‘Mass Effect,’ in which an advanced, sentient machine race cleanses the galaxy of advanced life every 40,000 years, claiming to “bring order to chaos” for reasons “unfathomable.” Be it for an abstract reason such as this or simply for secure communication, an encrypted transmission’s presence wouldn’t register as noticeable to any observer. With nearly all audible signs of outside life muted, it is the other senses that hold the most promise of enlightenment.
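The "cloaked by design" point can be made concrete: well-encrypted data is statistically indistinguishable from random noise, so a listener with no key sees no structure to latch onto. The toy sketch below (not real cryptography; the SHA-256 keystream cipher and the sample "broadcast" are my own illustrative assumptions) shows a highly repetitive plaintext acquiring near-maximal byte entropy once encrypted:

```python
# Toy demonstration: a repetitive "broadcast" has low byte entropy (structure
# an eavesdropper can detect), while its encryption is statistically noise-like.
import hashlib
import math
from collections import Counter

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream by hashing key + counter (toy construction)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution (max 8.0 bits/byte)."""
    counts = Counter(data)
    return -sum((c / len(data)) * math.log2(c / len(data)) for c in counts.values())

message = b"CQ CQ CQ calling any station " * 200  # highly structured signal
cipher = xor_encrypt(message, b"shared secret")
print(entropy_bits_per_byte(message))  # low: obvious structure to an observer
print(entropy_bits_per_byte(cipher))   # near 8 bits/byte: looks like noise
```

A real cipher would use a vetted construction (e.g. an authenticated stream cipher), but the statistical effect is the same: without the key, the transmission blends into the background.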
Progress always seems to ride a slippery slope. Innovations generally bring a plethora of potential benefits and just as many dangers, both obvious and hidden. Work on technologies that tamper with our biological makeup is well underway in the neuro- and biotech industries. Historically, innovations in medicine have usually been beneficial in the aggregate.
But these new breakthroughs go beyond preventing and healing pre-existing conditions. Transhuman technologies hold the promise of enhancing who we are as individuals and potentially as an entire species, and the decisions surrounding these technologies are far from simple. Dr. Nayef Al-Rodhan, a philosopher, neuroscientist, and director at the Geneva Center for Security Policy, believes we should be acting now to prepare for the inevitable and the unpredictable ramifications.
Framing Human Motivation
Considering our species’ mixed track record in rolling out groundbreaking innovations, discussing and finding potential solutions to the hidden dangers as well as the obvious ones seems more than reasonable. One of the more puzzling questions is: where do we begin to have a pragmatic conversation on the ethics of these technologies?
There are plenty of theories about what drives human decisions, not least because human morality is infinitely complex and our minds crave frames through which to make sense of chaos. Dr. Al-Rodhan has his own conception of what drives human motivations. He makes meaning through the lens of the “5 P’s” – Power, Pride, Profit, Pleasure, and Permanence – which he posits drive human motivation. “This is my view, the foundation of my outlook…this perceived emotion of self interest drives our moral compass.”
Al-Rodhan’s view of human nature seems to make a lot of sense, bridging the rational with the emotional. Such a frame is particularly helpful when considering technology that undoubtedly taps into our deepest fears and hopes, and invokes rational (and irrational) debate. During a recent TechEmergence interview with Nayef, I asked for his thoughts on the concerns and considerations of this brand of technology in the coming decade.
The Near Business of Enhancement
Al-Rodhan believes that we will see cognitive enhancement primarily through neuropharmacology, or neuro- and psychostimulants. The concept behind this technology is nothing new: the military and many other organizations have used their stimulants of choice in the past, one of the most pervasive being alcohol. But this new wave of neuro- and psychostimulants will methodically target specific areas of the brain, opening the way to innovations like increased mood modulation and greater cognitive ability within the confines of the brain’s neuronal population.
Neuromodulation has already been used in the military, with some efforts aimed at making soldiers less emotional and able to get by on less sleep. The difficulties with side effects are often most pronounced when soldiers return from combat. “They are all messed up due to severe brutality, fear, and some of these agents they are given make them addicts to certain things,” says Nayef, acknowledging that this happens in almost all militaries. “The point is that psychostimulants and neuromodulators will make us feel very good, but they are very dangerous because they require addictive behavior…and we need strict oversight mechanisms.”
Nayef says that technologies such as brain-machine interfaces (BMI) are likely more than a decade away, but that implantable microchips (whether biological or biotechnological) are as immediate a concern as the introduction of neurostimulants. “The FDA in the United States is entrusted with keeping us on the right path,” says Al-Rodhan.
Finding Common Regulatory Ground
Is it possible to put in place national or international structures for managing these new and emerging technologies? Al-Rodhan believes it is more than possible; the primary issue, however, is that our regulation lags far behind our innovation. Regulatory frameworks are lacking for a number of reasons, and political unpopularity is a major obstacle: in elections, these kinds of regulatory frameworks are not on the front burner for most candidates, and the long-term outlook is limited.
Another area of concern is corporate pharmaceutical entities, which Nayef says are not as well regulated as some might think. Businesses are concerned about the bottom line above all else, which at times yields unfortunate outcomes for society as a whole. “This is part of their role as executive, they’re not too concerned about moral regulation,” says Nayef. As unappealing as it might sound to free-market capitalists, the institution that traditionally steps into these frontiers to regulate is government.
A relevant and current example is the science and business of modifying genomes in China, which is already investing heavily in this industry. Some effects of this technology may not be obvious at first, and negative ramifications could occur without the proper bioethical oversight. Al-Rodhan asks, “what happens if you get a piece of DNA that preludes the biosphere? Who knows what kind of mutation that may produce spontaneously or by merging with other DNA in an organism.” These are the types of questions that governments, academic institutions, corporations, and individual citizens need to be asking, considering the multiple perspectives that emerge from a framework like Al-Rodhan’s, which applies across cultural boundaries.
Al-Rodhan describes implementing such regulatory frameworks as a transnational effort, but says it starts with places like the U.S., Japan, and Europe, where accountability mechanisms already exist. Taking the lead doesn’t guarantee the same priorities will be adopted elsewhere, but it can provide an example — and ideally a positive one. “We have about a decade to get our act together,” says Al-Rodhan.
Meeting the basic needs of humanity is increasingly brought into question as we begin to resemble a cancer on the living organism we inhabit. As mass extinction becomes an ever more imminent reality, it’s apparent that more humans means more problems. To fix them, we have to approach these problems the way farmers do: with resilience. Farmers nurture their crops and hope for the right season, yet even the seeming predictability of spring, summer, and fall can be misleading. Nature has a way of leading things in the exact opposite direction from where they seem to be headed. And it is those who have stumbled but still press on who truly reap the rewards. For if farmers gave up after an adverse season, there’d be no food next year. There’d be no continuity of supply for society. There’d be no way of feeding the hungry, and no solution to ease the growing population and its rising demands.
So, with the exponential growth in human population this century, how do we combat such problems? One possible solution is to build “green skyscrapers” for the sole purpose of farming, where we can control the environment and stack multiple levels of plant growth. This could be done by using an array of mirrors to redirect sunlight to every floor while supplementing it with multi-spectral, energy-efficient LEDs. With advanced humidity control and water-recycling techniques, we’d contribute to the global conservation of water and open up valuable land to reforestation — all by taming the unpredictability of nature. This ensures the utmost quality and care goes into producing local, high-quality food, with the added benefit of honing the technology needed for interplanetary colonization.
A piece I wrote recently about blockchain & AI, and how I see the Lifeboat Foundation as a crucial component in a bright future.
Blockchain technology could lead to an AI truly reminiscent of the human brain, with fewer of its frailties and more of its strengths. Just as a brain is not dictated by any single neuron, neither is the technology behind bitcoin. The advantage (and opportunity) here is the amalgamation of many nodes bridged together to perform a single overall function, much as billions of neurons and synapses work in unison. If we set our sights on the grander vision, humans could accomplish great things by using this technology to create a truly lifelike artificial intelligence. At the same time, we need to keep in mind the dangers of such an intelligence being built upon a fault-tolerant system that has no single point of failure.
Like any technology, this one has upsides and corresponding downsides, and its advantages seem nearly endless. Most relevantly, it makes it possible to build internet services without the weaknesses exploited in the TV show ‘Mr. Robot,’ in which a hacker group named “fsociety” breaches a company’s data centers and destroys every piece of data it holds, with worldwide ramifications across society. Because blockchain technology avoids centralized data storage (every network user acts as a node that spreads the information), such a service can be rendered essentially impossible to take down. With no single weak point to target, a service in the right hands doesn’t go offline under heavy load, speeds up as more people use it, has inherent privacy and security safeguards, and offers unique features that couldn’t be achieved with conventional technology. In the wrong hands, however, it could be outright devastating. Going forward, we must tread carefully and keep tabs on this technology, as it could run rampant and upend society as we know it.
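The core mechanism that makes wiping out a blockchain's data so hard can be sketched in a few lines: each block commits to the hash of its predecessor, so altering any record invalidates every block after it, and every node holding a copy can detect the tampering independently. This toy chain (my own illustrative construction; real systems add consensus, proof-of-work, and replication across thousands of nodes) shows just the hash-linking:

```python
# Toy hash chain: each block stores the previous block's SHA-256 hash, so any
# edit to history is detectable by re-verifying the links. Not a full
# blockchain -- consensus and replication are omitted for brevity.
import hashlib

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    """Link each record to its predecessor via its hash."""
    chain, prev = [], GENESIS
    for data in records:
        h = block_hash(prev, data)
        chain.append({"prev": prev, "data": data, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    """Recompute every link; any tampered block breaks verification."""
    prev = GENESIS
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["data"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob:5", "bob->carol:2"])
print(verify(chain))                  # True: the chain is intact
chain[0]["data"] = "alice->bob:500"   # attempt to rewrite history
print(verify(chain))                  # False: tampering is detected
```

Because every honest node can run this check on its own copy, an attacker would have to corrupt a majority of copies simultaneously, which is precisely the "no single targeted weak point" property described above.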
Throughout the ages, society has always experienced mass change; the difference now is that change has the power to wipe us out. It is therefore a survival imperative that we guide these technologies toward benefit rather than destruction. We can evolve without destroying ourselves, but it won’t be a cakewalk. With our modern-day luxuries, we as a species think ourselves invincible, while in reality we’re just dressed-up monkeys operating shiny doomsday technology. Just as it was a challenge to cross the seas, to invent tools, and to harness electricity, the grandest challenges posed by the future, the ones that will define our survival, are the most difficult to overcome.