The fast-advancing fields of neuroscience and computer science are on a collision course. David Cox, Assistant Professor of Molecular and Cellular Biology and Computer Science at Harvard, explains how his lab is working with others to reverse engineer how brains learn, starting with rats. By shedding light on what our machine learning algorithms are currently missing, this work promises to improve the capabilities of robots – with implications for jobs, laws and ethics.
So the possibility that human civilisation might be founded upon some monstrous evil should be taken seriously, even if the possibility seems transparently absurd at the time.
The Hedonistic Imperative Documentary
Is It Moral to Enslave AI?
Posted in ethics, robotics/AI
Algorithms with learning abilities collect personal data that are then used without users’ consent and even without their knowledge; autonomous weapons are under discussion at the United Nations; robots that simulate emotions are deployed with vulnerable people; research projects are funded to develop humanoid robots; and artificial intelligence-based systems are used to evaluate people. One can consider these examples of AI and autonomous systems (AS) great achievements, or claim that they endanger human freedom and dignity.
We need to make sure that these technologies are aligned with our moral values and ethical principles if we are to fully benefit from their potential. AI and AS have to behave in ways that benefit people beyond reaching functional goals and addressing technical problems. This will allow for the elevated level of trust in technology that is needed for the fruitful, pervasive use of AI/AS in our daily lives.
Why Aging Is a Disease
Posted in biotech/medical, economics, ethics, policy, robotics/AI, space, transhumanism
The first of my major #Libertarian policy articles for my California gubernatorial run, which broadens the foundational “non-aggression principle” to so-called negative natural phenomena. “In my opinion, and to most #transhumanist libertarians, death and aging are enemies of the people and of liberty (perhaps the greatest ones), similar to foreign invaders running up our shores.” A coordinated defense against them is philosophically warranted.
Many societies and social movements operate under a foundational philosophy that can often be summed up in a few words. Most famous, in much of the Western world, is the Golden Rule: Do unto others as you would have them do unto you. In libertarianism, the backbone of the political philosophy is the non-aggression principle (NAP). It holds that it is immoral for anyone to use force against another person or their property except in cases of self-defense.
A challenge has recently been posed to the non-aggression principle. The thorny question libertarian transhumanists are increasingly asking in the 21st century is: Are so-called natural acts or occurrences immoral if they cause people to suffer? After all, taken to a logical philosophical extreme, cancer, aging, and giant asteroids arbitrarily crashing into the planet are all aggressive, forceful acts that harm the lives of humans.
Traditional libertarians set these issues aside, arguing that natural phenomena cannot be morally aggressive. This thinking is supported by most people in Western culture, many of whom are religious and fundamentally believe that only God is aware of and in total control of the universe. However, transhumanists, many of whom are secular like myself, don’t care about religious metaphysics and whether the universe is moral. (It might be, with or without an almighty God.) What transhumanists really care about are ways for our parents to age less, ways to make sure our kids don’t die from leukemia, and ways to save the thousands of species that vanish from Earth every year due to rising temperatures and other human-induced forces.
Is the risk of cultural stagnation a valid objection to rejuvenation therapies? You guessed it—nope.
This objection can be discussed from both a moral and a practical point of view. This article discusses the matter from a moral standpoint, and concludes it is a morally unacceptable objection. (Bummer, now I’ve spoiled it all for you.)
However, even if the objection can be dismissed on moral grounds, one may still argue that, hey, it may be immoral to let old people die to avoid cultural and social stagnation, but it’s still necessary.
One could argue that. But one would be wrong.
Want a career in AI and robotics? One of the best ways to enrich your knowledge about the sector is to follow these AI influencers.
The world of artificial intelligence (AI) and robotics has never been more exciting. With questions around the ethics of AI and the ever-developing robotics sector, there are so many options for someone who wants a career in AI.
A few ideas on self-awareness and self-aware AIs.
I’ve always been a fan of androids as portrayed in Star Trek. More generally, I think the idea of an artificial intelligence with whom you can talk and to whom you can teach things is really cool. I admit it is just a little bit weird that I find the idea of teaching things to small children absolutely unattractive while finding the idea of doing the same with a machine thrilling, but that’s just the way it is for me. (I suppose the fact that a machine is unlikely to cry during the night and need its diaper changed every few hours might well be a factor at play here.)
Improvements in the field of AI are pretty much commonplace these days, though we’re not yet at the point where we could talk to a machine in natural language and be unable to tell it apart from a human. I used to take for granted that, one day, we would have androids who are self-aware and have emotions, exactly like people, with all the advantages of being a machine, such as mental multitasking, large computational power, and more efficient memory. While I still like the idea, nowadays I wonder whether it is actually feasible or sensible.
Don’t worry, I’m not going to give you a sermon on the ‘dangers’ of AI or anything like that. That’s the opposite of my stance on the matter. I’m not making a moral argument either: Assuming you can build an android that has the entire spectrum of human emotions, this is, morally speaking, no different from having a child. You don’t (and can’t) ask the child beforehand whether it wants to be born, or whether it is ready to go through the emotional rollercoaster that is life; generally, you make a child because you want to, so it is in a way a rather selfish act. (Sorry, I am not of the school of thought according to which you’re ‘giving life to someone else’. Before you make them, there’s no one to give anything to. You’re not doing anyone a favour, certainly not your yet-to-be-conceived baby.) Similarly, building a human-like android is something you would do just because you can and because you want to.
“Using survey data from a sample of senior investment professionals from mainstream (i.e. not SRI funds) investment organizations we provide insights into why and how investors use reported environmental, social and governance (ESG) information.”
A new, well-written but not very favorable write-up on #transhumanism. Despite this, more and more publications are attempting to describe the movement and its science. My work is featured a bit.
On the eve of the 20th century, an obscure Russian man who had refused to publish any of his works began to finalize his ideas about resurrecting the dead and living forever. A friend of Leo Tolstoy’s, this enigmatic Russian, whose name was Nikolai Fyodorovich Fyodorov, had grand ideas about not only how to reanimate the dead but about the ethics of doing so, as well as about the moral and religious consequences of living outside of Death’s shadow. He was animated by a utopian desire: to unite all of humanity and to create a biblical paradise on Earth, where we would live on, spurred on by love. He was an immortalist: one who desired to conquer death through scientific means.
Despite the religious zeal of his notions, which a number of later Christian philosophers unsurprisingly deemed blasphemy, Fyodorov’s ideas were underpinned by a faith in something material: the ability of humans to redevelop and redefine themselves through science, eventually becoming so powerfully modified that they would defeat death itself. Unfortunately for him, Fyodorov, who had worked as a librarian and later in the archives of the Ministry of Foreign Affairs, did not live to see his project enacted, as he died in 1903.
Fyodorov may be classified as an early transhumanist. Transhumanism is, broadly, a set of ideas about how to technologically refine and redesign humans, such that we will eventually be able to escape death itself. This desire to live forever is strongly tied to human history and art; indeed, what may be the earliest of all epics, the Sumerian Epic of Gilgamesh, portrays a character who seeks a sacred plant in the black depths of the sea that will grant him immortality. Today, however, immortality is the stuff of religions and transhumanism, and how these two are different is not always clear to outsiders.