


Geoffrey Rockwell and Bettina Berendt’s (2017) article calls for ethical consideration around big data and digital archives, asking us to reconsider whether open access to data is an inherent good. In outlining how digital archives and algorithms structure potential relationships with those whose testimony has been digitized, Rockwell and Berendt highlight how data practices change the relationship between researcher and researched. They make a provocative and important argument: datafication and open access should, in certain cases, be resisted. They champion the careful curation of data rather than large-scale collection, pointing to the ways in which these data are used to construct knowledge about research subjects and fundamentally limit their agency by controlling the narratives told about them. Rockwell and Berendt, drawing on Aboriginal Knowledge (AK) frameworks, amongst others, argue that some knowledge is simply not meant to be openly shared: information is not an inherent good, and access to information must instead be earned. This approach was prompted, in part, by their own work scraping #gamergate Twitter feeds and the ways in which these data could be used to speak for others without their consent.

From our vantage point, Rockwell and Berendt’s renewed call for an ethics of datafication is a timely one, as we are mired in media reports of social media surveillance and electoral tampering on one side. Thanks, Facebook. On the other side, academics fight for the right to collect and access big data in order to reveal how gender and racial discrimination are embedded in the algorithms that structure everything from online real estate listings, to loan interest rates, to job postings (American Civil Liberties Union 2018). As surveillance studies scholars, we deeply appreciate how Rockwell and Berendt take a novel approach: they turn to a discussion of Freedom of Information (FOI), Freedom of Expression (FOE), Free and Open Source software, and Access to Information. In doing so, they unpack the assumptions commonly held by librarians, digital humanists, and academics in general, showing that accumulation and datafication are not inherent goods.


Well, Wesley J. Smith just did another hit piece against Transhumanism. https://www.nationalreview.com/corner/transhumanism-the-lazy…provement/

It’s full of his usual horrible attempts to justify his intelligent-design roots while insisting he doesn’t have any religious reasons for them. But then again, what can you expect from something published in the National Review?


Sometimes you have to laugh. In “Transhumanism and the Death of Human Exceptionalism,” published in Areo, Peter Clarke quotes criticism I leveled against transhumanism in a piece I wrote entitled “The Transhumanist Bill of Wrongs.” From my piece:

Transhumanism would shatter human exceptionalism. The moral philosophy of the West holds that each human being is possessed of natural rights that adhere solely and merely because we are human. But transhumanists yearn to remake humanity in their own image—including as cyborgs, group personalities residing in the Internet Cloud, or AI-controlled machines.

When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air. My colleagues were brilliant, dedicated, and committed to making the world a better place.

We founded Clarifai 4 Good, where we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal. We were determined to be the one AI company that took its social responsibility seriously.

I never could have predicted that two years later, I would have to quit this job on moral grounds. And I certainly never thought it would happen over building weapons that escalate and shift the paradigm of war.


The researchers found that people have a moral preference for supporting good causes and not wanting to support harmful or bad ones. However, depending on the strength of the monetary incentive, people will at some point switch to selfish behavior. When the authors reduced the excitability of the rTPJ (right temporoparietal junction) using electromagnetic stimulation, the participants’ moral behavior remained more stable.

“If we don’t let the brain deliberate on conflicting moral and monetary values, people are more likely to stick to their moral convictions and aren’t swayed, even by high financial incentives,” explains Christian Ruff. According to the neuroeconomist, this is a remarkable finding, since: “In principle, it’s also conceivable that people are intuitively guided by financial interests and only take the altruistic path as a result of their deliberations.”


Our actions are guided by moral values. However, monetary incentives can get in the way of our good intentions. Neuroeconomists at the University of Zurich have now investigated in which area of the brain conflicts between moral and material motives are resolved. Their findings reveal that our actions are more social when these deliberations are inhibited.

When donating money to a charity or doing volunteer work, we put someone else’s needs before our own and forgo our own material interests in favor of moral values. Studies have described this behavior as reflecting either a personal predisposition for altruism, an instrument for personal reputation management, or a mental trade-off of the pros and cons associated with different actions.

It is a few years since I posted here on Lifeboat Foundation blogs, but with the news breaking recently of CERN’s plans to build the FCC [1], a new high energy collider to dwarf the groundbreaking engineering triumph that is the LHC, I feel obliged to write a few words.

The goal of the FCC is to greatly push the energy and intensity frontiers of particle colliders, with the aim of reaching collision energies of 100 TeV, in the search for new physics [2]. Linked below is a technical note I wrote & distributed last year on 100 TeV collisions (at the time referencing the proposed China supercollider [3][4]), highlighting the weakness of the White Dwarf safety argument at these energy levels and calling for a more detailed study of the Neutron star safety argument, if it is to be relied on as a solitary astrophysical assurance. The argument applies equally to the FCC, of course:

The Next Great Supercollider — Beyond the LHC : https://environmental-safety.webs.com/TechnicalNote-EnvSA03.pdf

The LSAG, and others including myself, have already written at length on the topic of astrophysical assurances. The impact of cosmic rays (CR) on Neutron stars is the most compelling of those assurances with respect to new higher-energy colliders (other analogies, such as White Dwarf capture-based assurances, don’t hold up quite as well at higher energy levels). CERN will undoubtedly publish a new paper on such astrophysical assurances as part of the FCC development process, though one would hope it comes sooner rather than later, to lay to rest concerns before outsider debate incubates to a larger audience.
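To give a rough sense of the energy scale behind these cosmic-ray analogies, here is a short illustrative Python calculation (my own sketch, not taken from the technical note above): it converts a 100 TeV centre-of-mass collision into the equivalent energy a cosmic-ray proton would need when striking a proton at rest to produce the same centre-of-mass energy, which is the comparison the astrophysical assurances rest on.

```python
# Equivalent fixed-target cosmic-ray energy for a 100 TeV collider.
# For a projectile of energy E hitting a proton at rest, the squared
# centre-of-mass energy satisfies s ~ 2 * E * m_p (for E >> m_p),
# so the required projectile energy is E = s / (2 * m_p).

M_P = 0.938  # proton rest mass, GeV


def fixed_target_energy(sqrt_s_gev: float) -> float:
    """Cosmic-ray proton energy (GeV) matching a collider's sqrt(s)."""
    return sqrt_s_gev ** 2 / (2 * M_P)


sqrt_s = 100e3  # 100 TeV expressed in GeV
e_cr = fixed_target_energy(sqrt_s)
print(f"Equivalent cosmic-ray energy: {e_cr:.2e} GeV (~{e_cr * 1e9:.1e} eV)")
# -> roughly 5.3e9 GeV, i.e. ~5e18 eV, which lies within the observed
#    ultra-high-energy cosmic-ray spectrum; such particles have been
#    bombarding dense stars for billions of years, which is the basis
#    of the Neutron star and White Dwarf safety arguments.
```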

A Chinese scientist who created what he said were the world’s first “gene-edited” babies evaded oversight and broke ethical boundaries in a quest for fame and fortune, state media said on Monday, as his former university said he had been fired.

He Jiankui said in November that he used a gene-editing technology known as CRISPR-Cas9 to alter the embryonic genes of twin girls born that month, sparking an international outcry about the ethics and safety of such research.

Hundreds of Chinese and international scientists condemned He and said any application of gene editing on human embryos for reproductive purposes was unethical.
