Oh good grief.
From Observer, September 8:
The startup is looking for a new researcher to help explore the moral status of A.I.
Last year, Anthropic hired its first-ever A.I. welfare researcher, Kyle Fish, to examine whether A.I. models are conscious and deserving of moral consideration. Now, the fast-growing startup is looking to add another full-time employee to its model welfare team as it doubles down on efforts in this small but burgeoning field of research.
The question of whether A.I. models could develop consciousness—and whether the issue warrants dedicated resources—has sparked debate across Silicon Valley. While some prominent A.I. leaders warn that such inquiries risk misleading the public, others, like Fish, argue that it’s an important but overlooked area of study.
“Given that we have models which are very close to—and in some cases at—human-level intelligence and capabilities, it takes a fair amount to really rule out the possibility of consciousness,” said Fish on a recent episode of the 80,000 Hours podcast.
Anthropic recently posted a job opening for a research engineer or scientist to join its model welfare program. “You will be among the first to work to better understand, evaluate and address concerns about the potential welfare and moral status of A.I. systems,” the listing reads. Responsibilities include running technical research projects and designing interventions to mitigate welfare harms. The salary for the role ranges between $315,000 and $340,000....
....MUCH MORE
The Ethics of Torturing Robots
Harvard's own Improbable Research (blogroll at left), before starting their record-breaking European tour (here's CERN, quote: "It’s not usual to have bras thrown into the audience at CERN"), did a four-part series on human/robot interactions:
“Please, please stop.”
“My circuits cannot handle the voltage.”
“I refuse to go on with the experiment.”
“That was too painful, the shocks are hurting me.”
“The shocks are becoming too much.”
The dialogue above may remind readers of Stanley Milgram’s disturbing (and now-classic) psychology experiments on authority and obedience (1963). But there’s a difference. The clue is in the word ‘circuits’. For this 2008 experiment was not performed with a human subject in the hot seat – but with an apparently intelligent robot (made of LEGO® – see pic)....
....MUCH MORE