Via Newsweek, this seems fine:
"Scientists at the Massachusetts Institute of Technology (MIT) trained an
artificial intelligence algorithm dubbed 'Norman' to become a
psychopath by only exposing it to macabre Reddit images of gruesome
deaths and violence, according to a new study."
After exposing the AI to violent images, the researchers tested it with Rorschach inkblot tests:
"Among the Rorschach inkblots used to test the now-tainted AI, Norman
said an image showed a man being 'shot dead,' while a standard AI looked
at the same image and saw 'a close up of a vase with flowers.' In
another, Norman said he saw a man being shot 'in front of his screaming
wife,' while the AI not exposed to sordid, disturbing images saw 'a
person holding an umbrella in the air.'"
"...The MIT researchers in this study redacted the names of the specific subreddits used to train the AI. The researchers said the AI 'suffered from extended exposure to the darkest corners of Reddit' to illustrate 'the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.'"
Imagine what sadistic, sociopathic, and psychopathic web content is doing to actual human beings.
What is the mental health impact, not just of violent web content, but of trolling, harassment, and microtargeting of Internet users?