A panel discussion titled “Ethical Challenges and Educational Opportunities of New Media Technologies” was held at the Forum.
Speakers examined information security risks, the spread of deepfakes, and the influence of artificial intelligence on public consciousness. The discussion featured Roman Karmanov, Director General of the Presidential Fund for Cultural Initiatives; Sergey Pershin, Deputy Minister of Culture of the Russian Federation; Archpriest Pavel Velikanov, Associate Professor at the Moscow Theological Academy; Armen Popov, Director General of the Center for the Development of Social Projects and head of the Heavenly Families project; Alexander Zhuravsky, Deputy Head of the Presidential Directorate for Social Projects; Ilya Kuzmenkov, Deputy Chairman of the Synodal Department for Relations of the Church with Society and the Media and Editor in Chief of Radio Vera; Darina Alekseeva, Editor in Chief of the magazine Moskvichka; and Darya Reshke, Publisher of Moskvichka.
The discussion was moderated by Vladimir Legoyda, Chairman of the Synodal Department for Relations of the Church with Society and the Media and Professor at MGIMO University and Sirius University.
Participants opened with a discussion of destructive ideologies in the virtual space. Alexander Zhuravsky noted that the danger of artificial intelligence lies in its ability to adapt to the individual.
“Consider what we are seeing now, the latest widely reported cases, when the chatbot in question <…> drove a person to suicide, steering him over a long period toward taking his own life. We see that these are bugs, that is, system failures, but imagine a system failure on the scale of society. Again, artificial intelligence is adaptive: it adjusts to you, to your psychological profile and psychology. <…> We must have technological solutions robust enough to ensure the sovereignty of our artificial intelligence and to provide an ethical model of AI, one that does not make the disappearance of humanity a condition for AI’s further progress,” said Alexander Zhuravsky.
Roman Karmanov added that content created by neural networks needs to be regulated and labeled by law.
“It seems to me that the scariest thing artificial intelligence has produced so far is an atmosphere of universal distrust. ‘Did you really write that song? Did you really write that text?’ <…> Something needs to be done about this, because it is turning into paranoia and phobia. <…> It would be great if we started reflecting on this phenomenon, because the only way to cope with it is for us, and for society as a whole, to label, whether voluntarily or semi-compulsorily, what has been created with the help of artificial intelligence,” noted Roman Karmanov.
At the end of the discussion, Sergey Pershin pointed to the threat that artificial intelligence poses in the creative sphere.
“An artist, instead of taking a brush to canvas, can write a prompt, enter it into a chatbot, and get a result. We have a large number of creative universities, whose students have gone through certain stages of professional development and know what tradition and art history are. And here there is a risk that the market will be flooded by people who have learned to write prompts well but are less grounded in art history and tradition. This cannot but be troubling,” concluded Sergey Pershin.