About human dignity, values, norms and the AI alignment problem (a brief of lecture comments and related material…)

Rostislav Dinkov
4 min read · Mar 21, 2023

--

The following is just a brief compilation of quotes, comments and related legislation, which you could read or listen on your own. From my side I tried to carefully choose and present the material, at hopefully understandable level, using technology and social media in an attempt to spread knowledge and awareness, in a scientific and democratic manner!

Understanding Human Dignity / Berkley Center — an insightful intellectual debate within the constitutional and human rights discourse (hyperlink added 13.10.2023), including historical, cultural and linguistic dimensions: https://www.youtube.com/watch?v=qQXdn7WYFSM&t=2s

Just a few quotes and brief comments on this (https://www.youtube.com/watch?v=z6atNBhItBs) really brilliant lecture:

12'14'' "So broadly, we can think of a machine learning system as having two halves: 1) there is the training data, the set of examples from which the system learns, and 2) the objective function, which is how we are going to mathematically define success… Each of those offers an opportunity to become misaligned" (with human norms, values and flourishing).

36'16'' "We can optimize shareholder returns or GDP and this ends up with huge externalities to the environment, inequality, etcetera, etcetera."

36'47'' "So people often ask me if I am pessimistic or optimistic about AI. If I am pessimistic it's because the alignment problem is, in my view, exactly the way that human civilization is already going off the rails, and AI is just a force multiplier of that… our ability to write bad metrics into the externalities of no return! However, if I am optimistic it's because I think we are coming to an understanding that there is something beyond the optimization of metrics."

41'41'' "I think that there is a danger that the models that we are building become so powerful that they reshape the reality that they were originally approximating, and then the reality itself conforms to the assumptions (biases and faults) that were made in the model."
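To make those "two halves" concrete, here is a minimal toy sketch in Python. It is my own addition, not code from the lecture, and the names true_goal, proxy_metric, "wellbeing" and "engagement" are purely hypothetical. A naive optimizer climbs the proxy metric it was given, never consulting the true goal, and the gap between the two is exactly the opening for misalignment that the 12'14'' quote describes.

```python
# A toy illustration (my own, not from the lecture) of the second
# "half" going wrong: the objective function is only a proxy for
# what we actually care about, and the optimizer cannot tell.
import numpy as np

rng = np.random.default_rng(0)

def true_goal(x):
    # Hypothetical stand-in for what we really want ("wellbeing"):
    # best at x = 1, worse the further we drift from it.
    return -(x - 1.0) ** 2

def proxy_metric(x):
    # Hypothetical stand-in for the metric we wrote down
    # ("engagement"): correlated with the goal at first,
    # but more is always scored as "better".
    return x

x = 0.0
for _ in range(500):
    # Naive hill-climbing: accept any random nudge that improves
    # the measured metric, never consulting the true goal.
    candidate = x + rng.normal(scale=0.1)
    if proxy_metric(candidate) > proxy_metric(x):
        x = candidate

print(f"proxy metric ('engagement'): {proxy_metric(x):8.2f}")  # keeps rising
print(f"true goal ('wellbeing'):     {true_goal(x):8.2f}")     # collapses
```

Nothing about the optimizer is malicious; the damage comes entirely from the gap between the written-down objective and what we actually care about, the same gap the 36'16'' quote points at with shareholder returns and GDP.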

As a worrisome outcome/externality of this metrics-optimization race, I will note that more often than not the reality being reshaped consisted of core human values and norms, on both the personal and the societal level, which resulted in bigger market capitalizations and GDP growth, but also in less democratic, more unequal and more radicalized societies (video testimony added 18.9.2023). Whatever else they were aiming at, the algorithms often precalculated human relations, behavior and even voter preferences into the "externalities of no return", so to speak. My view of the brighter side, though, is that lately more and more top academics and key governments are addressing the problem and framing legislation to correct it (see my previous FB post with all its specially chosen "comments": https://m.facebook.com/story.php?story_fbid=pfbid0MvsqXThtnvXzKpMZdv8cTgjXrK2HqeSgFMKKuDrWWKmqmW1hDujXy1cewDzjp6RFl&id=100009074585249). Note also that I started this post with the core human and legislative pillar, dignity, which was more often than not bypassed with no civilized or legislative excuse…

P.S. The material was already published on my social media accounts, which are all public; here is just one of my own comments, published on LinkedIn: "I really wish that somebody would criticize the quotes, the comments or the presented subject as a whole, or at least express an opinion on the matter, but it does not happen. Then I shall again point out that personally I am not against technology and social media, and I am constantly trying to use them for democracy, knowledge, transparency and education. However, what would you say about the content of the following video (https://www.youtube.com/watch?v=WLfr7sU5W2E), where at 4'25'' the Professor explains what "structural stupidity" means, and later on says that in some regards neither students nor Professors dare to discuss controversial topics anymore? What kind of democratic and scientific human reasoning is that? And yes, the video is about the US, but I will be really glad to hear how it is in your country, and especially in the EU."

P.S.2 (23.3.2023) Rewired: Protecting Your Brain in the Digital Age — a talk with Dr. Carl Marci, MD — YouTube 2’36’’ Presented by Dr. Steven Hassan PhD

"…Your book is filled with science, peer reviewed by Harvard Press, so you emphasize the importance of social bonding for all of our well-being. You rightly point out that our social connections are undergoing a massive metamorphosis, thanks to burgeoning social media, and indeed we live in an age of large superficial online social networks that drain our time and attention…". For more, just listen to the podcast, which also contains professional advice for different age groups and circumstances, as I only want to point out that Dr. Marci is not against social media and digital technology. To the best of my understanding, he tries to navigate human coexistence, and hopefully flourishing, alongside it…

P.S.3 (1.4.2023) As I have written above, in my view the best thing is that, given the Collingridge dilemma and its pacing problem, there are more and more academics, governments and key stakeholders addressing the alignment problem. A few days ago quite a number of them, including Stuart Russell, Elon Musk, Steve Wozniak, Yuval Harari and many other prominent figures, started signing a petition addressing it, "Pause Giant AI Experiments: An Open Letter". In the same discourse I would dare to add separately the name of the man considered the Godfather of AI, Geoffrey Hinton, who hints that the best bet/way to approach the regulation of AI is through a kind of Geneva Convention… (see 32'47'' of the previously hyperlinked video; I should say that the whole of it is brilliant).

P.S.4 (7.9.2023) Senate Judiciary Committee holds hearing (25.7.2023) on AI oversight and regulation: "4'47''…The future is not science fiction or fantasy. It's not even a future. It's here and now. And a number of you have put the timeline at 2 years, before we see some of the most severe biological dangers…" My brief comment here would be that it seems the IT singularity (the inflection point of achieving superior-to-human intelligence), or AGI, might be approached considerably sooner than previously thought.

Truly yours, Rostislav Dinkov
