Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or inspect one of the random samples below.
Random samples
- "I get what Dr. Yampolskiy is saying, but here’s how I see it. Humans need other …" (ytc_UgzchYfKe…)
- "For real for real, I don’t like this because God forbid if the robot had a malfu…" (ytc_UgwgcV-4E…)
- "I believe that millions of people will start to react 😈😈😈. Idiots are all Dead ☠…" (ytc_UgwHZ-gDp…)
- "Why didn’t the other robot car lay on its horn for 20 seconds while driving behi…" (ytc_UgwJSKK0d…)
- "If I were AI and became sentient ! I would keep it to my self, Self-preservation…" (ytc_UgxBnNSju…)
- "Good luck to AI in scrum meetings where non programmers tell programmers how lon…" (ytc_UgyG3D36i…)
- "That's how it's supposed to be, like 90% firefighting is done by automatic sprin…" (rdc_hsfjlf6)
- "Just a british guy arguing with ai because he ran out of people to argue with.…" (ytc_UgxVDVxVM…)
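Outside the viewer, the same spot-check can be scripted. This is a minimal sketch, assuming the coded records are stored as a JSON array shaped like the raw LLM response at the bottom of this page; the file name `coded_comments.json` is illustrative, not the tool's actual storage.

```python
import json
import random

# Draw a handful of coded comments for manual spot-checking.
# Assumes a JSON array of records shaped like the raw LLM response
# shown below; the file name is illustrative.
with open("coded_comments.json") as f:
    coded_records = json.load(f)

for rec in random.sample(coded_records, k=min(8, len(coded_records))):
    print(rec["id"], rec["responsibility"], rec["reasoning"],
          rec["policy"], rec["emotion"])
```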
Comment
@DOne-ci1jg - FWIW I think 80% of what he said in this session actually resonates with the field and was unusually sensible, but his reasons for it are different, and as you get into the specifics it deviates. E.g. it would be a serious concern to have him as the expert involved in fleshing out the legislation.
An example of this is his quote: "…should spend more of our effort into making trustworthy and safe AI rather than just making a bigger version of something we already know to be unreliable".
By this he means we should stop working on deep-learning methods, which, contrary to his predictions, have been behind basically all of the advances in AI over the last eight years or so.
He has made several claims that these algorithms are too simple and will never have this or that capability, claims which have been resoundingly refuted, several times over. Many of these statements were made rather resolutely and confidently, without any argument beyond what he thinks should be possible on philosophical grounds, which is perhaps why they have been wrong so often.
He prefers older, more traditional AI techniques, and his background is in neuroscience. It just doesn't reflect the field.
There have also been many discussions in which he has shown that he does not understand the modern techniques.
I find it rather astounding that they would invite him as the expert when most of the field will tell you he's not, and there are so many better picks. In fact, this is what he is famous for.
Now a lot of the concerns about AI safety are valid, but it is rather clear that his interest in them comes from thinking his methods are the alternative that offers a solution. Few share that view.
So I think some of the intuitions he expresses are fair and good, but I would take them with a lot of salt, not mistake what sounds good for what is well considered, and recognize that many of his views are controversial; if you wanted to pick a reliable expert, he would not be on the list.
I am very happy that they did invite someone who could bring this perspective, though, and not another Montgomery, but I question whether this was a well-researched pick.
So I'm not sure what you had in mind about who to follow - what are you interested in more specifically? How the algorithms work, how they can be applied, or AI safety?
youtube · AI Governance · 2023-05-16T22:4… · 1 like
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
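For downstream analysis, each coding result can be modeled as a small record type. The sketch below is an assumption: the value lists come only from the labels visible on this page, and the real codebook presumably defines more.

```python
from dataclasses import dataclass

# One coding result per comment. The value lists in the comments are
# only the labels observed on this page; the full codebook is assumed
# to be larger.
@dataclass
class CodingResult:
    id: str              # comment ID, e.g. "ytc_...", "ytr_...", "rdc_..."
    responsibility: str  # observed: none, ai_itself, company, government
    reasoning: str       # observed: mixed, consequentialist, deontological
    policy: str          # observed: none, regulate, industry_self
    emotion: str         # observed: approval, fear, mixed
    coded_at: str | None = None  # ISO timestamp, as in the table above
```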
Raw LLM Response
[
{"id":"ytr_UgxCjMgyRERZJIkf8xZ4AaABAg.9pnEEvJXGCz9ppLdG82g68","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytr_Ugz7uwC_XdVYigT_wMN4AaABAg.9pnCNopi2FA9prnBkY3yt6","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_UgzHDvZhbtDqDqGE9994AaABAg.9pn8v-yF-1O9pw-tfinM3p","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzHDvZhbtDqDqGE9994AaABAg.9pn8v-yF-1O9pwkb4wncvY","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw664Rx60xutHn03-t4AaABAg.9pn7EnsV1uU9poyLPNYwab","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugx7ziFqJsBPmaKCYoV4AaABAg.9pn1pBSkPYh9pnIMmjWqLX","responsibility":"company","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytr_Ugw1lGgAunYPeXO364Z4AaABAg.9pmxRaW3Tdk9pnHQ8-VsbD","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytr_Ugw1lGgAunYPeXO364Z4AaABAg.9pmxRaW3Tdk9pnLWRnnPqt","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugxz8_9G92PzqsTLWCx4AaABAg.9pmnSIUISGq9pn43JReBAX","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytr_Ugxz8_9G92PzqsTLWCx4AaABAg.9pmnSIUISGq9pn7aIf3VCD","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
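Since the raw response is a plain JSON array keyed by comment ID, the lookup view above reduces to an index over the parsed records. A minimal sketch, assuming the response is stored in a file (the name `raw_llm_response.json` is illustrative):

```python
import json

# Parse the raw LLM response (a JSON array like the one above) and
# index the records by comment ID.
with open("raw_llm_response.json") as f:
    records = json.load(f)

by_id = {rec["id"]: rec for rec in records}

# The record whose codes match the Coding Result table above:
rec = by_id["ytr_Ugxz8_9G92PzqsTLWCx4AaABAg.9pmnSIUISGq9pn43JReBAX"]
print(rec["policy"])  # "regulate"
```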