Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI is a tool like any other tool, and soon, artists will learn to modify AI alg…
ytc_UgzwCRJ9Z…
Exactly! I can't understand how so many people don't think this way. I've seen s…
ytr_UgyupbznF…
So time travel by ai to the past like in terminator is impossible. But how abou…
ytc_Ugwvu7MJs…
The patient illustrations were hilarious, very informative, and highlights vulne…
ytc_Ugy1pp9fG…
I, uh, was a contractor with openAI, and am one of the more significant of those…
ytc_Ugyn-gVXb…
I really don’t understand why people are not more vocal about stopping AI.. it’s…
ytc_UgxQcaCUF…
If ai takes over the world, men are the first ones to go down 💀…
ytc_UgzTKc-xJ…
The existence of Bad actors or threat actors who could exploit AI capabilities m…
ytc_UgzC-3Wmf…
Comment
14:17 One insight that I have had is that we train these large language models on an incredibly large corpus of written human language, and then we get shocked when they act like humans. For instance, the recent papers and articles written about how a large language model would manipulate people and commit extortion or worse when faced with being turned off. In other words, what it would do for self-preservation. Of course it's going to try to do things for self-preservation. Somewhere in that huge corpus of human writing are places where humans do bad things to other humans for self-preservation, where humans lie and manipulate for self-preservation reasons.
youtube
AI Moral Status
2025-11-03T03:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzn_IKre8Q3Ac-ZgkF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzOlJH3MRZNJZs6Sap4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzWoSgkoR5BxrplSTR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxYTOoUHXnHZz6_Hht4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyvUDlGQfN8ZzJJWwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzMgySiEz2yhF51O854AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwFE-FHa_sLG-vXkg14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzFQ_vWNd2gyl7XkFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgySVE2ZBVNUJ9ALttl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwXywL2CE5FZbPOO954AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
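The raw response is a JSON array, one object per comment, with the same four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed, indexed by comment ID (for the "look up by comment ID" view), and tallied — the variable names are illustrative, and a truncated two-row sample stands in for the full array:

```python
import json
from collections import Counter

# A two-row sample of the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgxYTOoUHXnHZz6_Hht4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwFE-FHa_sLG-vXkg14AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
"""

codes = json.loads(raw_response)

# Index coded rows by comment ID so a single comment's coding can be
# pulled up directly.
by_id = {row["id"]: row for row in codes}

# Tally any one dimension across the batch, e.g. emotion.
emotion_counts = Counter(row["emotion"] for row in codes)

print(by_id["ytc_UgxYTOoUHXnHZz6_Hht4AaABAg"]["policy"])  # regulate
print(emotion_counts["fear"])  # 2
```

In practice the model's output would also need validation (unknown IDs, missing dimensions, or values outside the codebook) before the rows are stored.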