Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- I'm fully against generative ai but I don't think this is the "own" people think… (ytc_UgzI_sJnJ…)
- As another YouTuber called Pagemelt so eloquently put it: "Tech oligarchs see ou… (ytc_Ugzdt8RLL…)
- @patchesthreewhy are these children crying about ai taking away their imaginary… (ytr_Ugxzi1c2O…)
- Difference between robot sophia and human sophia is that sophia knows whent to s… (ytc_UggcfE3mE…)
- AI training is definitely a topic to discuss, but I feel that ai art is no diffe… (ytc_Ugz0qiujv…)
- Just my opinion. But I believe that a lot of the AI creators for predators.… (ytc_UgzuYp57A…)
- Hayao Miyazaki was a WW2 survivor affected by Hiroshima(or Nagasaki, i forgot). … (ytc_UgwJ7Dh3k…)
- Hey dear Crazy 10min of my life with that clip, shows us 1:1 how fast the world … (ytc_UgxDYp0Pe…)
Comment
It’s completely true that the top AI leaders are a bunch of guys who know each other, and they just want to win. But it’s also true that it is highly unlikely that China would just step back, even if everyone in the US stopped AI development today. Again, Eliezer has it right - we would essentially have to be willing to risk WW3 in order to stop it, and it definitely doesn’t look like this will happen…at least not fast enough.
Source: youtube
Video: AI Moral Status
Posted: 2025-12-01T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugxwk3tmMDv7CKwv5I54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyLZo05xoQ-Mnjxegl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw6uUggqCeFxvMUlyd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzPBf2fn8xgifhFjJV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzJQklkoy7-1oKel6F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwWWcTTDJw2ntGmYl94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxSUPF526z1W85vWCR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxI7a__RUxhTdMR2RJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwM6Nqsah0pYFdAnh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyLoF6DNfxOvQ0g8PN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
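The raw response above is a JSON array with one object per coded comment, covering four dimensions (responsibility, reasoning, policy, emotion). A minimal Python sketch of parsing and validating such a batch response, assuming this shape; the allowed category values below are inferred from the examples on this page, and the full codebook may define more:

```python
import json

# Allowed values per dimension, inferred from the sample responses shown
# on this page (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference"},
}

def parse_coding_response(raw):
    """Parse a raw LLM batch-coding response into {comment_id: codes}.

    Raises ValueError if an entry is missing a dimension or uses a value
    outside the allowed set, so malformed model output fails loudly.
    """
    results = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        codes = {}
        for dim, allowed in ALLOWED.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
            codes[dim] = value
        results[comment_id] = codes
    return results

# Example: the last entry from the response above.
raw = ('[{"id":"ytc_UgyLoF6DNfxOvQ0g8PN4AaABAg",'
       '"responsibility":"distributed","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgyLoF6DNfxOvQ0g8PN4AaABAg"]["policy"])  # regulate
```

Keying the result by comment ID makes it easy to join the model's codes back to the stored comments, as in the "Coding Result" table above.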