Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "No you muppets, this is not how AI might destroy humanity. If anyone's gonna des…" (ytc_UgyBpoDxm…)
- "I just asked ai to give me a story to write and I came searching in YouTube how …" (ytc_UgxlpnfHI…)
- "“Dept heads” are, for the most part, political appointees who serve at the pleas…" (rdc_f31fej3)
- "AI art should be banned, creativity should only be expressed by living beings no…" (ytc_UgzYqd-8G…)
- "Go do it. AI script now is absolutely trash. Let see who will lose more money i…" (ytr_UgxtvZLZ7…)
- "It’s like a video game… Start as strangers, then acquaintances, boss fight to es…" (ytc_UgyYsop9d…)
- "You're really underestimating. From using AI 12 hours a day all week I can tell …" (ytc_Ugzv5PK9J…)
- "This is what’s being missed , the IR6 contract relevant here is AWS/Palantir clo…" (rdc_o8168xb)
Comment

> I think the best question regardless is alignment moral in the first place? is it even necessary or are we afraid of what could be silicon gods. if an ai can be aligned is it truly conscious?

youtube · AI Moral Status · 2023-08-21T15:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgybgS0cKgXaGJXPBix4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkZQUSo5ZZlgIABfp4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyJj3ZZ8yHR_xhhKVF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzp7V_h-R4cs5yDRb94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwuZZGRHAYDivEiL814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwmvE53uvgZbfpMWDl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxNcb1WAT_bl6a6eX54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw9GszBCYSq6CQWs3R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgxtcutWbgibhDBSftp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyd9BCUqhRZGde9bet4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
```
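Because the raw response is a JSON array with one record per comment ID, looking up a coding reduces to parsing the array and indexing it by `id`. A minimal sketch in Python (the function name `index_by_id` and the two-record sample payload are illustrative, not part of the tool itself; record shape matches the response above):

```python
import json

# Example payload in the same shape as the raw LLM response above:
# a JSON array of coding records, each carrying a unique comment ID.
raw_response = """
[
  {"id": "ytc_UgybgS0cKgXaGJXPBix4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw9GszBCYSq6CQWs3R4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "resignation"}
]
"""

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codings = index_by_id(raw_response)
print(codings["ytc_Ugw9GszBCYSq6CQWs3R4AaABAg"]["policy"])  # liability
```

Keying on `id` also makes it easy to detect duplicate or missing IDs when merging a batch of model responses back onto the source comments.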