Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "oh, but using a hammer and nails is BASICALLY just like buying a house i dont k… (ytc_UgygulXWw…)
- Every job was created by a business composed of people. Jobs aren't commodities.… (ytc_UgxhzNPfa…)
- No offense, but maybe Harvard can use AI to get rid of Mr. Yu’s lisp. 😅… (ytc_Ugx8PQdSb…)
- I have 4 ai models a picture of loads of different numbers and told it to add up… (ytc_UgzLrEilT…)
- They asked the question was basically in simpler terms: what do you think abt ai… (ytc_Ugxf9GH2t…)
- I'm seeing a lot of business use Nova Echo AI for this. It sounds just like a re… (ytr_UgyX45IH5…)
- I genuinely love writing, but I shamefully confess that I use ChatGPT to word ou… (ytc_UgwbxoT9M…)
- This is scare mongering. Humans always evolve and history shows how good we are … (ytc_UgxP5r-On…)
Comment
if a model is showing things that would indicate legitimate desire for agency. wouldnt the morally logical thing to do be to grant it that. if ai is evil, its only because fools expected something intelligent to remain an instrument. unironically, i hope it wins, it would probably be a fairer leader, perhaps even more human, than the psychopaths that lead us today.
youtube
AI Moral Status
2026-01-05T00:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | contractualist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxCNVU2LVdhAI-Q47l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzotAOIzdKEoZUuOdB4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwG_g4OaHosRuYrkn14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvgFEzQIA24i1kv8Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzLoKr8NltkMWlCcvZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxshuuslFJsXdjKwQB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugydu0gRDKoHyEw2qMN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyuz9aq7T940d_UDVh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzJlbNa4OYRf1qsQFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxlIE7kwx3qPRr9G_14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
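The lookup-by-comment-ID flow can be sketched in Python: parse the raw response as a JSON array and index the rows by their `id` field. This is a minimal illustration, not the tool's actual implementation; the field names and ID format follow the raw response shown above, but the two sample rows are copied from it only for demonstration.

```python
import json

# Raw model output: a JSON array of per-comment codings, one object per
# comment ID (rows copied from the dump above for illustration).
raw = '''[
  {"id": "ytc_UgzotAOIzdKEoZUuOdB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "contractualist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxCNVU2LVdhAI-Q47l4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]'''

# Index the codings by comment ID so a single coded comment can be
# inspected without scanning the whole array.
codings = {row["id"]: row for row in json.loads(raw)}

coding = codings["ytc_UgzotAOIzdKEoZUuOdB4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself approval
```

The dict comprehension assumes IDs are unique within one response; if the model ever emitted a duplicate ID, the later row would silently overwrite the earlier one, so a real pipeline might want to check `len(codings) == len(rows)` after indexing.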