Raw LLM Responses
Inspect the exact model output for any coded comment, or look up an individual coding by its comment ID.
Random samples

- Mad world, reality often surpasses fiction; we don't need fake AI videos. These … (ytc_UgyUUOcnq…)
- I genuinely can’t imagine a real artist coming to the defense of AI art used in … (ytc_UgwQCIK0p…)
- The problem is, good logo designers are really expensive if you want something b… (ytc_Ugz6MaYAR…)
- "In reality, they are all executing perfectly to the data and training provided.… (ytr_Ugx-BFV-_…)
- Lets say you can build a logic bot and a hallucinatory one and a third ai that i… (ytc_UgwnWgz4M…)
- Controversial take: I’m more concerned about the accounts that use ai art and do… (ytc_UgwBmSZAR…)
- WE CAN SAVE THE WORLD! - right now in the world we are at a 3° threshold before … (ytc_Ugzt_NRJY…)
- @motymurm it matters how much you use and it’s not about the quality of your art… (ytr_UgxEMKM7h…)
Comment
He lost me when he said we are in a simulation! I agree with the risks of AI, but I don’t think it will ever outsmart humans. We are really underestimating the power of our own brains. An AI winning a game of Chess or Go doesn't mean it’s smarter than us. After all, we built them! They just calculate a huge amount of possible moves to find the optimal path. That isn't actual intelligence; it’s just a search capability. Real intelligence would be an AI learning from the same limited amount of information a human has, and still outsmarting experts. If that happens, then we can panic.
youtube · AI Governance · 2025-11-19T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwIShsqcD7dcQBvGgl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugww1DAAc2BvSJZqlzt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzaOXESeN0B7NVZYeh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgygW72jSt5Ymn0JP-d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgwVJwpA0bx50wm4AYd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwx49GKJjItl65s_VB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxYD3lM6h5wRimt9n94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx2RRQqu3OjLDABX-F4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxXaVKqF9qdVg-nzHV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzOnUCN6AL8KXNpQax4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
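The model returns one JSON object per comment, each keyed by the comment's `id`, and the coded dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`) are read straight from that object. A minimal sketch of how such a response can be parsed and looked up by comment ID — `index_codings` is a hypothetical helper, not part of the tool, and only the field names visible in the response above are assumed:

```python
import json

# A raw batch response from the coding model: a JSON array with one
# object per comment (two real entries from the response above).
raw_response = """[
  {"id": "ytc_Ugwx49GKJjItl65s_VB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzaOXESeN0B7NVZYeh4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

def index_codings(response_text):
    """Parse the model's JSON array and key each coding by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codings = index_codings(raw_response)
# Look up one coded dimension for a specific comment.
print(codings["ytc_Ugwx49GKJjItl65s_VB4AaABAg"]["emotion"])  # indifference
```

In a real pipeline the parse step would also need to handle malformed model output (e.g. wrap `json.loads` in a `try`/`except` and flag unparseable batches for re-coding).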