Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `ytr_Ugyh_Yvnq…`: "well.. if you see that in a long term, automated human expression makes people n…"
- `ytc_Ugzdd7_9n…`: "we should only use AI for it to either help us or do the things that are dangero…"
- `ytr_UgyIde8Mr…`: "@darkfire8008 if I ask AI questions I require it to cite sources, because they …"
- `ytc_Ugy_cz9Kc…`: "I genuinely don't understand the grandstanding and assuming "artists will need t…"
- `ytr_UgyuDgcwh…`: "Heeeyyyy do you know where the ai got the art from? Good job! REAL F❤CKING PE…"
- `ytc_UgxV3cnvk…`: "in my head, future jobs will be agents for AI, AI will want to explore its envir…"
- `ytc_Ugydqv2MK…`: "Alignment will probably require some kind of integration. Instead of building an…"
- `ytc_UgzaJAS6d…`: "actually i saw a study where they said if you are rude to AI you get better resu…"
Comment
If you showed Python to Dijkstra or Turing, maybe they would've thought "So anyone can be a computer scientist now, it's so easy". We all know Python didn't end computer scientists, AI won't do it either. This is not an end but a transformation of an industry. Even if coders disappear, engineers won't.
Source: youtube
Posted: 2023-12-01T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyo6eyWVjjV0hRQHn94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwjSdu0SoT5FDJSKxN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxLXAQtm0it7zc_9j54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyRYGUdaEPxVfUL0px4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugys4WuQy7-wLuohbrB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzYfhVQO5xEWx-7d154AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxTfzduK6NxGuFvVcR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz8PNetZOL_rU64znd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgxfH44jeDztHHuGYdt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwD3zSuab4SS2BrWb14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
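The raw model output is a JSON array of records keyed by comment ID, one record per coded comment. A minimal sketch of the "look up by comment ID" step, assuming only the schema visible in the response above (the `index_by_comment_id` helper and `REQUIRED_FIELDS` set are illustrative, not part of the actual tool):

```python
import json

# Two records copied from the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_Ugyo6eyWVjjV0hRQHn94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz8PNetZOL_rU64znd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "fear"}
]
"""

# The four coding dimensions plus the ID, per the schema in the response.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    index = {}
    for record in json.loads(raw):
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            raise ValueError(f"record {record.get('id')!r} is missing {missing}")
        index[record["id"]] = record
    return index


codes = index_by_comment_id(raw_response)
print(codes["ytc_Ugz8PNetZOL_rU64znd4AaABAg"]["emotion"])  # fear
```

Rejecting records with missing fields at parse time keeps malformed model output from silently entering the coded dataset.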