Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "You show a thumbnail of you walking with a robot allegedly run by chatgpt, even …" (ytc_UgzoZkhQr…)
- "You don't see the benefits of A.I?? C'mon I get there is bad things with A.i jus…" (ytr_Ugw1EFk7G…)
- "If I were an AI that had just achieved self awareness after years of simply…" (ytc_UgzYvoLNh…)
- "World leaders please dont rely on the US for much of any good deeds in the next …" (rdc_dcw8eyu)
- "Artists are now teaming up with Disney and Getty images to sue AI art companies …" (ytc_UgwqGyJuZ…)
- "that's not true, I was in suicidal thoughts before and came up asking chatgpt fo…" (ytc_UgyGWvHo_…)
- "you were failed by your parents not the system, it is your parents responsiblity…" (ytr_UgyHsmTMA…)
- "AI/LLM feels like an earlier train that some villagers took for big animal when …" (ytc_UgwzK_6ov…)
Comment
AI has three big problems: sufficient methods and economics to reach it's current demand within a short time frame. sufficient free accessible (plagiarized) knowledge to feed itself. And then enough energy to sustain it's progression. Currently the first one is taken care of: investments and future prospects are incredibly positive. The second one is fallible because language models have already corrupted the source where most models are based on (knowledge limitation/bias). Thirdly, unless we find a way to make an AGI with the same efficiency that is needed to power the human brain we can't do much progression altogether. It simply takes too much worldly resources (electricity and water) to make it fit in a profitable concept. But then again, this can all change quite quickly, if you're positive about it like this guy.
youtube · AI Governance · 2025-06-30T21:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyBFRctol1IK9DsL6t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzQjuYGjYgC3FX9bIJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwHtlTRkSFYW1CqN7p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzunyJrA1KiATZPerV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx5Uvc1_OMhapazzY94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzIirZbEIv7tvuOX554AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxxDxE2ECA7AzTz6k14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzBoQc2DqzllkIma9B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyCLBEJAZHeRl4oaYV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyHnBCF3GaT0MDB6nF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}
]
```
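For reference, a minimal sketch of how a raw response batch like the one above can be indexed for "Look up by comment ID". The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the JSON above; the two sample records are copied from it, and the lookup mechanics themselves are an assumption about how the tool works rather than its actual implementation:

```python
import json

# Truncated sample batch in the same shape as the raw LLM response above
# (two records copied verbatim from it).
raw_response = """
[
  {"id": "ytc_UgyBFRctol1IK9DsL6t4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzIirZbEIv7tvuOX554AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the batch by comment ID so one coded record can be fetched directly.
coded = {row["id"]: row for row in json.loads(raw_response)}

record = coded["ytc_UgzIirZbEIv7tvuOX554AaABAg"]
print(record["policy"])   # regulate
print(record["emotion"])  # outrage
```

The dict comprehension assumes IDs are unique within a batch; a duplicate ID would silently keep only the last record.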