Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- `ytc_UgzBjXfW1…`: "the real problem it the fact that chatgpt wasnt programmed to prevent suicide...…"
- `ytc_UgzTiWzG0…`: "This argument falls very flat… I’m not super skilled at art, but whether I’m usi…"
- `ytc_UgxJ5OmIi…`: "What AI doesn't have that artist do is heart and soul. You have it, we have it, …"
- `ytc_Ugy6GAHkB…`: "The analogy of whether the CEO is in control or the AI assistant realises it doe…"
- `ytc_UgwFRVt3g…`: "Krishnan, tripping in from running a marathon for this interview. Regarding wha…"
- `ytc_Ugzy4xgj8…`: "How dare you have a say in whether someone else gets to use your art in a way y…"
- `ytc_Ugwqxdud2…`: "People need money to live. You know to buy food, water and paying rent/bills?? S…"
- `ytr_Ugw7IBEZt…`: "unfortunately ai is the best a processing many, many information that it’ll be v…"
Comment (youtube · AI Governance · 2025-10-16T04:2…)

The risk of AI causing human extinction is pure marketing to make these text completion algorithms seem more grandiose and powerful than they really are. These are tech products and imbuing them with this mystique of power and agency does real harm because it allows AI companies to skirt liability what harm their chatbots cause
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw87q9zbZpHndyBToh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwgqUoJfgOROg0z8v94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxHMLXRr5iWJK8TPC14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxBIFZaYUxl9uHUwgR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugy87F2xB863izA0VL54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxmWvMLtxhgNNtrt4V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxdNMxsQXIFl-zWOat4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
{"id":"ytc_UgxJfABiBpM6-L5_NN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBi8g74GabhQa4XVp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwaSWTdB9heV1BMBnd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
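The raw response above is a JSON array in which each element codes one comment along the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch response could be parsed into per-comment codes is below; the `SCHEMA` value sets are inferred only from the samples on this page (the real codebook may have more categories), and the function name is illustrative, not part of the actual tool.

```python
import json

# Allowed values per coding dimension, inferred from the samples above;
# the actual codebook may define additional categories.
SCHEMA = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "fear", "mixed", "approval"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes},
    dropping any entry with an out-of-schema value."""
    coded = {}
    for entry in json.loads(raw):
        codes = {dim: entry.get(dim) for dim in SCHEMA}
        if all(codes[dim] in allowed for dim, allowed in SCHEMA.items()):
            coded[entry["id"]] = codes
    return coded
```

With this sketch, the last entry in the array above would parse to the same codes shown in the "Coding Result" table (company / deontological / liability / outrage), keyed by its full `ytc_…` ID.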