Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or inspect one of the random samples below:
- "This agreement is about banning the use of autonomous weapon systems. IE: Alway…" (rdc_k8xbqj4)
- "Fear of death makes human do crazy sometimes illegal things...lets hope AI doesn…" (ytc_Ugx3-0RxM…)
- "We just have to boycott those companies that utilize too much automation. Spea…" (ytc_Ugxg1aBw6…)
- "AI is already out of the bag, it will play out to whatever outcome. Luckily we a…" (ytc_Ugz9pQ0n0…)
- "I must say, when people in my class do that, I know their dumbasses used ai beca…" (ytc_UgxKsS9Wt…)
- "Great video as always, The way they portray AI today, it is more of a bubble and…" (ytc_Ugyaphvzn…)
- "WWⅡ - expectation: >99% white men, rest miscellaneous soldiers from their coloni…" (ytc_Ugw1oFVql…)
- "Currently AOC got the Disrupt Explicit Forged Images and Non-Consensual Edits (D…" (rdc_lgn0fcg)
Comment
Back in the 90s 'quantitative methods for business decisions' was all the rage, the maths used to make a business decision was all Greek to me (excuse the pun lol), this seems to me precisely what in theory LLMs should be good at. Many humans also lack logic. The President of the United States is a glowing example, and the fact he got elected by popular demand is quite telling about the lack of logic in the general American Population who feel more comfortable being ruled by Idi Amin from Uganda and the Greenwich Village People than anything resembling 'Civilisation' let alone 'Western Civilisation'. The fact that LLMs have no logic seems to be a fundamental design flaw.
youtube · AI Responsibility · 2026-01-01T12:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
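
The four dimensions in this table are the same fields the model emits per comment in its raw response below. A minimal sketch in Python of that schema; the names are illustrative, and the label sets are inferred only from the sample response on this page, so the real codebook may define more values:

```python
# Illustrative codebook: the four dimensions from the Coding Result
# table, with only the label values observed in the sample raw
# response on this page (the real codebook may define more).
CODEBOOK = {
    "responsibility": {"company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"outrage", "approval", "indifference", "mixed", "unclear"},
}

def is_valid_coding(coding: dict) -> bool:
    """True if a coded item uses only known dimensions and labels."""
    return all(coding.get(dim, "unclear") in labels for dim, labels in CODEBOOK.items())
```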
Raw LLM Response
[{"id":"ytc_Ugwx5j8cNmk-gcsRROl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy_KBqpdPdtbGQ_c9Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyKZMzmUjOXTdNCS9J4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx6Krw743kQ0czagFR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgxkJkssf65DJ2aN2PZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugxtd-AQFukigUPhsKp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyqWI00BoOysLQdJvJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwrN322VVJNBwteorp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyD5DkCVQm-qz5Sp_F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwlodLYAMmDhKjWHZ14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"})