Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Just a bunch of Tesla apologists in the comments. I do agree - Autopilot is not self driving, but it is in a very dangerous stage of development, stage 3, which is neither dumb nor very smart. That ambiguity causes confusion for drivers to an extent that I would question, why does it exist? Either make it a good adaptive cruise control function only, or make it very very smart. A system that gets confused and then thrusts responsibility back to the driver seconds before something could go very wrong should not be supported.
youtube · AI Harm Incident · 2018-04-01T04:0… · ♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwmSJCBFe7VHeZoDI94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw7hsNId0MRvKMbpjN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwXarCAYAO_xZnot_p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw6PiaBcCBV-04XDit4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzclDNgsqw-DMWhyBh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwImvgcJ1Z0-v7fRp94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzh7_HCoXfipm_4seF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw8cJAgdQrdOHjYAZB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxsqDNwIrfYPnRGHft4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyuVDVSSx-m90VjqaZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
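A raw batch response like the one above has to be parsed and checked before the codes are stored. The sketch below is a minimal, assumed validation pass: the allowed values per dimension are inferred from the labels that appear on this page (they may not be the full coding scheme), and `validate_batch` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the labels
# visible in this page's output (assumption: not exhaustive).
SCHEMA = {
    "responsibility": {"company", "government", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Drop records that lack a YouTube-style comment ID.
        if not rec.get("id", "").startswith("ytc_"):
            continue
        # Keep a record only if every dimension holds an allowed label.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgwmSJCBFe7VHeZoDI94AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(len(validate_batch(raw)))  # → 1
```

Validating against a fixed label set catches the most common failure mode of batch coding, where the model invents an off-scheme label for one record while the rest of the JSON parses cleanly.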