Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples
- "@prestonrobert2625 All enforcers(LEO) in our country are trained to hate the p…" (ytr_UgxGVeypu…)
- "If you haven't tried this, you should... The answers that ChatGPT gave me were f…" (ytc_UgwnqyPfA…)
- "I got in trouble with HR for farting in a meeting. I was hungover af though. HR …" (ytc_UgxwjT4c3…)
- "This is not close to the singularity in 50 plus years where we will realise tha…" (ytc_Ugytnuwxf…)
- "IF AI Takes all the jobs how is Amazon, Walmart, Tesla, Meta, Apple, going to ma…" (ytc_Ugxewq9h9…)
- "I would love to know what bad things this guy said elon did, he mentioned 3 good…" (ytc_UgwW0wlgR…)
- "The fix is simple, for the sake of humanity, the giants of the AI push in China …" (ytc_UgwBb9_qt…)
- "Disabled fiber artist here. One thing worth mentioning I think is that art and c…" (ytc_Ugw2aCCZK…)
Comment
Everyone’s obsessed with making AI smarter, but that’s not the real issue. We already know AI can out-calculate us. The bigger problem is that AI has almost no wisdom—it doesn’t know when not to act, when to lose gracefully, or when protecting people matters more than winning.
If we don’t build wisdom into AI, we’ll just end up with really clever systems that still make terrible choices. Smarter ≠ safer. What we need are AIs that understand balance, humility, and responsibility—not just how to “get results.”
In short: AI doesn’t just need brains, it needs wisdom.
youtube
AI Harm Incident
2025-09-13T02:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgwcCPRE8AbtQXFw87F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwofSm8-R-LqC5qaOJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyHScYMWSNnpoZAEGh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyUwFwBu_zzVsSoNCp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDAia-kocHbn4fHk14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzBg0Vpe4Poj_0mEGN4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz7ZcG2yLuBIQ5Y6Hl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxRJy5GUPaMdYaCpLN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx36D-Sn_aU2j_Ob5p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxmq-7Vroil4nXYuNx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
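The raw response above is a JSON array of per-comment records, one object per coded comment with the four schema dimensions. A minimal sketch of how such a response could be parsed and validated is shown below; note that `SCHEMA` here is an assumption, reconstructed only from the values visible in this dump, and `parse_raw_response` is a hypothetical helper, not part of the actual coding tool.

```python
import json

# Allowed values per dimension, inferred from the records shown above.
# This vocabulary is an assumption; the tool's real codebook may differ.
SCHEMA = {
    "responsibility": {"developer", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"virtue", "consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"mixed", "outrage", "fear", "indifference"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}.

    Records missing a dimension or using an out-of-vocabulary value
    are dropped rather than guessed at.
    """
    coded = {}
    for record in json.loads(raw):
        codes = {dim: record.get(dim) for dim in SCHEMA}
        if all(codes[dim] in allowed for dim, allowed in SCHEMA.items()):
            coded[record["id"]] = codes
    return coded
```

Dropping malformed records (rather than repairing them) keeps the downstream counts honest: any comment the model failed to code cleanly can be flagged for re-coding by its ID.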