Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

| Comment excerpt | Comment ID |
|---|---|
| ***But billionaires create jobs*** Amazon jobs = pissing in bottles. Apple ph… | rdc_ogyh99e |
| Perhaps what we should not build are robots. AI can remain in a screen, but rob… | ytc_UgxYGu41a… |
| "Claude having a scrap of humanity" should still be taken w/ a huge grain of sal… | ytc_UgzAD5g5L… |
| As a person who works in machine learning and data science, this is the way it s… | ytc_Ugy4b6M9E… |
| I've seen more AI videos on YouTube now than ever before and more than half of t… | ytc_UgwwAR4Vv… |
| @jasper2621 The 3 laws as a reliable robot "moral code" must be the cheesy telev… | ytr_UgwqtNc37… |
| Yes, the FSD system, which is replaces the old Tesla autopilot, has that advance… | ytr_Ugz6Q27Yc… |
| FALSE. The bot number is roughly correct but the workforce decline is NOT direct… | ytc_UgxfCklDg… |
Comment

> You can tell intelligence to not do something because it will be the same like asking humans something because we are capable of compromises. To make AI safe you have to code it into them and make them incapable of doing certain things and restrict access to AI for possibly dangerous things. Like if AI had conciseness and access to cars then it could kill countless people. So AI is ok to use but you have to meet certain conditions to be safe.

youtube · AI Harm Incident · 2025-07-24T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
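A coding result like the one above is one element of the batch JSON returned by the model. A minimal validity check can be sketched as follows; the dimension names come from the table, but the allowed-value sets are only those observed on this page, so treat `VOCAB` and `is_valid_coding` as illustrative assumptions rather than the tool's actual schema:

```python
# Allowed values per dimension, inferred from this page's data only —
# not an official codebook.
VOCAB = {
    "responsibility": {"developer", "company", "government", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "resignation",
                "indifference", "mixed", "unclear"},
}

def is_valid_coding(row: dict) -> bool:
    """True if the row has a string id plus one allowed value per dimension."""
    return (isinstance(row.get("id"), str)
            and all(row.get(dim) in values for dim, values in VOCAB.items()))
```

A row failing this check (an unknown value, or a missing dimension) would be routed to manual review rather than silently accepted.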
Raw LLM Response
```json
[
{"id":"ytc_UgytMSzj2ck6R9J92AV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyRzt1BxYrzdb7Oho94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgynAxpK5hj_ux5wK5B4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwH5LcYf-A4n68lXql4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwZh_Z4zGQnNpIrPa54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy-GD_AYJ30dSrmMbN4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyExMCGUQd6tFOIVlZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxCJBEAlUKzPCWVaHZ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugynspyy6JvTus-BXlB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCBkM08Zc0GlafYA14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
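The raw response is a JSON array with one coding object per comment ID, which makes the page's lookup-by-ID workflow easy to reproduce offline. A minimal sketch, assuming the payload has been saved to a string (`raw`, `codings`, and `code_for` are illustrative names, not part of the tool):

```python
import json

# Two rows copied from the raw response above; in practice this would be
# the full saved payload.
raw = """[
{"id":"ytc_UgytMSzj2ck6R9J92AV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyExMCGUQd6tFOIVlZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Index the codings by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

def code_for(comment_id: str, dimension: str):
    """Return one coded dimension for a comment, or None if either is missing."""
    return codings.get(comment_id, {}).get(dimension)

print(code_for("ytc_UgytMSzj2ck6R9J92AV4AaABAg", "policy"))  # regulate
```

Indexing once into a dict mirrors what the "inspect by comment ID" view does: the comment ID is the join key between the raw model output and the per-comment coding shown above it.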