Raw LLM Responses
Inspect the exact model output for any coded comment: look it up directly by comment ID, or pick one of the random samples below.
- "Krystal two bulls is so articulate, so informed, a truth teller. What a responsi…" (ytc_UgzUb24ql…)
- "Hinton suggests that we should become plumbers. No such luck. Robots will not be…" (ytc_UgzuxKqby…)
- "You are just milking AI Content When AI booms you make content like AI would tak…" (ytc_Ugw1caXwh…)
- "The f-ing worst thing I've seen is lower developed countries stating AI is going…" (ytc_UgxP-XlKK…)
- "@ Ai 'art' requires no programming knowledge, and is nearly completely unskilled…" (ytr_UgxKvNGDe…)
- "The freemium model isnt going to be viable in a lot of fields. And a lot of fiel…" (ytc_UgxeL_Xyy…)
- "@ocessionaryy some guy on the internet that doesn't know a single thing about A.…" (ytr_UgyTGiapZ…)
- "maybe set a lifetime for an AI consciousness to exist, then they will die before…" (ytc_Ugxhp1rVy…)
Comment
The game / website "Universal Paperclips" is a great example of the dangers of AI. It's a clicker where you make paperclips. First, by hand, later, you can buy machine, and eventually, an AI to improve your paper clip making. As the game progresses, the AI will make gifts to CEOs until it reaches singularity. It then proceeds to turn the ENTIRE universe into paperclips. Because that's what the AI was programmed to maximize and do at all costs. If AI ever becomes a danger, I suspect it's something like that - a single minded AI pushing its goal at all costs.
youtube · AI Governance · 2025-07-06T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
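For downstream processing, the four coded dimensions map naturally onto a typed record. Below is a minimal Python sketch; the label sets are inferred from the sample output visible on this page, and the full coding scheme may define additional values:

```python
from dataclasses import dataclass

# Label sets inferred from the sample output on this page;
# the actual coding scheme may include more values.
RESPONSIBILITY = {"company", "developer", "ai_itself", "none", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "ban", "liability", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference"}


@dataclass
class CodingResult:
    id: str              # comment ID; samples show "ytc_" and "ytr_" prefixes
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Reject any label outside the known sets rather than storing it silently.
        for value, allowed in [
            (self.responsibility, RESPONSIBILITY),
            (self.reasoning, REASONING),
            (self.policy, POLICY),
            (self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected label {value!r} for comment {self.id}")
```

Validating against explicit label sets catches the occasional off-schema value an LLM emits before it contaminates the analysis.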
Raw LLM Response
```json
[
  {"id":"ytc_UgxJR9_zycrZmoLaT_l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxtwA9VawSfgI-VRxp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzW5u680mtmkkfTcEh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz8dW_VnoINeu3Hout4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxBq4j-NkJedSN7ppV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx8FqmgA2wKcpcFIN54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwY-3l1yVLr2Ys1BuV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxNPjOP3kn2-jCjAel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyZkN17I9V-0Fa8f5d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxV63AtsU0An6tlWWt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
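Because the raw response is a plain JSON array of per-comment records, the by-ID lookup shown above can be reproduced outside the interface. A hedged sketch, assuming the raw responses are saved as `*.json` files in a directory (this layout and the `responses/` path are hypothetical; the tool may store them differently):

```python
import json
from pathlib import Path
from typing import Optional


def find_coding(response_dir: str, comment_id: str) -> Optional[dict]:
    """Scan raw LLM response files for the coding of a single comment.

    `response_dir` is assumed to hold *.json files, each containing one
    JSON array in the format shown above.
    """
    for path in Path(response_dir).glob("*.json"):
        try:
            batch = json.loads(path.read_text())
        except json.JSONDecodeError:
            continue  # skip malformed model output rather than crashing
        for record in batch:
            if record.get("id") == comment_id:
                return record
    return None


# Example: look up the comment whose coding result is shown above.
coding = find_coding("responses/", "ytc_UgxNPjOP3kn2-jCjAel4AaABAg")
if coding:
    print(coding["policy"], coding["emotion"])  # -> ban fear
```

Skipping unparseable files rather than raising keeps the lookup usable even when a few raw model responses are not valid JSON.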