Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific response by comment ID.
Random samples

- "People take inspiration and learn from their progenitors since the cave man art,…" (ytc_UgxcPa0H8…)
- "People are going to be targeting these trucks . forcing them to pull over or blo…" (ytc_UgxbDr8mF…)
- "You're right to be concerned. The development of large language models (LLMs) wi…" (ytc_UgxZ5mpKI…)
- "And IT Sounds AS terrible AS the tipical Pop Songs the past 10 years 😂 ofc AI ca…" (ytc_Ugze8VWvB…)
- "@JamesP7 The artists didn't consent to their art being used to train ai models. …" (ytr_UgzU03c7X…)
- "Humans will still have humans to,interact with, the care about, to entertain, to…" (ytc_Ugy6zyqdz…)
- "Like I wish I could code without getting lost in it once I make 10 functions. My…" (ytc_Ugw_NWgoS…)
- "I often say gratitude to chatgpt. In fact I change his name T-800 or T. I also t…" (ytc_Ugy_gO1A3…)
Comment
Unless the AI super thinkers can learn to gain energy to run on their own, humans can always shut it down by denying it energy to run. That might put the world into a shut down temporarily but could save us from destruction. But we will be totally reliant upon the systems controlled by AI. So, if it went rogue so to speak we'd have to have a way to know what was happening before it happened completely and that might be impossible at some point in the future. Of course things will go wrong. A certain portion of humanity will benefit greatly (at least in the short term) and most humans will experience upheaval and shock never seen before. I'm 79 so I wont' be around to see the complete transformation but I can imagine some of it.
youtube · AI Governance · 2025-12-26T20:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyRxRfC6xUrMa9NxR94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyf2QDf6rBaEzUF2j94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxg00L8q3jOGQxIDNB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzMkwZBwE13Nqtv65x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyYReWrncYbsPu14ip4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzuBzD9f_LfexZBuRh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzLKJt0wHox6zqp-3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzx9XD9aQDZ3MPvgEd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz1UQcJgttbjMGsei14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzReM8qceiOUQfhFYR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
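The "look up by comment ID" view can be reproduced offline by parsing a stored raw response like the one above. A minimal sketch, assuming the batch is a JSON array of flat records with the field names shown; the allowed value sets per dimension are inferred only from the values visible in this batch, not from the tool's actual codebook:

```python
import json

# A stored raw LLM batch response: one flat coding record per comment.
# Two records excerpted from the response above for illustration.
RAW_RESPONSE = """[
  {"id": "ytc_UgyRxRfC6xUrMa9NxR94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzReM8qceiOUQfhFYR4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]"""

# Assumed vocabularies, built from values seen in this batch only.
DIMENSIONS = {
    "responsibility": {"ai_itself", "user", "developer", "company", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval"},
}


def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index records by comment ID,
    rejecting any record with a missing or out-of-vocabulary value."""
    by_id = {}
    for record in json.loads(raw):
        for dim, allowed in DIMENSIONS.items():
            if record.get(dim) not in allowed:
                raise ValueError(f"{record.get('id')}: bad {dim}={record.get(dim)!r}")
        by_id[record["id"]] = record
    return by_id


codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgyRxRfC6xUrMa9NxR94AaABAg"]["emotion"])  # fear
```

Validating against the vocabularies at ingest time means a malformed or hallucinated label from the model fails loudly at lookup-index build, rather than surfacing later as a silently miscoded dimension in the results table.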