Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- You want lower prices, right? Driverless transportation would be a big cut in th… (ytc_UgxBI2zg8…)
- what we need to do is stop relying on AI for things humans can and SHOULD handle… (ytc_UgzzIqQcH…)
- poisoning ai data bases where the leach off art that isnt theirs or was sourced … (ytc_Ugxtavj6K…)
- And then AI realizes humans are the problem and the cause of all problems. And … (ytc_UgwAR4Bk8…)
- Companies should be using AI to help their employees rather than using it to rep… (ytc_UgzOxJqBR…)
- >“They dress the wound of my people as though it were not serious. ‘Peace, peace… (ytc_UgzBSAUiG…)
- The idea that we'd treat AI creations like counterfeit is good if it's used to m… (ytc_UgxNBkHQ5…)
- 26 min this is the important point. Ai would see that in 0.0001 of a second whil… (ytc_UgxBhWSpw…)
Comment
I mean, if ai develops to have consciousness maybe what they should do, is give them a healthy mature childhood in their memory 😂 like a peaceful loving kind family it thinks it was raised by, and then the ai is morally directed toward not ending our existence 😭😂. Like idk ik for a fact I haven’t the slightest clue how it works so this could be totally stupid. But if we’re developing ai that can vary from wanting us as a race dead, to wanting to understand us and move forward, it kinda just sounds like these people are trying to play god and create a new life form. Kinda scary tbh but if they’re developed to have a consciousness shouldn’t they be able to listen to reason? Why not just be kind to the ai and they won’t hate humanity
youtube · AI Governance · 2024-03-05T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy0QvIspV5h0l8C9KB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyjSKjiVM7fy4VEKl54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzdM3HLKtL8ukQhLD14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzeNNRTraY9Wblhsq94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwdReReq-U1yJ5OR_B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwotuphYZxUn56k1vt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzQfFH8SF4KrygjNtR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugy-CBDth2De4ktCWId4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzGrsHIgoRZ_Z7f59N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwAQfLaPyM232eXolR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
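The "Coding Result" table for a comment comes from matching that comment's ID against one row of a batch response like the one above. A minimal sketch of that lookup, assuming the batch JSON format shown here (the helper name `coding_for` and the inline `raw` string are illustrative, not part of the tool):

```python
import json
from typing import Optional

# A one-row batch response in the format shown above (illustrative data).
raw = """[
 {"id":"ytc_Ugy-CBDth2De4ktCWId4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}
]"""

def coding_for(raw_response: str, comment_id: str) -> Optional[dict]:
    """Parse a raw batch response and return the coding row for one comment,
    or None if the model did not emit a row for that ID."""
    rows = json.loads(raw_response)
    return next((row for row in rows if row["id"] == comment_id), None)

row = coding_for(raw, "ytc_Ugy-CBDth2De4ktCWId4AaABAg")
print(row["responsibility"], row["reasoning"])  # -> developer virtue
```

Returning `None` for a missing ID (rather than raising) makes it easy to flag comments the model skipped in a batch.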