Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response by comment ID.
Random samples

- "The fact someone on DeviantArt made over $11 K in a year… selling AI adopts……" (ytc_UgzePAni6…)
- "I know that a lot of you may say something in the line of "AI bad", but think of…" (ytc_UgznemhcW…)
- "11:28 Autopilot does not stop for stop signs or red lights. Autopilot(not FSD ak…" (ytc_UgxDtG9bb…)
- "Just remember that you can never tell stupid it is stupid..... They should be te…" (ytc_Ugzvxs5f2…)
- "Why would it be cheating to use AI to help you write a college paper? How is th…" (ytr_UgyUuHX8K…)
- "It's a bit hard to see how they have used generative AI instead of predictive AI…" (ytc_UgyB0Dvwj…)
- "@shockadelicSo, you're saying AI's purpose is to be discussed and mocked as a sa…" (ytr_UgymerKN5…)
- "I'm 77, I'm going to be out of here soon- the Boomer mindset in a nutshell. Than…" (ytc_Ugwp0wmnf…)
Comment
> I think morals are very different from the point of view of a creator of a simulation. First of all, think of it, what if WE created the simulation for ourselves to actually experience different kind of lifes, including bad ones. Since it's "just a game" morals are irrelevant. His logic is based solely of us being trapped by some other entity. Also the fact that we can create simulations does not prove at all that we live in a simulation. Us having a common belief is not a proof either. Faulty logic. I also don't agree with AI being actually intelligent. I still just see it as algorithmic learning, the only safety issue is to give executive power to something that behaves like if it was sentient but in reality is not sentient and cannot make predictable decisions. I might be wrong, but I haven't seen an AI so far that was actually intelligent. ChatGPT is definitely not it. It's a very interesting conversation indeed, but seems more like a fantasy to me. I'm open to be proven wrong

Platform: youtube · Topic: AI Governance · Posted: 2025-09-09T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxBx_AOT7n0JHbMZc14AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxq5fBpPrA9zIe2Y-V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyF0b4ngsBk8KJlBtZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyE1ha1LUCSFazqX714AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxSIl91agQNiduXObx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFU1C_anOly4Iqac54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx2reCYoruZ_vg0CMZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwQqHr__6EDW-icyzh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzig7Q88UfHCg4x5Lt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy3SY6eFoL9CVlQG094AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```