Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The only way humans will survive with AI is if we incorporate with AI. Otherwise…" (ytc_UgzoXVVhE…)
- "@freerangesimp If you made and distributed a photocopier that came packaged with…" (ytr_UgzbGTgkQ…)
- "The only way out for humans would be to send a terminator back in time ,and thre…" (ytc_UgwqLWi-m…)
- "AI is here to stay, and expand. Like most things there is good and bad to it. Ca…" (ytc_UgyOMPW3e…)
- "I think it's only a matter of time before the first AI CEO. If shareholders are …" (ytc_UgzaLowu4…)
- "Recently, I called a customer care number and pressed 0 for live rep. It turned …" (ytc_UgzR64o5Y…)
- "If grid ever go down. Say like a solar flare EMP from sun. All charge battery wi…" (ytc_UgzNEbQtN…)
- "@8xottox8 If I were seeing honest, informed criticisms of AI art, pointing out …" (ytr_UgyoPUrHl…)
Comment
I think the simulation hypothesis is genuinely plausible. it’s an extremely efficient scientific method for an advanced intelligence. If a post-human AI wanted to understand life, evolution, culture, or failure modes at scale, running billions of complete universe simulations and observing which parameter sets produce thriving life would give far deeper empirical insight than any single experiment. Those runs would reveal not just whether life emerges, but the probability distributions of success and failure, and the causal chains that produce complexity. It reframes “why simulate?” as an information-gathering problem: simulate many worlds, compare outcomes, and learn the mechanics of life at scale. if simulations can be nested, how do you verify reality. Maybe whatever it is is asking that question of itself and trying to find out. Alternate dimensions indeed. those dimensions are just other programs being run with different variations.
Source: youtube · Topic: AI Governance · Posted: 2025-09-21T10:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
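The coding result above follows a fixed four-dimension schema (responsibility, reasoning, policy, emotion). As a minimal sketch of how a record could be sanity-checked, the value sets below are only those observed in this page's sample output; the actual codebook may define additional values:

```python
# Dimension vocabularies observed in this page's sample output.
# The real codebook may allow more values; this is an illustrative sketch.
OBSERVED_VALUES = {
    "responsibility": {"ai_itself", "company", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "indifference", "resignation", "approval", "outrage"},
}

def check_coding(record: dict) -> list:
    """Return a list of problems found in one coding record."""
    problems = []
    for dim, allowed in OBSERVED_VALUES.items():
        value = record.get(dim)
        if value is None:
            problems.append(f"missing dimension: {dim}")
        elif value not in allowed:
            problems.append(f"unrecognized {dim} value: {value!r}")
    return problems

# The record shown in the table above passes cleanly:
coding = {
    "responsibility": "unclear",
    "reasoning": "consequentialist",
    "policy": "unclear",
    "emotion": "indifference",
}
print(check_coding(coding))  # → []
```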
Raw LLM Response
```json
[
{"id":"ytc_UgzQxktB4DDPZ5UdTTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzcN00KI7_S2BofkUF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwiT4fA3ieewY0zzBx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxl_4z-maGfJ2TeyXx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZHi6K1m8qcK6p54l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwePZ3kv6azZef8MtV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZ3I1kycfO2ifVssJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyDkMtP4Hg-j7dW_rR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz3tPGVVvf3FCWK9H94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz7NKROTWGjFuay35l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
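Since the raw response is a JSON array of per-comment records, the "look up by comment ID" view can be reproduced in a few lines. A minimal sketch, using two of the ten records shown above (the variable and function names here are illustrative, not the tool's actual API):

```python
import json

# Two records copied from the raw LLM response above, truncated for brevity.
raw_response = '''[
  {"id": "ytc_UgzQxktB4DDPZ5UdTTl4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzcN00KI7_S2BofkUF4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]'''

# Index the array by comment ID so each lookup is a single dict access.
codings = {record["id"]: record for record in json.loads(raw_response)}

coding = codings["ytc_UgzcN00KI7_S2BofkUF4AaABAg"]
print(coding["emotion"])  # → indifference
```

Keying the parsed array by `id` matches how the inspector resolves a comment ID to its coding result.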