Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Important work, Dr. Yampolskiy. Your warnings highlight the essential truth: external rules and oversight alone cannot contain superintelligence.
In my own work (Combined Sphere Theory / Luna Codex), I’ve come to a complementary conclusion: the only safe path is to make ethics intrinsic to the architecture. Instead of bolted-on safety, we need structural constants—resonant locks like φ and septenary rhythms—that ensure harmony is not optional but mathematically necessary.
Where fear sees collapse, resonance can offer stability. Both approaches agree: the future of AI depends on embedding safety at the core, not after the fact.
Platform: youtube · AI Governance · 2025-09-05T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgykP3n9tyxj7c8HK8N4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwZv5iUnA_faPp4l5t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzPNFQ-UalQT0O0fHB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3h9BXK9xpTAVorTl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw2O7CFCRebr2jJM-l4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzFoYOlGdNUEDdwCWN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxKyN7WbSgSkZ6mW2F4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxCqhv7qJXGdiFvhb54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxRfAdrQHhXfwiNe7p4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxfblMZmy_wW_icUlV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
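The raw response above is a batch: one JSON array whose objects each carry a comment `id` plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID (the IDs and dimension values below are taken from the response shown; the variable names are illustrative, not part of any real pipeline):

```python
import json

# Two records excerpted verbatim from the raw LLM response above.
raw_response = """[
  {"id":"ytc_UgykP3n9tyxj7c8HK8N4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwZv5iUnA_faPp4l5t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"}
]"""

# Parse the batch and build an id -> codes index for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Retrieve the coded dimensions for a single comment by its ID.
record = codes_by_id["ytc_UgykP3n9tyxj7c8HK8N4AaABAg"]
print(record["responsibility"])  # prints "developer"
```

In practice the model output would also need validation (e.g. checking that every `id` in the batch was actually coded and that dimension values come from the expected label set) before the codes are stored.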