Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- `rdc_jg0msnx`: "I feel the same. Quite surprised of the automatic npc hate, thought this was a f…"
- `ytc_Ugwt5RVHR…`: "What? Monks are the happiest people on earth, consistently. They have great cont…"
- `ytc_UgyWKlFuM…`: "in the past i used to generate ai pictures that i thought i could never draw mys…"
- `ytc_UgwK7O_Un…`: "at 7:50 is she a robot ? tthat human ai prograhm just said ``theres no defenitio…"
- `ytc_UgwS62Ire…`: "If we succeed in successfully training an AI from a dataset based on a complete …"
- `ytc_UgyiysmZi…`: "Exactly...and add a bit of awkwardness cause no one really knows how to behave,…"
- `ytc_UgwfhsOEv…`: "### Key Insights from the Godfather of AI **1. The Existential Threat is Real…"
- `ytc_UgzVdu2CP…`: "I just need to know two things. How wide does the mouth open and is it self-clea…"
Comment

> Imagine cheating in a game. You can do anything, endless resources.. thats the point where the challenge ends, the purpose of the game ends, etc.
> What if AI would be able to wipe out humanity? Would it still have goals?
> In these scenarios, we reason with evil human goals; like getting more powerful, getting lost of people who are in the way of succes, stuff like that.
> Would AI reason like that on long term? And what would the ultimate AI goal be anyways?

Platform: youtube · Category: Cross-Cultural · Posted: 2025-10-31T19:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
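
Taken together with the raw batch below, the coded dimensions appear to form a small fixed schema: a comment ID plus four categorical labels. Here is a minimal sketch of that record type in Python, assuming only the label values visible on this page; the full codebook may define additional values per dimension.

```python
from typing import Literal, TypedDict

# Label sets below are inferred from the values visible on this page;
# the underlying codebook may allow more labels per dimension.
Responsibility = Literal["none", "ai_itself", "user"]
Reasoning = Literal["consequentialist", "deontological", "mixed", "unclear"]
Policy = Literal["ban", "liability", "none", "unclear"]
Emotion = Literal["fear", "mixed", "indifference"]

class CodedComment(TypedDict):
    """One coded comment, as emitted per entry in the raw LLM batch."""
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```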
Raw LLM Response
```json
[
  {"id":"ytc_UgwDLAlj0el1CihflVd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz-HKLCOjaZvz3T0xx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxuCsIgt-brSN_rqQh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzrXCYrAANoCRZJG-R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgziLt9V1_J6hNouxpN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyfECW5P6XU0XXCdzB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyUNNZGQtELR479Zw94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwbN5Xyx2i2apAl3ad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx6uIyvft34aNc9qyV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwXrm4HtHXHg-fWyNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
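
Since the model returns one JSON array per batch, a comment-ID lookup like the one at the top of this page reduces to parsing that array and indexing it by `id`. A minimal sketch, assuming the raw response is a bare, well-formed JSON array; real model output may need stripping of markdown fences or surrounding prose first.

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw batch response and key its records by comment ID.

    Assumes the response is a bare JSON array like the one above;
    json.loads will raise ValueError if the model wrapped the array
    in fences or added trailing text, so handle that upstream.
    """
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

# Example lookup against the batch shown above:
# index = index_raw_response(raw)
# index["ytc_UgwbN5Xyx2i2apAl3ad4AaABAg"]["emotion"]  # -> "mixed"
```

The dict-by-ID shape mirrors what the "Look up by comment ID" control needs: one parse per stored batch, then constant-time retrieval of any coded comment.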