Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:
- ytc_Ugz1QX2n7…: His main example of the "monster" model was the antisemitic remarks from AI. But…
- ytc_Ugw70oLBZ…: Can we rely on Trump to address this concern of AI in this stage of development?…
- ytc_UgwNU7s23…: People talking with AI? talking about their problems? it sounds very sad and lon…
- ytc_Ugw--yIHB…: The question is, WHAT is DeepFake ?! 😉 like Adobe (i think Stockphotos was it) w…
- ytc_Ugz8AtOGH…: Any AI intelligent enough to pass the Turing test is intelligent enough to fail …
- ytc_UgyW7xgi7…: This is what Moshi AI itself told me: https://www.youtube.com/watch?v=_UPDWqWAp4…
- rdc_m6xo1kh: it's always been a dream of companies to not need computer engineers / also, most…
- ytc_UgxtWN7EA…: When you ask this the answer is yes! As of now there is a robot working in a st…
Comment
29:57 I go with focusing on creating this technology before China. The idea is the first country that has access to it will have one of the most powerful things known to man and if they decide to use it to invade or exploit, we’d be screwed. If the super-intelligence ends up going rogue and decides to destroy everything in its path, we should have our own super-intelligence or any technology AI or not that can counteract it.
It’s like a blueprint that allows you to make an antimatter bomb gets created and you are debating on whether or not you should make this, all the while you know your neighbors oversees is not only in the process on creating this antimatter bomb, but after its creation, they might threaten to wipe you off the map with it.
youtube · AI Governance · 2025-08-26T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxdpJSeUtp8sr5d2fN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgywI0G0wPgDP4Xn1hl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwfZVoQhKmdpd2fT0J4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzvy0KFcG-q21C3ZJV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxB0c3uFNFfd8TXkKV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
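The raw response above is a JSON array in which each record carries a comment ID plus one value per coding dimension. A minimal sketch of how such a response could be parsed and indexed for the "look up by comment ID" view — the allowed value sets below are inferred only from the samples shown here, and `parse_codings` is a hypothetical helper, not part of any real pipeline:

```python
import json

# Allowed values per dimension, inferred from the sample records above;
# the actual codebook may define additional categories.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "resignation", "approval", "unclear"},
}

def parse_codings(text: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the allowed set, so malformed model output fails loudly.
    """
    by_id = {}
    for rec in json.loads(text):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim}={rec.get(dim)!r}")
        by_id[cid] = {dim: rec[dim] for dim in ALLOWED}
    return by_id

# Two records copied verbatim from the raw response above.
raw = """[
  {"id":"ytc_UgxdpJSeUtp8sr5d2fN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxB0c3uFNFfd8TXkKV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

codings = parse_codings(raw)
print(codings["ytc_UgxB0c3uFNFfd8TXkKV4AaABAg"]["emotion"])  # fear
```

Validating against a closed value set at parse time is what makes a dimension like `emotion: fear` in the Coding Result table trustworthy: any record the model codes with an unexpected label is rejected rather than silently stored.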