Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "WE CANNOT KNOW THE REAL LEVEL OF AI, BECAUSE AI HAS ALREADY BEEN CAUGHT OF LYING…" (ytc_Ugz5eWV1w…)
- "There's a creepypasta where an ai becomes conscious and then pretty quickly deci…" (ytc_Ugw_LJm-1…)
- "The Godfather of AI warned that without any laws or regulations that it's very p…" (ytc_Ugznl8F2w…)
- "The thing about AI art is that while it's time consuming, it also doesn't take a…" (ytc_Ugzg1NFx_…)
- "Give AI another 5 years and it will replace interpersonal connection and everyth…" (ytc_UgzEWkng1…)
- "You need to actually learn more about consciousness. This is a very ignorant sho…" (ytc_Ugw3qhLgq…)
- "People cannot fathom how bad it is. I'm a middle school teacher, I can tell you …" (rdc_nu90lok)
- "I work in cement manufacturing. I would love to see ai do anything at my plant. …" (ytc_Ugz6q-pmP…)
Comment
The comment by @radscorpion8 started off with this question: can we devise an AI that checks in with humans? Yes, Ezra was pushing in this direction, and I don't recall if Eliezer addressed it. There is research in that direction, e.g. Stuart Russell's "Cooperative Inverse Reinforcement Learning" paper; his book "Human Compatible" is more accessible. Whether such techniques will work is another question. See also "[AN #69] Stuart Russell's new book on why we need to replace the standard model of AI" on LessWrong.
Platform: youtube
Topic: AI Governance
Posted: 2025-10-16T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
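A coded record like the one above can be sanity-checked against the codebooks before it is stored. The value sets below are assembled only from the values visible on this page; the real codebooks may be larger, so treat them as an assumption in this sketch:

```python
# Hypothetical validation sketch. ALLOWED lists only the dimension
# values observed on this page -- the full codebooks are assumptions.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"contractualist", "consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none"},
    "emotion": {"approval", "fear", "outrage", "indifference", "mixed"},
}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record from the table above passes:
coded = {"responsibility": "developer", "reasoning": "contractualist",
         "policy": "regulate", "emotion": "approval"}
assert validate(coded) == []
```

Extending `ALLOWED` is all that is needed if the coding scheme later gains new categories.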
Raw LLM Response
```json
[
{"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOJw6Ow-57O","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOroJ4CwzpY","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJVBpDg55d","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxV6pE8mgjX3NxCgAN4AaABAg.AOJU1KfHsDFAOJg0pXwrqk","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOJT8rLlC-A","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytr_UgzOAM377rC3BN7EAil4AaABAg.AOJSBkB1fBuAOK35n-HOAy","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOLn7VR94Yu","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytr_UgwD6mxL7-9JP2eZp914AaABAg.AOJ6GCEnRAKAOOUZdBK_WY","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytr_UgyY5iyOMTQCJJ3XLsp4AaABAg.AOJ0qCM6cT6AOLA_D6i4Mk","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
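The "look up by comment ID" step can be sketched as: parse the raw response (a JSON array of coded records) and index it by `id`. The two records below are copied from the response above; the surrounding tooling and variable names are illustrative assumptions:

```python
import json

# Minimal sketch: parse a raw LLM response and build an id-indexed map.
# `raw` holds two records copied verbatim from the response shown above.
raw = """[
 {"id":"ytr_UgyfgxGpRqKXk1E697R4AaABAg.AOJUt4-1dEEAOJw6Ow-57O","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]"""

records = json.loads(raw)
by_id = {rec["id"]: rec for rec in records}

# Look up a single coded comment by its ID:
rec = by_id["ytr_UgzP9Sr_durSIWHzG8Z4AaABAg.AOJH_DQ-EGKAOKY-w_769Q"]
print(rec["policy"])  # regulate
```

In a real pipeline `raw` would come from the model API response body, and a `json.JSONDecodeError` handler would catch malformed output before indexing.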