Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
First of all, no LLM is conscious or self-aware, because they all respond purely to prompting and don't have a constantly active neural process. Second, all this sky-is-falling stuff is predicated on the unfounded assumptions that (1) superintelligence by itself can create things like superdeadly pathogens, (2) companies like OpenAI can embed their agents into critical infrastructure, and (3) there will not be multiple ASI agents that are adversarial to each other's capabilities to affect the real world. These assumptions are super naive.
youtube · AI Moral Status · 2025-11-11T06:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyD_vVgK4lU66Lr9q54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzC5ci0oXYUvBqFe1B4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZQjSzkiOzmnrTb454AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgziVby8mv9JCe3Ii9R4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz5vty5u3LBNGmPlqh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgzTgAPXXot1H7fSba14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugz9aRh5H-dWDzkCLvV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_Ugy-YPCOCebMWJ9NcuZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy86aQ-y1DSo4yqC294AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx_cFH_A9RtIjRcBJJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"}
]
```
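Since the raw LLM response is a JSON array with one object per comment ID, looking up any comment's coding reduces to parsing the array and indexing it by `id`. Below is a minimal Python sketch of that lookup; the `raw_response` string is a one-entry excerpt from the array above (the entry matching the Coding Result table), not the full batch.

```python
import json

# Excerpt of a raw LLM response: a JSON array of coded comments.
raw_response = '''
[
  {"id": "ytc_UgzTgAPXXot1H7fSba14AaABAg",
   "responsibility": "company",
   "reasoning": "consequentialist",
   "policy": "industry_self",
   "emotion": "resignation"}
]
'''

# Build an index keyed by comment ID for O(1) lookup.
codes = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for a single comment by its ID.
coded = codes["ytc_UgzTgAPXXot1H7fSba14AaABAg"]
print(coded["emotion"])  # resignation
```

Each dimension (responsibility, reasoning, policy, emotion) is then available as a plain dictionary key on the looked-up record.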