Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Very insightful man , these billionaires are investing there money in to Ai , th…" (ytc_UgzDLEowP…)
- "you don't need AI for this... Idk about your country but each election here ther…" (ytr_Ugy2v8W63…)
- "Humanity needs to work together with A.I. There is no way forward but this. Dipl…" (ytc_UgzGNf_qb…)
- "The appearance of line hugging which terrifies many in the beta test is due to t…" (ytc_Ugw9YTbTG…)
- "I found this very disturbing. Nothing ChatGPT chats about should be trusted, cha…" (ytc_UgwtMQ5f6…)
- "@abekebe2714 the cost of growing a business is much less using AI than it is u…" (ytr_Ugw3meEqD…)
- "So, the A.I. will always be racist unless you make it stupid and incapable of le…" (ytc_Ugy4hsbtz…)
- "It's too bad psychoanalysts aren't invited to these panels. I have little hope…" (ytc_UgwZTRrts…)
Comment (at 23:27)
My question is, do LLMs even have a concept of bad?
Like if the individuals who are training the LLMs to communicate are simultaneously posting on sites that post memes that joke about things that are "bad" that AI is following to get a the jest of human language interaction... how would an LLM be even able to differenciate between lies and what it's supposedly being programmed to do?
Unless this kind of programming is _intentional_ ... led by accelerationists to have a tool to destroy humanity without accountability???
My apologies if this is the direction that followed my timestamp.
Source: youtube, "AI Moral Status", 2025-12-01T18:0… (♥ 3)
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugxwk3tmMDv7CKwv5I54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyLZo05xoQ-Mnjxegl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw6uUggqCeFxvMUlyd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzPBf2fn8xgifhFjJV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzJQklkoy7-1oKel6F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwWWcTTDJw2ntGmYl94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxSUPF526z1W85vWCR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxI7a__RUxhTdMR2RJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwM6Nqsah0pYFdAnh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgyLoF6DNfxOvQ0g8PN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
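The raw response above is a JSON array with one object per comment, keyed by `id` and carrying the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be parsed and looked up by comment ID, assuming the allowed value sets are exactly those observed in this response (the real code book may define more values, and `parse_coded_batch` is a hypothetical helper, not part of the tool):

```python
import json

# Dimension values observed in the raw response above; the full code book
# may include additional values (this mapping is an assumption).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "ban", "none", "unclear"},
    "emotion": {"mixed", "fear", "outrage", "indifference"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into an
    id-keyed dict, rejecting rows with unknown dimension values."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row[dim]!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Look up one comment by ID, as the panel above does:
raw = ('[{"id":"ytc_UgwwM6Nqsah0pYFdAnh4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"mixed"}]')
coded = parse_coded_batch(raw)
print(coded["ytc_UgwwM6Nqsah0pYFdAnh4AaABAg"]["policy"])  # liability
```

A failed validation raises rather than silently storing an off-code-book value, which keeps the coded table consistent with the dimension vocabulary.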