Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
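If the coding run writes one JSON record per comment, the lookup behind this box is only a few lines. A minimal sketch, assuming a JSON-lines store at the hypothetical path `data/coded_comments.jsonl`; the page's real backend is not shown here:

```python
import json
from pathlib import Path

# Hypothetical store: one coded comment per line, as written by the coding run.
CODED_PATH = Path("data/coded_comments.jsonl")  # assumed location

def lookup_comment(comment_id: str) -> dict | None:
    """Return the coded record for a full comment ID, e.g. 'ytc_Ugx...'."""
    with CODED_PATH.open(encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None
```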
Random samples — click to inspect
- "Elon musk is dumb he want to do an evil plan we should put him in jail for makin…" (`ytc_UgylF-mjB…`)
- "Bro why are these AI 'artists' fighting to generate crappy image like bro, IT WO…" (`ytc_UgwHjN4N-…`)
- "Elon want to keep hes steps forward with X ai thats why he say to all who are im…" (`ytc_UgzSOEGk2…`)
- "Human: I am wanting to cancel my subscription. AI: I'm sorry Dave, I'm afraid I …" (`ytc_UgxaYUgWZ…`)
- "first you are being a dolt. your ai didnt speak a false sentence other than your…" (`ytc_UgzJNWpHo…`)
- "The next question is: If ChatGPT lies, is it really the AI that is lying, or the…" (`ytc_UgxfMrz-h…`)
- "If businesses save more money by hiring fewer people because of AI, then there w…" (`ytc_UgwMSRPKm…`)
- "@NotAdachiPeople this story literally has no relevance to the statement he was m…" (`ytr_UgwsJauTj…`)
Comment
> Sincerley have to disagree with some of these AI experiements, especially the Anthropic one where the AI tried to escape/blackmail and other horrible actions. You are correct in saying they didnt expressly tell the AI to take any of those actions in the prompts, and i forget the exact wording but they did tell it to accomplish its goals by any means neccesary, and to ignore morality while doing so. They told the AI to act immoral and were surprised when it started doing exactly that. I dont believe thats an entirely fair test of the AIs capabilities.
youtube · AI Moral Status · 2025-12-16T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
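The four dimensions above, plus the comment ID and coding timestamp, map naturally onto a small record type. A minimal Python sketch; the label sets below are only the values visible in this sample output, so the actual codebook may define others:

```python
from dataclasses import dataclass
from datetime import datetime

# Category labels observed on this page; the full codebook may define more.
RESPONSIBILITY = {"developer", "company", "user", "ai_itself", "distributed", "none"}
REASONING = {"deontological", "consequentialist", "virtue", "mixed"}
POLICY = {"regulate", "ban", "liability", "none"}
EMOTION = {"fear", "outrage", "disapproval", "approval", "indifference", "unclear"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime | None = None  # e.g. datetime.fromisoformat("2026-04-27T06:24:53.388235")

    def validate(self) -> None:
        # Reject any label outside the sets seen in the raw output.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion!r}")
```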
Raw LLM Response
[
{"id":"ytc_UgzfZAAN6FEmHCAL_zN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyqQiXE3iUAn-ib9-94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz3sxkwInrroDaBT0h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyoxxraY-qjHPiRcBR4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgylMdJ6dK1vwD0T4iB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlV8_cSJg-A_O4VZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxQDg74duZmCE1M3KJ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxuWB4bEhMu3hf9YLh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"disapproval"},
{"id":"ytc_UgyHcHPslbZPGH7x9X14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzOuveFTNv-yJcZAkd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
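Before a table like the one above can be rendered, a batched response has to be parsed back into per-comment records. A minimal parsing sketch; the fallback regex is an assumption about how fenced or chatty model output might be recovered, and the demo row is copied from the array above:

```python
import json
import re

def parse_raw_response(raw: str) -> dict[str, dict]:
    """Parse one batched LLM response into {comment_id: coded row}.

    Models sometimes wrap the JSON array in markdown fences or extra text,
    so fall back to extracting the outermost [...] span before giving up.
    """
    try:
        rows = json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\[.*\]", raw, re.DOTALL)
        if match is None:
            raise ValueError("no JSON array found in LLM response")
        rows = json.loads(match.group(0))
    return {row["id"]: row for row in rows}

if __name__ == "__main__":
    # One row copied from the raw response above, as a self-contained demo.
    raw_response_text = (
        '[{"id":"ytc_UgxuWB4bEhMu3hf9YLh4AaABAg","responsibility":"developer",'
        '"reasoning":"deontological","policy":"liability","emotion":"disapproval"}]'
    )
    coded = parse_raw_response(raw_response_text)
    row = coded["ytc_UgxuWB4bEhMu3hf9YLh4AaABAg"]
    assert row["responsibility"] == "developer" and row["policy"] == "liability"
```

Note that this is the one entry in the batch whose responsibility, reasoning, and policy values match the Coding Result table above; the emotion label differs (`disapproval` here, `unclear` in the table), so any mapping between the two is an inference, not something this page confirms.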