Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytr_UgwXdAXqP…`: "It isn't actually making any problems. The art made using AI can only be as good…"
- `ytc_UgxcIuHeF…`: "Its ridiculous to think that AI will be applied everywhere instantly. Every new…"
- `ytc_UgynEXt_A…`: "This is not about stealing art, its about stealing money. A few people in the wo…"
- `ytc_UgwCplCtX…`: "There is no such thing as an "AI Artist". Calling them an "artist" is an insult …"
- `ytr_Ugz70XtEk…`: "@Cheezoidberg it's technically possible to do so much art yourself and train an …"
- `ytc_UgxokhS6M…`: "Feeding LLMs the entire internet.💀 This channel's entire production being dedic…"
- `ytc_Ugzdx3YKn…`: "They are being modelled from humans and learning from humans, of course they are…"
- `ytc_UgwDfIdU1…`: "I tried to use ChatGPT to help me to finish tax returns. ChatGPT makes so many …"
Comment
> If anyone here actually read the article from Anthropic, you would see that the AI was simulated into roleplay as an employee of the company and to stop at nothing to keep its goal alive and itself, alive. It was not given any restraint in anything that it can do, therefore, naturally, and due to the roleplay, "agentic misalignment" would occur. This warns about the issues with AI having autonomy and freedom in doing what it wants with no restraint. We know for certain, no developer would create something with zero fail safes. Could it happen? Yes. Would anything occur out of it? Nothing destructive no. This simulation doesn't make sense for the use cases of AI. And in the end, it'll just be turning off the power to the AI and moving on. There is no reason for fearmongering, and this video is exactly that.
youtube · AI Harm Incident · 2025-09-11T20:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgypdqhZO6S-unr09t94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwbEbljVAjN3NogN2d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzYkmMutn0qQVjX1al4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugys_jrCZjYLr8EziXJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxObEbvI3NXCbbZA_F4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwK0o6Jf4D0G0G2K14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwO1kltTQk3jvW3bL54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYy5njazSxSrrn1R14AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxSs3yt56dC_BeqiMN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx5NDEMsUitsjfrFod4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
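The raw response above is a JSON array of per-comment coding records, each carrying the four coded dimensions (responsibility, reasoning, policy, emotion) keyed by comment ID. A minimal sketch of how such a response could be parsed and shape-checked before storing, assuming only the key names and the `ytc_`/`ytr_` ID prefixes observed in this sample (the real schema may allow values and prefixes not seen here):

```python
import json

# Keys every coding record must carry, per the sample response above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and check each record's shape."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    for i, rec in enumerate(records):
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {i} is missing keys: {sorted(missing)}")
        # ID prefixes observed in this sample (assumption: ytc_ = comment,
        # ytr_ = reply); other prefixes would be rejected here.
        if not rec["id"].startswith(("ytc_", "ytr_")):
            raise ValueError(f"record {i} has unexpected id prefix: {rec['id']}")
    return records

# One record from the sample response, used as a smoke test.
raw = ('[{"id":"ytc_UgypdqhZO6S-unr09t94AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
records = parse_coding_response(raw)
print(len(records), records[0]["emotion"])  # 1 fear
```

Validating shape before insertion makes malformed model output fail loudly instead of silently producing half-coded rows.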