Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "I feel like in a years time we will reach that true low, we will experience erro…" (ytc_UgyxGxC8Q…)
- "Just wait for the mass suicide epidemic ai will cause I promise in 5-10 years wh…" (ytc_Ugxw_2N3m…)
- "AI is a danger to artists and writers by stealing their work, to crafters by pub…" (ytc_UgwdRbthL…)
- "Anyone who played PlayStation horizon sees it coming with unregulated billionair…" (ytc_Ugz75BRy4…)
- "Man, artist alley booth is so hard to get in my country. If i happen to see AI b…" (ytc_UgxdYOtFa…)
- "And elon musk said that AI would replace every job, what a load of crap.…" (ytc_Ugz2j-ytE…)
- "AI worshippers race after their false god — a god of illusion, a god of destruct…" (ytc_UgxYEp0DU…)
- "So essentially Not knowing how to draw and resorting to AI is a skill issue I'm…" (ytr_Ugy3hzIOv…)
Comment
The only way we can prevent our extinction is to take matters in our own hands, fix the governmental and business system, Technocracy? good. Modified and Updated Technocracy with modular Democratics, PERFECT.
and then Achieve another RENAISSANCE. In this one, human intrinsic capabilities will be extremely improved using exosuits, nanomachines, brainchips, advanced optics and ability to use internet via brain. The only trouble will be cybersecurity which could be dealt with by using advanced anti-malware software. We can embed AI everywhere to improve performance, but the greater good of humanity should always be considered above all else, even in the low-level LLM code of the AI.
youtube · AI Harm Incident · 2025-10-16T05:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgxbIv7UsdoT6oMjKjF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwPk-Nq71M5kGsulEp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzsgEqed3oJqD5jn_94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy5JrZudLuIB-RAwr54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxQgsP5I9kYS1oTyCd4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgxNdBD04s13G834ret4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzj2ccPA2l6MAQdkkh4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyXBI643wF8ygaMVnN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyFCVcyiMj7Zggzha14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugx5DQpQGYElgoRFQgF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
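A raw batch response in this shape can be parsed and validated before it is stored as a coding result. The sketch below is a minimal, hypothetical parser: the allowed category sets are inferred only from the labels visible in this one response, so the actual codebook may contain additional labels.

```python
import json

# Allowed labels per coding dimension. ASSUMPTION: inferred from the
# values visible in the response above; the real codebook may be larger.
SCHEMA = {
    "responsibility": {"government", "company", "developer", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed",
                "indifference", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM batch response into {comment_id: codes},
    dropping any row whose labels fall outside SCHEMA."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row.get(dim) for dim in SCHEMA}
        if all(codes[dim] in allowed for dim, allowed in SCHEMA.items()):
            coded[row["id"]] = codes
    return coded

raw = ('[{"id":"ytc_Ugx5DQpQGYElgoRFQgF4AaABAg",'
       '"responsibility":"government","reasoning":"consequentialist",'
       '"policy":"regulate","emotion":"approval"}]')
print(parse_batch(raw)["ytc_Ugx5DQpQGYElgoRFQgF4AaABAg"]["policy"])  # regulate
```

Validating against an explicit schema means a malformed or hallucinated label is dropped at ingest time rather than silently appearing in the coded dataset.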