Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (comment previews, truncated):

- ytc_UgxVRDw-i…: "why does it take ai 10 seconds to generate an image but Glaze hours to edit one …"
- ytc_Ugx6udlo-…: "They also are using ppl for porn. Selling your Ai nudes to scam men. Extortion o…"
- ytc_Ugx2HVVXU…: "I can think of a perfect reason which we can use in order to convince AI's not t…"
- ytc_UgwdPbuzy…: "Have a ying and yang. Backup to what you feed it. No wars hostiles. Ask how to f…"
- ytc_UgyRj1exs…: "You are missing the point. Its not that the AI will be tuned to you and show you…"
- ytc_Ugx3Pgad7…: "lol this world is doomed when ai truly takes over prepare for something humans h…"
- ytc_Ugxd1lu3C…: "In the meantime ai is also gaining accurate knowledge beneficial to us from God'…"
- ytc_UgwFkKNMr…: "I feel a lot of discomfort surrounding the AI copyright debates. I am pretty fi…"
Comment
The research is clear - humans are terrible at staying alert when they are merely monitoring a process. Concentration is far better when we are actively operating the process.
At the current level of development - where assisted driving systems are still woefully unreliable and dependent on human override - the whole concept is highly questionable.
Especially when it is being irresponsibly overhyped as a "Full Self Driving Autopilot" system. Why Tesla customers are prepared to pay a jaw-dropping $15,000 to be Beta testers for an unfinished product defeats me. It's a triumph of unethical marketing over normal engineering prudence and common-sense.
youtube · AI Harm Incident · 2022-10-02T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
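Each coded comment carries the four dimensions shown above. As a minimal sketch of how a coding row could be validated, the allowed values below are inferred only from the codings visible in this section (an assumption — the full codebook may define more categories):

```python
# Category values inferred from the codings shown in this section.
# ASSUMPTION: the actual codebook may allow additional values.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed"},
}

def validate(row: dict) -> list[str]:
    """Return the names of dimensions whose value falls outside SCHEMA."""
    return [k for k, allowed in SCHEMA.items() if row.get(k) not in allowed]

row = {"id": "ytc_UgzAa2OAHEIeT6tBx_N4AaABAg", "responsibility": "company",
       "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
print(validate(row))  # [] — all four dimensions are in-schema
```

A non-empty return value flags which dimension the model mis-coded, which is useful for rejecting or re-prompting a batch.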
Raw LLM Response
```json
[
  {"id":"ytc_UgyD01XuCc3TMxZvTrl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEE3H8wEeY7SJ6C654AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzAa2OAHEIeT6tBx_N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwmXleybRBST2NUUDx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgztsG6Eu386q_0W8qp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyNaqX4kEjDzeQAomB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzvRSZ5UMTp5W4I5qR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugzf5TmnZOvSJgMtxGh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugzp0vvrjwNT-jYIzHJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugyd94zeSEsiHzQrWaJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```