Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgwiIJQ93…: "Ive seen people in the industry spoken up about have workers right now who use a…"
- ytc_Ugz5eRt-b…: "Interesting how wrong they are. There is no inherent danger in AI because AI doe…"
- ytc_UgyBVywKa…: "This is a very thoughtful conversation you are having with us, Bernie. I sense y…"
- ytr_UgwPFxgA8…: "Yeah, I feel like that's what AI should be used for. Unfortunately, the problem …"
- ytr_UgyglzF9W…: "AI would have to be sentient for that to happen and it isn't that at the moment.…"
- ytc_UgxcWPyN4…: "I’ve been using AI for years, and it still can’t design a sophisticated logo, d…"
- ytc_UgwSiZfA7…: "Me: thinking how to make an png image out of swiftUI for an half year / ChatGPT: q…"
- ytr_UgzRyY4vH…: "Hello! Welcome to the AITube channel. If you enjoyed this video with Sophia, don…"
Comment
This is the problem with how people approach social media algorithms. Their mindset is, "This is not a good thing, therefore the algorithm is doing a bad thing". Search and recommendation algorithms are designed to cater to what appeals to the individual user. They're not supposed to sift through societal morals and be a techno parent. When an algorithm stops catering to the individual interests of the user it's servicing that's when it loses the functionality it's intended to have. If a user doesn't see things they're interested in on the site or app they're using, they leave.
- Source: reddit
- Topic: AI Harm Incident
- Posted: 1628608748.0 (Unix timestamp)
- ♥ 89
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | resignation |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_h8evtvw", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_h8f2dae", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_h8hwuxh", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_h8f1lhd", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_h8f3jzj", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "resignation"}
]
```
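The coding result shown above for this comment corresponds to the record with a matching ID inside the raw response array. A minimal sketch of the by-ID lookup, assuming the raw response is a JSON array of flat records as shown (the `lookup_coding` helper and the inline sample string are illustrative, not part of the tool):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# This sample reuses the last record from the response shown above.
raw_response = """
[
  {"id": "rdc_h8f3jzj", "responsibility": "user", "reasoning": "deontological",
   "policy": "industry_self", "emotion": "resignation"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coding record matching comment_id, or None if absent."""
    for record in json.loads(raw):
        if record.get("id") == comment_id:
            return record
    return None

record = lookup_coding(raw_response, "rdc_h8f3jzj")
print(record["emotion"])  # resignation
```

In practice the raw response would be loaded from wherever the coder run is persisted; the lookup itself is just a linear scan over the parsed array, keyed on the `id` field.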