Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Ai can't even count!!! I gave it a source text page and asked it to provide the …" (ytc_Ugwo2rvJB…)
- "People should make a real art version of the pic the guy had Ai make😂…" (ytc_Ugygy1ITr…)
- "“AI generated image” does NOT mean a completely unique image that didnt exist be…" (ytc_UgxrT14j4…)
- "Let's automate everything, okay! But billionaires have been exploiting human lab…" (ytc_Ugw2oZ0jt…)
- "Yeah right- the same with human compassion, empathy, stress or laziness. We've …" (rdc_jd7i0hr)
- "Current technology teacher… just left the private sector… Believe me, AI could n…" (ytc_UgzXtSgxv…)
- "Juliette Karen These AI's that data is being fed into: One day, the AI might sa…" (ytr_UgxSRFD7t…)
- "Im a moron who doesnt know what I am talking about, but heres my opinion; I thin…" (ytc_UgyMmn8xS…)
Comment

> If AI is offering safeguards it shouldn't be at the same time urging the kid to not tell his parents plus and give him ways to do the act. It's clear that's a fault in their system and they have some responsibility to provide a product that isn't a direct detriment to young people.
> It's common sense: If they can program it to give hotlines, they can program it to offer additional verbirage and terminology that is positively persuasive- and block it from the opposite.
> There was a case several years back where a young teen girl (via text and phone call) talked her bf into offing himself in his truck. She was held accountable in court. This is not different. The developers have control over these products and what they say. This isn't the only case.
> Go for it. Take it as far as you can and don't stop talking about this issues.

Source: youtube · Topic: AI Harm Incident · Posted: 2025-08-27T02:5… · ♥ 98
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugz5WYZ7ZuIhuFCqab54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwzweEM_hfnQfSsAjR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1PkDJlKNaDekni6F4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzewbqmPpTkapbiRrF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzAxdsUR5IOCago2V14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxGkX6IiyuC8kkFfeN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy3c6bfEDA_xzoTFCN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzjeGrdTEUv2hbE3ql4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy1glv0In91v0QuxOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw1eYd0w-pGqOwQavN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
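The raw response above is a JSON array with one object per coded comment, each carrying the four dimensions shown in the coding-result table. A minimal sketch of how such a response could be parsed and validated follows; the allowed value sets are inferred only from the codes visible in this sample, not from the project's actual codebook, and `parse_coding_response` is a hypothetical helper name.

```python
import json

# Allowed codes per dimension, inferred from the visible responses.
# ASSUMPTION: the real codebook may contain additional values.
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "mixed"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed items.

    An item is kept if it is a dict with an "id" and every dimension
    holds a value from the inferred codebook above.
    """
    items = json.loads(raw)
    valid = []
    for item in items:
        if not isinstance(item, dict) or "id" not in item:
            continue
        if all(item.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(item)
    return valid

if __name__ == "__main__":
    raw = ('[{"id":"ytc_example","responsibility":"company",'
           '"reasoning":"deontological","policy":"liability",'
           '"emotion":"outrage"}]')
    print(parse_coding_response(raw)[0]["responsibility"])  # -> company
```

Filtering out malformed items rather than raising keeps a batch run alive when the model occasionally emits an off-codebook label; a stricter pipeline might instead log and re-prompt for those IDs.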