Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Losing our purpose its gonna be the first part of a big plan, second is gonna be…
ytc_UgyRBOdvV…
She said ilya sutskever was recruited by Sam Altman when asked by the interviewe…
ytc_UgwPNwugl…
Saying you’re an artist because you AI generated “art” is like microwaving a hot…
ytc_UgyOMvLEe…
Meta is set to have their first AI mid level software engineer in 2025.
see wha…
ytc_UgxYc0VGO…
@Gabbo695 you’d think lol. But there have still been several lawyers and even a …
ytr_UgxlHxujp…
I like how the newer videos show Josh speaking at the beginning of the video lik…
ytc_UgxraZm2a…
The solution is easy:
99.9% of humans are literally worthless, so, the 0.01% tha…
ytc_UgxFT7rtv…
There is no such thing as AI, just the same LLM models as before but now with fa…
ytc_UgwTuxMKc…
Comment
People with mental illness shouldn't be talking to something that doesn't have emotions / empathy anyway, I guess there should be some work done on preventing this, like keywords and all. I don't have such mental illness and I've been using Gemini quite often for research purposes and it works fine for me.
Also on another note, while it's an important topic, putting the blame directly on ChatGPT is sensationalism.
youtube
AI Harm Incident
2025-11-10T07:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyQcVnQlLiwJr1TY6p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgyHxLCJ06iDSQ2iCRR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxlbaMpLk4VercCAsl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz6ExEnwMWIIV9nLHR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwP1h4r6wIuKqCykX94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyU-vdVTnxFONX0VuZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxOuPLhw-n48AGFxa14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyLbojhEkzj2Ga1ntx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxn8lOJ-vKC3TtER2l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzACuQxsPLbKETJYsp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}
]
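
The raw response above is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response could be parsed and looked up by ID (the `lookup` helper and the truncated two-row sample are illustrative, not part of the tool itself):

```python
import json

# Raw LLM response: a JSON array of coded comments. Each object carries
# the comment ID plus the four coding dimensions. Only the first two
# rows from the response above are reproduced here for brevity.
raw_response = """
[
  {"id": "ytc_UgyQcVnQlLiwJr1TY6p4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "ytc_UgyHxLCJ06iDSQ2iCRR4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

# Index the coded rows by comment ID so a single comment can be
# inspected without scanning the whole array.
coded = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID (hypothetical helper)."""
    return coded[comment_id]

result = lookup("ytc_UgyQcVnQlLiwJr1TY6p4AaABAg")
print(result["responsibility"], result["policy"])  # distributed regulate
```

The first row round-trips to the same values shown in the Coding Result table above (responsibility: distributed, reasoning: consequentialist, policy: regulate, emotion: resignation).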