Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
interesting thing I've found using chatgpt, is that the semantic of "I" and "chatgpt" is real because that instance your using, is chatgpt, its just not the same chatgpt that spoke to that man. you have your own instance of chatgpt, so the "I" it referred to was correct, it never told anyone to ingest bromide, but another "chatgpt" did. it really is sectioned off, its like you have your own server, almost like multi-personality disorder but with "GPT" instead. Its definitely a trip.
youtube
AI Harm Incident
2025-12-15T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzVsm7kRPBsLoEsdah4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxwMKUzEYenkCQcZTB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzhvuHiuare3p8AQcZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz0HKlbZwxyycHUSG94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyQSI8WUmY0xgI_BEh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzN5xL2oOq_2kypmt54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwnPMtW94RMKcetO3l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyRTCeQn6boNwaTPwZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugw2fBU7zQxvNfiuzE94AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxmlBlguu-MdFRicbx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"approval"}
]
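The raw response above is a JSON array with one object per coded comment and four categorical dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output might be parsed and tallied, assuming the field names shown in the sample; the two-row input here is an excerpt, and the category vocabularies are inferred from the example rather than taken from a documented schema:

```python
import json
from collections import Counter

# Excerpt of a raw coding response in the format shown above.
raw = """
[
  {"id": "ytc_UgzVsm7kRPBsLoEsdah4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxwMKUzEYenkCQcZTB4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
"""

codes = json.loads(raw)

# Count how often each label appears within each coding dimension.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(row[dim] for row in codes) for dim in dimensions}

print(tallies["responsibility"])
```

With a full response in place of the excerpt, the same two lines produce per-dimension distributions across all coded comments, which is the usual first step before checking label validity or inter-coder agreement.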