Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "I honestly feel like this deepfake porn can cause a lot of trauma for the people…" (ytc_UgzUOu7xD…)
- "Humans are the greatest danger to humanity. Who created world wars throughout hi…" (ytc_UgxXr5bgs…)
- "God is dead and you killed him, but you will regret, because what comes next is …" (ytc_UgyB9ed_o…)
- "@randycoe9054 I don't think that it means that ai doesnt have any consciousness …" (ytr_Ugw_pC8U9…)
- "He is speaking about people who are not really interested in art who work with A…" (ytc_UgxdWaeZ8…)
- "The creator of the robot should be in prison. He is evil. If I saw that robot,…" (ytc_UgwgLnGK-…)
- "That's a very corporate way of seeing something very personal. Because one way o…" (ytr_UgwssIIsq…)
- "what if the AI is so smart that it fails the turing test on purpose 🤔…" (ytc_UgwO4BX56…)
Comment
Hmmmmm, as the wife of someone getting his doctorate in data science (statistics +), and as I hear the explanations of his doctoral studies, research, projects and dissertation, my understanding is that there is a "maybe" category, making it actually more complicated. That 4% is neither no, nor yes. Saying that because the machine only correctly identifies negative results 95% of the time it must automatically be saying "if not 'no' then yes" really creates a huge problem and makes that study invalid. Also, the reason for the "maybe" changes the chance of it swinging one way or the other, so this is far more nuanced than this video makes it seem. A test that doesn't account for a third option (i.e. undeterminable) is wildly invalid and over-simplified, to the point that there should be a strong push to consider whether it should be counted at all.
But I think the video is right that most people and doctors get it wrong. Because they are doctors, not math geeks.
youtube
2026-04-11T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz1YBFXMDyrmvvejUF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwPS1hPvyMyM0HYdBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxNgpsXVmL9Vbpk9uV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyP5fFMcQfwd1vCBbZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMyvm54nMlCWTg0ft4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxrM96f9GKnUM-S8VZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwE2bYS54-Z-nz4iGF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw1X-LcwPD9zeHbNkZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwkw1dO0J-tSGZVJ3t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzNn-7yMfBAW5nulcp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"amusement"}
]
```
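A response like the one above can be parsed and sanity-checked before the values are written back to the coding table. The sketch below is a minimal example, not the tool's actual pipeline; the dimension names come from the JSON shown here, and the allowed label sets are assembled only from values observed on this page, so the real codebook may well include more labels.

```python
import json

# Label sets per dimension, collected from the values seen in this page's
# samples (assumption: the full codebook may define additional labels).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"indifference", "mixed", "fear", "approval",
                "outrage", "amusement"},
}

def parse_coding_response(raw: str) -> list:
    """Parse a raw LLM response (a JSON array of coded comments) and
    validate each record against the observed label sets."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Usage with the first record from the response above:
raw = ('[{"id":"ytc_Ugz1YBFXMDyrmvvejUF4AaABAg","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
coded = parse_coding_response(raw)
print(coded[0]["emotion"])  # indifference
```

Validating against a closed label set catches the most common LLM-coding failure, a label outside the schema, before it silently pollutes downstream counts.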