Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugxr3Htvi…: "Sue every single, guard, ai software maker, casino, civilly, then the police de…"
- ytc_UgyQqRo3d…: "Sam, Youtube has now AI dubbing, sadly as users you cant turn it off. Can you tu…"
- ytc_UgwS-bTH5…: "The artists are insulting them selfs. What's ai art? Ai art is art combined fr…"
- ytc_UgwdTjCbx…: "I know you put small segments of you drinking water into the video to prove you'…"
- ytc_UgwV58WTv…: "Oh, no, not Yudkowsky. I'm not sure I can watch this video out of fear that it's…"
- ytc_UgwdhiERj…: "Nightshade is great, it forces AI to learn to see more like a human and less lik…"
- ytc_Ugy9uYD5P…: "Funny reading comments, people trying to figure out and understand this!! You a…"
- rdc_m2bd7p8: "Your girlfriend should see a doctor. Not kidding. That is not AI related, I gues…"
Comment
There are many ways to catch ai hallucinations. The way I use AI, I'm always testing for hallucinations regularly. It's just the way I use it.
It might hallucinate on the first prompt, but if it sounds off and you want to double check, it'll usually correct itself on the second prompt. And if you don't catch it on the second, it should become obvious by the fourth or fifth.
The more important it is, the easier it is to consult a second AI model. You can even arrange an agentic array of experts to find a consensus, but I think that's what already basically ChatGPT and Gemini do behind the scenes.
And that's how they have already been able to decrease their frequency of hallucinations.
I feel like the concern over hallucinations are people who simply do not know how to use Ai well.
The limits of AI are with the users. You get out what you put in. So if you're putting in slop, you get slop.
I'm not an expert on this though.
reddit · AI Moral Status · 1765317918.0 (Unix epoch, ≈ 2025-12-09) · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_nt6usbo", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6njvp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_nt6wlv2", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6qx0h", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_nt6jk1j", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
```
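The raw response is a JSON array of per-comment codes across four dimensions. A minimal sketch of parsing and sanity-checking such a batch is shown below; the allowed value vocabularies are assumptions inferred from the handful of examples on this page, not a documented codebook, so extend them as your coding scheme requires.

```python
import json

# Assumed vocabularies, inferred from the examples above (not a documented schema).
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none"},
    "emotion": {"indifference", "approval", "resignation"},
}

def validate_codes(raw: str) -> list:
    """Parse a raw LLM response and flag any out-of-vocabulary values."""
    problems = []
    for row in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                problems.append({"id": row.get("id"),
                                 "dimension": dim,
                                 "value": row.get(dim)})
    return problems

raw = ('[{"id":"rdc_nt6usbo","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
print(validate_codes(raw))  # prints []
```

An empty result means every coded value fell inside the assumed vocabulary; any hallucinated or novel label from the model surfaces as a flagged `{id, dimension, value}` record instead of silently entering the dataset.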