Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect:

- Seems like a rather silly argument that we need to "figure out" how to keep AI f… (ytc_Ugww27oyu…)
- It sucks that this happened to that man but if you are this stupid and reckless,… (ytc_Ugz1LnkzB…)
- Let's use this committee to create a computational neural network that proactive… (ytc_UgycLwZ2w…)
- While mankind is destroying the planet, it is a very bad idea to make AI follow … (ytc_UgxxJ6U17…)
- This is so freaking sad.. Tbh you CANNOT call yourself an AI Artist.. never.. yo… (ytc_Ugy5pKwQE…)
- The apolocyptic AI stuff is NOT far fetched. We're more likely to fuck it up tha… (ytc_Ugy59kE-x…)
- The AI we see is behind the AI they have. Everyone keeps talking about a virus, … (ytc_UgwhXserm…)
- Grok can create inappropriate images of people, even if they’re a child. Whereas… (ytr_Ugw0AsD0h…)
Comment
I don't remember with which AI, probably several, and I always won, because facts are facts. Once only it happened it recognized I was right (by not telling anymore I was wrong). In another instance, it simply gave up replying. And in yet another instance, the IA was looping telling the same mantra, meaning it was in complete cognitive dissonance. It was on chatgpt, claude, grok and gemini.
You should try with Grok.
Source: youtube · 2025-04-18T16:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzc93GbSnG0fptsYD94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyZFD11ZvoHV1wCDFF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAEa_5jZaoZtGna-54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzSK2m3-y8bUmGr9Ex4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzPXCFP-PmbzafnUx94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz22YpCkeqrEb9Qk7B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy0bQzaOkdS1_QYXZh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxAcl9XcWPIOLP9WDZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxN4IDYgAneLf7AXWh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwbAlmiFQowkp0wRzh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"}
]
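Since the model returns one JSON array per batch, looking up a single comment's coding means parsing the array and keying each record by its `id`. Below is a minimal sketch of that lookup; the function name `index_by_id` is an assumption for illustration, not this tool's actual API, and the embedded sample is a two-record excerpt of the response above.

```python
import json

# Two records excerpted verbatim from the batch response shown above.
RAW_RESPONSE = """[
 {"id":"ytc_UgzPXCFP-PmbzafnUx94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugy0bQzaOkdS1_QYXZh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]"""

def index_by_id(raw: str) -> dict:
    """Parse a batch coding response and key each record by its comment ID.

    Hypothetical helper: the real tool's lookup may differ.
    """
    return {record["id"]: record for record in json.loads(raw)}

codings = index_by_id(RAW_RESPONSE)
print(codings["ytc_UgzPXCFP-PmbzafnUx94AaABAg"]["emotion"])  # → approval
```

The indexed form makes the displayed "Coding Result" table a straight dictionary read: the four coded dimensions for a comment are the fields of `codings[comment_id]`.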