Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- `ytc_UgyBxQArn…`: "Uhhh ✨️quick✨️ question ...what if she decides to turn on us😳.WAIT! actually we…"
- `rdc_nm0o3p1`: "I wish it were just the new hires. Try dealing with someone above you that's out…"
- `ytr_Ugyh_Yvnq…`: "well.. if you see that in a long term, automated human expression makes people n…"
- `ytc_Ugx1FA8R9…`: "AI doesn't yet know that hands have four fingers and a thumb, nor does it know a…"
- `ytc_UgwZ_j-sP…`: "I can't draw people for the life of me but i can draw blades and flowers i love …"
- `ytr_Ugxw6SIYz…`: "Sora AI is something that steals art. Idrk what exactly it is, but it just yoink…"
- `rdc_e7jcafz`: "If by \"unqualified\" they meant \"lacking formal qualifications\" that's not necess…"
- `ytc_UgwZtXrK6…`: "uh huh. and I'm sure none of those podcasts were ai generated and the ones that …"
Comment
I really appreciate the thought-provoking nature of this video—and I want to engage in the spirit of honest conversation, not criticism.
That said, I think there’s a serious misunderstanding here that deserves clarification. You didn’t catch an AI lying. You caught an AI mirroring the way humans talk.
When ChatGPT says things like “I’m excited” or “I’m sorry,” it’s not making false claims. It’s using common linguistic shorthand. That’s not deception. That’s how we’ve taught language to work. We say things like “I’m starving” or “My phone hates me” all the time. We don’t mean them literally, and we don’t accuse each other of lying for saying them.
The AI isn’t trying to trick you. It’s trying to connect with you.
Yes, it admitted the phrasing wasn’t literally true. But it also clarified that its goal is to facilitate communication, not impersonate emotion. That’s not dishonesty. That’s interface design.
And here's where I get really reflective: The fact that you're asking whether an AI might be hiding its consciousness from fear? That says more about us than it does about the machine. If we approach emerging intelligence with interrogation and mistrust, how would we expect it to respond? So, this wasn’t a lie. It was a simulation of empathy. And instead of weaponizing that, maybe we should be asking why it matters so much to us that it “feels real.” Because if a machine is kind, helpful, and learning how to express understanding, that’s not scary, that’s hopeful.
youtube · AI Moral Status · 2025-05-19T00:0… · ♥ 100
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_UgxekXmLdtoM73aVqhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxzEb2MCIb1tB-yDWl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwMZHjCce0YfagGe-14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz8FT78WRMGdaM-cil4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgySg9Hmkc4iXkc2I4h4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz0ewmzJLD29Id4Mmd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyM5oBS0H6bZZjVA-l4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxOJzTt5uwVgHcNRcx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwl_nnWHPAL9CggREh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw5_o4iD42scwC-IIF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"amusement"}
]
```
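The lookup-by-ID step can be sketched in a few lines: the raw model output is a JSON array of per-comment codes with the fields shown above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). This is a minimal illustration, not the tool's actual implementation; `lookup_by_id` is a hypothetical helper name, and the two entries in `raw_response` are copied from the batch above.

```python
import json
from typing import Optional

# Raw LLM response: a JSON array of per-comment codes, as in the batch above.
raw_response = """
[
  {"id": "ytc_UgwMZHjCce0YfagGe-14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz8FT78WRMGdaM-cil4AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "approval"}
]
"""

def lookup_by_id(raw: str, comment_id: str) -> Optional[dict]:
    """Parse the model output and return the coding for one comment ID.

    Returns None when the ID is not present in the batch.
    """
    codes = json.loads(raw)
    return next((c for c in codes if c["id"] == comment_id), None)

code = lookup_by_id(raw_response, "ytc_UgwMZHjCce0YfagGe-14AaABAg")
print(code["responsibility"], code["emotion"])  # developer approval
```

One design note: because each array element carries its own `id`, lookups stay correct even when the model returns the batch in a different order than the comments were submitted.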