Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any comment by its ID, or browse the random samples below.
- "Do not Trust AI 100%. Do some research and find multiple source. I got a friend …" (ytc_UgywfGhm6…)
- "first we had god. then we made people worship other people. then we made ai to w…" (ytc_UgwTw0Vjp…)
- "AI cannot provide IT technical support ... a Computer is a dumb machine & imperf…" (ytc_UgwytkjWR…)
- "I'm sorry but when a robot says 'world domination' this is a artificial sentient…" (ytc_Ugyj-QVW6…)
- "@user-pi3hd2bt3f did you not watch the video? It’s in the first sentence. Seriou…" (ytr_UgwCFEuzZ…)
- "Exactly. That's why this scenario cannot happen. If we get to the point that ris…" (ytr_Ugx6BlN2l…)
- "It feels like they're trying to train us to not be able to tell the difference b…" (ytc_UgxckssWY…)
- "We don't adhere to anything similar to LLMs. An LLM for example, is unable to c…" (ytr_UgzyiNhUz…)
Comment

> We should not give A.I. enough intelligence to think, but someone's going to do it anyway. If it does happen, we shouldn't be allowed to tamper with it unless it physically threatens a human. We don't want to lobotomize a robot because it offended you.
> #RememberTay

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Moral Status |
| Posted | 2017-02-24T06:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Uggbtq-WGdMdsngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggAjot1l7w9IngCoAEC","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggEmH3Lq4V_vHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ughlh2BiQzNAdXgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugg0tBq-Ha2NR3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghbXQbC6Eut-HgCoAEC","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiaJXOE27QNsXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UggWMgkXXwlosXgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggATgq0eeHyfXgCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UghF5eT9DDh8F3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
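The raw response is a JSON array of coding rows, one per comment ID, so looking up the coding for any comment is a matter of parsing the array and indexing it by `id`. A minimal sketch (the three-row payload below is excerpted from the batch above; the field names match the response, but the lookup helper itself is illustrative, not part of the coding pipeline):

```python
import json

# Excerpt of the raw LLM response shown above (three of the ten rows).
raw = '''[
 {"id":"ytc_Uggbtq-WGdMdsngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgiaJXOE27QNsXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
 {"id":"ytc_UghF5eT9DDh8F3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]'''

# Index the batch by comment ID so any coded comment can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up one coded comment by its ID.
row = codings["ytc_UgiaJXOE27QNsXgCoAEC"]
print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
# → developer deontological liability fear
```

The same index also makes it easy to spot IDs the model skipped or duplicated: compare `set(codings)` against the set of comment IDs that were sent in the batch.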