Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID.
Random samples:

- “Han is what we fear in AI and Sophia is what we can wish for I think that’s what…” (ytc_Ugw96wAL9…)
- “the thing that gets me about AI is not even the stealing art thing fully, its mo…” (ytc_UgzUzRu9Y…)
- “The video presents AI as primarily saving teacher time. The reality is more comp…” (ytc_UgxU80wvY…)
- “One has to ask oneself some questions before creating something potentially harm…” [translated from Italian] (ytc_Ugxy58XTN…)
- “That would help a conscious attentive human driver with stereoscopic vision esta…” (ytr_UgxUK0-sd…)
- “The problem isn't just the lack of hiring junior developers. Experienced develop…” (ytc_UgyYdug54…)
- “bro I was bullying an ai for like 15 minutes and they fell in love with me 😰😰…” (ytc_UgyNUNLku…)
- “I'm a tiny artist but thank you for making this video. Artists with big platform…” (ytc_Ugx8jdiNd…)
Comment
No, they are not thinking. If an LLM was truly thinking we would be witnessing the Intelligence Explosion, where an LLM would be able to create another, more advanced AI, which would create an even more advanced A.I. And extremely soon every single possible mystery in math and physics and the entire universe would be solved. This clearly has not happened in FOUR years of having LLMs. So, NO, they do NOT think.
youtube · AI Moral Status · 2026-03-12T18:3… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugyp_WEQk4EQLsy69rB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgykPeDiPliCkKTeKjR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-hT_Sht2YEMasglh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxTBq6PAwifQBd-X7p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
  {"id":"ytc_UgxOq2NvQ8QPoKhuXw14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyDJJ26yFhuNGwdb7p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw2eHcAOUOhOlDnb4l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyz2ZcL7Rh1bLI5za94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy8qX7eXqBtj4uuvTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxljXyF8_sAcYkv6kV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
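A raw batch response like the one above can be checked against the coding scheme before it is stored, so that malformed IDs or out-of-vocabulary labels never reach the results table. The sketch below is a minimal validator in Python; the label sets are inferred from the values visible in this section, and the complete vocabularies (plus the `validate_batch` name itself) are assumptions, not part of the original pipeline.

```python
import json

# Allowed labels per coding dimension, inferred from the sample output above.
# The full vocabularies are an assumption for illustration.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Comment IDs in the samples start with "ytc_" (comments) or "ytr_" (replies).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            continue
        # Every dimension must be present and use an allowed label.
        if all(rec.get(dim) in labels for dim, labels in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_x","responsibility":"ai_itself","reasoning":"consequentialist",'
       '"policy":"none","emotion":"indifference"}]')
print(len(validate_batch(raw)))  # 1
```

Records that fail validation are dropped silently here; a production version would more likely log them for re-coding, since an "unclear" label is valid data while a missing or novel label signals a model formatting error.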