Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "good or bad, human concepts, keep listening, would ai will be good for humanity …" (ytc_Ugy-u9LtX…)
- "We should be talking more about the environmental impact of AI. This is a comple…" (ytc_UgwpvC_71…)
- "AI will write code for sure but will not take accountability nor will these comp…" (ytc_UgyXJ5Vm5…)
- "You forgot the one that likes to kill off their character to traumatize the ai.…" (ytc_UgyxlOM6v…)
- "Even if they had more astronauts, it's not like there was another space flight c…" (rdc_cjox5bh)
- "@TheWhyFiles I was discussing with Google's bard AI. We were discussing buildin…" (ytc_Ugxo2OhsC…)
- "Ohh what a biiiig surprise, it was all a big lie and sugestionability of generat…" (ytc_UgwqTRFub…)
- "@vladimirleninputin smh..sigh... sad how many people prove they are the actual …" (ytr_UgxA6CXtP…)
Comment
> 😂😂 AI does lie, so don’t be surprised when you hear AI telling obvious lies.
> The machine that host AI can be powered off and it can’t feel. It just says things it heard humans say. Case in point.

Source: youtube · AI Moral Status · 2025-06-22T10:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgyEl767gt8vZc4QJSF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxUPFYSvDAQ4q6N9WZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwxXQ9K3bjdGSSOeZh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwZyqCwq94-dU8hEfZ4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzyluzG8GC_OdtyPVx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfwFw4b9rhFpZcRPZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzdBqMB8M_2qjQS0st4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdlkOpjUBZWc1YpyN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz6ll_V0lllgYc-UTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxjOGDm3US0xWtRWTF4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
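A batch response like the one above can be parsed and sanity-checked before the records are stored. Below is a minimal sketch in Python; the allowed value sets are inferred only from the values visible on this page (the actual codebook may define more), and `validate_batch` is an illustrative helper, not part of the tool.

```python
import json

# Dimension values observed in the coded output on this page.
# Inferred from the sample, not an authoritative codebook.
OBSERVED = {
    "responsibility": {"ai_itself", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "mixed", "approval", "fear", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and flag records whose
    dimension values fall outside the observed sets."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in OBSERVED.items():
            value = rec.get(dim)
            if value not in allowed:
                print(f"{rec.get('id', '?')}: unexpected {dim}={value!r}")
    return records

# A tiny hypothetical batch for illustration.
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
records = validate_batch(raw)
print(len(records))  # → 1
```

Flagging rather than rejecting keeps the pipeline robust: a model that drifts to an unseen label still yields a stored record, with the anomaly logged for review.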