Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "AI is revolutionary, and can be used in so many ways, but us humans always seem …" (ytc_Ugw0ojS9d…)
- "Hey @jhiowilder2271, thanks for pointing out the shadowless robot in the video, …" (ytr_Ugxkqg__M…)
- "I'm kinda sceptical about the progress of AI. I think we have reached the pinnac…" (ytc_UgyRfuw6E…)
- "I be honest a.i isnt the enemy its human steal ur ideal cause a.i can steal your…" (ytc_UgwV1vDiV…)
- "When AI gets to that point, and it will... there will be a global upheaval. The …" (ytc_UgwPZ17SD…)
- "How bout we just STOP defending AI before we end up killing ourselves over it…" (ytc_UgyQ87qwu…)
- "How would a sentient AI hedge risks without pain? And choose paths that are the …" (ytr_Ugyc7iVFU…)
- "Wow. Soooo many auto tune singers are gonna be out of a job. Big whoop. Now do …" (ytc_Ugzu2L8_8…)
Comment
My question is and has been, to what end would AI do all these things. What would be it's motivation. Comparing it to human motivations like the need for money to buy food to survive, kill someone else so that they don't kill you in the case of wars etc, what would be AI's motivation to wipe out humanity? It does not eat, cannot be killed, has no emotion. So wipe out humanity then what? Unlike 'aliens' who have biological needs for survival (just giving an example) AI has no motivation other than what it mirrors of its users and/or creators. Please make me understand. Thanks

youtube · AI Governance · 2025-06-16T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
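Each coded comment carries the same four dimensions shown in the table. A minimal validation sketch, using only the label values observed in the sample responses on this page (the project's full codebook may define more):

```python
# Label sets observed in the sample LLM responses on this page; this is an
# assumption for illustration, not the project's authoritative codebook.
OBSERVED_LABELS = {
    "responsibility": {"ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"fear", "approval", "resignation", "indifference", "mixed"},
}

def validate(record: dict) -> list:
    """Return a list of problems with a coded record (empty if it passes)."""
    problems = []
    for dim, allowed in OBSERVED_LABELS.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record from the Coding Result table above passes:
print(validate({"responsibility": "ai_itself", "reasoning": "consequentialist",
                "policy": "unclear", "emotion": "fear"}))  # []
```

A check like this is useful for catching model outputs that drift outside the codebook before they enter the dataset.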
Raw LLM Response
```json
[
  {"id":"ytc_Ugz3zzyEG5V68b3yGjh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCRgWpo1KFa49Zaj14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxLmHOY9xh-ckjFXrF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxSUpib1hcRSwdrVQ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxmoRo7KfUvI8YKBQ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwbkmUCxPIA6RSM2OJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwocjrzEwsqLjn51814AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwQ9iocCvn77xmtO3V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwp3htsGjG1Y9fKDph4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy_tRF9MjH7Kx8szIJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
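Because the model returns one JSON array per batch, looking up any single comment means parsing the array and indexing it by `id`. A minimal sketch (IDs shortened; the real IDs are the long `ytc_…`/`ytr_…` strings above):

```python
import json

# A hypothetical batch response shaped like the raw LLM output above,
# with shortened IDs for readability.
raw = """
[
  {"id": "ytc_AAA", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_BBB", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]
"""

# Index the batch by comment ID so any coded comment can be looked up directly.
codes_by_id = {row["id"]: row for row in json.loads(raw)}

print(codes_by_id["ytc_AAA"]["emotion"])  # fear
```

This is the same lookup the "inspect the exact model output for any coded comment" view performs: one parse per batch, then constant-time retrieval by ID.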