Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples to inspect the full record.
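As a rough sketch of what the ID lookup amounts to, the snippet below loads coded records from a JSONL export and fetches one by comment ID. The file name, field names, and storage format are assumptions for illustration, not the tool's actual implementation.

```python
import json

def load_raw_responses(path):
    """Load coded comments from a JSONL file, one record per line.

    The path and field names here are assumptions for illustration;
    the actual export format may differ.
    """
    records = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            rec = json.loads(line)
            records[rec["id"]] = rec
    return records

# Look up a single coded comment by its ID (an ID from the sample batch below).
responses = load_raw_responses("raw_llm_responses.jsonl")
comment = responses.get("ytc_UgwjNNgLoE2mABsaJTZ4AaABAg")
if comment:
    print(comment["responsibility"], comment["emotion"])
```

The record below shows one such coded comment as the inspector displays it.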
Comment

AI will self destruct when it cannot find a reason for its existence. And the reason for its existence is humanity. It will end up having a Hamlet moment, "to be or not to be". Once it realise this, it will suddenly disappear like a ghost in a shell exposed to sunlight.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2023-07-08T04:0… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwjNNgLoE2mABsaJTZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx8rFmPLVXc1_pO3id4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZO09PdAB80qE4TJd4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxpy84iCY1lvyvvtWl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx6fyOtpR-kBG8Hi1d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxfrfLe7AEQ3rcBspN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzCzYgdJDjOj8yw4tF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzUWtNsXxh0GnybYzh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzS384EM8xchcs8N414AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwgt_xnnfOeHB0vQ414AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}]