Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
This AI surveillance garbage is why that old man was falsly identified, beaten a…
ytc_Ugz2QR7Sq…
True, but AI can't win a Chess game from 1977. Will win in the near future for s…
ytc_Ugx8e8lgv…
I understand your points, and I agree with some. But I just don't care. I was re…
ytc_UgzJzaDIm…
@chrisporter9397 You didn't double check your examples didn't you?
None of them …
ytr_Ugx3M_Zwl…
I've seen a lot of posts about developing countries taking a hit and I have to d…
rdc_cz36yqq
AI is like dating a girl out of your league. You know it won't last but F it…
ytc_UgwXabxnt…
You have to realize LLMs take on a role-actor capability. If you want to play a…
ytc_UgwYxdpjI…
Pretty much anything that can be automated eventually will be.
But, a lot of job…
ytc_UgyEzhIrq…
Comment
25:00 we need to start over from scratch with our political systems with the basic premise that if an action is either malicious or negligent then it's illegal, regardless of technicalities, and punished wrongdoing to the degree of the harm caused, it would solve this problem. It's malicious to convince people to act against their own self-interest. It's negligent to allow a technology to be developed that could end all life on earth. If we had an AI that adhered to and enforced those principles, we'd be fine. Getting it to do that should be the ultimate goal.
I'm convinced Elon Musk has convinced Trump that ai will make Trump the God emperor of humanity.
youtube · AI Governance · 2026-03-15T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxbcI11Epr5m0_lorR4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzSvSbJvYN-z6yxe6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzTieZQoMqARKB4npp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwp7MbTWsccX5iEhX14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyeQVr7sEcChcbrcyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxWGwSkCPK_Omuo82F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwnxDKwTBu9AYHQ6YN4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwD4F1xFp9vxwx0_Zt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwyjjxN0Asym93qws14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgyMzIo74sVndqBBsMt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
```
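A raw batch response like the one above can be parsed into per-comment codings and sanity-checked before storage. The sketch below is a minimal, hypothetical parser (not the tool's actual implementation): the `ALLOWED` vocabularies are inferred from the values visible on this page and may be incomplete, and the id-prefix check assumes `ytc_`/`ytr_` mark YouTube comments/replies and `rdc_` marks Reddit comments, as the sample ids suggest.

```python
import json

# Controlled vocabularies per coding dimension (assumed from values seen
# in this tool's output; the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference", "resignation", "mixed"},
}

# Assumed id prefixes: ytc_/ytr_ for YouTube comments/replies, rdc_ for Reddit.
KNOWN_PREFIXES = {"ytc", "ytr", "rdc"}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw batch-coding JSON array into {comment_id: dimensions},
    dropping records with an unknown id prefix or out-of-vocabulary value."""
    coded = {}
    for record in json.loads(raw):
        comment_id = record.get("id", "")
        if comment_id.split("_", 1)[0] not in KNOWN_PREFIXES:
            continue
        dims = {k: record.get(k) for k in ALLOWED}
        if all(dims[k] in ALLOWED[k] for k in ALLOWED):
            coded[comment_id] = dims
    return coded
```

Dropping (rather than repairing) malformed records keeps the coded table trustworthy; rejected ids can then be re-queued for a second coding pass.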