Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgzXBGz6b…: "you can see the human driver actually slowed down because the waymos looked like…"
- ytc_UgzIpVW7u…: "I think what he said is most important A.I is a baby an infant barely capable of…"
- ytc_Ugyud99hE…: "Since we all will be unemployed, we will have much more free time. This means mo…"
- ytc_Ugy2mQA49…: "“What Everyone Is Getting Wrong About AI And Jobs”: Debate Extremes: There ar…"
- ytc_UgwVG267b…: "I'm waiting for the day for AI to take control over humanity and over nuke.... …"
- ytc_UgzqEH6sB…: "Is designing an architecture using help from AI or just using AI to do it vibeco…"
- ytc_UgxWgspHl…: "Blumenthal and his fake Vietnam combat history is fascinating. Anyway the federa…"
- ytc_UgwRZxtRr…: "Having a perfect partner, of your design, is the dream of many. Could you only i…"
Comment

> I'm only 6 minutes in and I've already spotted a mistake. He says he's worried about AI being smarter than people. I don't mean to be a Debby downer but chat gpt is smarter than 90% of people on earth. What he's really worried about is when chat gpt becomes smarter than him. Google is already there, chat gpt is already there, grok is already there. Olamma is getting there. Let me ask you a question. if people are so smart then why are they using AI?

youtube · AI Governance · 2025-06-24T08:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgwTl3m0AXxzXTjih0h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwelU_5kpWvO0TAKIV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwDizRUkOTGyRP-S-94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyShMfp1bNNLbdU-KF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxSSTFQ9D916LhgV-94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx0AoDBGXVt8HJ9qlN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxYU_lZT-3PXTwWxUR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzq2O1caxtU9oLtTZt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzQaz_bw9YKP1SfwfN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugwn2tQBuMi381Garht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
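A raw response like the one above needs validation before its codes are trusted, since an LLM can emit values outside the codebook. Below is a minimal sketch of such a check in Python. The allowed value sets are an assumption inferred only from the values visible in the table and responses on this page; the actual codebook may define others, and `validate_response` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Allowed codes per dimension, inferred from the values seen on this page
# (assumption: the real codebook may include additional values).
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"approval", "fear", "outrage", "indifference",
                "resignation", "mixed", "unclear"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are allowed."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Comment IDs on this page all start with the "ytc_" prefix.
        if not row.get("id", "").startswith("ytc_"):
            continue
        if all(row.get(dim) in codes for dim, codes in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_UgwTl3m0AXxzXTjih0h4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}]')
print(len(validate_response(raw)))  # prints 1
```

Rows that fail validation can then be flagged for re-coding rather than silently stored.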