Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "I can't figure out if Elon is genuinely unaware of the dangers of AI and his neu…" — ytc_Ugx-4YxTg…
- "People sharing pictures on streets is problem being this naive that you share yo…" — ytc_UgyCgt6dh…
- "That's an intriguing perspective! The interaction between AI and humans does rai…" — ytr_Ugx14-YFU…
- "Well I came from India to LA / I'll say waymo worked just like it was expected / I…" — ytr_Ugw4sh8ee…
- "Content creation already relies on ai, plenty of ai generated articles flooding …" — rdc_j42nu3a
- "Shouldn't more of these AI centers be close to cleaner energy, such as the Niaga…" — ytc_UgwNGlp6h…
- "I don't believe electrical circuits can spawn consciousness... but just like tha…" — ytc_Ugw_j0jP6…
- "I was on board this video until the AI did your ad for you. Makes it all unbelie…" — ytc_UgxLQL_to…
Comment
Had people argue about AI being safe and don't believe the movies etc before in comment sections, it's just mind blowing how people can't identify the dangers of AI. It requires such minimal critical thinking to figure out some of the dangers.
Humans have proved throughout history that any large breakthrough that can be used to profit, for war or to just get ahead of others in some way will be utilised often in malicious ways. As the Geoffrey said, having AI in the hands of massive companies isn't a good thing and should actually be terrifying.
Platform: youtube · Topic: AI Governance · Posted: 2025-06-26T13:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgyzMO6Yav3xEoh8Y754AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwuXjB61tiFqDj-cvB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzBg3tYIm4IP9olICN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyu1ZKTpimcgSE82iV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxTtYMkkDAgRQEyYeJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwdYx05LDk7Ut9RukV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwer29hRPUpLEhIyFx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugwp0Sxv_mB55NaANXB4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxV5zprxMKIKoVv1rh4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwbw9ms4SgCmlVuuEB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
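
The raw response above is a JSON array with one record per comment ID, each carrying the four coding dimensions shown in the table. A minimal sketch of how such a response might be parsed and validated before the values reach the results table — the `ALLOWED` category sets below are inferred from the sample records, not a confirmed codebook, and `parse_coding_response` is a hypothetical helper name:

```python
import json

# Allowed values per coding dimension, inferred from the sample output
# above; the real codebook may define additional categories.
ALLOWED = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records.

    A record is kept if it is a dict with an "id" key and every
    dimension holds a value from the corresponding ALLOWED set.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed entry: skip rather than fail the batch
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping malformed records instead of raising keeps one bad entry from discarding an otherwise usable batch; a production pipeline would likely also log the rejects for re-coding.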