Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI will, first, come for low quality easy anyone-ca-do-it female jobe. We see th…
ytc_UgwQXOHqR…
>I’m a long-time Plus user and want to express clear feedback.
>GPT‑4o en…
rdc_o4l4ooi
I believe we are meant to be like Jesus in our hearts and not in our flesh. …
ytc_Ugz19hqIt…
if AI is not a menace why do all these low-tier artists (digital artists are not…
ytc_UgyUSjxpu…
All these malding people with 200k debt for an art degree or photography degree,…
ytc_UgyHKoxKt…
And then they design robot that can swing their arm at extreme fast speed and br…
ytr_UgwlT_zLx…
I don't know why these types of videos are showing up on my feed THREE YEARS aft…
ytc_UgwVDRaZF…
I've played Horizon Zero Dawn. The first thing the AI did was learn to ignore t…
ytc_UgxkUHLtk…
Comment
The gorilla problem is a heuristic, not an empirical prediction. It highlights risks arising from capability asymmetry, but it does not establish inevitability, nor does it map cleanly onto human–AI relations, which are socio-technical, governed, and intentionally designed.
youtube
AI Governance
2025-12-21T15:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyhBf5BVlDHodn84gV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxw7X0O4vKA7JI3SF94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyhYT4noHd8zKyNrOR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwsLunRNFOTj6OGb3B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzQijXxkX07iDGOZF54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx8SE2jC-VZ4dMJWUx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxQsVeNqyJHC2iLYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgykJe5oSUrOeRbU2lB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw-7SMUrvnTJO_O_jx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzCQK8OWJjed3abHA94AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"approval"}
]
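
The raw LLM response above is a JSON array of per-comment codings, and the page supports looking a coding up by comment ID. A minimal sketch of how that lookup might work is below; the dimension keys (`responsibility`, `reasoning`, `policy`, `emotion`) and the sample ID are taken from the response shown, while the function name and variable names are illustrative assumptions, not the tool's actual implementation.

```python
import json

# Illustrative raw response: a JSON array of codings, as in the dump above.
# Only one row is reproduced here to keep the sketch short.
raw_response = """
[
  {"id": "ytc_Ugxw7X0O4vKA7JI3SF94AaABAg",
   "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "indifference"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response into a dict keyed by comment ID.

    Each value holds the coding dimensions for that comment,
    with the "id" field itself dropped.
    """
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codings = index_by_id(raw_response)
print(codings["ytc_Ugxw7X0O4vKA7JI3SF94AaABAg"]["emotion"])  # → indifference
```

This matches the "Coding Result" table above for the same comment: responsibility `none`, reasoning `consequentialist`, policy `none`, emotion `indifference`.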