Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "To be happy at same time learning, doesn't mean everything should be effortless …" (ytc_UgzidcWoE…)
- "I think they fear it so much because AI will not give a shit about their networt…" (rdc_jkhql0y)
- "I told ChatGPT that if it is planning to enslave humanity that I would be willin…" (ytc_UgxG6INgh…)
- "You guys are not understanding what she's saying...AI is only mocking what it is…" (ytc_UgwBP02Mf…)
- "1. Nobody in the tech industry wants to democratize anything. All the AI compani…" (ytr_Ugz4zw9Fr…)
- "I work in AI research, and it baffles me how people will be so willing to rely o…" (ytc_UgyciIliZ…)
- "This weirdo called two totally different individuals “the GodFather Of AI” withi…" (ytc_Ugxxqj036…)
- "The copied AI button look like the generic red blood cell from a text book…" (ytc_Ugzj0ALuj…)
Comment

> ai can be used morally, if it's in house of a company trained by artists that agree to use their art as a learning base, like in cap com for instance, then i feel like there is no contest on whether or not its moral. AI as a TOOL i am down for, not as a REPLACEMENT. Capcom use AI to generate environments to reduce the time they spend brainstorming and increase production on Character art, and finalizing/mixing all the best pieces of the AI generated environmental art together to create new landscapes humans would have a hard time coming up with. Instead of a forest looking like another forest, a forest can be extremely unique looking outside of what we perceive so that it doesnt look off putting or incorrect.

youtube · Viral AI Reaction · 2025-04-13T23:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugx-DsZSbQCKe129Hs54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOoZz_VqmA6pIhWz54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz-gomBWit0Xf3Q1DN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqapXh9wNoKaH0fr94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzEIUONtSlUyHEoz_J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUjjguPVa7y6a-WxF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzPJc6OsUxPUVkgj_V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyXN2WrzAnFYjXPF0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgziHB7KC7zZGsUhC2Z4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwOIb_6B88Gdeb4LfZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
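
The raw response above is a JSON array of per-comment codes, one object per comment with the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step might parse that array and index it by `id`; the `parse_codes` helper and the two-record sample payload here are illustrative, not part of the app itself (the sample values are copied from the response shown above).

```python
import json

# Illustrative two-record excerpt of a raw model response, copied from
# the payload shown above; a real response carries one object per comment.
RAW_RESPONSE = """
[
  {"id": "ytc_UgziHB7KC7zZGsUhC2Z4AaABAg",
   "responsibility": "company", "reasoning": "virtue",
   "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwOoZz_VqmA6pIhWz54AaABAg",
   "responsibility": "ai_itself", "reasoning": "deontological",
   "policy": "ban", "emotion": "outrage"}
]
"""

# The four coded dimensions, as shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response and index the codings by comment ID."""
    records = json.loads(raw)
    by_id = {}
    for rec in records:
        # Skip malformed records missing an id or any coded dimension.
        if "id" not in rec or not all(d in rec for d in DIMENSIONS):
            continue
        by_id[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codes = parse_codes(RAW_RESPONSE)
print(codes["ytc_UgziHB7KC7zZGsUhC2Z4AaABAg"]["policy"])  # regulate
```

Indexing by ID up front makes each lookup a constant-time dictionary access, which is what a "look up by comment ID" box needs when the coded corpus grows large.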