Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The great equaliser is time, our feelings, behaviour and future events are all i…" (ytc_UgxeLem0Y…)
- "I think... We have to boycott this AI . Because our upcoming years will unsafe i…" (ytc_UgxuCOyQ4…)
- "Yeah, he tried to warn us, all the while creating X AI, the largest known AI net…" (ytc_UgzcwhO9e…)
- "Sasha Luccioni regards AI as dangerous—not because of speculative, far-off exist…" (ytc_UgyxYCyu1…)
- "Yea, I think so. Framing it as a dividend is the right way to think about it...a…" (ytc_UgzbHJlck…)
- "US has no choice but to win the AI race.. if we don’t, our enemies will win and …" (ytc_UgzIlv3jl…)
- "All true, but we don't have to accept that this is the future. There is consider…" (ytr_UgywNE5nz…)
- "My kids will not be this dumb over technology. This isn’t AI fault. Parents need…" (ytc_UgwCz5fJP…)
Comment

> Greetings CEO: I am a everyday person,so to be brief. So far as a minimum I think AI robots should NEVER be created with WHOLE bodies and should be programmed with the 10 commandments and or the (40-50 don't know the exact #) confessions. Thus far as of my knowledge.

Source: youtube · Topic: AI Governance · Posted: 2025-12-15T00:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgymfosdgaFcK2ln1CN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugwr1wysXpsopSZus2V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxXRbyIGMGyRGQ_FaN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz7LgdTpFacSL9TO1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwHm11tHBOphAHdAMV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy2oBWbYpPFQu8P89Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxQ77Uvpc_h2sK6Dtx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxxeAjxLA91YGZEDn94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwp93fVr_QGjV7YALF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyQypjx3dwkB2_WNg54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]
```
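The lookup-by-comment-ID view above can be reproduced offline from a raw response like this one. A minimal sketch in Python, assuming the raw LLM response is available as a JSON array of per-comment coding objects (the array literal below is a two-entry excerpt for illustration, not the tool's actual API):

```python
import json

# Excerpt of a raw LLM response: one coding object per comment ID.
raw_response = '''
[
  {"id": "ytc_Ugwp93fVr_QGjV7YALF4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgymfosdgaFcK2ln1CN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
'''

# Index the coding objects by comment ID for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if absent."""
    return codings.get(comment_id)

print(lookup("ytc_Ugwp93fVr_QGjV7YALF4AaABAg")["policy"])  # → regulate
```

Indexing by ID once up front keeps repeated lookups cheap, and `dict.get` returns `None` for IDs the model never coded rather than raising.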