Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Right below this short, YouTube serves me an ad titled "Build Trust in Your Use … (`ytc_UgztSeZgj…`)
- The idea of using ChatGPT to fool AI detectors is intriguing! I've been explorin… (`ytc_UgxXTirVq…`)
- The video is about side effects of deploying AI ..but the video itself is made u… (`ytc_Ugz0AxXvU…`)
- Funny thing is that the CEO's and other executives are probably the easiest peop… (`ytc_UgwGabklj…`)
- Regular search with AI is good. It's more efficient. And it will make short work… (`ytc_UgxAhv0CV…`)
- @blackprop9393 ohhh! I forgot we are in China. I didn't realize this wasn't the … (`ytr_Ugz8Y64-5…`)
- > then those same ignorant team politics morons will blame everyone and every… (`rdc_e2vrme1`)
- Wtf? Why does the youtube algorithm always send me to this stupid BS from bloomb… (`ytc_UgxSngMY8…`)
Comment

> AI must program not to harm any human , just only can act like cope to catch human no matter how sinner human are, because if you put negative or evil thought on AI it will be uncontrollable, the only way to survive AI is to give compassion programming , no matter what happen AI just a robot that serve human and cannot harm human

youtube · AI Governance · 2025-05-21T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzwODDE0SOHqAcapOl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"confusion"},
{"id":"ytc_UgxmQ6U3kFq3wog7sb94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzwFie7uW70uYQXc4J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzVXQ7bNBCy2F9daqF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUgdjkm_2kzNyuYuB4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugy7U8DsoKpjcuzUEeZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzaCyrGouZWWM-HK914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzFB66FwQ8bOpXDYeF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzK8PuaoIyBSUEY8O14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx0jG-HG6ylk3u5aGN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
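The "look up by comment ID" step can be sketched in a few lines: since each raw LLM response is a JSON array of coding records keyed by `id`, indexing the array into a dictionary gives constant-time lookup of any coded comment. This is a minimal sketch, assuming only the record schema shown above; the variable names are illustrative, not part of the actual tool.

```python
import json

# One record copied from the raw LLM response above; in practice the full
# array of ten records would be loaded from the batch output file.
raw_response = """
[
  {"id": "ytc_UgzFB66FwQ8bOpXDYeF4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
"""

# Index the coding records by comment ID for O(1) lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw_response)}

record = by_id["ytc_UgzFB66FwQ8bOpXDYeF4AaABAg"]
print(record["policy"])   # → regulate
print(record["emotion"])  # → approval
```

The same dictionary could back the lookup box at the top of the page: a miss (`by_id.get(comment_id)` returning `None`) simply means no coding record exists for that ID.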