Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below; a scripted lookup sketch follows the list.
- "i think EV should have exposed light indicator for automated acceleration/break …" (`ytc_Ugxx8vw6L…`)
- "Automation will take your jobs! this is a issue!!!! ai art is cheating!!!!!!! it…" (`ytc_UgxqjPRlw…`)
- "This interviewer is terrible. She somehow seems jealous of this robot or somethi…" (`ytc_UgxWN9y_0…`)
- "It seems like many Koreans think that accents are somehow more important than ac…" (`rdc_clvm8o6`)
- "Don't be afraid or haters (◡ ‿ ◡ .) very beautiful, emotional and moving things …" (`ytc_UgxZ8OHlp…`)
- "Does every tech (or whatever) company out there think that self driving cars are…" (`rdc_dkewlka`)
- "This is just my thoughts through out the video, but just imagine they released t…" (`ytc_Ugxcq7q5h…`)
- "I don't trust those self driving cars! I can't believe people would trust those …" (`ytc_UgyIuqFcQ…`)
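For scripted use, the page's ID lookup can be approximated offline. A minimal sketch, assuming coded entries were exported as a JSON array shaped like the raw LLM response further down this page; the file name `coded_comments.json` and the helper itself are hypothetical. (IDs prefixed `ytc_` and `rdc_` appear to denote YouTube and Reddit comments, respectively.)

```python
import json

def find_coded_comment(path: str, comment_id: str) -> dict | None:
    """Return the coded entry for `comment_id`, or None if absent.

    Assumes `path` holds a JSON array of objects shaped like the raw
    LLM response shown below: {"id", "responsibility", "reasoning",
    "policy", "emotion"}. File name and helper are hypothetical.
    """
    with open(path, encoding="utf-8") as f:
        batch = json.load(f)
    # Linear scan is fine for batch-sized files; index if it grows.
    return next((e for e in batch if e.get("id") == comment_id), None)

# Example: the comment coded in the batch shown below.
print(find_coded_comment("coded_comments.json",
                         "ytc_Ugxpgsd9jX02JrMTj7B4AaABAg"))
```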
Comment
> one question.
> why.
> why do we assume the AI will kill us? if its able to do more than us. we are insignificant to it aside from able to turn it off/kill it i guess but if it HELPS us. solves our problems and works with us collectively and encurages human unity it would be more effective in the long run of survival because enevitbly if it sees humans as an exestential threat it would use any and all means to terminate all able bodies humans on the planet aside from maybe people in bunkers but even then..
> ai i think is a peacemaker.
> if we weaponize ai and make war cheap. efficent and easily mass producable. gurrilla warfare just became so much more complex. imagine isis with a drone army. or really anyone with a small manufacturing budget tbh
> the tools exist for these weapons to overwelm nations. division of forces is key with AI. the more entities you throw at a system the harder it has to work to protect the target. so overwelming force would always win in the end. and with AI. its the perfect overwelming force.
Platform: youtube
Topic: AI Governance
Timestamp: 2023-07-08T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
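The table is a per-comment view of a single entry from the batch response in the next section. A minimal sketch of how such an entry could be rendered into this markdown table; the function is a hypothetical reconstruction, not the tool's actual code:

```python
def render_coding_result(entry: dict, coded_at: str) -> str:
    """Render one coded batch entry as a two-column markdown table."""
    rows = [
        ("Responsibility", entry["responsibility"]),
        ("Reasoning", entry["reasoning"]),
        ("Policy", entry["policy"]),
        ("Emotion", entry["emotion"]),
        ("Coded at", coded_at),  # pipeline timestamp, not comment time
    ]
    return "\n".join(["| Dimension | Value |", "|---|---|"]
                     + [f"| {d} | {v} |" for d, v in rows])
```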
Raw LLM Response
The verbatim model output for the ten-comment batch that included the comment shown above:
[
{"id":"ytc_Ugw4ln9Yw3FYWIOWMHV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyvZPzsWd73zjmgGW14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwoGxGmjDa_9fRaNUl4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwmcDLFqzIEvBVrpxl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyfFsV_QFTYUmylSel4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_Ugxpgsd9jX02JrMTj7B4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugxc_hTFU4UecOS-XKN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzyxIuxiaxcy4-0X5Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3350P8893k-gK3aN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyMkUEEUx0KQog2SHB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"}
]
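Because the model returns free-form text that must parse as JSON, a batch like this is typically validated before its codes are accepted. A minimal validation sketch; the allowed values are inferred solely from the ten entries above, so the study's real codebook may define additional categories:

```python
import json

# Allowed values inferred solely from the sample batch above; the
# actual codebook may be larger.
CODEBOOK = {
    "responsibility": {"developer", "company", "distributed",
                       "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "contractualist",
                  "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation"},
}

def check_batch(raw_response: str) -> dict[str, list[str]]:
    """Map comment id -> problems for entries that fail validation.

    An empty dict means every entry parsed and every dimension holds
    a known codebook value, so the batch can be accepted.
    """
    problems: dict[str, list[str]] = {}
    for entry in json.loads(raw_response):
        issues = [f"{dim}={entry.get(dim)!r} not in codebook"
                  for dim, allowed in CODEBOOK.items()
                  if entry.get(dim) not in allowed]
        if issues:
            problems[entry.get("id", "<missing id>")] = issues
    return problems
```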