Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "Yeah oddly the same ones who want to control AI, censor free speech and take awa…" (`ytr_UgzsMny5G…`)
- "Hi Jennifer, you got the right answer. Kudos. The contest is over and winners ha…" (`ytr_UgwwWFud6…`)
- "sora ai when you give it a prompt and it doesn’t have tens of thousands of stock…" (`ytc_UgyIZPA7I…`)
- "“Yep Everyone rack up on food and ammunition and robot invasion will be soon” my…" (`ytc_UgziAq5MS…`)
- "@DiogoVP7 what kind of comment is this?...omg. it's the AI working over time.…" (`ytr_UgytHpNjk…`)
- "Cant wait for the entire internet to just go down because AI breaks it, and we'r…" (`ytc_Ugyl7wVYr…`)
- "What? You do realise the only reason you found this video was because the algori…" (`ytr_Ugy6hku3y…`)
- "The solution is for the public to control the capital and the AIs (IE Communism …" (`ytc_UgwV7VlpS…`)
Comment
I think this is a question we will have to find an answer for soon since AI technology is improving constantly. I think we shouldn't give household appliances and power tools intelligence, but machines that look and act similar to people are probably coming sooner than we think. Of we teach a robot the laws we need to make sure they follow those laws like us and we need to treat them like people so they have a reason to follow those laws. I think the androids in the future should be able to experience pain as to keep them from doing something that harms them twice and to help enforce laws on them. They will also need to be protected similarly to how we are, if a robot is attacked on the street we should be able to look into their brains and see what they say when it happened and find the person who hurt them, otherwise the machines will not be as willing to follow our laws if they aren't guaranteed safety.
What do you think?
Source: youtube · Video: AI Moral Status · Posted: 2017-02-28T12:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgiKYV8v9JQYg3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjCkPtC30Z9mngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjmL9PTUYn27ngCoAEC","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UggSRmUXxp_mdngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgjhwwXIci4w4HgCoAEC","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugjo_2qmwrEy2XgCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugi2Yut5usR3QHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgjcyN9r0FMRwHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgjSd-41hV6ELXgCoAEC","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgiTz-lvV3YGIHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}
]
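The raw response above is a JSON array with one coding object per comment, each carrying the four dimensions shown in the result table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed to look up a single comment's coding — the `lookup` helper is hypothetical, and the payload here is trimmed to one row from the response above:

```python
import json

# One row taken from the raw LLM batch response shown above
# (the row that matches the "Coding Result" table).
raw_response = '''[
  {"id": "ytc_UgjmL9PTUYn27ngCoAEC",
   "responsibility": "distributed",
   "reasoning": "contractualist",
   "policy": "regulate",
   "emotion": "mixed"}
]'''

# The four coding dimensions used throughout this page.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def lookup(raw: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID from a raw batch response."""
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            # Keep only the four coding dimensions, dropping the ID.
            return {d: row[d] for d in DIMENSIONS}
    raise KeyError(comment_id)

coding = lookup(raw_response, "ytc_UgjmL9PTUYn27ngCoAEC")
print(coding["policy"])  # regulate
```

Parsing the whole array before searching keeps the helper simple; for very large batch responses a streaming parser would be preferable.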