Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Yeah … let’s make AI like aggressive monkeys … that will solve our problems ..…" (ytc_UgwtO52xv…)
- "HELP MY MOM WAS LOOKING THROUGH MY APPS AND SHE SAW AI CHAT SO SHE OPEND IT AND …" (ytc_UgzH8q8Tp…)
- "As a person who works with AI this looks so funny to me, a language model traine…" (ytc_UgwLFteWX…)
- "nothing wrong with it, its natural for people to build a connection with things …" (rdc_mlgo95h)
- "How is it that no one is talking about the loss of payroll taxes, FICA taxes, et…" (ytc_UgyFGJWUK…)
- "Well, I don’t have a formal education in the same subjects. Well, with the excep…" (ytc_UgyjDRmYH…)
- "I am an artist who is pro-AI. (To be clear, I draw and color my own art.) I agre…" (ytc_UgwtbGrRy…)
- "AI has the potential to turn into a totalitarian instrument for oppression. We a…" (ytc_UgwkQd-KU…)
Comment
Robot life differs a lot in human life. They can be repaired and upgraded easily and thus can pose serious threat. I seriously doubt will allow for development of advanced AI with strong synthetic muscular or piston driven robots or system. Bigger chances are that in the near future people will be genetically and externally modified with various special suits.
There's also another thing that we never put into consideration, robots just like us need source of power. They most likely will need much more energy than we would do and that just might create a war for resources. Bottom line is, don't create something you can't control and if you must create it make sure you also make a weapon that would easily destroy it.
youtube · AI Moral Status · 2017-03-08T23:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgihQiqZZ6JUtngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggCdskvXvNx-HgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgjSLngEyU8yhngCoAEC","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_Ugj4vS6AR6pp2HgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugh_lFikQJi-dHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UghiE6mj80ENY3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgjLYJhHPMsUEHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg9uEuu-2tWY3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UghDOVqB_cYCqXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UggEj0A2BFEXxXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
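A batch response like the one above is easy to turn into the per-comment lookup this page offers. The sketch below is a minimal, illustrative parser, not the tool's actual code: it indexes records by `id` and drops any record whose values fall outside the coding vocabulary. The `ALLOWED` sets contain only the values visible in this sample batch; the real codebook may define more.

```python
import json

# A small slice of the raw batch response shown above (two records).
raw_response = """
[
  {"id":"ytc_Ugh_lFikQJi-dHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UggEj0A2BFEXxXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
"""

# Value sets observed in this batch -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "ban", "liability", "industry_self"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def index_codings(raw: str) -> dict:
    """Parse a batch response and index records by comment ID,
    skipping any record with an out-of-vocabulary value."""
    indexed = {}
    for rec in json.loads(raw):
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            indexed[rec["id"]] = rec
    return indexed

codings = index_codings(raw_response)
print(codings["ytc_UggEj0A2BFEXxXgCoAEC"]["policy"])  # regulate
```

With the index in hand, the "look up by comment ID" view is a single dictionary access; malformed or off-vocabulary records are filtered out before they can reach the coding table.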