Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- When the AI CEOs have depleted all the water with their toys how do they expect … (ytc_UgwQcFmWj…)
- AI needs to read all of Ayn Rand's books. That will make it understand morality.… (ytc_UgwDj1Sr3…)
- I do not hate or like ai, from one point the ai is like a human artist but way m… (ytr_Ugxz_e2_L…)
- Truck drivers are the most likely to be replaced. People will still load and un… (ytc_UgjnzyeHp…)
- I once asked ai to make an exact copy of bohemian Rhapsody word for word, note f… (ytc_Ugxe10RcY…)
- Boss: "We need you to train an AI that looks, talks and acts like you." Two week… (ytc_UgxZ2yLNt…)
- it took me about 1 minute to figure the conversations were not AI [ the respons… (ytc_UgwvmTDiw…)
- I don't see any problem about it that ChatGPT would write books about real world… (ytc_Ugyt5Q8cw…)
Comment
There should be a Action Safety Measure since AI only use facial expressions & actions to decide. People that are abusive or psychopaths / mental disordered AI Owners would feed the AI clouds with Negative impacting data that another AI robot would use in a different home. Also humans are becoming less and less human, becoming less patient and becoming morally corrupt on world wide scale this could create a issue due to the fact that the AI is learning from their Owners & Cloud and not from a controlled and domestic program. More thought should be put into this.
youtube · AI Moral Status · 2021-08-23T08:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwrJtXjKCVEXJcAtUh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxxJ6U17OWS8tNjwv54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx8soE6Ykpr40ylGX54AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwYzC0kJM7dLYIoCe94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz7P8rH7-OTJx7-UN14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxiTwSt11XYZdoFBRd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgycyuZWYe4cMQmGVUJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx9im6_AXAP8Ggg-094AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAhy20KSPaoDc6nDx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyMbs81vcsa_q1bzrB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
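The raw response above is a JSON array with one object per comment, each carrying an `id` plus the four coding dimensions. A minimal sketch of how such an output could be validated before ingestion — the allowed label sets below are inferred from the values visible in this page and should be treated as an illustrative assumption, not the tool's authoritative codebook:

```python
import json

# Label vocabularies per dimension, inferred from the sample output above;
# an assumption for illustration, not the definitive coding scheme.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or not str(row.get("id", "")).startswith("yt"):
            continue  # drop rows without a recognizable comment ID
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

# Hypothetical example input: one valid row, one with an out-of-vocabulary label.
raw = json.dumps([
    {"id": "ytc_example1", "responsibility": "user", "reasoning": "deontological",
     "policy": "regulate", "emotion": "fear"},
    {"id": "ytc_example2", "responsibility": "robots", "reasoning": "unclear",
     "policy": "none", "emotion": "indifference"},
])
print(len(validate_codes(raw)))  # → 1
```

Rows that fail validation can then be queued for re-coding rather than silently entering the dataset.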