Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "AI makes mistakes and invents things to aren't there potentially to a very deep …" (ytc_UgzFoxlhG…)
- "If digital art is so op, maybe ai bros should just get into digital art! Cant be…" (ytc_UgwrXslx3…)
- "It's because of the way the internet works, these deepfakes are getting better a…" (ytr_UgyLeN1ps…)
- "It’s called robot broken leg so the thing is the motor in the ball in the head. …" (ytc_Ugyoiz-gM…)
- "Give it a better example instead of blaming things you create for the problems w…" (ytr_Ugy7ySzaW…)
- "2:49 Why wouldn’t AI be willing to do an interview in a maximum security prison?…" (ytc_Ugz6e1G93…)
- "People are already using chatGPT at work for troubleshooting and prototyping sof…" (rdc_j1yw0it)
- "It’s not going to happen. This is still the realm of science fiction and, even i…" (ytc_UgzGGq2mZ…)
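The lookup field above retrieves a coded comment by its ID. Since the samples display IDs truncated with "…", a prefix match is the natural lookup. A minimal sketch (the helper name and the example IDs are hypothetical, not from the tool itself):

```python
def find_by_id(entries, prefix):
    """Return all codings whose comment ID starts with the given prefix.

    Prefix matching is used because the sample list shows truncated IDs.
    """
    return [e for e in entries if e["id"].startswith(prefix)]

# Hypothetical entries in the shape of the raw LLM response shown below.
entries = [
    {"id": "ytc_example123", "emotion": "fear"},
    {"id": "rdc_example456", "emotion": "approval"},
]

print(find_by_id(entries, "ytc_"))  # matches only the first entry
```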
Comment
It seems to me that since the AI is based on the neural network modeled after the human brain, the way you make them so they won't want to hurt people is now figuring out how to make them feel like humans. Guilt, elation, depression, happiness, contentment, etc. These things aren't just emotions, they FEEL a certain way which, in my opinion is 100% of the human experience. So I have to disagree that they will have emotion. They can't unless they can have the feelings they are named for. Logic is not emotion. Being able to assess a situation and recognize the need to flee or whatever the case, that is not the same thing. I think humans have these emotions because of our bandwidth issue. It's how we feel that help us make decisions and if we had to rely only on logic, our ability to process data by way of exchange would be a hindrance. Our intuitiveness to pick up on things we don't otherwise know is part of the human experience that a computer can't ever have. Not unless we figure out how to make them feel, and then they will have understanding of values and moral code and a deep understanding of the human condition. This is how you ensure relative safety to the human race.
youtube · AI Governance · 2025-06-16T12:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyAiTOedrBS8WNTDGd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxqLOJHMpGxwaQbFtZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyY04CCzB8EuCV5_bF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxTvpJrg-VRAsZ6zpJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyf_7ygdN7dVADAw6B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxKQ8402Egi5bDRRfF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugz8_ThM8byOBjplkQR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwfkGfYHhmjfE6sTPd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzB8GwtjR1rjEJbOhR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwnAFwiAX2Nn3_VMhV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
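Raw batches like this one are only useful if each entry parses and uses valid codebook values. A minimal validation sketch, assuming the allowed values are exactly those observed in the output above (the real codebook may permit more):

```python
import json

# Dimension values observed in the raw LLM response above. This set is an
# assumption inferred from the sample, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "resignation"},
}

def validate_batch(raw):
    """Parse a raw LLM response string and keep only well-formed codings."""
    valid = []
    for e in json.loads(raw):
        if not isinstance(e, dict) or "id" not in e:
            continue  # drop entries without a comment ID
        if all(e.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(e)
    return valid

good = '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"}]'
print(len(validate_batch(good)))  # 1
```

Entries with unknown dimension values are silently dropped here; a production pipeline would more likely log them for re-coding.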