Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Teaching something right and wrong is not the same as having an understanding about it. It all comes down to knowing, don't do bad things to someone, unless you wanna be treated the same way as well and know the feeling of hurt and pain to avoid doing it to others. Then there is the idea of consequence. If there is no pain to be felt, if there is no mortal consequence to be given to an AI, what is to stop it from deducing an extreme level of punishment towards a human it may consider stepping on a bug accidentally the same as killing another human purposely? An AI may deduce the idea of "Those who take life, must forfeit their own." This is where common sense comes into play for us humans. This is why we have courts of law. If you have no experience feeling pain, how can you know how much you should either give or make sure you don't give to someone who can experience it?
Source: youtube · Video: Viral AI Reaction · Posted: 2023-05-26T09:5… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgwrcUgddo0HTdnhxqx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy-lx21pjTXemXca0R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzIFXbXspUDb5QYwG94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyaT6S9zc2xvUJgMrB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxcjNHA4jDWzN4qzJd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugzv2qjm3Hl4SLsEVcd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"unclear"},
{"id":"ytc_UgyubAXcGb9muAxlDIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwiHGqtJVDY_hA53l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgweZHAE46_taLdHliN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugxm9O3JeEAedJ8iWN94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"disapproval"}]
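The raw response above is a JSON array of per-comment codings, so recovering the table shown under "Coding Result" is a matter of parsing the array and indexing it by comment ID. A minimal sketch, assuming the schema visible in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the `index_codes` helper and the shortened sample payload are illustrative, not part of the tool:

```python
import json

# One entry copied from the raw response above, wrapped as a JSON array.
raw_response = """[
  {"id": "ytc_Ugzv2qjm3Hl4SLsEVcd4AaABAg",
   "responsibility": "ai_itself",
   "reasoning": "deontological",
   "policy": "none",
   "emotion": "unclear"}
]"""

def index_codes(raw: str) -> dict:
    """Parse the model's JSON array and index each coding by comment ID."""
    return {item["id"]: item for item in json.loads(raw)}

codes = index_codes(raw_response)
row = codes["ytc_Ugzv2qjm3Hl4SLsEVcd4AaABAg"]
print(row["reasoning"])  # deontological, matching the Coding Result table
```

In practice a real response may be truncated or contain trailing text, so a production loader would wrap `json.loads` in error handling; this sketch assumes a clean array.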