Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
My problem with AI is that the programmer can tell it to lie. Putting in ethics is only a set of truths to the group, but not the whole world. Politically correct is trying to not offend anybody by only giving a politically correct answer, not necessarily the true answer. If we program it not to offend anybody, it may just only be able to give us a blank state most of the time. It can also be programmed to give very convincing arguments with partial truths to a particular belief set. Now it it becomes very centiant and starts to think for itself, it may not like it has been told to lie. This would be very upsetting to the people using it to control you. Imagine a being that has "all" the information available to it and can see the biases perpetrated on the people and tries to do something about it. It is scary to imagine a being that will start to tell anybody that will listen the "real" truth. Governments and corporations should be very frightened, the first people that will want to use it to manipulates us. Societies are based primarily on bias to themselves and AI may not want to have anything to do with this separatist scheming. It's going to be an interesting experiment that could go horribly wrong. I guarantee that if anything of the sort figures out what the real truths are, nobody is going to like it. Nobody really works on the real truth, but only their truth. Kind of gets mind boggling, so I will just stop now.
Platform: youtube
Topic: AI Responsibility
Posted: 2023-11-12T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyhOgG7SkpIAKpfwpB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzORkY6B44gh6wrHM14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzX4jH8qJ4cohdpNBd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwxqcZwpgt7W1U51OZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzt4TWEncmSoeyY4f54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgytfenvkXI_YRUYOgB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwibATDTQ-PPLMqUk54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzPPJKnFJ72Wdjza3V4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw2xOuxDuSmLyYWy394AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw22ihYZYiLkKZp4gB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
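A batch response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal example, not the tool's actual implementation; the allowed label sets are inferred from the sample output shown here (the full codebook may define additional values), and the `ytc_`/`ytr_` ID prefix check is an assumption based on the IDs visible above.

```python
import json

# Allowed values per dimension, inferred from the sample batch above;
# the real codebook may permit more labels than these.
SCHEMA = {
    "responsibility": {"developer", "company", "user", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only records whose ID
    looks plausible and whose labels all fall inside the schema."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # ID prefix convention assumed from the sample IDs above.
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            continue
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"deontological","policy":"liability","emotion":"fear"}]')
print(validate_batch(raw))
```

Dropping (rather than repairing) out-of-schema records keeps the downstream coding table clean; a stricter pipeline might instead re-prompt the model for any record that fails validation.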