Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)

- "It was just a matter of time until the tool that AI is would be used for evil, j…" (ytc_UgzHdZKmw…)
- "Meanwhile my buddy Josh (no college) is a small airplane mechanic and raised his…" (ytc_Ugy1F8vHG…)
- "I think all youtubers should have to disclose their educational and employment b…" (ytc_UgzJiBjU1…)
- "Why do we need AI exactly? We've been fine for thousands of years and now we n…" (ytc_Ugx54ElM-…)
- "We havent made any AI yet so to have them become conscious.. All we have is powe…" (ytc_Ugxnhd98k…)
- "Holy Christ😳 Using AI to steal elections. Did he just say it was easy to do as s…" (ytc_UgyQ_p_ns…)
- ">Fun fact: I had to go through over 100 saved tabs to collate all of these an…" (rdc_jf7cgsk)
- "There is an inherent contradiction in Tesla's Robotaxi model. One hand EVs are s…" (ytc_Ugxo-ZzvB…)
Comment
Negative prompts are ignored quite often, in every model. Also most models are so goal oriented they are willing to completely defeat the purpose to achieve a 'positive' result. It's very opportunistic. The 'slots-machine' outcome makes it unpredictable and inconsistent. Allowing such models to gain any position of putting people in jeopardy, or run a company, is just irresponsible. Being fair, by the rules, and following the right lawful and ethical path is the task of every responsible parent, if not, kids will follow the path of least resistance getting what they want, learning from growing up over years. It seems AI is operating in the same way, but is instant, and can't be expected to always follow your prompt ever. Having the AI abandon goals when things become unethical is just up for interpretation, and can be ignored like any prompt.
Source: youtube · Project: Cross-Cultural · Posted: 2025-10-12T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwVRLS6bGqzBH-bgAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyqv-ruhInZey3kgVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzw_iGM65UFjjoGEnR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxbhtNWtCQ4ViAePS54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzWNWiMi25x_JoTWUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw1yvSOaSucafNjSVx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyVrxs9jwq5qox79jd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugyt06sOYVUkpOuPRJJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzlxKO0OacH_P-zagF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxFJ4V86_mwiBkxZtp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
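The raw response above can be indexed by comment ID to recover a single coding, which is what the "look up by comment ID" view does. A minimal sketch of that lookup is shown below; the `ALLOWED` value sets are inferred only from the values visible in this sample response, not from any official codebook, and `index_codings` is a hypothetical helper, not part of this tool.

```python
import json

# Assumed dimension value sets, inferred from the sample response above.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "government", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed", "resignation"},
}

def index_codings(raw_json: str) -> dict:
    """Parse a raw LLM coding response and return a comment-ID -> record dict,
    skipping any record whose values fall outside the assumed allowed sets."""
    out = {}
    for rec in json.loads(raw_json):
        cid = rec.get("id")
        if cid and all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            out[cid] = rec
    return out

# Two records copied verbatim from the sample response:
raw = '''[
  {"id":"ytc_UgxbhtNWtCQ4ViAePS54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzWNWiMi25x_JoTWUV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''

codings = index_codings(raw)
print(codings["ytc_UgxbhtNWtCQ4ViAePS54AaABAg"]["emotion"])  # -> outrage
```

Validating against the allowed sets at parse time catches the common failure mode where the model invents an off-schema label, rather than letting it leak into the coded table.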