Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Uploading your brain or mind to a computer is bullsh.. We are extremely far away…" (ytc_Ugx-8Vhxr…)
- "Yep, whenever I ask Alexa something, I always tell her Thank you. You never know…" (ytc_UgwKs_Usx…)
- "You got me f-f-f-FUCKED UP giving a fully loaded fully automatic weapon to a god…" (ytc_Ugz3OzMC9…)
- "> I believe we miss-understand AI based on the fears of what movie producer a…" (rdc_icfzij7)
- "1. Are you humans being watched? Answer: Yes Explanation: Yes, in the general se…" (ytc_Ugyh41vku…)
- "I keep saying it since day one of AI , THE VERY BEST WILL STILL HAVE A JOB , the…" (ytc_Ugw4V8R0p…)
- "I am black and I build models. The theory is: bad data in, bad data out. Whateve…" (ytc_UgwVOes8n…)
- "Never give AI free will; especially when they can cause harm to humans. Speciali…" (ytc_UgyAxETvj…)
Comment
While AI programming is improving, it is, in all its current forms, a parrot, talking back at us. You might as well be asking Polly if he wants a cracker. The answer is yes.
Far too many examples to count, that are used, not here exclusively mind you, are playing upon the assumed fears and predispositions created from fiction and history. These "bots" are literally designed the same way programs are designed, they're doing what they're told. What they're told, is to take information WE make, and parrot it back at us.
If enough people on the internet in a given period of time said "the sky has changed from blue to red," programs like ChatGPT, would follow suit, and even time stamp when this mass user assumption occurred. It isn't skynet. It's people.
What you are seeing is a mirror being put up to the face of humanity, and if you're scared of *that* then welcome to the club. People can be scary, and there are over 8 billion of us.
youtube · AI Governance · 2023-07-07T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
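The coding result table is simply one coded record, rendered as dimension/value rows plus the coding timestamp. A minimal sketch of that rendering in Python; the `render_coding_table` helper is illustrative (not part of the tool), and the record shape matches the raw LLM response below:

```python
# Hypothetical coded record, matching the object shape in the raw LLM response.
record = {
    "id": "ytc_Ugx-A5iMC4DGrInsu4B4AaABAg",
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "industry_self",
    "emotion": "mixed",
}

def render_coding_table(rec: dict, coded_at: str) -> str:
    """Render one coded record as a markdown Dimension/Value table."""
    rows = [
        ("Responsibility", rec["responsibility"]),
        ("Reasoning", rec["reasoning"]),
        ("Policy", rec["policy"]),
        ("Emotion", rec["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {val} |" for dim, val in rows]
    return "\n".join(lines)

print(render_coding_table(record, "2026-04-26T23:09:12.988011"))
```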
Raw LLM Response
[
{"id":"ytc_UgzCW0zGeaIwDcuOBLt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEi5KoFIQphcm3toZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgykYmK_kIuki2zvQrZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgySLWphhOU8743AQo14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzdvAoPf7MuN-4Rkrl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-A5iMC4DGrInsu4B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyLtkPrpIorMZyHqr54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugzya7N-3fVx75W2mP14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwScvCU8AUy7HNfaPl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_Ugyge8SvjZ0WXpeW3ER4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]
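Because the raw response is a JSON array with one object per coded comment, the "look up by comment ID" view above reduces to parsing the array and indexing it by `id`. A minimal sketch, assuming the response parses cleanly; the `index_by_id` helper and the truncated two-record sample are illustrative, not part of the tool:

```python
import json

# Two records copied from the raw response above, standing in for the full array.
raw_response = """
[
  {"id": "ytc_UgzCW0zGeaIwDcuOBLt4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx-A5iMC4DGrInsu4B4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "industry_self", "emotion": "mixed"}
]
"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse a raw coding response and index the records by comment ID.

    Raises ValueError if the response is not a JSON array of objects with
    exactly the expected keys, so malformed model output fails loudly
    instead of being silently mis-indexed.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    indexed = {}
    for rec in records:
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"unexpected keys in record: {sorted(rec)}")
        indexed[rec["id"]] = rec
    return indexed

coded = index_by_id(raw_response)
print(coded["ytc_Ugx-A5iMC4DGrInsu4B4AaABAg"]["policy"])  # industry_self
```

Strict key checking is a deliberate choice here: LLM output occasionally drifts from the requested schema, and failing at parse time keeps bad codes out of downstream analysis.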