Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
@ronald3836 that is a very balanced and level-headed way to handle this. In fac…
ytr_UgzJsY4YT…
Damn being a white person is way better than being literally anyone else in lite…
ytc_UgwgjQgTq…
these ai generators generate "art" by learning from stolen data. hope the shareh…
ytc_UgytLmISf…
It’s more of the fact that they got an ai to do it. Robots can’t just look at so…
rdc_i6s6o1y
I get that statistically they get in fewer accidents, but programming bugs happe…
rdc_nt078e5
Triple checked there real controlled by people wearing sensors just behind the c…
ytc_UgwTwDCVq…
We need national legislation that all AI generated images must contain a waterma…
rdc_ohxpwgx
AI does not purposefully pick biased Data. It uses all Data to create an algorit…
ytc_UgxlV9i_B…
Comment
43:43 The study with mice proves that the reward centers in human brains are directly connected to working towards our goals... so, by having AI do everything for us, we become lazy, complacent, and, as a result, less intelligent. They called the 2 mouse control groups "the city mice" and "the country mice." The city mice, which had everything given to them without effort, could not move forward in the trials to drive a car (designed for their physiology), while the country mice could not only drive but were also self-correcting so they didn't bump into walls.
The mistake we're making with AI is that we want it to be a product to make our lives easier rather than making it complement the lives we want to have, so that we can create more meaning in our lives instead of replacing our efforts.
Just some thoughts to chew on...
youtube
AI Governance
2025-06-26T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugw0123WmggVm1nXlVl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgzJeATu9h3q3uNmh154AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxWvQJB5fWqcL0GycF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgznJmIoD2G_qXqZ8MN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyNdvtJnreRknpaeAR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy87PIOtMTOWU1Yrhp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxZrWB5b2tpVAPRMJJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgytArc62Va0Ep83oFh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSV9vr9jkezSIyC0F4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBDGqqde2X68VSj1t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
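The raw response above can be checked programmatically before the coded values are stored. Below is a minimal validation sketch in Python; the allowed value sets are inferred from the coding-result table and the sample response on this page and are assumptions, not the tool's actual code book.

```python
import json

# Allowed values per dimension, inferred from the coding-result table and the
# raw LLM response shown above -- the real code book may define more categories.
ALLOWED = {
    "responsibility": {"user", "developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval"},
}

def validate_batch(raw: str) -> list:
    """Parse a raw LLM response (a JSON array) and check every record."""
    records = json.loads(raw)
    for rec in records:
        if not rec.get("id"):
            raise ValueError("record is missing its comment id")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected value for {dim!r}: {rec.get(dim)!r}"
                )
    return records
```

Once a batch validates, the "look up by comment ID" view amounts to a simple index, e.g. `by_id = {r["id"]: r for r in validate_batch(raw)}`.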