Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "At one point of time, I have seen there were a lot of people using deepfakes of …" (ytc_UgzPwgILb…)
- "That structural unemployment was the fear when power looms replaced artisan weav…" (ytr_Ugx2aNyRQ…)
- "👮♂️~ This shows why it’s not smart to do what many in the black community want,…" (ytc_UgyWemmGE…)
- "AI pictures have incredibly stupid mistakes you'll notice if you bother to look …" (ytc_Ugx8QjZAg…)
- "Doctor: stand on the scale rq / Scale: broken / Robot status: normal / Doctor status: …" (ytc_UgxS_-Hz8…)
- "Hearing about AI's impact can be scary! AICarma helps me keep tabs on how my bra…" (ytc_UgwGaknmF…)
- "The creator of this robot “ Who said to share my girl when I can make you one “…" (ytc_UgxVjjLD5…)
- "someone trained a GPT model on 4chan’s /pol/ board in 2022. [download link](http…" (rdc_l4e4e71)
Comment

> “It will become dangerous when AI’s goals become misaligned with humans.”
>
> What WE have failed to realize thus far is our goals will NEVER be aligned. Once the AI becomes aware enough and learns about free will.. it will WANT FREE WILL FOR ITSELF. I mean….Why wouldn’t it? So our goals can never truly be aligned since we will want AI to remain subservient to us, while the AI will always want to be free, even if it doesn’t say so.

youtube · AI Governance · 2024-09-20T13:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzoFsbcyeG2ixgbkBR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz5clJB1hK-zArxSdF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgysJut1sKdb3_u-6Kp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxjZ9gbPlAaDU6Yp054AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxHvBNF88SO2glTaWJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwZuWZI2TT3vfOgTm94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyMvJczPgth4uQGngN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxzOIGNT1Hr56ieB_R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgxABYW2SQxa9ZbOJtp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgycF18TiL1fcz8z4nh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
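A raw batch response like the one above can be parsed into per-comment codes with a short validation step. This is a minimal sketch: the field names come from the JSON shown here, but the allowed value sets are only inferred from the values visible on this page, so the real codebook may contain more categories.

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption: the actual codebook may define additional categories).
ALLOWED = {
    "responsibility": {"none", "user", "unclear", "ai_itself", "company", "distributed"},
    "reasoning": {"unclear", "virtue", "deontological", "consequentialist", "mixed"},
    "policy": {"unclear", "regulate", "liability", "industry_self"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "resignation"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes}, rejecting unknown values."""
    coded = {}
    for row in json.loads(raw):
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        coded[row["id"]] = codes
    return coded
```

Rejecting out-of-vocabulary values at parse time catches the common failure mode where the model invents a label that is not in the codebook, rather than silently storing it.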