Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a specific comment by its ID.
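As a minimal sketch of such a lookup, assuming the coded records live in a hypothetical JSON Lines file `coded_comments.jsonl` (one record per comment, shaped like the entries in the raw response at the bottom of this page):

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl") -> dict | None:
    """Return the coded record for a comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record.get("id") == comment_id:
                return record
    return None

# Example: fetch the full record for the first entry in the raw response below.
print(lookup_comment("ytc_UgzekckGXkaCTgbTs614AaABAg"))
```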
Random samples — click to inspect:

- To use AI without sacrificing your intelligence is to …. avoid using it and turn… (ytc_UgxyzTzQG…)
- Biggest problem for a driverless car would be the GPS system being used. I've ha… (ytc_UgymROTrV…)
- So now the left wants us to point out the "social injustice" of our robots and f… (ytc_UggIjizx_…)
- What a scumbag. Entering AI art in a competition and winning it taking a 750 pri… (ytc_UgzrRR4Bq…)
- People complaining that these workers lost their job, and mad at amazon. But you… (ytc_Ugyjcqm1i…)
- I appreciate you backing up artist, but I have to push back on the thought that … (ytc_UgwclSeMF…)
- "People don't need to work..." in certain industries -- white collar jobs, espec… (ytc_UgwStHNNq…)
- I mean all of that is irrelevant. If someone wants to make a bomb or do anything… (ytc_UgxdiqLkW…)
Comment
The problem with Generative AI is that it has already hit its hard wall. That hard wall was ChatGPT-4. AGI and super-AGI are not possible because we have neither enough compute power nor enough electricity-generation capacity on the planet to do it. Sure, if you spend the next 20 years building nuclear reactors everywhere it's possible, then maybe you'll have enough electrical power to feed the datacenters for that AGI experiment. Even then, there's no certainty.
What we know about larger and larger models (which still don't even begin to approach human-level intelligence) is that the bigger they get, the more power they require (power requirements do not scale favorably) and the more they hallucinate; with GPT-5, it even loses track of the prompt topic mid-response. In a sense, it's going senile.
youtube · AI Governance · 2025-09-04T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
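Read alongside the raw response below, the table suggests that each coded comment reduces to a flat record: four coded dimensions plus a timestamp. A sketch of that shape as a Python dataclass; the class name and field types are illustrative assumptions, not the pipeline's actual schema:

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment; fields mirror the dimensions in the table above."""
    id: str              # full comment ID, e.g. "ytc_UgzekckGXkaCTgbTs614AaABAg"
    responsibility: str  # e.g. "none", "distributed", "company", "unclear"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "none", "regulate", "unclear"
    emotion: str         # e.g. "indifference", "fear", "outrage", "mixed"
    coded_at: str        # ISO 8601 timestamp, e.g. "2026-04-26T23:09:12.988011"
```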
Raw LLM Response
```json
[
{"id":"ytc_UgzekckGXkaCTgbTs614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwF_o-H6m_3GlmsWC94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxP33Wd04yyLo9RkIR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGwAiiQDP_N06QDyJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzqu2UrO6YsSsFqLzx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwub2tQ_qKi3W5cNlp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgybMqmyp1hpWM2Pjpp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwNAuW9c7iBS0TCxp14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxNh7E89snZlLgPsyZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwufUzn05FUi1hCPRR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
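A downstream consumer might parse this array and sanity-check each record before accepting it. A hedged sketch, with allowed value sets inferred only from the responses visible on this page (the real codebook may include more codes):

```python
import json

# Value sets inferred from responses on this page; the actual codebook may be larger.
ALLOWED = {
    "responsibility": {"none", "distributed", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "outrage", "approval", "resignation", "mixed"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {field}={rec.get(field)!r}")
    return records
```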