Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Myself, and multiple women I'm in a support group with, are experiencing our abu…" (ytc_UgyKb3L8z…)
- "Enrico Coiera discusses the advances of AI, how it has already led to changes in…" (ytc_UgxVOgAji…)
- "Figures. This is the same company whose shopping algorithm decided that because …" (rdc_e7jsmo2)
- "So it's illegal to not have someone in the driver seat of a driverless car…" (ytc_UgzPbJAEE…)
- "It doesnt matter what we program into it. If its self aware, its self aware. I…" (ytc_UgzDAY5if…)
- "Siri is the Ai most likely to kill the most amount of people. Siri is sadisticly…" (ytc_UgzidnHxo…)
- "AI had always scared the crap out of me. All of this technology scares me. Going…" (ytc_UgxdB0QDV…)
- "Remember, this will be a "thinking" system. Imagine if "A military group" (Gover…" (ytr_UgzUvcNmW…)
Comment
But how!? What math other than the probalistic word-calculater that we have now will result in RSI to come up with super AGI!? These CEOs and researchers either talk from strong economic incentives or from a somewhat disconnected promise of scientific breakthrough. I agree that poorly implemented AI's can/will have harmful results and we should definitely have strong regulatory practices in place, but implementation here is key. Though being told otherwise by the owners of these agents, they are still unreliable, without good use-cases (less than 5% according to a new MIT report) and just frankly shit, and thus won't be implemented wide enough. These machines will not come up with new stuff when let loose on its own research, it will just come up with other versions of the data its trained on. Current LLMs and AIs cannot make up new stuff. They don't have imagination. They are just regurgitating whatever it has in its n-dimensions. The AGI-dream will fizzle out when the money does and investors are already getting nervous. Much sci-fi tech is just plot devices.
youtube · AI Governance · 2025-08-26T20:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgwLIEkqV4EGx3tkYPR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzRAZrW2cEydr0kGAl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgxtBwZxuWDysCfEKzh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwQWpMxTRk_XmCJBWF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugypv6UqLxzw-qvYtfh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
```
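The raw response is a JSON array with one coding object per comment ID, which is what makes the "look up by comment ID" view above possible. A minimal sketch of how such a batch response could be parsed and indexed by ID (the `coding_for` helper and variable names are illustrative, not part of the actual tool):

```python
import json

# A raw LLM batch response: a JSON array of coding objects, one per comment ID
# (two entries from the response above, shortened for illustration).
raw_response = """[
  {"id": "ytc_UgwLIEkqV4EGx3tkYPR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzRAZrW2cEydr0kGAl4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "disapproval"}
]"""

def coding_for(response_text, comment_id):
    """Parse a batch response and return the coding dict for one comment ID,
    or None if the model did not return an entry for that ID."""
    codings = json.loads(response_text)
    by_id = {entry["id"]: entry for entry in codings}
    return by_id.get(comment_id)

coding = coding_for(raw_response, "ytc_UgwLIEkqV4EGx3tkYPR4AaABAg")
print(coding["emotion"])  # → fear
```

Indexing by ID rather than by position also surfaces a common failure mode of batch coding: if the model drops or duplicates an entry, the missing ID simply returns `None` instead of silently misaligning all subsequent comments.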