Raw LLM Responses
Inspect the exact model output behind any coded comment, or look up a coding directly by comment ID.
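A minimal sketch of such a lookup, assuming the codings are exported as a single JSON array of records keyed by `id`; the file name `codings.json` is hypothetical:

```python
import json

def lookup_coding(comment_id: str, path: str = "codings.json") -> dict | None:
    """Return the coded record for one comment ID, or None if it is absent."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # assumed: a JSON array of coding records
    return next((r for r in records if r.get("id") == comment_id), None)

# Example ID taken from the raw response shown further down this page.
print(lookup_coding("ytc_UgwrPMrVlapQ-jXZUbt4AaABAg"))
```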
Random samples (click any to inspect):

- "the same reason why people like MMOs. if you are in a zone with others and even …" (ytc_Ugw2lc6j6…)
- "At this point I imagine 2030 as ZOMBIELAND / There are no jobs. Those who have wi…" (ytr_UgzTvS1pz…)
- "Imagine being so fucking entitled, and infected with a misplaced sense of self-i…" (rdc_fwh9lx6)
- "I think this is a question we will have to find an answer for soon since AI tech…" (ytc_UgjmL9PTU…)
- "Human and robot, - they are both becoming like each other, but which one will ul…" (ytc_UgxubKUyT…)
- "In TV/Radios/Media they say that AI might not be recognized as more intelligence…" (ytc_Ugx3sxSMy…)
- "The real scifi moment is when the video pretends AI can or will ever actually pe…" (ytc_UgxKVOap0…)
- "No, we do not have unlimited resources, but a fully-automated economy will lead …" (rdc_d3xsvnh)
Comment
i dont think it will be a matter of AI just being smarter than a human. it will be a matter of the AI being more resourceful and significantly faster at processing than a human. an AI can look up, corroborate, understand, and execute a plan based on said information, all significantly before a human even thinks to reach for their phone with the intent of googling something.
Edit: and you dont even need a superai to do that. narrow ai can already do that. i think superai will really come into greater more widely applicable presesnce when it learns how to predict accurately future events, years, decades, centuries in advance. if a superai said that a meteor that will wipe out humans, and by extention AI, will hit the earth in 237 years (calculated to the millisecond) it can prepare and execute a plan to prevent this from happening. or at the very least prevent itself (and possibly a few safe human companions if we're lucky) from succumbing to this fate. im sure by the time something like that happens, off-world options are more realistic.
youtube · AI Governance · 2025-10-03T11:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
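For illustration only (this is not the project's actual schema), a coded result can be carried in a small dataclass whose fields mirror the dimensions above; the example values in the comments are just those visible on this page, not a complete code book:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Coding:
    """One coded comment; fields mirror the dimensions in the table above."""
    comment_id: str
    responsibility: str   # e.g. "ai_itself", "government", "distributed", "none", "unclear"
    reasoning: str        # e.g. "consequentialist", "deontological", "mixed", "unclear"
    policy: str           # e.g. "regulate", "none", "unclear"
    emotion: str          # e.g. "fear", "approval", "indifference", "mixed"
    coded_at: Optional[datetime] = None
```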
Raw LLM Response
[
{"id":"ytc_Ugxg4ttJY8Cc5JNtJhx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzjnT6mem9MZ_u2syp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzyBk64dnJFPw4LLZd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwrPMrVlapQ-jXZUbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwtbzpaZwIwAo2I0rV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzjELIuVGDRxV5wCfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0-3Tu2QoMqG5uYVl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzdOatNW347OsCtzGp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzeIrAIXiuE8Xiaf0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4fuNqrqakZB3WtZd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
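A hedged sketch of how such a response might be consumed, assuming the model returns a well-formed JSON array like the one above (real outputs may need extra cleanup before `json.loads`):

```python
import json

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of codings) and index it by comment ID.

    Records missing any expected dimension are skipped and reported, so malformed
    model output is visible rather than silently merged into the results.
    """
    by_id: dict[str, dict] = {}
    for record in json.loads(raw):
        missing = EXPECTED_KEYS - record.keys()
        if missing:
            print(f"skipping {record.get('id', '<no id>')}: missing {sorted(missing)}")
            continue
        by_id[record["id"]] = record
    return by_id
```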