Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> What does general AI want, what would be its goal and motivation. I understand why it might see us as a potential threat or pest, but what exactly would it try to achieve past pacifying us. Would it be something relatable, like just to survive and prolifirate or something completely alien to us? And why is every scenario discussed always doom for humanity. Neanderthals did not reach the complexity or mastery that humanity did before going extinct, but they are still sort of here, inside the DNA of most of us and whatever skills they taught our ancestors. I just hope that if humans go out, we can work well enough with AI that humanity continues on with our creation.

youtube · AI Governance · 2023-07-09T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz1vZ476Sm1Nff4HHl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZbEnYRITSPEzzfNp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz_7ScVMwd1x5qp59d4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxjOpITEQE4VwlOsQN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzHY-799Ce6_s10wkl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwOG9xbtBYDm17OOul4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugx-hTWi8px3of0Ku3h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxJI-0O2P_As1352NJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyQH_QjeTXhP8bGH_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5409cgTfOCCBcEjh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
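Because the raw model output is a JSON array with one object per comment, inspecting the coding for a specific comment reduces to parsing the array and indexing by `id`. A minimal sketch (the helper name `index_by_id` is hypothetical, and the two-row string below is a shortened copy of the response above):

```python
import json

# Shortened copy of the raw LLM response: a JSON array of per-comment codings.
raw = """[
  {"id":"ytc_Ugz1vZ476Sm1Nff4HHl4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyQH_QjeTXhP8bGH_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

def index_by_id(raw_json: str) -> dict:
    """Parse a raw response and key each coding dict by its comment id."""
    return {row["id"]: row for row in json.loads(raw_json)}

codings = index_by_id(raw)
# Look up one comment's coded dimensions by its id.
print(codings["ytc_UgyQH_QjeTXhP8bGH_R4AaABAg"]["policy"])  # -> ban
```

In practice a parser like this would also want to validate that each dimension's value falls in the codebook's allowed set before trusting the output.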