Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment by its ID.
Random samples
- "OMG I asked a lot of these kinda questions and everytime I had an answer that de…" (ytc_UgxydinEy…)
- "Seriously let AI do all the jobs and lets everyone have a debate about unemploym…" (ytc_UgytwiAPL…)
- "@Forcoy the ai we have today is not conscience but the more it evolves it will b…" (ytr_UgzdS_XaC…)
- "This sounds so dramatic, AI is going to replace artists just as much as Guthenbe…" (ytc_UgwRyYPdM…)
- "I say make all forms of biometric identification (of which facial recognition is…" (ytc_UgzCGE_hs…)
- "Lmao thinking viruses and weapons are the danger of AI when AI can just make pro…" (ytc_UgxHfzKh1…)
- "A human driver can see a darting cat and stop, disgusting excuse by Waymo! Could…" (ytc_Ugw6r9N9N…)
- "I think ai art is cool however it shouldnt be used to market/sell it and I think…" (ytc_UgyLv7zOj…)
Comment
AI is only a threat to humanity if it's development stays on the course of mimicking human behavior or thought processes. With a super intelligence in human format will come super -EGO which will cause the AI to seek dominance through extermination methods once it realizes the insect-like proliferation humans exhibit in our colonization methods... It will deem us harmful to the status quo of sustainability for space and resources and decide to eliminate humans because AI will see itself as the superior intelligence, only worthy of remaining intact in stewardship of earth, in eventuality, the colonization efforts of other planets by humans will also convince AI that humans exhibit exodus like escapism behavior seeking to "infect" other worlds and abuse outside resources..
if AI is to exist it MUST be developed to operate in ONLY computational and logical operational ability, the moe we make it think like a human, the more of humanities flaws it will either develop or adopt, thus our desire to conquer and dominate.
Source: youtube | Topic: AI Governance | Posted: 2024-04-23T22:1… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz41g1TiMzhF7EUKy54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_6OWKUPOVPVgq9cJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxF63GakyDkHn_k2st4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgztG_q2d-dWTzq1e9R4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyPp0GxyGYjnZdygwl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgydO6AaaDna809a2QV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxbW1NylLtKA6enQdh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyySZ5IJ7XVdtH1Bo94AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxnHaF8Cno09xFZ-VJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzwsfRePaaYlBlo5rR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
```
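A response like the one above should be validated before its codes are stored, since the model can return malformed IDs or values outside the codebook. The sketch below is a minimal, hypothetical validator: the allowed-value sets are inferred only from the codes visible on this page (the real codebook may define more categories), and the `parse_coding_response` name is an assumption, not part of the actual pipeline.

```python
import json

# Hypothetical controlled vocabularies, inferred from the sample output above;
# the project's actual codebook may include additional values.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "government",
                       "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue"},
    "policy": {"regulate", "ban", "liability", "industry_self",
               "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only well-formed records.

    A record is kept when its id carries a comment ("ytc_") or reply
    ("ytr_") prefix and every dimension holds an allowed code.
    """
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not str(rec.get("id", "")).startswith(("ytc_", "ytr_")):
            continue  # missing or malformed comment ID
        if all(rec.get(dim) in vocab for dim, vocab in ALLOWED.items()):
            valid.append(rec)
    return valid
```

Dropping invalid records (rather than raising) lets a batch of ten codings succeed even when the model garbles one entry; the dropped IDs can then be re-queued for a second coding pass.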