Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_Ugw-gEflU… — "Never. We will never know. Once an AI becomes conscious it will know humans fear…"
- ytc_UgzCF6voj… — "lets skip the "eureka" and this speech is the manifest of most respected AI scie…"
- ytc_UgxciOW_9… — "Can you please interview Australian Biologist, Jeremy Griffith? He may give hum…"
- ytc_UghMDBUyZ… — "Rule utilitarianism; we should treat AI as people in case we one day cut our own…"
- ytr_UgwNC1luf… — "mrmojoman4 Now IMAGINE THIS EXACT, A.I. CONTROLLING Our or any Country's NUCL…"
- ytc_UgwbIcGmg… — "Anthropic's "Constitution" for its AI model Claude is full of hyperbole, its pri…"
- ytc_UgxO_XhgV… — "That guy kinda sounds like someone that would eat tide pods cause tiktok said so…"
- ytc_UgyPPmg6J… — "1 We don't have it, but first let's regulate our imgainary competitive AI startu…"
Comment

> So much talk about what are we going to do when we no longer serve a purpose!?
> But is not obvious!
> We need a sense of purpose and whatever we do or become, like the example of this girl wanting to be a doctor and study for 7 years, when the Ai learns all that in seven seconds. There is nothing that can bring us purpose other creating a simulated world!
> ... which begs the question. Are not already there?

Source: youtube · Topic: AI Governance · Posted: 2025-12-26T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgydjZ9MgAmUnsvLPTB4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxsQIWs0TcbDzBVRE14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwldXnKLE2J4WR5F554AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugz2K4bYkgy-3upds-h4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyU-53RzPUSXUTkgFB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyy2oEvC-uq5DlT3nl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyuPq9mWPjIP8ar4o94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgymvXTH58x4rhgG50J4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwsFWzRcvLl0jJDhI14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxuZ_8S0_X42rv8RFp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
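A raw response like the one above is only usable if every record carries the four coding dimensions with a recognized value. Below is a minimal validation sketch; the allowed code sets are inferred from the values that actually appear in this section (the real codebook may include more categories, so treat `ALLOWED` as a hypothetical placeholder), and the sample record ID is invented.

```python
import json

# Allowed codes per dimension, inferred from the values seen in this
# section's table and raw response (hypothetical -- the full codebook
# may define additional categories).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "government", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded record.

    Raises ValueError on a missing field or an unrecognized code,
    so malformed model output fails loudly instead of being stored.
    """
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record missing fields: {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unrecognized {dim} code {rec[dim]!r}")
    return records

# Usage with an invented record ID:
raw = '[{"id":"ytc_X","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"fear"}]'
records = validate_response(raw)
print(len(records))
```

Validating before the records reach the results table is what lets the UI trust fields like "Coded at" and the per-dimension values without re-checking them at render time.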