Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or browse the random samples below:
- "So many redditors were shitting on a early prototype laundry folding robot. Swe…" (rdc_j1xy3x7)
- "LLMs give these vapid statements and people seem to think they are insightful, i…" (rdc_m00h7cu)
- "Ain’t no way ai is gonna take animation jobs, in a world where there would be ai…" (ytc_UgzoMzvSd…)
- "Maybe we should ask another question. What would the corporation do if their pro…" (ytc_UgypCoR8t…)
- "Autopilot is a completely different thing from FSD (Full Self Driving)! It canno…" (ytc_UgxaQJAsb…)
- "All this and Trump just fired antropic because Trump wants to beef up autonomous…" (ytc_UgxDYlj4Y…)
- "If this is going to affect the world why is America trying to make it a competi…" (ytc_UgzU00XZf…)
- "I knew it was AI right away. Too airbrushed and the eye is fucked up…" (ytc_UgwrU3Ago…)
Comment
The first thing is that they should not make a single robot in humanoid form, they should not look like us, we should never confuse them as human, they are not us. The biggest issue is job loss and what humans will do for money and how we will function having no purpose. The companies developing AI will become quadrillionaires, and many of those who are developing AGI have no interest in saving humanity. Why would they want to give us universal income so we can survive? They have exactly zero motivation to take care of us, it does not benefit them, in fact ultimately they may view us as the insects using resources they need for themselves. If they cared about the survival of the human race they would have chosen to do this only when they put every safeguard in place first. We can all thank Sam Altman for safety being completely abandoned. They are far more interested in their own salvation so they can witness what is to come, we are just something getting in their way!
youtube · AI Governance · 2025-12-08T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
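Every coding result carries the same four dimensions. As a point of reference, here is a minimal sketch of how such a record could be modeled and checked, assuming Python; the label sets below include only the values observed on this page, not necessarily the coder's full codebook:

```python
from dataclasses import dataclass

# Labels observed in this sample only; the coder's full label sets are an
# assumption here and may contain values not shown on this page.
RESPONSIBILITY = {"company", "user", "ai_itself", "unclear"}
REASONING = {"consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"fear", "outrage", "indifference", "unclear"}

@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions in the table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension carries a label outside the observed sets."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"{self.id}: unexpected {name} label {value!r}")
```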
Raw LLM Response
[
{"id":"ytc_UgxU_zhG_Jo59YxLJRJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzhyhMkmGf8kCJK1RB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxM728SphNwsfrOr-d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyy0ZIV6sTro2cEUf54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxkrnOJh5y8fnIp-th4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy8jUXR8BLjZlxC_a14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugw6mt-dDKEoqBj4pOJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugys_yRWukI_tTyMULB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwpEHZYFZqdlwhOxbx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugyq0OHXDF5CIU60I994AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
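The raw response is a JSON array with one record per comment in the batch, keyed by comment ID. A minimal sketch, assuming Python, of parsing such a response and pulling out a single record, the same by-ID lookup this page offers; `raw_response` is a placeholder for the text above:

```python
import json

def lookup(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response and return the record for one comment ID."""
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        # Model output is not guaranteed to be well-formed JSON;
        # treat a parse failure the same as a missing record.
        return None
    if not isinstance(records, list):
        return None
    return next((r for r in records if r.get("id") == comment_id), None)

# Against the response above, this returns the record whose labels match
# the Coding Result table (company / consequentialist / regulate / fear):
# lookup(raw_response, "ytc_UgxM728SphNwsfrOr-d4AaABAg")
```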