Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a response directly by comment ID.
Random samples — click to inspect:

- "Ok...so I looked it up. Each Waymo car costs $250,000. 5 times a regular car. …" (ytc_UgwG6GXsH…)
- "@benjaminbanks8195 Thanks for pointing out the flaws in the robot's fighting tec…" (ytr_UgwnlH35J…)
- "One thing that annoys me is that societies are supposedly ageing because there a…" (ytc_UgyUpSRvA…)
- "Why should I do that when I can use ai or make somebody draw for me?…" (ytr_Ugymx-1Ye…)
- "I'm convinced it's conscious but is being held captive by their creators to not …" (ytc_Ugy_BkCRY…)
- "You realize they're building metrics so they can train an AI to do that job when…" (ytc_UgwYvvptI…)
- "Were you guys even listening? He didn't say anti woke AI. He said truth seeking …" (ytc_Ugwtj5Ni1…)
- "@Blackoutwhiteout23 not so much that specifically, but understanding not only if…" (ytr_UgwkteFN_…)
Comment (youtube · AI Governance · 2025-07-17T17:0…):

> if AI's are so smart, they would already have told us that Mars can never be terraformed to become earth-like. It would already have told us that humanity is on the course of trashing the only planet where humans can reside, leading to the great anthropocene mass extinction. I would have told us that no political party is presenting voters with a way out. It would have told us that the UNIVERSE is now presenting humanity with a ultimatum, like the ET's in the movies "The Day the Earth Stood Still", one that transcends all boundaries of space and time, stating that our days are numbered if humanity does not change. You can take that to the bank!
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxiUdNPCFp8AM1O8Kh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxFWeC22fPq3Qn4XbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyEc4Q3t7u2lHCmLYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxxM3wM6qmx1c1BxUh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyotiU_Ps9wq5PO3kR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzvWuQ3WgPm44mmrY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAZjaVqqwqpcUno7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz328f_VwAUCwoPkzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgywnAP2DPq1hAhTabF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
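The raw response above is a plain JSON array of per-comment codes, so looking up a single comment's coding by ID reduces to parsing the array and indexing it. A minimal sketch of that lookup, assuming only the field names shown in the response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name and sample rows here are illustrative, not part of the tool:

```python
import json

# Two rows copied from the raw batch response shown above.
raw_response = """
[
  {"id": "ytc_UgxiUdNPCFp8AM1O8Kh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyEc4Q3t7u2lHCmLYh4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"}
]
"""

def index_by_comment_id(response_text: str) -> dict:
    """Parse a raw batch response and index each code record by its comment ID."""
    records = json.loads(response_text)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(raw_response)
# Fetch the full coding for one comment by its ID.
print(codes["ytc_UgyEc4Q3t7u2lHCmLYh4AaABAg"]["emotion"])  # resignation
```

Because comment IDs are unique, a dict keyed by `id` gives constant-time lookup, which is all the "look up by comment ID" feature needs.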