Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Could it be that Yudkowsky is the most irritating, tedious, obnoxious, and hysterical person alive? He is a great example of the limits of the autodidact. His fundamental thesis is asinine. This is a person who claims that LLMs (which any sober person understands as mere statistical machines) already contain a Ghost in the Machine. This is a person who wants the state and/or the masters of Silicone Valley to be the ones who decide on the path that AI development takes place. No thank you— I will take my chances with decentralised development any day. Yudkowsky is the epitome of an authoritarian thinker who wants to dictate the course of development society under the guise of humanism. He is seeks to bring about his dictates by causing hysteria and angst. The healthy response is to ignore the vast majority of his proscriptions. Following his proscriptions is the surest way to bring about a dystopian future.
Source: youtube · Topic: AI Governance · Posted: 2024-11-13T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id": "ytc_UgwRvWP_k7v_jN9-Te14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyksdh6rn-4hBjfu214AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxlTd1d2AkohR8lVSZ4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxF1_HmuOODIl8KiOF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgzTMs1seu-Hm2wg1tB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzck-R6lKxbvEb8M5Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyyLzF6cJe301DdxjF4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgyL07Rq-EVfO1ActR94AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugz6Llf_yDF9Gc34V9B4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgzaIf0jFeodxvBJt2d4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
```
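The lookup this page performs — take a raw batch response like the one above and retrieve the coding for a single comment ID — can be sketched as below. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) match the response shown here; the function name and the truncated two-row sample are illustrative, not part of the actual tool.

```python
import json

# A minimal two-row slice of a raw batch response, in the same shape
# as the model output above (illustrative sample, not the full batch).
raw_response = """
[
  {"id": "ytc_UgwRvWP_k7v_jN9-Te14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzaIf0jFeodxvBJt2d4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw batch response and key each coding by its comment ID."""
    rows = json.loads(raw)
    # Drop the "id" field from each row; it becomes the dictionary key.
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}

codings = index_by_comment_id(raw_response)
print(codings["ytc_UgzaIf0jFeodxvBJt2d4AaABAg"]["policy"])  # → ban
```

A lookup on an ID that is absent from the batch raises `KeyError`, which a real tool would surface as a "comment not coded" message rather than a crash.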