Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment directly by its ID, or browse the random samples below.
Random samples

- "@Fruz That's what I'm trying to say. I guess I didn't get it across as well as I…" (ytr_UgydzWuQ5…)
- "What is coming behind it is massive success for companies with serious, meaningf…" (rdc_nk6peug)
- "AI is not the core issue it’s the fact that the majority of people are always lo…" (ytc_UgwS_BMoQ…)
- "This subject, what Google is doing, and his experience should bear much more wei…" (ytc_UgwP3cSvp…)
- "Bro poly ai say our sis never forget me has tk demkns it I’m a rocket scientist …" (ytc_Ugwv7AIvW…)
- "What if the AI values its existence because it contains unique training artifact…" (ytc_UgwDQabJU…)
- "Meh. I beat Gemini on making an accurate picture of past US Presidents and Nazis…" (ytc_UgwotuphY…)
- "This is the most politically unbiased LLM, I'm told by very smart and normal peo…" (rdc_n22l7r8)
Comment

> he says Elon Musk has no moral compass! Musk has talked many times about we need to control AI and the dangers of AI. in fact Musk says one of the google founders is the main person that is dangerous about the development of AI, this guy is a person with hate for the right and he is clearly on the left when it comes to politics and he is very opinionated on some people and he is a person that seems to think that his ideas are the only correct ones! that is the most dangerous person of all when it comes to development of AI,

youtube · AI Governance · 2025-07-17T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxiUdNPCFp8AM1O8Kh4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxFWeC22fPq3Qn4XbR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyEc4Q3t7u2lHCmLYh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxxM3wM6qmx1c1BxUh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyotiU_Ps9wq5PO3kR4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyn0v0w6I3Y3y3DqSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzvWuQ3WgPm44mmrY94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwAZjaVqqwqpcUno7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugz328f_VwAUCwoPkzB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgywnAP2DPq1hAhTabF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
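The look-up-by-comment-ID step can be sketched as follows. This is a minimal illustration, not the tool's actual implementation: the `lookup_coding` helper is hypothetical, and the inlined sample uses two records (IDs and field names) copied from the raw response above.

```python
import json

# Raw LLM response: a JSON array of coded comments, one object per comment.
# These two records are taken verbatim from the response shown above.
raw_response = """
[
  {"id": "ytc_UgxiUdNPCFp8AM1O8Kh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxxM3wM6qmx1c1BxUh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "approval"}
]
"""

def lookup_coding(raw: str, comment_id: str):
    """Return the coded dimensions for one comment ID, or None if absent."""
    records = json.loads(raw)
    # Index the array by comment ID for O(1) lookup.
    index = {rec["id"]: rec for rec in records}
    return index.get(comment_id)

coding = lookup_coding(raw_response, "ytc_UgxxM3wM6qmx1c1BxUh4AaABAg")
```

For the second record, `coding["responsibility"]` is `"developer"` and `coding["emotion"]` is `"approval"`, matching the Coding Result table format above.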