Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
i m totally sincerely and in all honesty not worried about AI at all, even befor…
ytc_UgzcJ1bPu…
Hey I feel like your missing a big part of this, western ai acts like this. Chin…
ytc_UgyXjgYbY…
@nodzuki.shuixie equivalent of saying no human would ever win in an art competit…
ytr_UgzDTkH6T…
@ Well, I suppose it's a 'Brave New World" after all! 😉
I don't mind admitting …
ytr_Ugwn22Gr5…
Honestly I highly doubt any of this is unscripted. Not even sure what the point …
ytc_Ugyar8k9X…
A.I. will be used to create a better, more lethal version of COVID. Why wouldn'…
ytc_UgzSrLfB7…
Just watch the Tesla crashes at night, it tells us all we need to know about cam…
ytc_UgxQXnLZl…
This is the sad thing the AI supporters are turning to? Oh god. Give me a break.…
ytc_UgxZGG4Gt…
Comment
Yet another statistically empty "deep dive" that strenuously avoids the fundamental question: is Autopilot better or worse than a human driver alone? When the "expert" in automation testifies that "someone is going to die," yes, that's true...whether it is automated or human driving. Human drivers make so many errors it's pretty easy for Autopilot to be better in the aggregate. And while the data he uses is not public, only Musk is actually asking and answering this question. We should perhaps remember not to let the perfect be the enemy of the good.
youtube
AI Harm Incident
2024-12-21T01:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugyu77aPSn6Lm8DUm_x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyxwYweqEwqZhAcR2x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_Ugz3PctsnkdkZ-KHl154AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"approval"},
 {"id":"ytc_UgyMpPlYDXrrD8xWY394AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxE_rprRdjO8DAPHdp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgztlCX8bu74tRyzlwx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgxVdoMlHeKD-wT6TdZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx_spw2FV3hTjHqzNB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgxGq3kxm_l2WrB4CpN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyHL_exO_kx_0WCV1B4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}]
```
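The "look up by comment ID" step above can be sketched in a few lines: the model returns one JSON array per batch, so indexing the records by their `id` field recovers the coding result shown in the table. This is a minimal sketch, assuming the response is valid JSON with the field names visible in the raw output above; `index_by_id` is a hypothetical helper, not part of any tool shown here.

```python
import json

# Abbreviated batch response, using the first record from the raw output above.
raw_response = (
    '[{"id":"ytc_Ugyu77aPSn6Lm8DUm_x4AaABAg",'
    '"responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"}]'
)

def index_by_id(response_text: str) -> dict:
    """Parse the model's JSON array and key each record by its comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
result = coded["ytc_Ugyu77aPSn6Lm8DUm_x4AaABAg"]

# The four coded dimensions, excluding the ID key itself.
dimensions = {k: v for k, v in result.items() if k != "id"}
```

Each value in `dimensions` corresponds to one row of the Coding Result table (Responsibility, Reasoning, Policy, Emotion).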