Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- `ytc_Ugz8WRzRY…`: "Does anyone know if Youtube is using Ai to enhance the audio too? And is this on…"
- `ytc_UgwGIaumV…`: "Can AI extrapolate what I will be in the future based on all my past experiences…"
- `ytc_UgwsT0OPQ…`: "I've first read about this in the book "Weapons of Math Destruction". A major pr…"
- `ytc_UgzPzpx6k…`: "Stop talking about alchemy. You don’t know what it is. You were completely talki…"
- `ytr_UgwCQBPrZ…`: "And that doesn't even address the energy demands of developing all this AI/AGI, …"
- `rdc_clsif6k`: "Thanks for the feedback, I edited the OP to try to address some of your criticis…"
- `ytc_UgwR7Yvu_…`: "I'm gonna shamelessly admit it. I'm a an AI art user. I know, SCUM!!! I have no …"
- `ytr_UgwKx-tmZ…`: "The difference is that they still posess a lot of human agency over the finest d…"
Comment

> Cleo cool video, but I don't understand why everyone has this insistence on claiming AI is just a supporting tool. That's not optimism to me, just a misunderstand of computer scaling and society. Sure, right now GPT 4 is only as good as barely expert doctors. So of course we'll go to a human for treatment. But what happens when GPT 5 can beat every doctor on that list? What about when GPT 6 or 7 is has an error rate 10 times lower than the experts? Are we really going to pretend we're going to be ok with doctors just using that as an aide in their decision making? This, of course, happening while the LLMs evolve their interactive abilities and become capable of performing surgery, comforting patients, doing all paperwork in seconds, etc. I just don't understand the game everyone is playing in saying these AI will just sort of be a little tool in the (input industry expert) kit.

youtube · AI Harm Incident · 2024-05-31T15:0… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyeRIUzWibLNMrEnQd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyx60GwAmPsAACY1-R4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxHlgO2QKiD6XKFUo94AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwFuP_2V-b8gy5017d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwyTaQbGoahOm5Q5RV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgyBlBOAfDnOr4_wim14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy9b7IQyWfxsPsVeWF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx6Nf-jAVJkaIWrX2h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKpp4tSrs88CGZ8zR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx4rC1BorCZ5sAmW814AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
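As a sketch of how a raw response like the one above can be machine-checked before coding results are stored, the snippet below parses the JSON array and rejects records with unexpected keys or out-of-vocabulary codes. The allowed-value sets are inferred only from the samples shown in this batch, not from the project's actual codebook, so treat them as illustrative assumptions:

```python
import json

# Code vocabularies inferred from the sample batch above; the real
# codebook may define additional values (illustrative assumption).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company", "developer", "user"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "approval", "resignation"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or out-of-vocabulary records."""
    records = json.loads(raw)
    for rec in records:
        # Every record must carry exactly: id + the four coding dimensions.
        if set(rec) != {"id"} | set(ALLOWED):
            raise ValueError(f"unexpected keys in record {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records

# Hypothetical one-record response for illustration.
raw = '[{"id":"ytc_x","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"}]'
print(len(validate_codings(raw)))  # prints 1
```

A record with, say, `"emotion": "joy"` would raise a `ValueError` rather than being silently written to the coding table, which is usually the behavior you want when ingesting free-form model output.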