Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by its comment ID.
Random samples (truncated previews, each with its comment ID):

- `ytr_Ugy7efh-E…`: "Though that is true, this hypothetical student would most likely have to put in …"
- `ytc_UgxzvDcrw…`: "oh nooooo wahhhh what ever shall i do without 500 sora ai slop videos on my shor…"
- `ytc_UgyP441GI…`: "He keeps referencing the AI in an outdated model of people working in exchange f…"
- `ytr_Ugyd2NaGb…`: "That's what the video is trying to highlight: when we start to "trust" that the…"
- `ytc_UgzY7S0If…`: "ChatGPT said the US should be tried for crimes against humanity :( and it is mor…"
- `ytc_UgwdpwsLo…`: "I'm just gonna say it: If someone starts making deepfakes of serving political f…"
- `ytc_Ugy4ihfRT…`: "Will the different ai be racist against each other after they select what race t…"
- `ytc_UgxCCtFr7…`: "And this is why God will reset everything again he is the only thing you need wh…"
Comment (source: youtube, posted 2024-12-14T21:3…):

> Real intelligence is doing the same thing. We take everything we've ever read and spit out original material but everything we make is influenced by what we have read. However in certain situations we have to reference sources if it is a non fiction research type of project. Most AI is not citing its sources in these situations which is plagiarism.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response:

```json
[
  {"id": "ytc_Ugyi6W9YMC-vYV2odzx4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzRkBYCrMxbZ58YEEl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy_IJbI2GtY34zswb94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgymbFpSmoJbH3Gn9X94AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxQGI59raeU72UqYpl4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwBj1JuVkbZAVIlB6B4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxoIAmEA-L6yCIO81J4AaABAg", "responsibility": "unclear", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzqflEMqWAuhtuCmkl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzf0dBifoxvX3BbKC14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzepK88p3MY3hEIVet4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]
```
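A raw response in this shape is straightforward to parse and index by comment ID, which is how the per-comment lookup above can work. A minimal sketch, assuming the response is valid JSON in the format shown (the `index_codings` helper is hypothetical; the sample row is the last entry of the response above, whose dimensions match the Coding Result table):

```python
import json

# One row from the raw LLM response shown above: the comment ID plus
# the four coded dimensions (responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id": "ytc_UgzepK88p3MY3hEIVet4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "mixed"}
]
"""

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index each coding row by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

codings = index_codings(raw_response)
coding = codings["ytc_UgzepK88p3MY3hEIVet4AaABAg"]
print(coding["policy"])   # -> liability
print(coding["emotion"])  # -> mixed
```

In practice a parser like this should also handle malformed model output (e.g. wrap `json.loads` in a `try`/`except`), since an LLM is not guaranteed to emit valid JSON.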