Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Approx. 9mins 55secs to 11mins 10secs: Some thoughts:
1.) Government regulation …
ytc_UgylNLwQj…
Brave to assume AGI can be achieved in 2030 when OpenAI can't even set GPT strai…
ytc_Ugw6ksW6O…
I would never in my life get in a driverless car. Why anyone would do this and r…
ytc_UgzGgqPJD…
Wait what, you release a video about not trusting AI then you plug in an ad of a…
ytc_UgzU3gAwJ…
Lol I really don't know how a robot gonna bring joy to people. If it's doing you…
ytc_UgzXWfcWC…
My wife called me paranoid for saying AI was already controlling what we see onl…
ytc_UgxtymHCP…
i like AI to use it as a tool, like I asked it to design me a basic design and t…
ytc_UgxaQk79t…
Communist Socialism Democrat Nazi Party , Henry Kissinger , Joe Rob Ballots , Na…
ytc_Ugw4ckioZ…
Comment
Hello, I do A.I. as a hobby.
DO NOT take an LLM's word for it. It must provide sources for its reasoning, and not only that, you must check those sources as well. Treat the A.I. like an enthusiastic friend: verify that the articles it referenced aren't also AI. This is known as inbreeding, where AI is trained on, or uses a source from, another AI.
AI also hallucinates and is confidently wrong. My best advice for using AI is to run a local model for free using LM Studio and feed it data you know is trustworthy for it to catalogue. AI is only scary if you don't understand it. DO NOT let Palantir/OpenAI fool you into believing that open source AI is bad. It's good because it stays local, and it protects your data.
youtube
AI Harm Incident
2025-11-24T23:1…
♥ 52
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgzvEQBFddO2ZzN_uVR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxlAyFmRledJ1QLv2t4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgyL3uG2mI5TiV8QopF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz8QY02Xlbx-xHhYjJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugxcmd9QbUzaAu-00254AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwezohvsoCOkKp-ViV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyCotIilwr6e7UYjH94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx95-17o1tKI9zHfb94AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwp-7mabXsHxvJA5NZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzxaY0SzvQ7y5YSPAh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}]
```
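A raw response like the one above can be parsed and sanity-checked before the codes are stored. This is a minimal sketch, not the tool's actual pipeline: the allowed value sets below are inferred only from the sample output shown here, and the real codebook may contain additional categories.

```python
import json

# Allowed values per dimension, inferred from the visible sample output.
# ASSUMPTION: the real codebook may define more categories than these.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"approval", "outrage", "fear", "indifference", "mixed"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check each coded record."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset carry a "ytc_" prefix.
        if not rec["id"].startswith("ytc_"):
            raise ValueError(f"unexpected id format: {rec['id']!r}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
    return records

# One record from the sample response above, as a smoke test.
raw = ('[{"id":"ytc_UgzvEQBFddO2ZzN_uVR4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"none","emotion":"approval"}]')
records = validate(raw)
print(len(records))  # → 1
```

Rejecting out-of-vocabulary values early keeps hallucinated labels out of the coded dataset instead of surfacing them later during analysis.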