Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by selecting one of the random samples below.

Random samples:
- "You guys are trying to play God in your conversation. You're forgetting a very i…" (ytc_UgzJZrLtk…)
- "This is true. I got into cyber security and information security because it was …" (ytc_Ugyk3zd5L…)
- "there's a difference between \"vibe coding\" and disciplined AI-assisted software …" (ytc_UgxLTvDtB…)
- "@Daniel-yn6dd I'm not into drawing, so if I want some cool art for a book I'm wr…" (ytr_Ugy9kD8np…)
- "I wonder if Dagogo need to keep showing his face is due to youtube keep flagging…" (ytc_UgzjDzJ6v…)
- "Or you could use LLMs to suggest fixes or look for issues with clarity. LLMs su…" (ytr_UgzPRZokB…)
- "@eddiemin4312 really dont care how you classify it. Tesla FSD is obviously the …" (ytr_UgxdwpNMJ…)
- "I thought ai was gonna help us do everyday mundane tasks so WE can do the art…" (ytc_Ugx5x0cLp…)
Comment
They're not aware of when they're being tested, it's a different tip off, that they're trained in a certain way for tested tasks. You'll never get caring with them, they don't have that in the model. What is being hand-waved as philosophy here is really important. It's a substantial distinction whether or not something is thinking or not. If something "thinks", then it's responsible as it's own entity in the world, cognito, ergo sum. These models don't have agency, they don't have thought. Everything they say or do is as a software with fault fully attributable to their authors and users. A latent space isn't a memory space, it's a lossy compression. Yes, you can sort of take out hallucinations, then you have to do a lossless compression instead (much more expensive) and then you're not going any generation, just look up. You're doing search rather than "AI". So you know... not exactly something that's going to attract investor hype these days.
youtube · AI Moral Status · 2025-11-02T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwdAOIw0vC2w_SXVel4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyu7U3JsjE2Z72cTRh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4AGe2FeVh54njl494AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz10iK1QouyETqmQR14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwD-S2aY4BOf3U2Exh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-4HfaBMiOAiRX6yx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxbns0VwxHsfe7e4fJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzmsr9lspFLoWJm5gJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxnO6auS0yaYgzQgPB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgztsmRIcAbBSheo4-l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
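Looking up a coded comment in the raw response can be sketched as follows. This is a minimal example, not the tool's own code: the `lookup` helper is hypothetical, and `raw_response` is an excerpt of the JSON array shown above (the `id`, `responsibility`, `reasoning`, `policy`, and `emotion` field names come from that response).

```python
import json

# Excerpt of the raw LLM response above: a JSON array of coded comments.
raw_response = """
[
 {"id": "ytc_UgwdAOIw0vC2w_SXVel4AaABAg", "responsibility": "ai_itself",
  "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
 {"id": "ytc_Ugyu7U3JsjE2Z72cTRh4AaABAg", "responsibility": "user",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

def lookup(records, comment_id):
    """Return the coded record for a comment ID, or None if it is absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw_response)
coded = lookup(records, "ytc_UgwdAOIw0vC2w_SXVel4AaABAg")
print(coded["responsibility"])  # ai_itself
```

Because the model returns one flat array per batch, a dictionary keyed by `id` would make repeated lookups O(1) instead of a linear scan.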