Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Remember when they said computers would make life easier?? The reality is that it’s not perfect, they do have glitches, and are problematic… I think the same will apply to AI, we’re always going to have a human to supervise and monitor it. Tesla and other tech from Musks companies have issues. We shouldn’t heavily rely upon them.

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Jobs |
| Posted | 2025-10-08T07:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugya7n4dulBcwceU6-Z4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgzZ86DC-27tB21CZrp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyqLAyp2VoRpsTb7v54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyz2hgoeI7h38SJvyh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwso5QY9qGCmCDi_bh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugz2tFQ9nGtqXaCTCaV4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy_dbFdie16cjRtX1d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy2JXCBejLn5ANfyXV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwTvre2IiY6uin90Zd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgyygeikxHHIXDzJBqZ4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
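The raw response is a JSON array with one object per coded comment, keyed by comment ID. As a minimal sketch of how a response in this shape might be parsed and indexed downstream (the parsing code is illustrative, not part of the tool; the truncated array here contains just one entry from the batch above):

```python
import json

# Illustrative raw response in the batch-coding format shown above,
# truncated to a single entry for brevity.
raw_response = """[
  {"id": "ytc_UgwTvre2IiY6uin90Zd4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"}
]"""

# Index the coded dimensions by comment ID for O(1) lookup.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

# Look up the dimensions coded for a specific comment.
code = codes_by_id["ytc_UgwTvre2IiY6uin90Zd4AaABAg"]
print(code["policy"])   # regulate
print(code["emotion"])  # indifference
```

Indexing by ID is what makes the "look up by comment ID" inspection possible: given any coded comment, the exact model output that produced its dimensions can be retrieved directly.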