Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Keeping tabs on an AI agent will be like riding a bucking bronco that got loose …" — ytc_UgxWTOKVQ…
- "question: what makes you think a machine can't be conscious though? 4 years ago …" — ytc_Ugy9KzqHN…
- "Face recognition tf every case I see it's like the bad guy and the guy there arr…" — ytc_UgwZjv0K3…
- "CAN robots do real HEART WISDOM if they're only machines? Flawed humans,…" — ytc_UgzTMwuru…
- "Very informative video. To add to it, I also have a huge problem with the sentim…" — ytc_UgzSj1qIm…
- "AGI is more about being number #1 in the AI race now more than anything. The lio…" — ytc_UgxGFeN7S…
- "It only looks 'realistic' because most women wear heavy makeup! This face with n…" — ytc_UgyXBHR6X…
- "@thelifeofdclay5815 a personal trainer they can definitely replace 😂😂, also nutri…" — ytr_Ugz_Su_89…
Comment
> Yea honestly really concerns me that OP is using an LLM for therapy. I get that it probably knows what to say and when to say it to make some1 "feel good", but its not real therapy and won't actually solve the problems the person comes to it with. It may do more harm than good in the long run.

Source: reddit | Category: AI Responsibility | Timestamp: 1754674821.0 (Unix) | ♥ 9
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | unclear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_n7mzdc8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"concern"},
{"id":"rdc_n7nlpfl","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"rdc_n7o01cg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"rdc_n7kl9af","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"rdc_n7kbibl","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
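The raw response above is a JSON array with one object per coded comment. As a minimal sketch of how such a batch response could be parsed into per-comment codes and validated against the coding dimensions: the field names come from the response shown, but the allowed-value sets below are assumptions inferred from the visible examples, not the project's actual codebook.

```python
import json

# Allowed values per dimension -- ASSUMED from the examples above,
# not a definitive codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "government", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"ban", "regulate", "none"},
    "emotion": {"concern", "fear", "outrage", "approval", "indifference", "unclear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a batch coding response into {comment_id: codes}.

    Rows with a missing id or an out-of-vocabulary value on any
    dimension are dropped rather than stored.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        codes = {dim: row.get(dim) for dim in ALLOWED}
        if cid and all(codes[d] in ALLOWED[d] for d in ALLOWED):
            coded[cid] = codes
    return coded

# Two rows copied from the raw response above.
raw = '''[
 {"id":"rdc_n7mzdc8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"concern"},
 {"id":"rdc_n7o01cg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''
codes = parse_codes(raw)
```

Validating against a fixed vocabulary at parse time is what lets a UI like this render a clean dimension/value table instead of whatever free text the model happened to emit.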