Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "I don't think being polite, by itself, will necessarily make AI safer or more mo…" — ytr_UgysPlTAt…
- "I don’t trust self driving cars yet either, but I trust human drivers even less.…" — rdc_nsz0xob
- "Also how hard is it to type what your concerned with in a chatbot, what training…" — ytc_Ugxu-GjOu…
- "1. it's not as simple as \"writing a prompt\" on the side of the people making AI …" — ytc_UgxR6n8HS…
- "The expert gets sick of the AI's breaking constantly and takes a break from time…" — ytc_UgzMt4LzF…
- "AI is AI but you don't need to hate on it, but this was a good one since it got …" — ytc_UgwdmW6M8…
- "This missed the entire point. It was poorly thought out. As an author who's work…" — ytc_Ugznfta42…
- "Do you think these AI experts might be playing up the whole AI sentience thing a…" — ytc_Ugzi5b8dk…
Comment

> I’m a pharmacist. These jobs are secure because each state has laws that ensures job security of pharmacists. Many laws haven’t been updated for decades. Quite a bit of inertia to change them. Main driver is the liability pharmacist carry for making a mistake regardless of fault. When AI makes a mistake that results in a patient death who get sued? Who’s license gets revoked? Who pays monetary damages?

Source: youtube · AI Governance · 2025-07-09T21:4… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxI2Vv-Lr1060VgMuZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzUUYRbUrA_pCgS1LV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy037J7sRZ4oLTK7xZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx1ky-K9WcsFfH3mVl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz3OPQfAX6kxfSI7RJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJbJNzDm1UEfyx3rB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9yx1GPtzvtKOupo14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw3GcvzNbxMAfpgfol4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDSva13kyLH_IEbvp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwG3FVdQn0k5Km0m694AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
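A minimal sketch of the "look up by comment ID" step: parse the raw batch response (JSON array of per-comment codings, as shown above) and index it by `id`. Only three entries from the response are reproduced here for brevity; the `lookup` helper and variable names are hypothetical, not part of the tool itself.

```python
import json

# Abridged raw LLM batch response (three of the ten entries shown above).
raw_response = '''[
  {"id":"ytc_UgxI2Vv-Lr1060VgMuZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugy037J7sRZ4oLTK7xZ4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugw9yx1GPtzvtKOupo14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

# Build an ID -> coding index for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

def lookup(comment_id):
    """Return the coded dimensions for a comment ID, or None if it was not coded."""
    return codings.get(comment_id)

# The pharmacist comment above resolves to its coded dimensions:
result = lookup("ytc_Ugy037J7sRZ4oLTK7xZ4AaABAg")
print(result["policy"], result["emotion"])  # liability fear
```

An unknown ID simply returns `None`, which lets the UI distinguish "not yet coded" from a coded result without raising.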