Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
With youth unemployment at a five-year high, how can the job market adapt to pro…
ytc_Ugxt01DkN…
No claim of consciousness is necessary. Simply 'AI' in the sense of a complex co…
ytr_Ugx_ixl_Z…
Ever since AI got really popular, less people have looked at my art. They often …
ytc_UgywW2ILd…
All these people saying we are going to lose 30-50 million jobs in the next 20 y…
ytc_UgwhfnWKM…
Sounds like a bandaid for a leaky funnel. Why did this guy fall out? Was there n…
ytc_UgyU6QKH_…
I'm almost finished with my first year on my road to a bachelors in systems engi…
ytc_UgwIhiQdh…
i think ai art is dumb but also i feel bad for that person since they probably g…
ytc_UgwuZhcES…
AI is doing something but it is not doing exactly what a programmer needs or a m…
ytc_UgxW43dD8…
Comment
1:20 As a Tesla owner myself… I will say this, I use self driving EVERY single day… in San Diego… some of the largest & busiest freeways … and it 1000% drives much safer than I do… however I’m not an idiot that depends on a car to keep me alive, along with following the instructions of the car such as…when I press on the accelerator(because it’s not going fast enough… ) I get a warning that states “car will not auto break if accelerating manually” now if you think about this logically, this would make sense. Us as humans wouldn’t press “go” & stop at the same time, correct? So why would we expect the car to stop if the “human” is telling the machine go? That in itself could pose dangers which is why they warn the driver that auto break will not operate in that circumstance…. I’m sorry but I am SO SICK of people blaming Elon for his inventions when we as the operators need to take accountability for unrealistic expectations and poor judgment/driving. Dont get me wrong, he’s far from perfect, he may over promise on performance but it’s up to us as the operator of these items to read the fine print and take safety measures to ensure our own lives aren’t at risk.
youtube
AI Harm Incident
2025-08-19T00:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxDvF4LaK3efIQAbDZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz7wZSiy6jrkNAfWN94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx6FOs0BxYdvmfP5854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwvFaeO3VFKxKeLmHV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxBMLfoGXt3bhZa2pB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyf9NbHDOySAzkNsG94AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwOU0KPVHGNgSOVUlJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugwz3zozfzA-jTUK8rF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw9XV88eNz5jkZTCCJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwziCpzMTDqs-vWIkp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
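The raw response is a JSON array in which each row carries the four coding dimensions from the result table (responsibility, reasoning, policy, emotion) keyed by comment ID. A minimal sketch of how such output might be parsed and validated — this is illustrative, not the tool's actual code, and uses two rows from the response above as sample data:

```python
import json
from collections import Counter

# Two rows copied from the raw LLM response above, used as sample input.
raw_response = """
[
 {"id":"ytc_UgxDvF4LaK3efIQAbDZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugz7wZSiy6jrkNAfWN94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
"""

# The coding dimensions shown in the result table, plus the comment ID.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse model output, keeping only rows with every coding dimension."""
    rows = json.loads(text)
    return [r for r in rows if EXPECTED_KEYS <= r.keys()]

codings = parse_codings(raw_response)
tally = Counter(r["responsibility"] for r in codings)
print(tally)
```

Validating keys before tallying guards against partially malformed model output, which is a common failure mode when an LLM is asked to emit structured JSON.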