Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "I think any of the drawbacks are absolute worth it. This really is the next tech…" (ytc_UgwMK-RPB…)
- "People worry about AI taking jobs, but OSVue actually helps me perform better an…" (ytc_Ugx7v2yp0…)
- "I think AI art has its place. But. 1. It should be completly original and not ba…" (ytc_UgwPTe3FS…)
- "One way that you can \"trust\" AI is that it still operates on binary logic system…" (ytc_UgxVGMTbC…)
- "An 'ok I will destroy humans' statement from a robot is not funny at all.…" (ytc_Ugy1nguXv…)
- "I don't even know why people are even trashing on her Ai outfits. They look fine…" (ytc_UgwZODXxM…)
- "Ai generated content is probably not copyright protected so good luck actually m…" (ytc_UgwKc6Cod…)
- "If one does not know the true words from which the Three Rs of Education are der…" (ytc_UgyOFmBU1…)
Comment
I think they should let the regulation take it's time, but no experiments should be allowed before that. I don't think the regulators will do everything that I think they should do even if they take their time. Before any self driving system is allowed on roads I think there need to be ethics rules for self driving cars: e.g. if a car has to make a choice like crashing the car to avoid running over school children crossing the road how should it act. Ethical rules need to be considered and then manufacturers have to implement rule sets for the vehicles.. which probably excludes the Tesla model based on an AI model that may be hard to control in such ways. Then they should consider responsibility when something happens. I think manufacturers need to have full responsibility it they claim "full self driving".. and I don't think any companies can do that for a long time yet.
| Source | Topic | Posted |
|---|---|---|
| youtube | AI Harm Incident | 2025-10-22T22:1… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgwvPLhlRk0qSqQjXrx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzuAZSgaG7ls2Mw37Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgxcURLOJPFcbmfwhCp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
 {"id":"ytc_UgwxewlIiwb4oT14LY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgyejCoE2dQxEafIaJB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugw6EC-bQnwDazllkEp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
 {"id":"ytc_UgzuNtJaaFEwatvsQ5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugyux2RlLLKNjE0v8ON4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgxKm5qAFE2OiEgF0QR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgxSzEh_OTKgKQmy2fZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
```
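A response like the one above can be parsed, indexed by comment ID (the "look up by comment ID" workflow), and sanity-checked before it reaches the dashboard. The sketch below is a minimal illustration: the allowed value sets are only those *observed* in this particular response, and the full codebook may define additional values, so treat `ALLOWED` as an assumption rather than the project's actual schema.

```python
import json

# Two records copied from the raw LLM response above (truncated for brevity).
raw = """[
 {"id": "ytc_UgwvPLhlRk0qSqQjXrx4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
 {"id": "ytc_Ugw6EC-bQnwDazllkEp4AaABAg", "responsibility": "government",
  "reasoning": "contractualist", "policy": "regulate", "emotion": "fear"}
]"""

# Value sets observed in this response only; the real codebook may be larger.
ALLOWED = {
    "responsibility": {"company", "user", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

records = json.loads(raw)

# Index by comment ID to support direct lookup.
by_id = {rec["id"]: rec for rec in records}

# Reject any record whose coded value falls outside the observed sets.
for rec in records:
    for dim, allowed in ALLOWED.items():
        if rec.get(dim) not in allowed:
            raise ValueError(f"{rec['id']}: unexpected {dim!r} value {rec.get(dim)!r}")

print(by_id["ytc_Ugw6EC-bQnwDazllkEp4AaABAg"]["policy"])  # regulate
```

Validating against a fixed value set catches the most common LLM coding failure, where the model invents a label outside the codebook, before the record is stored.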