Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The man doesn't understand that Smartphones aren't all from Apple. That's his le…
ytc_UgybDfOq4…
he said plumber because humans will be forced underground in sewers by the A.I. …
ytr_UgwxLOGHF…
One thing I find very difficult to fight back is that I am unable to download ni…
ytc_Ugy2GWk_j…
I genuinely want to see someone hack the police ai program and make the program …
ytc_Ugy3AjEg5…
A.I is built off of humanity, it's not a void or a terrible monster. It's OUR re…
ytc_UgywOiY5M…
So, the guy who spend his life creating AI says AI is dangerous.. He should have…
ytc_Ugw2uqVw_…
To them saying “I’m too lazy.” GET OU-
Like that’s their problem. You don’t nee…
ytr_UgxI6cCPt…
Ai is equivalent to adding 1,000,000,000 Ph.D.s to the job force— who gladly wor…
ytc_UgyZ80WtX…
Comment
Excellent thought experiment, but the video ignores some of the possible outcomes. It is quite likely that most of the other cars on the road will also be SDCs (self-driving cars), which means that they may be able to react in time to avoid a collision all together. Consider: an SDC could actively keep track of what other cars on the road are automated vs. driven by humans at all times. In the event of a potential accident (such as the falling boxes) the SDC could put out a "distress signal" to the other SDCs on the road, and then swerve towards them. The other SDCs could then accommodate the swerving car by moving to the side, accelerating, or hitting the brakes, and they could do this with the knowledge of what other cars around them would and would not (i.e. human-driven cars) be able to react in time. Thus in many situations an accident could be avoided all together.
That said, there is still a chance that all the cars around you are human driven, so the video's thought experiment is still relevant.
youtube
AI Harm Incident
2017-06-25T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UggQprZepafZ1HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UghRftAajpYgC3gCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiVNu11IS5PiHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugis3gL-vgXrpHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj3h2tFVAqPSngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugj5A0pJm2zcoXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggIGhHRenxDK3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgjS60trIUKAvHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UggCEcSJA552hHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgitMxhB_OZFhXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
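A raw response like the one above can be parsed and sanity-checked before it is stored. The sketch below is a minimal example, assuming the allowed value sets for each dimension (inferred only from the values visible in this sample, not from an authoritative codebook) and a hypothetical `validate_codings` helper:

```python
import json

# Allowed values per coding dimension. These sets are ASSUMPTIONS,
# reconstructed from the values that appear in this sample response;
# the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "outrage", "fear", "approval"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a JSON array of coding records and reject unknown values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records
```

A check like this catches a model drifting off-schema (for example, emitting a novel `emotion` label) before the record reaches the coding table shown above.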