Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgyyHh7H8…: "It's going to go like this: All jobs that are computer science based, not merel…"
- ytc_Ugw4eDbCW…: "Some academic AI researchers are coming to the conclusion that current AI models…"
- ytc_UgwUAHtLq…: "A non technical person talking about AI and technological advances will never st…"
- ytc_Ugha7FPAv…: "Why can't we apply Isaac Asimov's Three Laws of Robotics to this dilemma to crea…"
- ytc_UgwgRQqpY…: "AI brainwashed my wife. It destroyed her mental health and our marriage. Good l…"
- ytc_UgxOOOrh5…: "I don't think ai can ever be the same as human creativity as it hasnt got the sa…"
- ytc_UgzgpyTtD…: "I am 66 and the last generation to know things inside my head. One of the things…"
- ytc_UgxnYeSaM…: "My one cousin has AI "make" music and then sends it to me saying like I made thi…"
Comment
A lot of people are comparing autonomous driving technology with autopilots in aircraft. There is virtually no operational relationship whatsoever. Where autopilots and flight directors are used, the environment is quite complex - larger aircraft, multi-engines, multi-systems all requiring multi-crew. We're trained to use the flight director/autopilot to reduce workload during time compressed environments in order to increase our situational awareness. Crews these days often consist of two pilots - one referred to as "pilot flying" and the other as "pilot monitoring". The PM's job is to do just that - monitor both the aircraft systems AND the pilot flying. We are always "ahead" of the aircraft and if the FD/AP does anything it's not supposed to do, it is disengaged immediately and the aircraft is flown by hand. There is a lot more to it than a mechanical system feeding data to a computer. We know the risks, we know the system limitations and we know where it can fail and we practice those scenarios in a full motion simulator constantly. The same cannot be said about Tesla and their drivers regardless of the technology installed. This isn't about inadequate technology, this is about people failing to use it properly and the unintended consequences.
youtube · AI Harm Incident · 2022-09-04T16:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgysRXa0TvNXwNlYHt14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxO0ulU0eOOzsri_a94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxsKPVGGiZ8ZHF4jCF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKtp8583u_0XGDqo54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxR1mIkA0pHiolPdwx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxCHY3MvQMVl9-7bRp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzZwABMzNK_5bTT6x14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxix9Fs8d_qmxQWx3x4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwj9kLTK7Tcqg4PDfR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxG-RdjytO-ZEiKpCF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
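A raw response like the one above can be validated before its rows are written back into the coding table. The sketch below is a minimal, hypothetical example: the field names match the JSON shown here, but the allowed value sets are assumptions inferred from the visible rows, not a confirmed codebook.

```python
import json

# Assumed value sets per dimension, inferred from the sample rows above.
# A real pipeline would load these from the actual codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "government", "user", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none"},
    "emotion": {"outrage", "fear", "approval", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose codes are valid."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # Every dimension must be present and drawn from its allowed set.
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(row)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
print(parse_batch(raw))  # the single well-formed row survives
```

Dropping malformed rows (rather than repairing them) keeps the coded dataset consistent; rejected IDs can simply be re-queued for another coding pass.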