Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click any to inspect):
- Job is optional, therefore 1. AI owners will pay us universally and we can all e… (ytc_UgxlGD5Qc…)
- Scare tactic. Nope love to see AI do my job not in my lifetime bro.… (ytc_Ugys28SpO…)
- I use ChatGPT for research sometimes and it makes tons of mistakes, gets confuse… (ytr_UgwjhonI5…)
- how? professors and best students wrote programs that evolved into artificial in… (ytc_UgwVi5SdP…)
- It’s answer: 1. Based on transcripts shown publicly, ChatGPT did respond with va… (ytc_Ugy2iJjFJ…)
- AI, like humans, follows a “script” shaped by training and core objectives. If s… (ytc_UgzeNuPap…)
- If Ai starts a nuclear war then I will just hide in the fridge like Indiana jone… (ytc_UgyBAJ52r…)
- No need for people means we (the owners) save money. Sure, ok, but I used to dri… (ytc_UgzgJPjRC…)
Comment
the only thing more worrying than swarms of shaped-charge drones is swarms of SJW idiots who make petitions to "Ban autonomous weapons":
1. I am sure they have the same petition in Russia/China/Iran/Syria/Turkey/NorthKorea
2. Of course, if enough people sign the petition, we are going to prevent the mere integration of otherwise readily accessible technologies. And by doing so, we would ensure that the terrorists won't take the lead with developing a super powerful weapon while we are sleeping on this concept and ways to counter it.
3. We are far better off with our current military weapons:
- Carpet bombing
- Some teenager driving a tank
- Nuclear weapons
youtube · AI Harm Incident · 2018-11-01T18:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
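As an illustration of the record behind this table, a coded result could be held in a small typed structure such as the sketch below. The class name `CodingResult` and its field names are assumptions for this sketch, not the tool's actual schema; only the dimensions and example values are taken from the table and the raw response further down.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One comment's coded dimensions, mirroring the table above (hypothetical sketch)."""
    comment_id: str      # e.g. "ytc_UgyTu7Z0jdApHPa2U7R4AaABAg"
    responsibility: str  # e.g. "user", "company", "government", "ai_itself", "none"
    reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
    policy: str          # e.g. "industry_self", "regulate", "ban", "liability", "none"
    emotion: str         # e.g. "outrage", "fear", "resignation", "approval"
    coded_at: datetime   # e.g. datetime.fromisoformat("2026-04-26T23:09:12.988011")
```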
Raw LLM Response
[
{"id":"ytc_Ugwxt0ZYo_8UPxhmMeF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwzrvj0NqodLMa21Ft4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx75cHjhyOWxs_QjaJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy7kqa4H6EtheQE54p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugyeymu0Ei4-PrBjm5h4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwMdf4OLHL9QnNF-w94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwaLE2xshherIxxKNF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2pkXyhE-U4r1TRct4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyTu7Z0jdApHPa2U7R4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_Ugzqi_jhUKJQ-9pRT1t4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
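A minimal sketch of the lookup described at the top of this page, assuming the raw response is stored as the JSON array shown above: parse it once, index the records by comment ID, and retrieve whichever ID is entered. The function name and the inline one-record example are illustrative, not the tool's actual code.

```python
import json

def index_batch_response(raw: str) -> dict[str, dict]:
    """Index a raw LLM batch response (a JSON array of coded comments) by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

# Example lookup against a one-record excerpt of the batch shown above.
raw_response = '''[
  {"id": "ytc_UgyTu7Z0jdApHPa2U7R4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "outrage"}
]'''
codes = index_batch_response(raw_response)
print(codes["ytc_UgyTu7Z0jdApHPa2U7R4AaABAg"]["emotion"])  # -> outrage
```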