Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- Companies that use AI to replace staff should simply be heavily taxed. People ne… (ytc_UgxcbDVCU…)
- If AI replaces all those jobs, the economy will plunge into depression, corporat… (ytc_UgwTm-YXl…)
- I'm a software engineer with over 20 years experience. For a year now I rarely … (ytc_Ugy9XnyWq…)
- @neko7606 Robotics will be accelerated by AI and we will have humanoid robots ca… (ytr_UgxoGu0gR…)
- I guess we've all already been tested so much by technology and AI through socia… (rdc_iod4meo)
- AI "artists" are just allergic to trying a new skill. Anyone can do art, just ca… (ytc_UgwOYOOY4…)
- The interviewer/ presenter is pretending to be oh so shocked to hear that AI is … (ytc_Ugwi3zrMS…)
- bbc enjoyer that believes there needs to be a world government, and was a propon… (ytc_UgzwsVxz7…)
Comment
It highlights the very real problem with AI, especially as it is used more and more in professional fields, and it is compounded if the professional user is not well trained in the core fundamentals of their supposed field of expertise, be it engineering, clinical work, or any other field. AI seems, in most cases, to give the answer it thinks you want. If you don't challenge it and word your questions concisely, you will get false information that appears knowledgeable. Basically put: shit in, shit out. This is often seen with students who don't understand their subject matter well enough, because they have not studied it and grasped it to the level they should, and who use AI to create their assignments. They are basically putting shit in, and yes, they get shit out and try to pass it off as work worthy of a high distinction.
Even with a well-worded question for AI to provide a sound answer to a problem, AI can and will try to give you utter rubbish, and if you don't know better, that gets passed off as fact.
youtube · AI Harm Incident · 2026-03-20T05:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
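Each coded record carries the four dimensions above plus an emotion label. As a minimal sketch, a validator for such records could check values against the label sets that actually appear on this page (the real codebook may define additional categories):

```python
# Allowed values per coding dimension, inferred only from the labels visible
# on this page; the actual codebook may include more categories.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"liability", "industry_self", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record (empty if valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record coded in the table above passes; an empty record does not.
print(validate({"id": "ytc_x", "responsibility": "user",
                "reasoning": "consequentialist", "policy": "liability",
                "emotion": "fear"}))  # → []
```

Running the validator over every parsed record would catch model outputs that drift outside the codebook before they reach the review UI.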
Raw LLM Response
```json
[
{"id":"ytc_UgzmeQre6h9xa2MQZCt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwJURSNxAuiUNhoNE94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwmKyOi0JffmvQQkzp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgxSU8HMa3LRiI3QHVZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugz3hPS2zM1T1pYwP9t4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyEGlL-BY7JaJXLV-J4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwWd9s_zh1Y_d40CcF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugx67vqJmjeYpyGHgMV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwGE_0NVlfL4opsTKZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxzZWKp4nQMa1wOtOV4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"approval"}
]
```
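The "look up by comment ID" view above amounts to parsing a raw model response like this one and indexing it by `id`. A minimal sketch, using two records from the response above (the full payload would be handled identically):

```python
import json

# Two records copied from the raw LLM response above.
raw = """[
 {"id": "ytc_UgzmeQre6h9xa2MQZCt4AaABAg", "responsibility": "developer",
  "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
 {"id": "ytc_UgwJURSNxAuiUNhoNE94AaABAg", "responsibility": "user",
  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]"""

# Index the coded records by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

rec = by_id["ytc_UgwJURSNxAuiUNhoNE94AaABAg"]
print(rec["emotion"])  # → fear
```

A `dict` keyed on the comment ID is enough here because IDs are unique within a batch; duplicate IDs across retried batches would need a dedup or last-write-wins policy on top.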