Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "AI does not purposefully pick biased Data. It uses all Data to create an algorit…" (ytc_UgxlV9i_B…)
- "bro I got banned from a server because apparently the work i have done on my sch…" (ytc_UgyfMptqz…)
- "I don't like AI and robots and I never will. Of course it's your fault that robo…" (ytc_Ugxu3Xa9p…)
- "I like spending my free time on YouTube and in video games. However, I don't bel…" (ytc_UgyvZJ1WQ…)
- "I'd rather someone be honest about their usage of AI rather than inpersonate rea…" (ytc_UgzQTg22e…)
- "AI is so unethical in every aspect and dangerous !!!!!! My heart breaks for this…" (ytc_UgySvENpo…)
- "It's not super Ai you have to worry about…..It's super Ai with robotics; You rea…" (ytc_UgyZks0Xj…)
- "You last conclusion is wrong: by saying 'sorry' the AI keeps the conversation go…" (ytc_UgxmABsxk…)
Comment
The writer Issac Asimov figured out the answer to this problem in the 1950's. Robots could have more powers than humans BUT they all had to be built with internal rules to prevent them for going nuts. the rules were these: A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law
youtube
AI Governance
2023-06-05T19:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyHLiLDWQuYlQ-XDlx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyoA4cZsSjQR2jJku14AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyf_9LZez37-U0Ex2F4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx_4cI_f7_tT97QYmR4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzp_StO3dqylz_8CPB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwT-mt2eaz-rs8pRZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyWnIsNQD2PF_eODhp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxzXqMt7v6VBv38uER4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugycpv8lplxu6WOJItV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugx9cT4HUJTKQtqbcRd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"}
]
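The raw response above is a JSON array with one record per comment, each carrying the four coded dimensions from the table. A minimal sketch of how the "look up by comment ID" view could be backed, assuming the label sets visible in this batch are the full vocabularies (they may be incomplete):

```python
import json

# Allowed label sets per dimension, inferred from the values visible in
# this batch -- an assumption, not a confirmed schema.
ALLOWED = {
    "responsibility": {"none", "unclear", "company", "ai_itself", "developer"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "unclear", "regulate", "ban"},
    "emotion": {"fear", "mixed", "approval", "indifference", "outrage"},
}

def index_response(raw: str) -> dict:
    """Parse a raw LLM batch response and return {comment_id: record},
    rejecting any record whose labels fall outside ALLOWED."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id

# One record from the response above, used as sample input.
raw = '''[
 {"id":"ytc_Ugx9cT4HUJTKQtqbcRd4AaABAg","responsibility":"developer",
  "reasoning":"deontological","policy":"regulate","emotion":"approval"}
]'''
coded = index_response(raw)
print(coded["ytc_Ugx9cT4HUJTKQtqbcRd4AaABAg"]["policy"])  # prints: regulate
```

Validating against the allowed sets up front catches malformed or hallucinated labels in the model output before they reach the dashboard.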