Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Assembly robots lack the awareness to know where a human is. Which is why they're kept behind cages; so they can do their tasks repetitively without needing sensors to maneuver around obstacles like human workers. Which would then require realigning their servos and increases the risk of making mistakes in the product.
But considering there were two technicians, either the one controlling the robot was an idiot by letting it run protocol with another person inside the cage. Or it was manually controlled and used for murder. But leave it to Tomo to not give any insight on who the workers even were and just resort to click-baity "omg robits r gunna kill u!"
youtube · AI Responsibility · 2016-09-28T06:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzoJbnXRukfP8GVoFl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxvjNfYSE6wsybikRN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UgyHGti__J0_xpX4xtB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyNyJZDpZcI_Qh4ze94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzlpIuX-UhumIm7kcJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyIr6qzEJLuezoVZol4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugi29yf0zE6x2XgCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgibxwaRsRzqcngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgiX3pQeTVVv-HgCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgifjmBcwAJK4ngCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
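A batch response like this has to be parsed and checked against the coding scheme before the codes can be trusted downstream. Below is a minimal validation sketch in Python; the allowed value sets per dimension are inferred from the sample output above, not from the full codebook, so treat them as assumptions.

```python
import json

# Allowed codes per dimension, inferred from the sample batch above.
# ASSUMPTION: the real codebook may define additional values.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "user", "company", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-schema rows."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        missing = ({"id"} | ALLOWED.keys()) - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing fields {missing}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: unknown {dim} value {row[dim]!r}")
    return rows

# One row from the batch above, passed through the validator.
raw = ('[{"id":"ytc_Ugi29yf0zE6x2XgCoAEC","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}]')
coded = validate_batch(raw)
```

Rejecting the whole batch on a single bad row is a deliberately strict choice; a production pipeline might instead log and skip off-schema rows so one hallucinated label does not discard the other nine codes.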