Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
Notice he said “UNLIKELY”, not impossible. I am more of a catastrophist. I am wi…
ytc_UgwweuIWk…
Wrong. All medical staff will be replaced by robots. Humans will no longer trust…
ytc_UgzkU5J6E…
Men will always be wolves for men. ..ALWAYS !.. .& with AI <& more with SAI> 🤯 …
ytc_UgxJg1w0Z…
Granted it's a little freakish but why would an AI react like a human?
Hmmm, no…
ytc_Ugz6_NZ3T…
They are all AI. A software engineer bursts into flames when in contact with the…
ytc_Ugw9nUWmV…
Bros trying to rid himself of guilt by being the first AI CEO to mention this……
ytc_UgxP-YA8D…
AI images would be way more interesting if they spat out credits to the closest …
ytc_UgzEYW_fn…
The autonomous robot with a built-in trained model will have no easy kill switch…
rdc_l5u045q
Comment
I spent my career in CyberSecurity. Think I might have to come out of retirement and do some ethical hacking. We're going to have to blur the lines in what ethical hacking is when it comes to AI. Ethical for who? The people who's jobs will be replaced or the AI makers making billions more for themselves. When this 77 year old anti-social dweeb, who doesn't think he has any responsibility to even consider the implications of what he's done and its impact on anyone else, is telling us he's talking to AI about technical details of atomic bomb making, I choose the people.
Dweeb wants to act like Mr. Nice Guy now by giving us a warning. What good is a warning when you've already built the damn thing.
Says we need government regulation. I just said I spent a career in CyberSecurity. We have had standards, regulations and laws up to our ears for decades now. Have data breaches stopped? No, they have not.
Seriously, what an asshole dork. Hope he rots in hell.
youtube
AI Governance
2026-02-09T11:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_Ugz61dILxyCfMhtgzVV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwppycGKZG5q_6y3YJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxcUJlkp9n-fZ5Z6a94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyC2PfqlPTXtPcLU5h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy_kEjZEy99UShnRJR4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzmWa1eLKya6cOpPdt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw2xzhOX0iJhgTs4bZ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgzleXoBOi1PFck8H1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgxYmcm1UX1xtKm7cg14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgymKok4ISAHbTTJvpR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
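The raw response above is a plain JSON array, one object per coded comment. A minimal sketch of how the look-up-by-comment-ID feature could resolve an ID against it — since the sample list shows truncated IDs (e.g. `ytc_UgwweuIWk…`), prefix matching is assumed here; the function name `lookup` is illustrative, not part of the tool:

```python
import json

# Two records copied verbatim from the raw LLM response above.
RAW = '''[
 {"id":"ytc_Ugz61dILxyCfMhtgzVV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyC2PfqlPTXtPcLU5h4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''


def lookup(records, id_prefix):
    """Return the first coded record whose id starts with id_prefix,
    or None if no record matches (prefix match covers truncated IDs)."""
    for rec in records:
        if rec["id"].startswith(id_prefix):
            return rec
    return None


records = json.loads(RAW)
hit = lookup(records, "ytc_UgyC2Pfq")
print(hit["policy"])  # -> liability
```

Prefix matching is a design assumption: the UI truncates IDs for display, so an exact-match dictionary keyed on the full ID would also work if the full IDs are available behind the scenes.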