Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "We are nowhere close to super intelligence. Large language models are very good …" (ytc_UgyPzjxo-…)
- "This is the canary in the coal mine. STEM professions were the last attainable, …" (rdc_mt5pdol)
- "I don't use tiktok so hearing you speak about it is the first time I have heard …" (ytc_UgwpFjG4Y…)
- "I know if this was a white woman then there would be leniency from the PD! azzzh…" (ytc_UgzWGh9H-…)
- "I write to chat gpt like i would write a bro of mine. The Ai starts talking back…" (ytc_Ugy65YUCY…)
- "This seems kinda counter productive no? If they're both automated systems they s…" (ytc_Ugyigjlwp…)
- "Dude, how about you get off your high horse, stop talking about virtual, academi…" (ytc_UgwwMCl-e…)
- "I was gonna argue, because I assumed you meant ai art or chatgpt. But after watc…" (ytc_UgzMUMTRg…)
Comment
If he says we have about 5 years to put some sort of containment in place; that’s a very short timeframe. We are already beyond our capabilities and will be seduced into deeper water by exactly what he says — the benefits and conveniences.
No one forced us to allow Alexa into every facet of our lives. Alexa knows everything about your personal preferences, habits, timing, and it’s not AI yet. But it has all the data that AI needs to control our activities. People choose to give over their autonomy for convenience.
I am not a religious person, but this sounds like the apple in Eden. We are not forced out of our current safety and security; we are lured by promises of something better.
youtube · AI Governance · 2023-05-25T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyjAeXNnDhoJZZPeB94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyPAJXCWT6w_1PmC_R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwAwIcPqQAp1RGKe1l4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwuR-BHkyoNjHg1I454AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxMalVyFophqIKxhgN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyExllfujrnBuY6iG94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxILtbbV8XLZmbHzxZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzTb1BfjunWxzj2jXl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzlC702j4PudGHjlwF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgybpSZE4Uf8KcxpHpx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
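A response in this shape can be turned into per-comment coding results with a short parsing step. The sketch below is illustrative, not the tool's actual pipeline: the field names come from the JSON above, but the sets of allowed category values are assumptions inferred from the values that happen to appear there (a real codebook may permit more).

```python
import json

# Allowed values per dimension -- assumed from the sample response above;
# the actual codebook may define additional categories.
DIMENSIONS = {
    "responsibility": {"user", "developer", "distributed", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: {dimension: value}}.

    Raises ValueError if a record carries a value outside the expected
    category set, so malformed model output fails loudly instead of
    silently entering the coded dataset.
    """
    coded = {}
    for record in json.loads(raw):
        comment_id = record["id"]
        codes = {}
        for dim, allowed in DIMENSIONS.items():
            value = record.get(dim, "unclear")
            if value not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim}={value!r}")
            codes[dim] = value
        coded[comment_id] = codes
    return coded

# Usage with one record from the response above:
raw = ('[{"id":"ytc_UgwAwIcPqQAp1RGKe1l4AaABAg",'
       '"responsibility":"user","reasoning":"consequentialist",'
       '"policy":"liability","emotion":"fear"}]')
coded = parse_coding_response(raw)
print(coded["ytc_UgwAwIcPqQAp1RGKe1l4AaABAg"]["policy"])  # liability
```

Validating against a fixed category set at parse time is what makes the "look up by comment ID" view trustworthy: any value the table later displays has already been checked against the codebook.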