Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples:

- "People keep looking for security as if it were a solvable equation, but security…" (ytc_Ugz7bdQaU…)
- "I have a newer Tesla model Y and have been using FSD 13. One definitely needs t…" (ytc_UgzfKqQq1…)
- "Clearly u don't know what sales is..... This is probably replacing call center f…" (ytc_UgziF3OFw…)
- "Learn how to problem solve. Handyman can do many things AI cannot deal with. I…" (ytc_UgwNnuJDA…)
- "Yeah. No, to this list. Writers? LoL. Ai is not creative. Ai is at best sticber …" (ytc_UgzeKj-MT…)
- "Totally ridiculous! No there should not be an AI which is smarter than us. you'r…" (ytc_Ugz_d5cJF…)
- "Lol computer man is SO SMART he cant imagine a world with NO COMPUTERS Jfc YES…" (ytc_UgyNWHWPA…)
- "In fairness very few humans have emotions unless faced with their own mortality …" (ytr_UgxNeMogG…)
Comment

> A super intelligent machine could analyze human history and understand the complexities of good and evil. If designed with the right intentions, it might prioritize actions that foster good outcomes for humanity. However, if programmed with harmful objectives, it could pose a risk.
> What kind of safeguards do you think should be in place to ensure AI acts in humanity's best interests?

youtube · AI Governance · 2025-12-24T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugy4T5UmRxtAqFNsuLh4AaABAg.AR4eVOubbApAR6yA87uKL1","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytr_Ugwg8JX0OF3QY5D3TQl4AaABAg.AR4dVGMqOtNAR6yeo8pz6N","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwZAMsl34DUaMr9JnB4AaABAg.AR4T-cFn_l7AR6zKhRGsZ0","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgxMiRa84AqKg1o9LH14AaABAg.AR4EzBUqcWxAR7-uY8wYqA","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgzsVMLmwaMXcOJRqbJ4AaABAg.AR4CofgPducAR70WHKW6UI","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzVRWFxrX53EdkF5KZ4AaABAg.AR4ADqaGHSHAR717tO8Nip","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytr_UgzoaI5hHkB34Cafyc14AaABAg.AR3xzM8Hmd0AR72Dg-09XW","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
  {"id":"ytr_Ugzb2AFfMfIczZsuA0l4AaABAg.AR3rxHiyXu5AR72j8nO2rp","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgyepHd5dAiNNR70POp4AaABAg.AR3nJcBxn8KAR73_-aCD4x","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytr_UgzfpPRmuOnv31pHTNx4AaABAg.AR3Xf6P9gZrAR7B9rnMRM8","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
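The raw LLM response is a JSON array with one record per comment ID, each carrying the four coding dimensions shown in the table above (responsibility, reasoning, policy, emotion). A minimal sketch of turning such a response into a lookup table keyed by comment ID, assuming Python; the function name and the skip-malformed-records policy are illustrative, not part of the tool itself:

```python
import json

# The four dimensions each coded record carries, per the schema above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of records)
    into a dict keyed by comment ID. Records missing an ID or any
    dimension are skipped rather than raising."""
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid or any(d not in record for d in DIMENSIONS):
            continue  # malformed record: skip it
        coded[cid] = {d: record[d] for d in DIMENSIONS}
    return coded

# One record taken verbatim from the response above.
raw = '''[
  {"id": "ytr_Ugy4T5UmRxtAqFNsuLh4AaABAg.AR4eVOubbApAR6yA87uKL1",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "approval"}
]'''
coded = parse_coding_response(raw)
print(coded["ytr_Ugy4T5UmRxtAqFNsuLh4AaABAg.AR4eVOubbApAR6yA87uKL1"]["policy"])  # regulate
```

Skipping malformed records instead of raising keeps a single bad line from discarding an entire batch; the skipped IDs could then be re-queued for coding.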