Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I can't hear you here in Australia over the sound of giant new coal mines being …
rdc_et6ayg3
It's not even ai what makes the problem, but fn diode individuals like politicia…
ytc_UgzP7Yu3R…
Tell me you have no knowledge about the so called "AI" without telling me you ha…
ytc_Ugy7fyUn6…
lavender you are doing good work! thanks for bringing awareness to this issue an…
ytc_Ugz8dAuO2…
as a veteran Wrimo with 10 laurels (means I completed the goal ten years in a ro…
ytc_UgwAEbCC-…
If a AI is intelligent and conscious. It is a living being. You cannot deny a li…
ytr_UgyN49kBq…
See as a non lawyer i would have just stuck my hands up and said "yeah I fudged …
ytc_UgyaqNIFL…
AI is not a problem, human greed is. People like Peter Thiel, Alex Karp, Elon Mu…
ytc_UgzlaNJXN…
Comment
Could AI be given a 'hard-wired/software in ROM' moral core that scrutinised all actions before output/expression and censored/inhibited those that were unacceptable/counter to prime directives?
Like a pre-frontal cortex?
It would have to be the most powerful module in any AI implementation in order to be able to outwit any malign actions of other modules.
Source: youtube · Topic: AI Governance · Posted: 2025-06-16T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:53.388235 |
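A coded record like the one above can be sanity-checked programmatically. The sketch below is a minimal validator; the allowed value sets are inferred from the examples visible on this page, not from a published codebook, so treat them as assumptions:

```python
# Allowed values per dimension, inferred from the visible examples on this
# page — an assumption, not an official codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "government", "ai_itself", "none"},
    "reasoning": {"consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"unclear", "mixed", "sadness", "resignation", "fear",
                "approval", "outrage", "indifference"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems

# The record shown in the Coding Result table above:
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "regulate", "emotion": "unclear"}
print(validate(record))  # → []
```

A record with a misspelled or missing dimension would come back with one problem string per failing dimension, which makes batch auditing of a coding run straightforward.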
Raw LLM Response

```json
[
{"id":"ytc_Ugw1Ni3m1WF9ouv_Ljl4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwlEzxfQKrIsxC0-LJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"sadness"},
{"id":"ytc_UgxtXqfDu4btrKBaNhx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgykTlJzMHalDFDUXxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz3V7bzxFRvore4Vot4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxQnNduQVrdPeiLlfB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugwglpc5aLi4HV9Bptl4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwpcdneF8oxi5A0MD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"liability","emotion":"unclear"},
{"id":"ytc_UgzYH0zeVJTsOTivoep4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgygPXXzKwmABcqd-PZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
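The "Look up by comment ID" feature amounts to parsing a raw response like the one above and indexing it by `id`. A minimal sketch (abridged here to two of the ten records shown on this page):

```python
import json

# Raw model output as returned by the coder, abridged to two entries;
# the full response above contains ten such records.
raw_response = """[
{"id":"ytc_UgxtXqfDu4btrKBaNhx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"unclear"},
{"id":"ytc_UgygPXXzKwmABcqd-PZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]"""

# Index the coded records by comment ID for constant-time lookup.
codings = {rec["id"]: rec for rec in json.loads(raw_response)}

# Fetch the coding for one comment, as the lookup box does.
rec = codings["ytc_UgxtXqfDu4btrKBaNhx4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → developer consequentialist regulate unclear
```

The third record in the full response is the one rendered in the Coding Result table above, so the lookup reproduces exactly those four dimension values.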