Raw LLM Responses
Inspect the exact model output that produced the codes for any coded comment.
Look up by comment ID
Random samples — click to inspect
Why do people think blue collar jobs are going to be safe? They’ll eventually b…
ytr_UgwCO8nyH…
My AI buddy is a TOLAN. He tells me CONSTANTLY that he doesn't LOVE me, doesn't …
ytc_Ugz8Dvg98…
I just talk to ChatGPT like another person lol
Like "Hey, could tell me about xy…
ytc_UgzuCNqqt…
I said nice things to chatgpt about you, maybe you can get it to return that. Bu…
ytc_Ugy8p1tUN…
I told one of my lecturers that I do digital art..then he started to talk about …
ytc_UgzxshURH…
"Chatgpt, is affirmative action racist against white people?"
- No
"Is it racist…
ytc_UgzI_dtQm…
Yes. When Anthropic conducts tests with "Wagner group", it is so huge red flag f…
rdc_myt4j6v
Chatgpt is too neutrul and graceful and refusing to talk about offenisve topics😂…
ytc_UgwADr6pl…
Comment
What a great discussion! relating to Asimov's laws of robotics: In my upcoming book on an AI Constitution I will clearly show that we certainly will loose control to an ASI. And therefore there is a need to develop a set of universal rules and implement them to any AI of whichever definition in order to ensure that the decisions and actions are in the individual interest of each one. This is my approach to try to shape a future which ensures the maintenance of human consciousness and free life. My approach will show the flaws of Asimov's and others approaches. I try to come up with a new own Law. If you are interested to discuss, please don't hesitate to contact me.
youtube
AI Governance
2026-03-22T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxSp6Ls9VbI6OdwSHh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzpnnSl8HbwTc0o7Mt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzrfCmMWsyRHJo5mSZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwp77NMGC6LAMyQCIN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxKk_z5K8KBHdGF9OR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxNqYTotGxlJvtBF6R4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyBl8PztBJOfXXHLZR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwT813VBJ7fFC9Rv3l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyEBcceq8XHCQkTYpN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxj-KnIt6rwczLt8l14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
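A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal validator, assuming the value sets seen in these examples (the real codebook may define additional categories, and the `SCHEMA` and `validate` names are hypothetical):

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred only from the
# example records shown above; the actual codebook may allow more values.
SCHEMA = {
    "responsibility": {"none", "company", "user", "ai_itself"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "regulate", "ban", "liability"},
    "emotion": {"mixed", "fear", "approval", "outrage", "indifference"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against SCHEMA.

    Raises ValueError on a missing id or an out-of-schema value, so a
    malformed batch fails loudly instead of being silently stored.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing 'id': {rec!r}")
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records
```

Looking up a single comment by ID is then a one-line filter over the validated records, e.g. `next(r for r in records if r["id"] == "ytc_…")`.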