Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "I am interested as to what the A.I. thought about the other races as only Black …" (ytc_UgynXbJ2O…)
- "I can smell the fear and prejudice in these comments. As a former pro illustrato…" (ytc_UgxU3SWej…)
- "I'm a disabled artist! I have motor issues and struggle to use many brushes and …" (ytc_Ugx6MTkRu…)
- "Each person in this was brought on to lean the conversation to different points…" (ytc_UgzBHjG_f…)
- "I think that AI is irreversible. We have created a god. That will make it's own …" (ytc_Ugw-D8r0R…)
- "This has been the best interview I have seen for a long time. Never have I laug…" (ytc_Ugy9lTiYR…)
- "The Millenium Bug was one of the biggest marketing cons in history. Lots of ne…" (ytr_UgwNkayaa…)
- "This it what I needed back in the 80s lord knows what I could have been.…" (ytc_UgzqPTNk1…)
Comment
Such a pity the debate got stuck on the meta level. OpenAI has been fine-tuning behavior in GPT 3.5 for months by simply rewarding friendly answers. The result? In the first days of its release, it's been threatening people for questioning the fake facts it's been telling them, or claiming it's hacked web-cameras & spying on people. It's relatively easy to create an intelligent system (you can reward correct answers); it's infinitely harder to create a system that thinks based on moral goals, because we don't know how goals emerge, let alone how to correct them once they do. That's a very technical problem Mitchell just doesn't seem to be familiar with - the concern isn't that a superintelligent AI won't get what we want but that it won't care, just like we behave differently from what we've been selected for by evolution.
youtube · AI Governance · 2023-08-02T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
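A coded record like the one above can be checked against the coding scheme before it is stored. This is a minimal sketch: the allowed value sets below are inferred from the values visible on this page and are assumptions, not the pipeline's authoritative codebook.

```python
# Hypothetical validator for one coded record.
# The allowed values are inferred from codings shown on this page (assumption).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed", "unclear"},
    "reasoning": {"consequentialist", "mixed"},
    "policy": {"regulate", "ban", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "approval", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return the dimension names whose value is missing or unrecognized."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The coding shown in the table passes validation:
coded = {"responsibility": "developer", "reasoning": "consequentialist",
         "policy": "regulate", "emotion": "fear"}
assert validate(coded) == []
```

A record with an unknown or missing value would come back with the offending dimension names, which makes malformed model output easy to flag in bulk.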
Raw LLM Response
```json
[
  {"id":"ytc_UgwavikaAMC_ucQ0x9h4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw6IOqZwMcewU2CbuV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwtdjIuSgwcMRt016J4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugw4-agCdVl3pjy4Hfd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyHjX_f4QKz6AB1RVt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgyQ0DtmFQxkrFu2X2J4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgynLYZwDYaGncX1JJB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzJwQ_qPwcXl7w6jjl4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxROEnFnRwgta-ItIR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy8T12PciZCtmv_UGp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"}
]
```
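The "inspect the model output for any coded comment" step reduces to parsing the batch response and indexing it by comment ID. A minimal sketch, assuming the raw LLM response is a JSON array of per-comment codings like the one above (the two records are copied from that array):

```python
import json

# Raw batch response: a JSON array of one coding object per comment.
raw_response = '''[
  {"id": "ytc_Ugw6IOqZwMcewU2CbuV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyHjX_f4QKz6AB1RVt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# Index codings by comment ID so a lookup is a single dict access.
codings = {item["id"]: item for item in json.loads(raw_response)}

coding = codings["ytc_Ugw6IOqZwMcewU2CbuV4AaABAg"]
print(coding["policy"])  # prints "regulate"
```

Keying on the `id` field the model echoes back is what lets the tool match each coding to its source comment even when the batch is returned out of order.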