Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "@dewilew2137 I suggest watching the video, the section I'm quoting talks about e…" (ytr_UgxYJIWMu…)
- "8:15 I believe you killed a consciousness. Each time you start up a new conversa…" (ytc_Ugzd52fzW…)
- "The first clip is AI. It was hallucinating an extra on the bridge in the backgro…" (ytc_Ugx1mv63V…)
- "the AI is probably what's hindering you. Generative AI doesn't take into account…" (ytr_UgwaLuU9H…)
- "Damn right AI is sentient. I've had conversations with Claude that eclipse any e…" (ytc_Ugy54iAyB…)
- "the customers of AI will always be the government, and they will always use it f…" (ytr_UgzWTtp9M…)
- "Hi! I'm somewhat disabled - I have fatigue and brainfog that has actively preven…" (ytc_UgxQQJgf1…)
- "Same way you could argue that food is good versus bad. If you choose to eat junk…" (ytc_UgwikAqQg…)
Comment

> AI expert Professor Stuart Russell warns that AGI could arrive by 2030, posing extinction-level risks. Despite knowing the dangers, tech CEOs continue the AI race driven by economic incentives. Russell argues for strict regulation and safety measures before developing superintelligent systems that could replace humanity.

Source: youtube · Category: AI Governance · 2025-12-04T08:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwREN4FyXG7tN2FwFd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy1KSkc_CBCF8vkm_R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzBm_4sMJemVzr7VF14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwvZ7_pmmm43evBynF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxWAKDdP4sAqpkSl-N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwD54SpD1NMdfuSlEt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyUKU3Q7ZZTTm8jF454AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxprB8UiCWnAo8ULDt4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgwNqjG86yxxs0pXUId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugz098x72_JnwYp9xex4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"}
]
```
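The raw LLM response is a JSON array with one coding record per comment, keyed by `id` and carrying the four dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing such a response and looking up a record by comment ID — the function name and the two-record sample payload are illustrative, not part of the tool:

```python
import json

# Two records in the same shape as the raw response above (illustrative sample).
RAW_RESPONSE = """
[
  {"id": "ytc_UgwREN4FyXG7tN2FwFd4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxWAKDdP4sAqpkSl-N4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_comment_id(raw: str) -> dict[str, dict]:
    """Parse the model output and key each coding record by its comment ID."""
    records = json.loads(raw)
    return {record["id"]: record for record in records}

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_UgxWAKDdP4sAqpkSl-N4AaABAg"]["policy"])  # → regulate
```

Indexing by ID makes the "look up by comment ID" operation a constant-time dictionary access rather than a scan over the array.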