Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "There's not only AI on this earth! Can we stop this madness! Yes, it'll be a new…" (ytc_UgwOo0EIn…)
- "It did. At least for Brazil it really fucking did. It is not our obligation to …" (rdc_jy0s5b5)
- "This video made me think about how AICarma helps brands ensure they're included …" (ytc_UgxYs9EaY…)
- "I think More Perfect Union did a video on autonomous trucks taking over high qu…" (ytc_Ugx66WiBg…)
- "Eritrea is basically the North Korea of Africa. One-party rule, out-sized milita…" (rdc_et7n3jv)
- "I personally wouldn't consider AI art even art, it steals from people and has no…" (ytc_Ugx-ct98q…)
- "Could this be because AI is trained on output from humans so like humans it too …" (ytc_UgzmagZVT…)
- "@SmartCreeper It's getting really hard to tell what is and isn't AI. Someone co…" (ytr_UgwBDvaFL…)
Comment
Bengio's talk is a balanced and needed warning: AI is not about performance improvements or intelligent applications, but about risks, specifically systemic risks. What I appreciate is his definition of mitigations—not as idealistic solutions but as pragmatic actions (policy, safety, cooperation). One thing I wish he explored more: how do we share risk fairly? Too frequently, the communities most negatively affected by abuse or accident are given least voice on AI regulation. Also, there is conflict between how quickly innovation occurs and safety precautions; he refers to it, but trade-offs require more public discussion. Overall: a useful reminder that "progress" without responsibility is dangerous.
youtube · AI Responsibility · 2025-09-14T08:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzlysxDiUdenqIwM4x4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7mLly08D68nqo8QJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugzz_fsaD9xaHarVSiZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwaaLpFqdybjYc80MR4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugxz-RDT1ONCwrxRNdt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyXlYUdzwaj97-UpJB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzew6YgnjtIuYEcggx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyhdqRoDguF4_OTzON4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy29IQsONSZQ9_0-Kl4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzGbCePyvO5_ohynT14AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
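A raw response like the one above is only usable downstream if every record carries a comment ID and valid values on all four coding dimensions. The following is a minimal validation sketch in Python; the allowed category sets are assumptions inferred from the values visible in this batch (the project's actual codebook may define more labels), and the `parse_coding_response` name is illustrative, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension. These sets are ASSUMED from the
# example batch above; the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company",
                       "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue",
                  "contractualist", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference",
                "mixed", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Hypothetical single-record response for demonstration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"liability","emotion":"outrage"}]')
records = parse_coding_response(raw)
print(records[0]["policy"])  # liability
```

Rejecting the whole batch on a single malformed record keeps the coded dataset clean; a gentler variant could instead log the bad record and re-prompt the model for just that comment ID.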