Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgxoGn5cQ…`: This is fake. Thats a human. No AI has yet passed the touring test. We are still…
- `ytc_UgwgM59V3…`: 7:52 Yeah that pretty much sums up 80% of the conversations they have on this ch…
- `ytc_UgxUtw0Rj…`: Your late for dinner "I must now terminate you" even robots will eventually nag …
- `ytc_Ugy1m0yxI…`: Using an A.I to create a virus... uhm who stops you from creating a system that …
- `rdc_fcsotgs`: Welcome to data science and machine learning. Algorithmic models can find patter…
- `ytc_UgyU0ykij…`: Is this video AI generated? Who dafuq cares about a job. I am excited for AI…
- `ytc_UgwXFoTO8…`: A clear cut case of someone illegally using AI is something I can fully understa…
- `ytc_Ugwzm1wch…`: Here's an interesting theological take: the fear people have for AI is the same …
Comment
The future of AI is not a predetermined outcome but a collective construction. While the technological momentum is undeniable, the societal trajectory of AI remains within human agency. By proactively addressing the ethical dilemmas, mitigating the risks, and strategically harnessing its immense potential, humanity can steer AI towards a future where it serves as a powerful force for progress, equity, and human flourishing. The time for decisive and collaborative action is now, to ensure that the promise of AI outweighs its peril.
youtube · AI Governance · 2025-09-04T16:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
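Retrieving one comment's codes by ID, as this panel does, amounts to indexing the coded records. A minimal sketch, using two of the records from the raw LLM response shown on this page (the index-building step itself is an illustration, not the tool's actual implementation):

```python
# Build an ID -> codes index from coded records, mirroring the
# "inspect by comment ID" behavior of this panel. Field names follow
# the raw LLM response shown below; the records are copied from it.
records = [
    {"id": "ytc_UgyLDi4I5FZIG2ukBJV4AaABAg", "responsibility": "distributed",
     "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
    {"id": "ytc_UgzvzhoV4Oty4-tcpnZ4AaABAg", "responsibility": "ai_itself",
     "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
]

by_id = {rec["id"]: rec for rec in records}

print(by_id["ytc_UgyLDi4I5FZIG2ukBJV4AaABAg"]["policy"])  # regulate
```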
Raw LLM Response
```json
[
  {"id":"ytc_UgzvzhoV4Oty4-tcpnZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzeTA7O-KjP3M0EzcF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwNrDpRHoxXpuEdzpl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyLDi4I5FZIG2ukBJV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugw8OvSFi_qGHTBifbt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwCuVl4oZfzu0V766V4AaABAg","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgyyypPNmFW7uWRNbsh4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"liability","emotion":"unclear"},
  {"id":"ytc_Ugwj3aqyP4kfrQqLJWF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwSE087kD9tseUUiAx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyrcRlAVtOwY4Gf1yx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
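The raw response is a JSON array of per-comment codes, one object per comment, with one value for each of the four coding dimensions. A minimal sketch of parsing and validating such an output; the allowed value sets are inferred only from the labels visible on this page, so the real codebook may contain additional values:

```python
import json

# Allowed labels per dimension, inferred from codes visible on this page
# (assumption: not an exhaustive codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "approval", "outrage", "resignation",
                "indifference", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # malformed record: skip rather than crash
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_example","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"approval"}]')
print(len(validate_codes(raw)))  # 1
```

Filtering rather than raising keeps a batch usable when the model emits an occasional off-codebook label, which is a common failure mode for structured LLM output.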