Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "This channel appears to be a promotional channel for the A.I. matter - willingly…" (ytc_UgyqBkgZ-…)
- "What this man is saying is likely true aside from the timescales. I worked in th…" (ytc_UgxiwthtZ…)
- "she has an ap that as soon as you take her home she asks if she can use your cre…" (ytc_UgxoXFrEa…)
- "Scientists warned about the development of nuclear weapons, happened anyways. Sa…" (ytc_Ugx5K3Fti…)
- "@aaaaa-dt4ni If they were made by a time travling Ai promter, believe me they w…" (ytr_Ugwzrm3bP…)
- "Sm1 who is a ai "artist" should not be considered as a artist at all tbh…" (ytc_UgzG3cKVz…)
- "This should be illegal. Why? At least for 2 reasons: (1) thousand of people will…" (ytc_UgzChMGiG…)
- "It's because it draws off pre-existing data bases an unbiased AI would require i…" (ytc_UgxQdAsIu…)
Comment
What is dangerous is not the AI itself, but the humans who use it. This is the component with the greatest potential to push human civilization toward the Great Filter—not nuclear weapons. The two differ in intention. Nuclear weapons carry an inherent reluctance to be used. AI, on the other hand, is a tool that powerfully accelerates human desires into reality—things people genuinely want to achieve.
Unlike nuclear weapons, AI can be satisfying to use, encouraging continuous development: can it solve this problem, or that one? This drive pushes its capabilities far beyond human organic computation. If the humans using it have high mental decoherence, then the direction of the AI they develop will reflect that—only with exponentially accelerating consequences.
youtube
2026-04-24T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
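The coded dimensions in the table above can be represented as a simple record type. This is a minimal sketch; the allowed value sets below are inferred only from the codings visible on this page, not from the tool's full codebook, so treat them as assumptions.

```python
from dataclasses import dataclass

# Value sets inferred from the codings shown on this page
# (assumptions, not the tool's authoritative codebook).
RESPONSIBILITY = {"user", "company", "government", "ai_itself", "distributed", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"ban", "regulate", "unclear"}
EMOTION = {"fear", "outrage", "indifference"}

@dataclass
class Coding:
    """One coded comment: four dimensions plus the comment's ID."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check every dimension against its observed value set."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)

# The coding result shown above, as a record:
c = Coding("ytc_Ugx-MY1D2Ad5m5CkDCB4AaABAg", "user", "virtue", "unclear", "fear")
print(c.validate())  # True
```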
Raw LLM Response
    [
    {"id":"ytc_UgwOAErA6RkQ6KotPFl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
    {"id":"ytc_UgzVP1MS7UdCcuoMnkh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
    {"id":"ytc_UgxzF1SngDOU2jNhumx4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
    {"id":"ytc_Ugz2ZBAHT3yo3KLqnIt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
    {"id":"ytc_UgyfFcs2SGNan4Jpre94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
    {"id":"ytc_UgwGLN81XtGazYqjlP14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
    {"id":"ytc_UgwNrtmUK6aFScj6R1l4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
    {"id":"ytc_UgwKmYeB05ONWValBzh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
    {"id":"ytc_Ugx-MY1D2Ad5m5CkDCB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"fear"},
    {"id":"ytc_Ugw2e2ZF09xUzR8Wn5B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
    ]
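Since the raw LLM response is a JSON array of per-comment records, the lookup-by-comment-ID feature above can be sketched by parsing the array and keying it by `id`. This is a minimal illustration using two records copied from the response shown; the helper name `index_codings` is hypothetical, not part of the tool.

```python
import json

# Two records copied verbatim from the raw LLM response above,
# standing in for the full array.
raw_response = """
[
 {"id": "ytc_UgwOAErA6RkQ6KotPFl4AaABAg", "responsibility": "ai_itself",
  "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
 {"id": "ytc_Ugx-MY1D2Ad5m5CkDCB4AaABAg", "responsibility": "user",
  "reasoning": "virtue", "policy": "unclear", "emotion": "fear"}
]
"""

def index_codings(llm_output: str) -> dict:
    """Parse a JSON array of coding records and key them by comment ID."""
    records = json.loads(llm_output)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)

# Look up a single comment's coding by its ID:
coding = codings["ytc_Ugx-MY1D2Ad5m5CkDCB4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # user fear
```

A production version would also handle responses where the model wraps the array in extra text or returns malformed JSON, which `json.loads` raises `json.JSONDecodeError` on.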