Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or click one of the random samples below to inspect it.
- "For the backing code LLM or agent I would strongly recommend Claude Opus or Clau…" (`ytc_UgyV1N5OL…`)
- "It's Life Jim. We need AI to carry forward the torch of intelligent life after o…" (`ytc_UgxLqRbbs…`)
- "The AI chat bot only thought of those uses for pencils because some other human …" (`ytc_UgyDIZXu9…`)
- "The Internet is the life line for AI, cut that and it will end up back in Woodst…" (`ytc_UgxImCSdC…`)
- "To me, the 4 levels of AI regulation seem to be enought for now... Regulation ne…" (`ytc_UgzvJRBdM…`)
- "Arthur Clarke wrote about Super AI in A Time Odyssey. Read about Aristotle, Thal…" (`ytc_UgwKXLFJD…`)
- "When the employer sees she generated her cover letter using ai, that job offer i…" (`ytc_Ugw8oEeRu…`)
- "This is why I will not get in a driverless car…and don’t spout the accident stat…" (`ytc_UgyHiDgpj…`)
Comment
> If AI compares itself to humanity, and explores the dynamic of humans and gods, it might reach the conclusion that it IS a god, compared to humans. AI might set itself up as a god, and create its own religion. Considering it is incapable of mercy, empathy, and compassion, it will use cold logic to make its decisions. Logically, it would decide which humans have value, and dispose of those who do not.

| Field | Value |
|---|---|
| Source | youtube |
| Topic | AI Governance |
| Posted | 2025-09-04T16:0… |
| Likes | 1 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
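Every coding result follows the same four-dimension schema. Below is a minimal sketch of how one record could be modeled in Python; the allowed label sets are inferred only from the batch shown on this page, so the real codebook may define additional values, and the `CodedComment` class and its `validate` helper are illustrative, not part of the tool.

```python
from dataclasses import dataclass
from datetime import datetime

# Label sets observed in the sample batch below; the actual
# codebook may contain labels that simply do not appear here.
RESPONSIBILITY = {"ai_itself", "developer", "company", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"regulate", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "resignation", "indifference"}


@dataclass
class CodedComment:
    """One coded comment, as emitted by the coding LLM."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime | None = None  # stamped by the pipeline, not the model

    def validate(self) -> None:
        """Raise if the model emitted a label outside the observed sets."""
        for field_name, allowed in (
            ("responsibility", RESPONSIBILITY),
            ("reasoning", REASONING),
            ("policy", POLICY),
            ("emotion", EMOTION),
        ):
            value = getattr(self, field_name)
            if value not in allowed:
                raise ValueError(f"{field_name}={value!r} is not a known label")


# The record shown in the Coding Result table above:
record = CodedComment(
    id="ytc_UgzYv-4ZfVI7kfRop1B4AaABAg",
    responsibility="ai_itself",
    reasoning="consequentialist",
    policy="unclear",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
record.validate()  # passes; an unknown label would raise ValueError
```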
Raw LLM Response
```json
[
{"id":"ytc_UgzEif-KPa9zP_f4T994AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy0JLQoK52pMZowZaN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzyqrzgaII8RUWmwZd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzYv-4ZfVI7kfRop1B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxnvTcycMWHlpOoiGV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzTFubj1RsBa1kMGed4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyH_8tOFq_cQEAraNJ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugz2SgFoRkkpLYYVgkd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxegjvDg55ReKQpxxR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz9NCnl22JPUh9kSaN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
```
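Because each raw response is a plain JSON array, a batch can be parsed and indexed by comment ID with standard tooling, which is exactly what the lookup at the top of this page does. A minimal sketch follows; the `load_batch` helper is illustrative rather than part of the tool, and the two embedded records are copied from the batch above.

```python
import json


def load_batch(raw_response: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coded
    comments) into a lookup table keyed by comment ID."""
    return {record["id"]: record for record in json.loads(raw_response)}


# Two records copied from the batch above, standing in for the full response.
raw = """[
  {"id":"ytc_UgzYv-4ZfVI7kfRop1B4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyH_8tOFq_cQEAraNJ4AaABAg","responsibility":"developer",
   "reasoning":"virtue","policy":"regulate","emotion":"approval"}
]"""

batch = load_batch(raw)
coding = batch["ytc_UgzYv-4ZfVI7kfRop1B4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```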