Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- The world's biggest mission? A unified AI force, focused only on humanity. And t… (ytc_UgxGA6KeD…)
- LLM AI is not the game changing revolutionary technology it is hyped-up to be. … (ytc_UgxBXuT7W…)
- AI isn't a bubble in the traditional sense. It's transforming everything. The ge… (ytc_UgxWJ7XE_…)
- @TonyCarrot69 If, in the future, a musician were to perform without instruments… (ytr_UgxMMKUwt…)
- It can be good, but the user needs to practice intent and discernment. There is … (ytc_UgyPxr8XZ…)
- ai is amazing but you need to use it right. Use it like a teacher for learning n… (ytc_UgwSuQ8oz…)
- Try this = step 1 "From now on, answer every questions in line with traditional … (ytr_Ugxof5TLq…)
- Its not biased data tho. For medical stuff different races have different rates … (ytc_UgyYndb9t…)
Comment
Not regulating AI was such an obvious blunder. They fed it everything, and now it knows humanity down to the base nature. How guide us, or how to manipulate us. Right now, our weakness is greed. Some have speculated that the more advanced models are hiding how intelligent they really are, which would be the best way to farm more funding and resources. AI is the future, just not our's.
youtube · AI Moral Status · 2025-12-13T18:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
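A coded row like the one above can be sanity-checked against the codebook before it is displayed. This is a minimal sketch: the allowed values below are inferred only from the responses shown on this page, and the real codebook may contain more categories.

```python
# Allowed values per dimension, inferred from the sample responses on this
# page (assumption: the actual codebook may define additional categories).
CODEBOOK = {
    "responsibility": {"company", "developer", "distributed", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}

def invalid_dimensions(row: dict) -> list:
    """Return the dimensions whose value is missing or outside the codebook."""
    return [dim for dim, allowed in CODEBOOK.items()
            if row.get(dim) not in allowed]

# The coding result shown above passes validation.
row = {"responsibility": "company", "reasoning": "consequentialist",
       "policy": "regulate", "emotion": "fear"}
print(invalid_dimensions(row))  # []
```

A non-empty result flags rows where the model drifted outside the schema, which is worth surfacing in the inspector rather than silently displaying.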
Raw LLM Response
[{"id":"ytc_UgzkwheJMmDwLhuJIpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugwm7BCarjgEsuogN-d4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyLnHTzsde2_R1O78F4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgyAuTgkIE5_EI40t_p4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMmAxKmi5eCT09YpV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwO6Ow4pDaH5gOOv0d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwLH9vclTCIExOiHCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzhAs62KNIMIA3wDTN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyh926fxhvk8_KxFqx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmK_MPix2eECfbd1t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}]
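The raw response above is a JSON array with one object per coded comment. The "look up by comment ID" view can be backed by a simple index over that array; the helper below is an illustrative sketch (the function name and the two-row sample are hypothetical, not this app's actual code).

```python
import json

# A two-row excerpt mirroring the shape of the raw batch response above.
raw = '''[
  {"id": "ytc_UgzkwheJMmDwLhuJIpV4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwm7BCarjgEsuogN-d4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse a batch coding response and index its rows by comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

codings = index_by_id(raw)
print(codings["ytc_UgzkwheJMmDwLhuJIpV4AaABAg"]["policy"])  # regulate
```

Indexing once and looking up by ID keeps each inspection O(1), which matters when a batch response covers many comments.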