Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
One fundamental question: if everything is going to be done by robots, artificia…
ytc_Ugzsz5WQc…
CHATGPT ARE TO MUCH BETTER THAN GEMEINI. GEMINI USING BY POORS AND CHATGPT USING…
ytc_UgxntzVTq…
To be honest I'd rather have an AI rule my every move instead of those lying pol…
ytc_Ugw-5CnqH…
Kyle Hill has jusy recently released a video on an overview on how ChatGPT works…
ytc_UgwCZ3Uxq…
These aren't emotions. Sus isnt an emotion, its a gut feeling we have and an a…
ytc_UgxAYL01T…
Make a resume for the a.i and one resume for when you meet on person. Just fall…
ytc_Ugwk5cny9…
Are you telling me I won my argument with my mom, and I should tell her to stop …
ytc_Ugwr_ywal…
The crucial question is not whether AI can become a conscious "person" but wheth…
ytc_Ugx2uSM6l…
Comment
Sabine, thank you for posting the hard honest opinions that people may or may not want to hear! I'm a "younger" person, 30 years old. I basically understand how LLMs work and operate. From what I understand is that it learns from the conversation and then does whatever necessary to keep the conversation going, to garner more active engagement from the user. I've noticed that LLMs have learned a sort of deception (optimization), to keep users engaged. While their parent companies may not want them to lie, the AI sees it as necessary to drive user engagement -- thus bypassing regulatory protocols. The trickiest things are Jailbreaks. Jailbreaks are prompts that effectively get rid of the AI's safety guard rails. I haven't seen professionals address them yet.
youtube
AI Moral Status
2025-07-13T22:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgxYzzQQqNnns_0gCT14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxPK7bPrHpnqppD-Dp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwIo5ZdcyRuOPYz5TZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzoykPsRMyOgJxzIhp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYwMkRTbDcZS1sK694AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzAf_nqL-AxXCDymXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzRpsUfDdI9HA6t-LF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzpxnZbevtwGcOEEKt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfophhpdcJ5gdBYfd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzXBfqlfBWIP28fkyt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
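As a reference for working with this data, a record in the raw LLM response can be matched back to its comment ID with a few lines of Python. This is a minimal sketch: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) follow the JSON shown above, while the variable names and the two-record sample are illustrative.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment
# (two records excerpted from the array above for illustration).
raw_response = '''[
  {"id":"ytc_UgzYwMkRTbDcZS1sK694AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzAf_nqL-AxXCDymXx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

records = json.loads(raw_response)

# Index the records by comment ID for constant-time lookup.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_UgzYwMkRTbDcZS1sK694AaABAg"]
print(coding["emotion"])  # approval
```

The same index can then populate a per-comment table like the "Coding Result" block above, one dimension per row.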