Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "As a former comp sci major, I want to let you know that Gen Ai was always meant …" (ytc_Ugzgm56hx…)
- "On a global scale humanity will undergo a change never seen before. No individ…" (ytc_Ugyczapm3…)
- "Analogies are complex and the precision degree allows for the case made to be re…" (ytc_Ugz9saTLj…)
- "AI and AI technology are immoral technologies, they should be heavily regulated …" (ytc_Ugx7fwuQM…)
- "I have a different take on this. I have 22 patterns in physics and mechanical en…" (ytc_UgwRmJaJ5…)
- "Yeah im sure ai will bever understand contrast lol. This post is just wrong, ai…" (ytc_UgzUBG4Iw…)
- "There's some AI app using this video, just putting there logo over the ChatGPT l…" (ytc_Ugx4blb7O…)
- "@Omkar-4510j AI is also comes under some kind of software that requires coding …" (ytr_UgzkpDPuv…)
Comment
There are also some major ethical problems with an AI romantic partner.
Can the company just infinitely raise prices and force the user to pay or give up a serious emotional attachment? Can the user transfer the AI to another service? Can the company code the AI in such a way that it makes the user more likely to become emotionally attached, e.g. the way tobacco companies and casinos engaged in ways of making their consumers more addicted. What if this happens implicitly, instead of explicitly— what if the AI learns to teach the user to sabotage their real life relationships so that the user becomes even more reliant on the AI.
Something even more malicious: once a user is hooked, can the company use the emotional attachment to the AI to persuade or coerce the user into doing something like vote differently?
Source: reddit · Topic: AI Governance · Posted: 1732740623.0 (Unix timestamp) · ♥ 52
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
{"id":"rdc_lzavsps","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"rdc_lzb3x3y","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"rdc_lzc4rhj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_lzazkzj","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"rdc_lzaudj1","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]
```
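A response like the one above can be turned into the per-comment "Coding Result" view by parsing the JSON batch and indexing each record by its comment ID. The sketch below is a minimal, hypothetical example of that step: it assumes only the five keys visible in the raw response (`id` plus the four dimensions from the Coding Result table), and the helper name `index_codings` is illustrative, not part of any real pipeline.

```python
import json

# Truncated two-record sample in the same shape as the raw LLM response above.
RAW_RESPONSE = """
[
  {"id": "rdc_lzavsps", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_lzb3x3y", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]
"""

# The comment id plus the four coding dimensions shown in the result table.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch coding response and index each record by comment id."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            # Reject malformed records instead of silently storing partial codings.
            raise ValueError(f"record {rec.get('id')!r} is missing {sorted(missing)}")
        indexed[rec["id"]] = rec
    return indexed

codings = index_codings(RAW_RESPONSE)
print(codings["rdc_lzavsps"]["policy"])  # -> regulate
```

Validating the keys up front means a truncated or malformed model response fails loudly at parse time rather than surfacing later as a blank cell in the coding table.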