Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytr_UgwNGEVC9… — "Some experts say from the taxes the AI-company’s pay, since they will make money…"
- ytc_Ugw9kbOmQ… — "The most useful wisdom often comes at a high price. But you’re right about all …"
- ytr_UgzfKh6K0… — "@pimas11 generative ai technology has only been release for a couple of months a…"
- ytc_Ugyu7qpl5… — "I'm sick and tired of generative AI to be honest. Artificial itelligence can be …"
- ytc_UgzFyIQ_k… — "Bro when my sister found out about my ai chatting I saw her and um they were 10x…"
- ytc_UgwBGAy4t… — "Sophia: see humans are dysfunctional need to replace ...make a new Atom & Eve…"
- ytc_UgyFOezL_… — "It's so wild to me that the AI companies can just steal all the data and get awa…"
- ytc_UgyP_cNNd… — "1:07:22 If the AI team considers the patient's profile and determines they might…"
Comment
If the question of whether a AI is a person, try testing whether or not the AI meets the preferred criteria of personhood discussed in the last episode about personhood. If a strong AI was tested against the cognitive criteria it may be able to pass all requirements due to its own psychology and social ability's. While testing it against the genetic criterion its a automatic failure due it not being organic. So I find the first step to solving the question of whether AI can be a person or not, lies with the most popular philosophical belief of personhood and its criteria chosen by society.
Source: youtube · Posted: 2016-08-09T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[{"id":"ytc_Uggeu_dL2yyGR3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgiIBZ-cQU9HDHgCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgiV2FgtcXmuBngCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UggzYa8S3hn_p3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Uggl5ij_czn1Y3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgizldNKvQmfYHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UghZuYnwWCnE53gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghqISwFTBtRP3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UghudDn8bG56WHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgiEeEdmu4MF33gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"}]
```
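The "look up by comment ID" workflow above can be sketched in a few lines: parse the raw batch response as a JSON array and index the records by their `id` field. This is a minimal illustration, not the tool's actual implementation; the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown above, and the two sample records are copied from it.

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# (Two records copied from the batch shown above, for illustration.)
raw_response = """[
  {"id": "ytc_Uggeu_dL2yyGR3gCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgiIBZ-cQU9HDHgCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "indifference"}
]"""

# Index the batch by comment ID so any coded comment can be looked up directly.
records = {rec["id"]: rec for rec in json.loads(raw_response)}

coding = records["ytc_UgiIBZ-cQU9HDHgCoAEC"]
print(coding["reasoning"])  # -> deontological
print(coding["emotion"])    # -> indifference
```

The same dictionary lookup backs the inspector view: given an ID such as `ytc_UgiIBZ-cQU9HDHgCoAEC`, the tool can retrieve and display that comment's exact coded dimensions.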