Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
I really wish they’d go to a single currency and it should most definitely be ca…
rdc_et7pu3j
This was actually, in part, Sam Altman's own prediction. He thinks that a lot o…
rdc_jf6velv
11:48 No, It cannot replace a human podcaster especially one who is already very…
ytc_Ugz_JhYB0…
I’d rather have someone driving a Tesla on FSD near me than someone playing Tetr…
ytc_UgxUTic-2…
2022 was the year we rejected cryptocurrency. 2023 needs to be the year we rejec…
ytc_UgzLYmuDn…
There's no such thing as ai for humanity , Some choose to help humanity , For …
ytc_UgzcH3ZLt…
i dont think AI would try to end humanity like in the movies: fomenting wars, at…
ytc_UgxjpWWOy…
Intelligence doesn't require consciousness. GPT-4 is intelligent enough to serve…
ytc_UgwrJI0sw…
Comment
Living beings start learning experientially even prior to being born to some extent. AI can't really learn in that manner in the same sense that animals do through our physical senses.
What if AI becomes smart enough and has it's automated infrastructure, and rather than it using energy where it creates more carbon emissions and in turn raises the earth's temperature, it works to intentionally lower the earth's temperature because electronics can work more efficiently at cooler temps? Just a thought that occured to me at the end there.
youtube
AI Moral Status
2025-10-30T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyABG2BqQo_bQ0RTeF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzwKIBkTIjwF5QgSOR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzp0VQ5QCWvMSJH6-h4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwXt8u0LAlcm6JcuIJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx2mNarWuP2T8jCTfJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxCJBabiQ3Iz1EJtSp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwP2sI4oMWXokqcHV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzfXKjmHwOdcVoYIAd4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxNQQH7JScRsLDbMUp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnUXuXIdWgn0uB8bd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
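The raw response above is a JSON array with one object per coded comment, keyed by `id` and carrying the four coding dimensions shown in the result table. A minimal sketch of the lookup this page performs, assuming the raw response text is available as a Python string (the function name is illustrative):

```python
import json

def lookup_coding(raw_response: str, comment_id: str) -> dict | None:
    """Return the coded dimensions for one comment ID, or None if it was not coded."""
    rows = json.loads(raw_response)  # the model returns a JSON array of objects
    for row in rows:
        if row.get("id") == comment_id:
            # Drop the ID so only the coding dimensions remain.
            return {k: v for k, v in row.items() if k != "id"}
    return None

# For the batch above, ytc_UgxNQQH7JScRsLDbMUp4AaABAg resolves to
# {"responsibility": "ai_itself", "reasoning": "consequentialist",
#  "policy": "regulate", "emotion": "fear"}, matching the Coding Result table.
```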