Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples
- "When man creates an Artificial intelligence species that becomes unidentifiable …" (ytc_Ugy3kErEN…)
- "Machines were made to be operated by Humans. A.I. will never succeed. FOOLS VENT…" (ytc_Ugzwj0Ujp…)
- "No, no. AI art is good for the disabled. Not disabled artists, but the artistica…" (ytc_Ugw2GMsbB…)
- "As a non-artist let me explain why AI art actually sucks. When an artist paints …" (ytc_UgxIvfbIX…)
- "Copilot\bing chat is even worse. Its poems aren't good and it keeps putting "Moo…" (ytc_UgyeoUrPr…)
- "People think it's a joke when that robot mentions taking over power grids and us…" (ytc_UgyQ6eDGI…)
- "Forget loss of driving jobs. The AI used is still waaayyyyy too stupid to recogn…" (ytc_UgzDMocaQ…)
- "In last six months AI has progressed all progress in AI till date six months bac…" (ytc_UgxjpG_3Q…)
Comment
I don’t remember LLMs actually training on the chat it is currently having with a person. As far as I know, LLMs are trained with datasets made by people (not fully ofc). It is self-training, but not in a way someone would expect it to train.
If I’m wrong, please correct me.
Source: youtube · AI Moral Status · 2024-12-12T14:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgwuUk9mm3WntnqLivd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwugVFaqoAxelsA1z14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyGb8coxbBzkXFmUNl4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy7dhEVDX10thXI8SB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz4cocFifvO0vQRsHB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwsb-ejSoB0Nw8oGHd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxtwNmONwL12CvDUxl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz2a-HsniweucBKNMd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyMtF_ZBya790qMJT54AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwcKRud5yfTd9Objop4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"}
]
```
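The raw response is a JSON array with one record per coded comment, keyed by comment ID. A minimal Python sketch of the "look up by comment ID" step — the `raw_response` string and the `index_by_comment_id` helper are illustrative assumptions, not part of the tool; only the record shape is taken from the response above:

```python
import json

# Illustrative raw LLM response: same shape as the array above,
# trimmed to two records for brevity.
raw_response = """[
  {"id": "ytc_UgwuUk9mm3WntnqLivd4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgyMtF_ZBya790qMJT54AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw coding response and index each record by its comment ID."""
    return {record["id"]: record for record in json.loads(raw)}

codes = index_by_comment_id(raw_response)
# Fetch the four coded dimensions for one comment.
print(codes["ytc_UgyMtF_ZBya790qMJT54AaABAg"]["policy"])  # liability
```

A dict keyed by ID makes each lookup O(1), which matters if the same index serves many inspection requests.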