# Raw LLM Responses

Inspect the exact model output for any coded comment, either by looking up a specific comment ID or by drawing a random sample.

Random samples:
- "The issue is not Ai, it is the importance and significance given to the educatio…" (ytc_UgyLyENYz…)
- "@sociable_dysfunction ok... You don't like to read so I'll type it out again I …" (ytr_UgzNYUd1V…)
- "@BittenDonut2614 sometimes I can't find the pose I want. So I just make AI gen…" (ytr_Ugz-fjY-J…)
- "It's hard to build 'motherly instinct' when one of the major purposes for fundin…" (ytc_Ugy6yMwCh…)
- "Personally, i use AI as a tool rather than getting the end product itself, i gen…" (ytc_Ugxg0o0ZB…)
- "Reminds me of the guy who ordered 10 waymo taxis and just recorded them driving …" (ytr_UgzREUJVy…)
- "The Last place were people can just be as lazy as they are everywhere else. STUP…" (ytc_Ugy5owver…)
- "Idk why I laughed so hard when the robot got out of the vehicle lol Funny but te…" (ytc_Ugy62aWC9…)
## Comment

> As a programmer I can confirm that chatGPT can and will confidently give you blatantly wrong and dangerous information, never trust chatGPT
>
> The reason is simple, chatGPT, much like humans, doesn't know everything, but unlike humans, chatGPT doesn't have the ability to know when it doesn't know something, so it just makes shit up, this is an inherent weakness of LLMs and is likely impossible to solve
>
> LLMs are also vulnerable to being fed wrong, misguided, dangerous or outdated data as part of their training dataset, which means ChatGPT can also just have incorrect information, or be unable to distinguish what's correct or what isn't, or conflate two similar but also completely different concepts

youtube · AI Responsibility · 2024-08-09T20:1… · ♥ 2
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
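The table above is one coded record rendered as a dimension/value table. A minimal sketch of how such a rendering might work, assuming the four dimension names shown in the table (the helper `render_coding_result` and its signature are hypothetical, not part of the original tool):

```python
def render_coding_result(row: dict, coded_at: str) -> str:
    """Render one coded record as a markdown dimension/value table.

    `row` is assumed to hold the four dimensions used in this codebook;
    `coded_at` is the annotation timestamp string.
    """
    lines = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        lines.append(f"| {dim.capitalize()} | {row[dim]} |")
    lines.append(f"| Coded at | {coded_at} |")
    return "\n".join(lines)

# Example using the values from the record above.
record = {
    "responsibility": "ai_itself",
    "reasoning": "consequentialist",
    "policy": "none",
    "emotion": "fear",
}
print(render_coding_result(record, "2026-04-27T06:26:44.938723"))
```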
## Raw LLM Response

```json
[
  {"id":"ytc_UgzSEi0hMtaKakgNCkl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNPfsgUcD7Otxzpy94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx0I6r0k8t0K74FoVh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"amusement"},
  {"id":"ytc_UgzwEtJID-cvhSNkPTR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"amusement"},
  {"id":"ytc_Ugz4Pt7OA6Pn4wTrS4h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxLWCE696BnpYou9m94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNEh4-20nXRWDXLZt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxxXM-0GOV_fmYM3YV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugxw5UVElJPdkvdDAHB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz67E1UQsjmlZe0rYZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
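Since the model returns a JSON array, each batch can be parsed and checked against the codebook before the values are stored. A minimal validation sketch; the allowed category sets below are assumptions inferred only from the values visible in this batch, and the real codebook likely defines more labels:

```python
import json

# Assumed codebook, reconstructed from this sample batch only.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"none"},
    "emotion": {"indifference", "outrage", "amusement", "fear", "resignation"},
}
REQUIRED_KEYS = {"id"} | set(ALLOWED)

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-codebook rows."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded comments")
    for row in rows:
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"{row.get('id', '?')}: missing keys {missing}")
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row[dim]!r}")
    return rows

# One row from the response above, passed through the validator.
sample = ('[{"id":"ytc_UgxxXM-0GOV_fmYM3YV4AaABAg",'
          '"responsibility":"ai_itself","reasoning":"consequentialist",'
          '"policy":"none","emotion":"fear"}]')
rows = validate_batch(sample)
print(rows[0]["emotion"])  # fear
```

Failing fast here matters because, as the coded comment itself notes, the model will confidently emit values outside the schema; rejecting the whole batch forces a re-query rather than silently storing bad labels.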