Raw LLM Responses
Inspect the exact model output for any coded comment. Look a comment up directly by its ID, or start from one of the random samples below.
Random samples
- "AI is a flawed version of a “perfect” human, it can’t do anything like us, but c…" (ytc_UgzdlVtgC…)
- "Bernie is right about this ..We are already seeing the impact of AI in the workp…" (ytc_Ugx04Xxsy…)
- "4th problem, all llm are made to be used by the lowest common denominator, or d.…" (ytc_UgxlfB5U6…)
- "The detrimental effects of these data centers is so bad that is why these exploi…" (ytc_UgwLVNURp…)
- "No, just *find a real source*. I don't understand why anyone would ever use an …" (rdc_nnkwo3i)
- "in the very near future, there will be AI bomb and countries will rush to develo…" (ytc_UgynBYndo…)
- "I don't necessarily agree with the quote, but massive problem number one is it's…" (rdc_du5jhx1)
- "I’m working on a theoretical project about human–AI symbiosis that tries to dire…" (ytc_UgxSBiA4f…)
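
Under the hood, lookup is a scan over the stored raw responses. Here is a minimal sketch in Python, assuming the batched model outputs are saved as JSON arrays in one file per batch, matching the response format shown at the bottom of this page; the directory name and helper function are hypothetical:

```python
import json
from pathlib import Path

# Hypothetical location of the saved raw model outputs: one JSON array
# (a batch of coded comments) per file.
RESPONSES_DIR = Path("raw_llm_responses")

def lookup_by_id(comment_id: str) -> dict | None:
    """Return the coded record for a comment ID, or None if not found."""
    for path in RESPONSES_DIR.glob("*.json"):
        for record in json.loads(path.read_text()):
            if record["id"] == comment_id:
                return record
    return None

record = lookup_by_id("ytc_Ugxe0SkbASwCSYMLGMd4AaABAg")
# e.g. {'id': 'ytc_Ugxe0Skb...', 'responsibility': 'ai_itself',
#       'reasoning': 'deontological', 'policy': 'none',
#       'emotion': 'indifference'}
```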
Comment
Issue:
If I have a self-driving car, then it better prioritise my survival over everyone else, because I am the passenger and I own the vehicle.
The problem is that I would rather have the software to prioritise my safety above anyone else, disregarding the wellbeing of others, if required.
If the software calculates that it would rather sacrifice my life to save others, then I am not interested in such a terrible product.
Platform: youtube
Posted: 2023-08-04T14:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
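
The table above is simply the record for this comment extracted from the raw batch response, joined with the pipeline's coding timestamp. A minimal sketch of that rendering step, reusing the hypothetical `lookup_by_id` helper above; how `coded_at` is stored is an assumption:

```python
def coding_result_table(record: dict, coded_at: str) -> str:
    """Render one coded record as the Markdown table shown above."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),  # timestamp of the coding run (assumption)
    ]
    header = ["| Dimension | Value |", "|---|---|"]
    return "\n".join(header + [f"| {d} | {v} |" for d, v in rows])
```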
Raw LLM Response
```json
[
  {"id":"ytc_UgzWBHmbGK_KD-YLgqF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwrs67iZ0M6DtHVhJx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxDz1alild_uCkEtX94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzBmLU9l4aPDLCG9W54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"fear"},
  {"id":"ytc_UgyMmeiKSs3nDpuFDxB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugxe0SkbASwCSYMLGMd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwWbKVVEPa_LBJ1XcZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzKFHCvf2mLhQDsZP94AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwKIeS8CV8W7htXuIR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx_55O1pq-KarsAbtR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
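
Each response is expected to be a JSON array with one object per comment in the batch, carrying the comment ID plus the four coded dimensions. A minimal validation sketch follows; the allowed value sets are inferred from the samples on this page, so the real codebook may define additional labels:

```python
import json

# Value sets inferred from the responses shown on this page; the actual
# codebook may allow more labels (assumption).
ALLOWED = {
    "responsibility": {"user", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "liability", "industry_self", "regulate", "ban", "unclear"},
    "emotion": {"indifference", "outrage", "fear", "approval", "resignation", "mixed"},
}

def validate_response(raw: str) -> list[str]:
    """Return a list of problems found in one raw model response."""
    problems = []
    records = json.loads(raw)  # raises if the model emitted non-JSON output
    for i, rec in enumerate(records):
        if "id" not in rec:
            problems.append(f"record {i}: missing id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(f"record {i}: bad {dim}={value!r}")
    return problems
```

Run over the array above, this returns an empty list; a truncated or malformed model output surfaces either as a `json.JSONDecodeError` or as a list of per-record problems.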