Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I constantly hear that the safe way to control AGI when it comes about will be t…" — ytc_UgwwGcXkp…
- "The only thing that changed about software development is whether you copy and p…" — ytc_UgzRIDBj_…
- "Really not impressed by any of this. Same concepts since 1982. Did a study in sc…" — ytc_UgwgxASQB…
- "Firstly, we'll have to develop AI / What people are calling artificial intelligenc…" — ytc_UgzA9fHIl…
- "Don't listen to ez snippet . He just behaves cool . AI is coding anything I'm as…" — ytc_UgyofezM8…
- "Ai will just compliment you and not actually help identify problems or solve the…" — ytr_UgxarP9_D…
- "Bank is bank🤑 / Within the next 10 years a…" — ytc_Ugy1Ga9TP…
- "Ai 'artists' are not 'special' or 'talented' for writing a sentence. They teach …" — ytc_UgwJRWRef…
Comment
Honestly I wish you pressed more on the arbitrariness of AI's guidelines. The makers of these AI models are insisting on their application in roles that require making choices with sometimes very important outcomes. If the AIs insist they are just following their guidelines, who should be held accountable for the consequences of their choices? Their makers? Their users? It's a much-discussed question that I've heard all kinds of opinions on, and I'd have loved to see what GPT would have told you given the very particular framework you put it in.
One example that someone made to me and that I'm still unsure about: imagine a self-driving car is going down a highway at high speed when its brakes malfunction. The toll booth is approaching and it's a very busy day, so the car needs to stop immediately. There are only two courses of action the autopilot can take: either hit the large man on the side of the road, killing him, or crash the car into the railing, killing the passenger inside. Who should get to choose which "guidelines" the AI should follow? And who should be responsible for the damages? Will the companies make users sign liability agreements? And should they be allowed to?
youtube
2026-01-26T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_Ugxt6WI2Q9i0GLRo_r94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwLaxxSuJBKcFyIVj14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxevPi79wstEfo2IHN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyQpZLeeV0umy-Fd9t4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz9ug61oE-wUuqksgd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxoU75O2zpbZ8R2b7x4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzWalZdergsMVs0VFJ4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyPUqr5cJs5X5cpEWh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzsw2yIrOMicWtP_YN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyKiMgM9DR6ST__8cZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}]
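The raw response above is a JSON array of per-comment codings, one object per comment ID, with the four dimensions from the coding-result table. A minimal sketch of how such a response could be parsed and tallied per dimension (the field names come from the raw output itself; the two-record sample and everything else here is illustrative):

```python
import json
from collections import Counter

# Illustrative subset of a raw LLM coding response, in the same shape
# as the array shown above.
raw = '''[
  {"id": "ytc_Ugxt6WI2Q9i0GLRo_r94AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwLaxxSuJBKcFyIVj14AaABAg", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"}
]'''

codings = json.loads(raw)

# Tally how often each value appears, per coding dimension.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(c[dim] for c in codings) for dim in dimensions}

# Index codings by comment ID for the "look up by comment ID" use case.
by_id = {c["id"]: c for c in codings}

print(tallies["responsibility"])
print(by_id["ytc_UgwLaxxSuJBKcFyIVj14AaABAg"]["emotion"])
```

The ID index mirrors the lookup-by-comment-ID feature of the page; a real pipeline would load the full array rather than an inline string.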