Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugx0k2Nl0…` — "OMG! So much BS about politeness to the AI... First of all, being polite varies …"
- `ytc_UgwP0p-sG…` — "You need to distinguish between projects and tasks. My experience is AI is great…"
- `ytc_UgxJ4MgtN…` — "Elon told us the danger of AI in controlling human actions. Ai includes , i😢n hi…"
- `ytc_UgyX_ZWTi…` — "Extraordinary and very important interview, TY. The topics covered were very com…"
- `ytc_UgzgP8b4T…` — (French) "Thank you for thinking about and communicating the dangers that threaten us; not only can AI…"
- `ytc_UgzzAjtTD…` — "00:11:11 this is a valid point. The solution I / Geoffrey propose: good quality …"
- `ytr_Ugzmn8mNw…` — "@wshel6321 we're closer to the Singularity as Kurzweil predicted. In 30 years or…"
- `ytc_UgxaS2SqK…` — "“Ai LeArNs JuSt LiKe A hUmAn DoEs" For reference, in order for a person to thin…"
Comment

> The great question posed by the interviewer about having a collective meeting with the public, AI engineers, Ethics people, etc, etc ,etc us valid, relevant and absolutely required. Altman's response minimizing this by answering with the typical BS " we are governed by what our users want in image creation etc, completely evades to serious issues AI will cause in the world in a myriad of ways that this guy Altman refuses to discuss. The fact that the Board of Open AI in 2023 had him removed for lack of confidence, and many people have left due to safety issues bears consideration, before you all believe him. He is all about self gratification.

Source: youtube · 2025-05-10T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyOq9n2-zQVO34-viZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw6WuctUNvct2zVt4x4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy2UFYtpgquxati5MZ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw93vWgEqkcZyX16654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZ67SQGXdX8RhDbNV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwXlft9THpRo0XRekF4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyKaoov30uadXqWhtB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgytC9Sy5ai3DXaQ66N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw9GPjpySi4d5lvOwt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyfPNTqIt7V7FMyLIt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
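The raw response above is a JSON array of per-comment code assignments across four dimensions. A minimal sketch of how a consumer might parse and sanity-check such a batch before populating the coding table — note that the function name is hypothetical and the value vocabularies below are only those observed in this batch, not a published codebook:

```python
import json

# Category values observed in this batch; the full codebook may define more.
OBSERVED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "mixed"},
}


def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        codes = {dim: rec[dim] for dim in OBSERVED}
        # Reject values outside the expected vocabulary so a malformed
        # model response fails loudly instead of polluting the dataset.
        for dim, val in codes.items():
            if val not in OBSERVED[dim]:
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        coded[cid] = codes
    return coded
```

Applied to the array above, the entry for `ytc_UgyfPNTqIt7V7FMyLIt4AaABAg` reproduces the Coding Result row shown earlier (developer / deontological / regulate / outrage).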