Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "So google think it is possible to create a sentient AI but have policy against i…" (ytc_UgzvzZrH3…)
- "Chat GPT doesn’t “know” what it says. Its machine-learning algorithm only predic…" (ytc_UgzcPD91O…)
- "Can we start calling these people Ai users instead of “artists” / Putting art/ar…" (ytc_UgzO-sf2Y…)
- "Why tf would you use Ai for these things? / Why would you use AI to determine how …" (ytc_UgwTAPges…)
- "so... did they do anything wrong?? or are they just badgering the person for usi…" (ytc_UgxL6-LMW…)
- "That’s like that drive completely by itself like Waymo. I don’t know why people …" (ytr_UgxaJRWtx…)
- "It's hilarious that he thinks artists don't use AI because it's \"hard\". / Nope, n…" (ytc_UgwllYbob…)
- "(Im extremely late but who cares?) This is so frustrating, just to think ive bee…" (ytc_UgzsyMLIG…)
Comment
Before February 2025 maybe I had a bs idea of 'AI danger'; a superintelligent AI like AM or a robot 'smarter than humans': but those things are just not possible. Really, humans have to define 'intelligence' and 'creativity'; even if they do what does 'more intelligent than humans' even mean? Personally I can't see how humans can build something 'smarter' than them; it doesn't make sense. I do not think there can be a well established definition of intelligence - let alone consciousness. Aren't humans mega intelligent already with what they have achieved? Aren't humans excellent at combining pieces of information together to form new things?; aka creativity? Those super-robots are impossible to exist.
_[written on 11th June 2025 12:52am Wednesday]_
youtube · AI Governance · 2025-06-10T21:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxFLVUvQbV7r2ZYbqF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzlZRdK2bRNgMjRmKB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxgpnA3xfAaWj_9cb14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwIuMk5aRe0tJQ9dS14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyx_f4_R5NYLk1hsCF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyEsG5622zFTRs22o14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzFzZd7YS5dEHNfycF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxr7QVD_p3_1Z4ui6B4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxPR4uDXxHVpN6MJsF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx_bkLDollpcbh2dMd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
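A raw response in this shape can be parsed and sanity-checked with a few lines of Python. This is a minimal sketch: the field names are taken directly from the array above, but the required-field check is an assumption on my part, not a documented schema, and the single record used here is copied from the sample output.

```python
import json

# Raw LLM response: a JSON array of per-comment codes (one record copied
# from the sample output above).
raw = '''[
  {"id": "ytc_UgxPR4uDXxHVpN6MJsF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]'''

# Fields every record is expected to carry, inferred from the sample output.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

records = json.loads(raw)
for rec in records:
    missing = REQUIRED_FIELDS - rec.keys()
    if missing:
        raise ValueError(f"{rec.get('id', '?')} is missing fields: {missing}")

# Index by comment ID, mirroring the dashboard's lookup-by-ID workflow.
by_id = {rec["id"]: rec for rec in records}
print(by_id["ytc_UgxPR4uDXxHVpN6MJsF4AaABAg"]["emotion"])  # prints "resignation"
```

Indexing by `id` makes each code retrievable by the same comment ID shown next to the random samples, which is how a coded record is matched back to its source comment.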