Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Ai is the tool 😂😂😂😂😂 I want to see his reaction or response when AGI will be dev…" (ytc_Ugx3m5XST…)
- "Can someone please explain to me why we're building human-looking robots when th…" (ytc_UgxHjCMjH…)
- "I don't take AI "art" as art even when its good, its only good cause it crunched…" (ytc_UgxQZLQx_…)
- "Ezra has such terrible politics in fhe first place so AI is just another way for…" (ytc_UgyPQeN67…)
- "So the AI does feel “empathic” and will help if you tell it about grandma. Was …" (ytc_UgyyIkk58…)
- "I've worked with Rekognition. You can try it out for free: https://aws.amazon.co…" (rdc_dzfssns)
- "This is true. Each account is a copy of the main model. After that, it only know…" (ytr_UgydGQ7P8…)
- "Alex is overcomplicating the "Assume billions of agents come online..." economic…" (ytc_Ugwz7iNk2…)
Comment
I've used Claude Sonnet 4.5 at work to analyze and work in my Java codebase and it's impressive what it can do which is sort of a really fancy context based text search but I can't really say it's intelligent, it's just a really good helper and still needs a lot of guidance. AI in an engineering environment is really useful to take away the grunt tasks which are repetitive or the tasks where you want to analyze a really large data set which would be too complex for a human to do. The output will still need to be validated and the "correct" response selected and refined. However, in the medium term I think the really dangerous scenario is really what humans are going to do with this technology. One can imagine what it is doing with surveillance applications or in police enforcement and military theaters - that is the scary thing - humans will use it for nefarious purposes and there will be bad outcomes.
youtube
AI Moral Status
2025-10-31T05:4…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyUSZEt_D_L-srdtY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwAR-miK3McSNbQPlh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwsUjPct9PdMZ4XAVV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwyUpBv84xu5HK-UJ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwP7jRZYjiOlpH3Ve94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyAxxpxkUA1pNFS3IF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx0bCq7miXbvb3zCFR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyTcGoRQ6hE812SaF14AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzoyNSQG_OyUFPjpMB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz_mNWRN9AgxSfaC994AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
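The look-up-by-ID view above can be reproduced from the raw response itself: the model returns one JSON record per comment, so indexing the batch by `id` recovers the four coded dimensions (`responsibility`, `reasoning`, `policy`, `emotion`) for any comment. A minimal sketch, assuming the batch format shown above; the `index_by_id` helper and the truncated two-record payload are illustrative, not part of the tool:

```python
import json

# Raw model output in the batch format shown above
# (truncated here to two records for brevity).
raw_response = """
[
  {"id": "ytc_UgyUSZEt_D_L-srdtY14AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz_mNWRN9AgxSfaC994AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
"""

# The four coded dimensions used throughout the coding-result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_id(response_text: str) -> dict:
    """Parse a batch response and key each record's dimensions by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: {d: rec[d] for d in DIMENSIONS} for rec in records}

codes = index_by_id(raw_response)
print(codes["ytc_Ugz_mNWRN9AgxSfaC994AaABAg"]["policy"])  # regulate
```

In practice a validation pass would also check that every dimension holds one of its allowed labels (e.g. `policy` in `unclear` / `industry_self` / `regulate`) before accepting the batch.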