Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Don't just trust everything doomers like this guy say. I used to believe it myself, but it turns out their presentation is one-sided and hence dishonest, and much of what they say is built on unstated and unverifiable assumptions. It's not true that we know nothing about aligning and steering these systems. Each time doomers quote unaligned behavior of today's LLMs, they conveniently fail to properly disclaim the relevant circumstances of how those results were obtained. Their definition of alignment, if you drill down, is a fantastical notion that is not applicable to any generally intelligent system by definition of how they chose to use the term. Go listen to David Shapiro to hear the other side of the story.
youtube
AI Governance
2025-09-24T07:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyvszysOPULvlnHPR54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyMPNb7NcXl_FFV5ph4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwYce413XYXAulH9W14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz2nWBFqk2uj9sWM7l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwkzn30Eif96XevIgF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxdEWYB95-zLV3zjWt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzp4mhn5StuQwrfYxN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyEMQseeu6rjo1ITd94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwtyw4SHKVnrzrCsEp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxgLCKZr76H6gYhAtp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
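A response in this shape can be consumed programmatically: parse the JSON array and index the codings by comment ID, which is how a "look up by comment ID" view could resolve a coded comment to its dimensions. A minimal Python sketch, assuming the model output is exactly a JSON array of objects with `id` plus the four coding dimensions shown above; the `index_by_id` helper is hypothetical, not part of the pipeline displayed here:

```python
import json

# Example raw model output, abbreviated to one row from the response above.
raw_response = """
[
  {"id": "ytc_Ugz2nWBFqk2uj9sWM7l4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the raw LLM response and index each coding by its comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

# Look up one coded comment and read off its dimensions.
codings = index_by_id(raw_response)
coding = codings["ytc_Ugz2nWBFqk2uj9sWM7l4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # developer outrage
```

In practice the parse step would also need to tolerate malformed model output (truncated arrays, stray prose around the JSON), which is why storing the raw response alongside the coded table, as this view does, is useful for auditing.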