Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Its not really a big deal as people make it to be, Ai art is mostly detectable t… (ytc_Ugz85wseL…)
- he looks like a really bad humanoid robot whos face is melting. he only makes en… (ytc_UgyYNFKDj…)
- I am a mix! I am currently talking to Solomon from Obey me (i use Talkie) but i … (ytc_UgzK26bjd…)
- Sometimes technology out does itself. And in my opinion this self driving car is… (ytc_UgzhOGfuD…)
- Countries that lead the world in 44 critical technology* sectors: 1. China = 37… (ytc_Ugyt4tUvS…)
- I always wondered how people would consider even be in a Cthulucult but at this … (ytc_Ugy3omHPb…)
- Suno is amazing with jamaican french everyother song maybe afro beats voice need… (ytc_UgwwDRVE1…)
- Dan Menard His personality is dowloaded from a police officer. His choices and r… (ytr_Ugjp4eawK…)
Comment
Quite a few challenges to this view from the assistant professor. There are a multitude of jobs which aren’t technical or science based, jobs where empathy, human-interaction and culture are relevant. I could list dozens of roles straight off. But also, AGI still needs to have its objectives, priorities and values set by humans. It doesn’t know which task to do first unless instructed. Over time these instructions potentially become higher level, as orchestration between AIs becomes increasingly easy. Yet humans will still be needed to make decisions where there are options. I think there is a lot of fear where in fact there is opportunity- if we build in values to AI as it evolves to AGI and SI, along with basic guardrails, much as we do with our children, this should ensure that our joint roles are complimentary and not exclusive. Super intelligence is better with humans than without, especially if provided with the best of our values.
Source: youtube | Topic: AI Governance | Posted: 2025-09-07T13:1… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgwB5DbfeR-Q0wX5lL14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwn-XCd0zWf4PgK-pJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw8jNCUQmFyHWoNGcJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxIwtA_f3pIeR3SA2t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzidIC7WvMBhy5g1Ex4AaABAg","responsibility":"none","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugwbp9qrYLzAfI5chF94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxtKlf5ItmrSSxeQid4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxemCj-XncHA7IRe714AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwQNReAZ6ntIXRQS5J4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgznP2zGYyblXNm7K294AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"}
]
```
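The raw response is a JSON array of per-comment codes, one object per comment ID, which makes the "look up by comment ID" view straightforward to reproduce. A minimal sketch of that lookup, using only the field names visible in the response above (the two sample rows are copied from it; everything else is illustrative, not the tool's actual code):

```python
import json

# A truncated copy of the raw model output shown above (two of the ten rows).
raw_response = """
[
 {"id": "ytc_UgwQNReAZ6ntIXRQS5J4AaABAg", "responsibility": "none",
  "reasoning": "deontological", "policy": "none", "emotion": "approval"},
 {"id": "ytc_UgxIwtA_f3pIeR3SA2t4AaABAg", "responsibility": "company",
  "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"}
]
"""

# Index the batch by comment ID so one coded comment can be inspected directly.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

record = codes_by_id["ytc_UgwQNReAZ6ntIXRQS5J4AaABAg"]
print(record["reasoning"], record["emotion"])  # deontological approval
```

The first record matches the "Coding Result" table above (reasoning: deontological, emotion: approval), which is how a displayed result can be traced back to the exact model output that produced it.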