Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Hey I guys I just made some of the most biologically realistic ai, while not sac…" (ytc_UgyiTINIE…)
- "Bro my Poly ai chats are worse like I mean more worse if someone threatens to le…" (ytc_UgzujpGyG…)
- "Bring into the conversation Professor Miguel Nicolelis (Professor in the Departm…" (ytc_UgwKT-8j5…)
- "Thank you for your comment! Sophia definitely provides some thought-provoking in…" (ytr_UgzO7tFAd…)
- "The idea of using AI to enhance productivity is quite intriguing; I've been expe…" (ytc_UgxgUmNrE…)
- "The part you're missing is that they tried training the AI on non-copyrighted ar…" (ytc_UgwwwyiXf…)
- "They are making a huge mistake..but robot Han knows what is up...he is telling y…" (ytc_Ugz5bggTz…)
- "Here's my thing though; I hate humanity. I especially detest the vanity of think…" (ytc_UgxM8laBM…)
Comment

> we have not achieved strong ai but weak ai which is sophistication of automated instruction. weak ai is dangerous because of leaders who choose to believe in it and assume the assumptions of its decision making have wisdom rather than mere repetitive series of fixed instructions pattern just like in chess computer. it is impt to make those who choose to rely on ai to be responsible and accountable with their lives

Source: youtube · AI Governance · 2025-06-16T11:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyhPETlAUy35Alrn2J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw1nEUgXfIt7LLuejF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxgPW6rCy7paYRJuz94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgyhKGopKNaLRyK29UJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx-GdmuiRRHN2vccCd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzQxpA_JXRUjwkjySl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw0p18807wntT9j7314AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyinjspiVuTNhnzu7p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyeXPx7zO5ARJ4QPrl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwFed3csJsg1KzBfGJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]