Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Random samples

- "Invitation: From Fragmentation to the Whole / The Root of the Conflict: We are cu…" (`ytc_UgzB6ktaQ…`)
- "Ai will def think they are better. They dont have to die or shit. They will see …" (`ytc_Ugzk4Z27Q…`)
- "I'm much more concerned by alignment issues than a sentient AI. Sentience can be…" (`ytc_UgzQ31Qm0…`)
- "I only see this argument in reverse, and I've never seen anyone claim to be an A…" (`ytc_Ugz59j1jS…`)
- "I have paid subscription and it turned out the same with me. I asked the same Qu…" (`ytc_UgymEw64s…`)
- "@KEVINWNEK-hq1ke none, OpenAI put their bots in the chat and thought we’d be too…" (`ytr_UgwJ40YjU…`)
- "Wow, AI is really out here providing evidence that over-policing leads to people…" (`ytc_UgzPHZ14A…`)
- "The quality of tech support declined due to outsourcing, now it becomes due to A…" (`ytc_Ugyyy4y3C…`)
Comment

> The safest jobs will be the jobs that work with humans. Masseur, Hair dressers, manicures etc. Even if an AI or a robot can do just as a good job, people will seek human interaction. You go to a hairdresser not just because you want to cut your hair, but because you also know the person, and it's a social interaction. The same is true to coaches. Funny, but I think that as AI develops we will find our humanity because that will be our last thing that we have against AI.

youtube · AI Governance · 2025-08-19T17:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwhOMGHO1Oiug6O5nJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx9hw0L7tbnKaTrwo14AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxlmjKGOXB-A63y05p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgyEXgoCwhKC31o59gZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugy9UGOmRzJ1Hv4MXV54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzY0NdWtuwJiserE614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzK39l4NZUBovUz7bR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx4Tt6GTJ6D4cU2uKh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugzdt1v8Th8lh_nab5t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxXy0ghLN7krfEqFDV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
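The raw response above is a JSON array of per-comment codes. A minimal sketch of how such a batch could be parsed into a lookup table and validated, assuming the allowed category sets inferred from the samples shown here (the real codebook may define more values):

```python
import json

# Hypothetical allowed values per dimension, inferred from the samples above.
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"approval", "fear", "resignation", "indifference", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the expected categories."""
    coded = {}
    for row in json.loads(raw):
        comment_id = row.pop("id")
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"unexpected {dim}={value!r} for {comment_id}")
        coded[comment_id] = row
    return coded

# Usage with a single hypothetical entry:
raw = ('[{"id":"ytc_example","responsibility":"none",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
codes = parse_raw_response(raw)
print(codes["ytc_example"]["emotion"])  # approval
```

Validating against a fixed category set catches the common failure mode where the model invents a label outside the codebook.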