Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Such a fantastic and interesting job not only displaying the various use cases o…" (ytc_UgxJUSQr-…)
- "Human, Angles, Robots, and AI. The end of the world, God's end of the world god …" (ytc_UgwDo8AZd…)
- "Great video! and holy sh**. The second national TV channel in Sweden (SVT2) afte…" (ytc_UgzXOkXam…)
- "As someone who works with AI and Data Engineering, I at first didn't even know h…" (ytc_Ugy3VY5BT…)
- "That one was staged. At the moment Sesame Ai is the closest thing to it, but I d…" (ytr_UgzqPDt9I…)
- "I see a great future for it. There will be kilometer-long queues of people who are already fed up with mastu…" [translated from Spanish] (ytc_Ugy1eaQAv…)
- "We’ve all seen how great it is to have cars and trucks full of computers🤦🏻♂️….t…" (ytc_UgzQLpm95…)
- "it's called a commission. you're not an artist, you didn't draw shit. you told t…" (ytr_Ugyx9doR3…)
Comment
If AI causes mass unemployment, who will buy the products and services companies will sell?
I have a humble idea for responsible AI. Each AI application and product can have a tag indicating the level of usefulness to humanity.
There could be several levels, such as Level 1 or L1 (most positive) through L5 (least positive). A second digit could signify the impact area, such as health (say, L1.1), and a final digit could signify the economic or social risk level. So, for example, an L1.1.1 application would be an AI application with profound positive impacts for humankind in the area of healthcare and little economic or social risk.
Using such a system, it would be possible to come to agreement on the impact of AI applications, provided that an independent body of global researchers agrees on the criteria determining which tag each AI application falls under.
youtube
AI Jobs
2025-10-20T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxMpFxWHh8ibHyClJd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxK7g8e17Hmwgln7kd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugy5PB-fotRZafXXH2p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwgOlyjkCPAO6I9nfR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7B067vyjyhIHQLjd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxy6KQeUJZE4Qxm8954AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy2YhZNCQyuG4VJ0Q54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgyVNbN64OfArZm0eFR4AaABAg","responsibility":"government","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5w88a7pC0pxLKIuV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxFoehHjoYK2aZLv4N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
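The raw response above is a JSON array of records keyed by comment ID, so the "look up by comment ID" feature only needs to parse and index it. A minimal sketch of that step, assuming the four dimensions shown in the coding-result table; the helper name is illustrative, and the sample data is abbreviated to two records from the response above:

```python
import json

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_by_comment_id(raw_response: str) -> dict:
    """Parse a raw LLM response and index coded records by comment ID.

    Rejects records missing any coding dimension, so malformed model
    output is caught before it reaches the dashboard.
    """
    indexed = {}
    for rec in json.loads(raw_response):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed

# Two records abbreviated from the raw response shown above.
raw = """[
  {"id":"ytc_UgxMpFxWHh8ibHyClJd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxK7g8e17Hmwgln7kd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]"""

coded = index_by_comment_id(raw)
print(coded["ytc_UgxK7g8e17Hmwgln7kd4AaABAg"]["policy"])  # regulate
```

Validating at parse time, rather than on display, means a truncated or schema-drifting model response fails loudly at ingestion instead of silently rendering blank cells.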