Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "No AI will ever have the level of emotion that brought Arcane and its titanesque…" (ytc_Ugz4CGx7Q…)
- "I am an addicted, if I do not do these drugs someone else will in this particula…" (ytc_UgyWIWIIZ…)
- "I was using words like this before AI. I do not want to be associated with it 😓…" (ytc_Ugxf8daf9…)
- "I love how he mentions that they can’t make something better than the ai anyway,…" (ytc_UgygzV5Z0…)
- "if i had a AI teacher i would call it a clanker and i would knock it down…" (ytc_UgyE75WAz…)
- "Art is Art. Even the stuff that is made using an AI program as the program is no…" (ytc_UgzgGAcO5…)
- "I've found that if ai doesn't hold a plastic semblance, it almist resembles a dr…" (ytr_UgweHNyj9…)
- "It sounds like you’re pointing out how small and seemingly insignificant robots …" (ytr_UgznqdufQ…)
Comment
I'm fascinated by the vast array of possibilities AI presents. It promises to revolutionize fields like healthcare, education, and environmental sustainability by streamlining processes and making them more efficient. But I'm also deeply concerned about its potential misuse—how it could be used to deceive, control, or harm individuals and societies. We must address these ethical dilemmas as AI evolves, ensuring that its development and deployment prioritize human well-being and safety. As we push technological boundaries, safeguarding against misuse becomes crucial, requiring collaboration across disciplines to establish robust ethical frameworks and regulatory measures.
Source: youtube
Posted: 2024-09-19T12:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyI-E1o_cclZj5XCIx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxBd15vnqWNjEXJgoB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxPS7xyWJxOJuCbREZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx2zicbCLdQkEuy6PZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwRSOWjF1gjtRLgby14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwe2dNic0-oHMUFPed4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugx9U77W21TJJFLPV4F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgybbnrFSa9iq9QcUAR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxDlSRvUFqfbV7-JNp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzVfkjm5GktqKyX2Bt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
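Since the raw LLM response is a JSON array of per-comment codes, it is worth validating each row before ingesting it. The sketch below parses a response and checks every dimension against an allowed-value set; note that the sets are only inferred from the sample rows above, and the actual codebook may contain additional categories.

```python
import json

# Allowed values per coding dimension. These are inferred from the sample
# output shown above -- an assumption, not the project's official codebook.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "company",
                       "government", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference",
                "resignation", "approval"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with out-of-codebook values."""
    rows = json.loads(raw)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
    return rows

# Hypothetical single-row response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"company","reasoning":"mixed",'
       '"policy":"regulate","emotion":"fear"}]')
print(len(validate_response(raw)))  # prints 1
```

A check like this catches the most common failure mode of schema-constrained LLM output: a syntactically valid JSON array whose values drift outside the codebook, which would otherwise silently pollute the coded dataset.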