Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
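Below is a minimal sketch of how such a lookup might work, assuming each coding call's raw output is saved as a JSON array of per-comment records in its own file; the directory name and file layout here are hypothetical, not the tool's confirmed storage format.

```python
import json
from pathlib import Path

def find_raw_response(comment_id: str, responses_dir: str = "raw_responses"):
    """Return (filename, record) for the first raw batch that codes comment_id.

    Assumes one JSON file per coding call, each holding a list of
    per-comment coding dicts with an "id" field (hypothetical layout).
    """
    for path in sorted(Path(responses_dir).glob("*.json")):
        try:
            records = json.loads(path.read_text())
        except json.JSONDecodeError:
            continue  # skip batches the model returned as malformed JSON
        for record in records:
            if isinstance(record, dict) and record.get("id") == comment_id:
                return path.name, record
    return None, None

# e.g. look up one of the sampled Reddit comments shown below
print(find_raw_response("rdc_n9s3pfb"))
```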
Random samples — click to inspect
- "Eh I doubt that, midlevel managers are often the people with the best holistic v…" (`rdc_n9s3pfb`)
- "I love that early video games like Space Invaders played a role in AI developmen…" (`ytc_Ugwx4DZ3Y…`)
- "I need to know whether A.I , if given a target, would care about the innocent li…" (`ytc_UgyoLy3FJ…`)
- "I this guy made some really good points. I completed disagree with his opinion o…" (`ytc_UgxV_Fb85…`)
- "Digital dementia of the modern age. Read a whole paragraph without an AI synopsi…" (`rdc_nkdedsg`)
- "As a pre-university student, I hate AI. Having to constantly compare my own writ…" (`ytc_UgxXpNgrv…`)
- "False flag strawmans against privately hosted AIs to maintain the control over i…" (`ytc_Ugz6yweQI…`)
- "We appreciate your engagement with the video! It's fascinating how interactions …" (`ytr_Ugytg_jWr…`)
Comment
AI in medicine has the potential to bring about significant benefits in terms of improved patient outcomes, more efficient diagnoses, and reduced healthcare costs. However, there is also a risk of harm if AI is not used ethically and with caution. One significant ethical concern is the potential for maleficence, or harm caused by the misuse or unintended consequences of AI.
For example, if an AI system is not properly trained or validated, it could make incorrect or biased decisions that harm patients. Additionally, if AI is relied upon too heavily, it could lead to dehumanization of healthcare, with patients reduced to mere data points and algorithms. It is therefore essential that those developing and implementing AI in medicine prioritize ethical considerations and take steps to ensure that the technology is used safely and responsibly. The potential benefits of AI in medicine are vast, but we must also be mindful of the potential risks and take steps to mitigate them.
youtube
AI Harm Incident
2023-04-20T02:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgzLZDICQoncahhls0F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_1JJeK8TzMzkjy6t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEGDKxxu1yWrFQVnd4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgyLP9muwFMbN2nQu2t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyXhvWiHUq0OTWc-0N4AaABAg","responsibility":"clinicians","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzSPzDcK6PFdJ3Oojl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy_HBzp_P0JVbolLNV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwMhwgSlKwTbgBuVrZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQJf9HJVirqehJ_IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyYEZm5B8_kno6PlCB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"})