Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "If AI creates hordes of poor people, the buying power is going to diminish signi…" (ytc_UgwHjH4Y3…)
- "You assume that people will be in control of the ai. That will most likely not b…" (ytc_Ugwz7meB_…)
- "Hey ai, can you take the other end of this couch and help me move it to the cent…" (ytc_UgxHiHKz2…)
- "Omg god ai is gonna take over lol take the batteries out of the toy problem sol…" (ytc_Ugwrb8gAG…)
- "AI is going to become more intelligent than humans if it's not already. Once AI …" (ytc_UgwaRZGtp…)
- "Yeah, AI IS attacking our communication by all the distorting propaganda, placin…" (ytc_UgzWdTunh…)
- "Probably the only way I'll ever have a "romantic" partner, lol. As long as her p…" (ytc_UgySLwdHp…)
- "The ACLU is way behind the curve, rudimentary system of this is already in place…" (ytc_Ugy839XDo…)
Comment
I agree with Natasha Berg’s balanced view — instead of banning it, educators should teach responsible and purposeful use.
For example, using ChatGPT to brainstorm essay ideas, outline arguments, or check first drafts can enhance learning—if students still dig into underlying concepts themselves. But as studies suggest, overreliance may lead to shallow comprehension if not guided properly.
I’d love to see structured classroom approaches: professors assign prompts like “use ChatGPT to generate 3 supporting points, then critique them in class,” or “compare AI-generated solution with your own reasoning.” This would turn a tool into a thinking partner rather than a shortcut.
What do others think — should AI assignments include a reflection component on how students used and learned from the AI, not just the final product?
Source: youtube · 2025-07-06T11:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
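The table above is a plain key-value rendering of one coded row. A minimal sketch of producing such a table from a parsed response row is shown below; the `render_result` helper and its display labels are hypothetical, only the four field names come from the raw LLM response itself.

```python
# Display labels for the four coding dimensions (labels are assumptions;
# the field names match the keys in the raw LLM response).
LABELS = {
    "responsibility": "Responsibility",
    "reasoning": "Reasoning",
    "policy": "Policy",
    "emotion": "Emotion",
}

def render_result(row: dict, coded_at: str) -> str:
    """Render one coded row as a two-column markdown table."""
    lines = ["| Dimension | Value |", "|---|---|"]
    for field, label in LABELS.items():
        lines.append(f"| {label} | {row[field]} |")
    lines.append(f"| Coded at | {coded_at} |")
    return "\n".join(lines)
```

A row such as `{"responsibility": "user", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"}` then renders to exactly the table shown above, with the coding timestamp appended as the final row.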
Raw LLM Response
[
{"id":"ytc_UgyVxX2WYKeJW-aPrJV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwiTe6cmK9fB15Vjj14AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz4FHtvpln7cRlrF8R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_Ugx_yEUNkQ8NRRnIOWt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgySEkYMgSsp9L4HA2Z4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxRmpHMtr_ocn92srh4AaABAg","responsibility":"user","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwbH0p7egMfhwHYvkV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxezRdg13xjC9_H95h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwmXmkvccU5nblGukB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxYUamX1R8w-LSB-zl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
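Since the model returns a JSON array of coded rows, each batch can be validated before storage. The sketch below assumes the allowed value sets inferred from the responses visible above; the real coding scheme may permit additional categories.

```python
import json

# Allowed values per dimension, inferred from the visible responses
# (assumption: the full coding scheme may include more categories).
SCHEMA = {
    "responsibility": {"user", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability"},
    "emotion": {"fear", "approval", "indifference"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows whose dimension
    values fall inside the allowed sets; malformed rows are skipped."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Rows with an unrecognized value (for example a misspelled category hallucinated by the model) are dropped rather than stored, which keeps the coded dataset consistent with the scheme.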