so AI isn't totally wrong that it's a safety issue, but it did kind of gloss over the real problem
Are you referring to my post? The content in my post
didn't come from AI; it came from the media. AI gave me links to internet articles, I clicked on them, and copied and pasted what the media said.
So if you think the content from the media isn't in-depth, that's not an AI problem, it's a media problem.
Also, the media has a tendency to exaggerate and oversimplify complicated issues, or to write content its readers want to read. So media content lacking depth is a common problem in general.
If you want real in-depth knowledge of a particular issue, you should read the actual opinions of the real people involved (the raw material), then summarize that raw data yourself to reach your own conclusion.
If someone lacks media literacy, avoiding AI isn't going to solve the problem. Questioning every piece of media content as potentially fake isn't a good idea either. The only good solution is to build media literacy skills so you can tell which content from AI or the media can be used and which should be questioned, and to collect or search for raw data yourself when necessary. Avoiding them completely doesn't solve the media literacy problem.