I don't trust large language model (#LLM) AIs: they're trained to sound plausible without regard for accuracy, i.e., to generate bullshit.
If you can handle that "spicy" description, please read this essay by @researchfairy describing how LLMs can be used to deliberately weaponize #SystematicReview articles. Want a topic review that plausibly supports your controversial viewpoint? Say you want to promote raw milk or decry #vaccination?