Honestly, it just makes me sad for future generations.
LLMs aren't like the computers we've grown accustomed to, where we rely on the output being accurate because it's produced by distinct, repeatable mathematical operations. An LLM can't give any feedback on the accuracy of its output, because it is simply predicting a sequence of tokens based on its training data. We've already seen cases where professionals with advanced degrees tried to pass off its output without scrutiny, and it backfired spectacularly.
As students, who by definition lack context and experience in a given field, make use of this technology, I fear they will overestimate their own skills far past the point of no return, with catastrophic results over the long run. Perhaps I'm being a bit overly pessimistic here, but I have trouble identifying any mitigating factor that offers much reassurance.