Congratulations, you have now arrived at the Trough of Disillusionment.
It remains to be seen whether we can ever climb the Slope of Enlightenment and arrive at reasonable expectations and uses for LLMs. I personally believe it's possible, but we need to get vendors and managers to stop trying to sprinkle "AI" into everything like some goddamn Good Idea Fairy. LLMs are good at answering well-defined questions that are already covered by existing documentation. When the problem is poorly defined, or the answer isn't well documented or carries a lot of nuance, they do a spectacular job of generating bullshit.