I am probably giving most of them too much credit, but I think some of them took the Bitter Lesson and learned the wrong things from it. LLMs performed better than originally expected just from in-context learning, and (apparently) scaled better with bigger models and more training than expected, so now they think they just need to crank up the size and tweak things slightly (i.e. "prompt engineering" and RLHF), and they don't appreciate the limits built into the entire approach.
The annoying thing about another winter is that it would probably result in funding being cut for other research. And laymen don't appreciate all the academic funding that goes into research for decades before an approach becomes interesting and viable enough to scale up and commercialize (and then get overhyped and oversold before some more modest practical usages become common and are relabeled as something other than AI).
Edit: or, more cynically, the leaders and hype-men know that algorithmic advances aren't an automatic dump-money-in, get-a-disruptive-product-out process, so they don't bother putting as much monetary investment or hype into them. Compare the attention paid to Yann LeCun talking about algorithmic developments vs. Sam Altman promising grad-student-level LLMs (as measured by a spurious benchmark) in two years.
A sneer classic: https://www.reddit.com/r/SneerClub/comments/131rfg0/ey_gets_sneered_on_by_one_of_the_writers_of_the/