

Finding search terms is the one task I consistently use LLMs for. But that's not what they said; they said LLMs would replace traditional search, that traditional search is about to “go the way of the dinosaur”. I don't trust any local LLM to accurately recall anything it has read.
Not to mention that once we become dependent on LLMs, which is something big tech is trying very hard to achieve right now, it won't be all that difficult for their creators to introduce biases that bring back many of the same problems as search engines: product placement, political censorship, etc. There wouldn't be billions of dollars of investment if they didn't expect to get something out of it.
First of all, they are not FOSS. I know that seems tangential to the discussion, but it matters because biases cannot be reliably detected without access to the training data. You also shouldn't rely on humans to spot bias, because humans are themselves quite biased and will generally assume the LLM is behaving correctly when its output aligns with their own views, which can themselves be shifted in various ways over time.
Second, local LLMs don't have the benefits of free software, where we can modify them freely or fork them when there are problems. Sure, there's fine-tuning, but you don't get full control that way, and you need your own tuning data set. In practice we'd only have the option of switching products, which doesn't put us much further ahead than using the closed-off products available online.
I'm all for adding them to the arsenal of tools, but they are deceptively difficult to use correctly, which makes it hard for me to be excited about them. I rarely see anyone using these tools for the purposes they are actually good for, and the set of things they are good for is also deceptively limited.