LLM fatigue is a variation of decision fatigue, applied to AI: every day there's a new release, so you stick to the same familiar three names regardless of the newcomers' merits
I have realized that the same 2-3 models I started using heavily a few months back are the ones I continue using today for the majority of use cases. Exploring those models deeply and refining my prompting has reduced fatigue for me. There are a few exceptions, but those models I use only for very specific tasks, and not very often. I won't name the models, but I think most serious LLM developers will know which ones I am referring to.
@soumithchintala use a model router then
@soumithchintala Perplexity gives you a choice of picking the underlying LLM. And you only pay for Perplexity once.
@soumithchintala Especially because no single model is going to stay on top for long, so there’s no real point to keep switching around and having to re-optimize your prompting.
@soumithchintala Premature standardization gave us 50 years of Microsoft.
@soumithchintala ubuntu 18.04 python 3.8 yolov5 … if it works, don’t touch
@soumithchintala It happened a long time ago, when everyone ignored anything but ResNet-50 and VGG, YOLO and Mask R-CNN (maybe SSD), and U-Net.
@soumithchintala Pablo Chat gives you the choice of LLMs and keeps adding new ones...and I switch it up constantly. As a non techie AI newbie, I am quite LLM obsessed at the moment.
@soumithchintala That is just decision fatigue; it is not a variation of it
@soumithchintala Often it's easier to just wait for the new version from your current LLM provider than to switch between them.