We love to say that we need more physics in ML in part because it justifies our funding/existence. However, my somewhat subjective observation has been that, beyond simple toy problems, physics constraints tend to do more harm than good to ML efficiency.
@Sergei_Imaging The problem is that for many real-world systems we don't have physical models good enough to be close to actual reality, so a black-box learning approach typically wins. And when we do know the physics well, we don't need ML anymore.
@MaximZiatdinov @Sergei_Imaging This has also been my intuition, although I don't have any evidence to support it. Might be a different story in low-data regimes, though?
@Andrew_S_Rosen @MaximZiatdinov @Sergei_Imaging This will depend strongly on your problem. We benefited greatly from including more physics: pubs.aip.org/aip/jcp/articl… or pubs.aip.org/aip/jcp/articl… Likewise, removing physically motivated invariances and equivariances will make your model worse ...
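[A minimal sketch of the invariance point above, not from the thread itself: all function names and the toy setup are mine. Featurizing a point set by raw coordinates is not rotation-invariant, while a physically motivated featurization built from pairwise distances is, so a model trained on the latter cannot be fooled by a rotated copy of the same configuration.]

```python
import numpy as np

def raw_features(points):
    # Naive featurization: flattened raw coordinates.
    # NOT rotation-invariant: rotating the input changes the features.
    return points.ravel()

def invariant_features(points):
    # Physically motivated featurization: sorted pairwise distances.
    # Distances are unchanged by rotation and translation, so the
    # features are identical for any rigidly transformed copy.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(points), k=1)  # upper triangle, no diagonal
    return np.sort(dists[iu])

def rotate(points, theta):
    # Rotate 2-D points by angle theta about the origin.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T

rng = np.random.default_rng(0)
pts = rng.normal(size=(4, 2))
rot = rotate(pts, 0.7)

# Invariant features agree to numerical precision; raw ones do not.
print(np.allclose(invariant_features(pts), invariant_features(rot)))  # True
print(np.allclose(raw_features(pts), raw_features(rot)))              # False
```

[The same idea underlies equivariant architectures: instead of hoping a black box rediscovers the symmetry from data, the symmetry is baked into the representation, which matters most exactly in the low-data regime discussed above.]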