To say what some of the excellent responses above said, but in a slightly different way: quantile regression makes fewer assumptions. On the right-hand side of the model the assumptions are the same as with OLS, but on the left-hand side the only assumption is continuity of the distribution of Y (i.e., few ties). One could say that OLS provides an estimate of the median if the distribution of the residuals is symmetric (so median = mean), and under symmetry and not-too-heavy tails (especially under normality), OLS is superior to quantile regression for estimating the median because of its much better precision. If there is only an intercept in the model, the quantile regression estimate is exactly the sample median, which has efficiency of only 2/π ≈ 0.64 relative to the mean under normality. Given a good estimate of the root mean squared error (residual SD) you can use OLS parametrically to estimate any quantile. But quantile estimates from OLS are assumption-laden, which is why we often use quantile regression.
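To make the parametric route concrete, here is a minimal Python sketch (simulated data and statsmodels, not part of the original answer): under normality, the OLS-based estimate of the q-th conditional quantile is the fitted mean plus z_q times the residual SD, and it can be put side by side with the quantile regression estimate.

```python
# Minimal sketch with simulated data (assumes statsmodels/scipy; not from the original answer).
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(scale=1.5, size=n)   # normal residuals, so the parametric route is valid
X = sm.add_constant(x)

# OLS-based parametric estimate of the 0.9 quantile at x = 1:
# fitted mean + z_0.9 * residual SD
ols = sm.OLS(y, X).fit()
sigma_hat = np.sqrt(ols.scale)                  # residual SD
q = 0.9
x_new = np.array([[1.0, 1.0]])                  # [intercept, x = 1]
ols_q = ols.predict(x_new)[0] + norm.ppf(q) * sigma_hat

# Quantile regression estimate of the same quantile (no normality assumption)
qr = sm.QuantReg(y, X).fit(q=q)
qr_q = qr.predict(x_new)[0]

print(f"OLS parametric 0.9 quantile: {ols_q:.2f}, quantile regression: {qr_q:.2f}")
```

With skewed or heavy-tailed residuals the two estimates will disagree, which is exactly the assumption-ladenness of the OLS route mentioned above.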
If you want to estimate the mean, you can't get that from quantile regression.
If you want to estimate the mean and quantiles with minimal assumptions (but more assumptions than quantile regression) and with more efficiency, use semiparametric ordinal regression. This also gives you exceedance probabilities. A detailed case study is in my RMS course notes, where it is shown on one dataset that the lowest average mean absolute estimation error over several target parameters (quantiles and the mean) is achieved by ordinal regression. But for estimating just the mean, OLS was best, and for estimating just the quantiles, quantile regression was best.
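As a rough illustration (not the rms/orm analysis from the course notes), here is a Python sketch assuming statsmodels' OrderedModel as a stand-in for a semiparametric cumulative-probability (proportional odds) model: from the predicted distribution of Y at a covariate value you can read off the mean, any quantile, and exceedance probabilities.

```python
# Rough sketch only (assumes statsmodels' OrderedModel; the original analysis uses R's rms/orm).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
# Coarsen Y to a modest number of levels so the toy fit is quick;
# a real semiparametric fit (e.g. orm) handles one intercept per distinct Y value.
y = np.round(1 + 0.8 * x + rng.normal(size=n))
levels = np.sort(np.unique(y))
y_cat = pd.Categorical(y, categories=levels, ordered=True)

fit = OrderedModel(y_cat, x[:, None], distr="logit").fit(method="bfgs", disp=False)

# Predicted probability of each Y level at x = 1
p = np.asarray(fit.predict(np.array([[1.0]])))[0]

mean_hat = np.sum(p * levels)                    # estimated mean of Y given x = 1
cdf = np.cumsum(p)
median_hat = levels[np.searchsorted(cdf, 0.5)]   # estimated median of Y given x = 1
exceed_2 = p[levels > 2].sum()                   # estimated P(Y > 2 | x = 1)

print(f"mean: {mean_hat:.2f}, median: {median_hat:.1f}, P(Y > 2): {exceed_2:.2f}")
```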
Another big advantage of ordinal regression is that it is, except for estimating the mean, completely Y-transformation invariant.
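To see the invariance concretely, here is a toy sketch (same assumed OrderedModel stand-in as above, not from the original answer): because the model uses only the ordering of Y, fitting it to Y or to any strictly increasing transform of Y gives the same slope and the same predicted probabilities, so estimated quantiles simply transform along; only the mean, which depends on the actual Y values, changes.

```python
# Toy demonstration of Y-transformation invariance (assumed OrderedModel stand-in;
# not from the original answer).
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 300
x = rng.normal(size=n)
ylog = np.round(0.5 * x + rng.normal(size=n))    # coarse integer levels, for a quick toy fit
y = np.exp(ylog)                                 # positive, skewed Y with the same ordering

def fit_slope(z):
    """Fit a proportional-odds model to outcome z and return the slope for x."""
    z_cat = pd.Categorical(z, categories=np.sort(np.unique(z)), ordered=True)
    fit = OrderedModel(z_cat, x[:, None], distr="logit").fit(method="bfgs", disp=False)
    return np.asarray(fit.params)[0]             # exog coefficients come before the thresholds

# log() preserves the ordering of Y, so the fitted slope is identical.
print(fit_slope(y), fit_slope(np.log(y)))
```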