Shouldn't economists be vehemently pro-Bayesian?

Economists take a modelling approach to statistics. Bayes fits this paradigm well and makes philosophical sense. Economists get into trouble when statisticians pick apart the design aspects of their data, showing that sampling theory doesn't apply because of issues with weights or nonresponse. If you just abandoned the frequentist approach, which is all about sampling error, in favor of Bayesian calibration, you could have your cake and eat it too.
You appear to assume we do surveys. Economists are inherently sceptical of survey data, and most would try to avoid using it. Our scepticism is based on the motives and incentives of survey respondents, and we would probably view biases due to human incentives as more important than nonresponse bias or lack of representativeness due to poorly specified weights. I think your point is better addressed to sociology than economics.

What I'm describing is the usual problem you will encounter working in a company or government agency alongside statisticians and economists. Economists only care about model building, and then they freak out when the statistician starts criticizing the data collection methodology.

Look, the people you are working with in those settings are the C-team. I understand that they don't know much about where the data comes from, but you can't view them as representative of the economics profession; half of them probably aren't really economists. A bachelor's degree in economics doesn't make you an economist. Now, if the data you are working with is secondary data, then you have little control over the methodology of data collection. Most economists work with secondary data, so in that case your point is moot. In the cases where economists do work with primary data, you run into two types of people: good researchers, who are largely aware of the issues you are raising, and bad researchers, who are not. You find bad researchers in every field as well as good ones, but you can't generalize from bad researchers to practice in the field as a whole.

I think there should be room for both. Frequentism works well when you have a really strong case for identification. A Bayesian framework often takes an attitude of description rather than proof, i.e., "here's the data and some relationships we can observe," as well as perhaps, "here's how it fits into previous work on this topic (in the form of priors)."
That Muslim fertility NBER paper would probably be better served by a Bayesian multilevel approach, where you're just saying, "here are some interesting patterns in how Muslim birth timing differs from non-Muslim birth timing, with potential implications for birth outcomes and cognitive development." Instead, they assumed that a simple difference-in-differences would perfectly identify the effects of birth timing/Ramadan on cognitive development.
Bayesian theory is more philosophically consistent than frequentist methods. Ultimately, you have to care about the truth.
Frequentist methods are better suited to identifying policy-relevant parameters, brah.

Bayesian here: switching to Bayesian inference doesn't reduce the need to be careful with your data, to worry about informative/non-informative missingness, ignorability of nonresponse, identification issues, etc. There are a lot of advantages to going Bayesian, but it isn't magic.

> Frequentist methods are better suited to identifying policy-relevant parameters, brah.

Why? The logic of the frequentist approach makes no sense. Also, lol at applying the frequentist approach to a model which already reflects strong priors about the structure of the problem.

Also, the main advantages of Bayesian inference are:
1) As a Bayesian you can actually fit the model you want to fit, i.e. the one most appropriate for the data. Frequentists have to spend a lot of time worrying about things like unbiasedness and deriving asymptotic distributions, which can be very hard when fitting complex structured models. This often leads them to choose ridiculously over-simplistic models (linear, highly parametric, non-hierarchical, etc.) even when this is not appropriate. With a Bayesian approach you can just focus on the model itself, because MCMC is going to let you estimate it fine in 99% of situations.
2) Bayesian inference makes it much easier to combine data from multiple sources (categorical/continuous/etc.) and also to use hierarchical pooling to do a lot better when you have limited data. Yes, frequentists use multilevel models too, but they typically get carried out with over-simplistic assumptions (see 1), whereas the Bayesian approach lets you easily do shrinkage/pooling across multiple data sets.
3) Bayesian posterior distributions are much easier to obtain than sampling distributions, particularly when you can't appeal to asymptotics. Bootstrapping as a frequentist does get you a lot, but time-series bootstrapping is more of an art than a science, and I would personally be much more trusting of a posterior distribution than a block bootstrap.
4) Bayesian models are much better at prediction, because writing down the predictive distribution is immediate from the posterior. In contrast, frequentist predictive distributions can be incredibly difficult to obtain, particularly when the model is complex or asymptotic assumptions fail. As such, Bayesian methods tend to be more suitable for time-series prediction, or anywhere that having full predictive distributions is useful.
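To make 1) and 4) concrete, here is a minimal sketch (numpy only; the normal model, the Normal(0, 10^2) prior, and the 0.5 proposal step are arbitrary illustration choices, not recommendations): a random-walk Metropolis sampler for a normal mean, checked against the exact conjugate posterior, with posterior predictive draws obtained in one line from the MCMC output.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data: y_i ~ Normal(mu = 2.0, sigma = 1.0), sigma known.
true_mu, sigma, n = 2.0, 1.0, 50
y = rng.normal(true_mu, sigma, size=n)

# Prior: mu ~ Normal(0, tau^2). Conjugacy gives the exact posterior,
# used here only as a check on the sampler.
tau = 10.0
post_var = 1.0 / (1.0 / tau**2 + n / sigma**2)
post_mean = post_var * (y.sum() / sigma**2)

def log_post(mu):
    # log prior + log likelihood, up to additive constants
    return -mu**2 / (2 * tau**2) - np.sum((y - mu)**2) / (2 * sigma**2)

# Random-walk Metropolis: propose, then accept with prob min(1, ratio).
draws, mu_cur, lp_cur = [], 0.0, log_post(0.0)
for _ in range(20000):
    prop = mu_cur + rng.normal(0, 0.5)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp_cur:
        mu_cur, lp_cur = prop, lp_prop
    draws.append(mu_cur)
draws = np.array(draws[5000:])          # drop burn-in

# Point 4: the posterior predictive is immediate from the draws.
y_pred = rng.normal(draws, sigma)

print(draws.mean(), post_mean)          # MCMC mean vs exact posterior mean
```

The same loop works unchanged if the model gets complicated enough that no conjugate form exists; only `log_post` changes.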

(1) I can fit any model by making up parameter values. What makes the Bayesian fit better than mine?
(2) What are these "oversimplistic assumptions" restricting frequentists, but not Bayesians?
(3) Easier to obtain, maybe. More useful? Unclear.
(4) See #3.

Short story is, Bayes is better if you don't care about the frequentist properties of your statistical procedures. If you do, then you have to use methods with, well, good frequentist properties; namely, frequentist procedures. Sometimes Bayesian procedures have good frequentist properties, but when they don't (e.g., when a 95% posterior interval has close to 0 coverage), what is their rationale?
> Bayesian theory is more philosophically consistent than frequentist methods. Ultimately, you have to care about the truth.

Are you saying that frequentism is committed to a contradiction? Which contradiction would that be, exactly?
"Incoherent" is maybe the wrong word, but the standard arguments are that a) basic decision theory shows that the only admissible estimators in most models correspond to different choices of Bayesian priors, and that standard frequentist estimators are often inadmissible (the standard OLS estimator is inadmissible under mean-squared-error loss, for example; see James-Stein), b) the Bayesian axioms are the only way to do probability that avoids being Dutch-booked (which is the usual definition of coherence), and c) standard frequentist inference violates the likelihood principle.
I don't think people care much about this stuff anymore, though, because subjective Bayesianism has largely been abandoned for various reasons.
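The James-Stein point in a) is easy to check by simulation. A sketch (the dimension, the fixed truth, and the simulation size are arbitrary choices): for estimating a normal mean vector of dimension p >= 3 under squared-error loss, shrinking the MLE toward the origin lowers the risk at every truth, which is exactly what inadmissibility of the MLE means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate theta (dim p >= 3) from one observation x ~ N(theta, I).
# The MLE is x itself; James-Stein shrinks x toward the origin.
p, n_sims = 10, 20000
theta = rng.normal(0, 1, size=p)                 # an arbitrary fixed truth

x = theta + rng.normal(0, 1, size=(n_sims, p))   # repeated samples

mle = x
norms2 = np.sum(x**2, axis=1, keepdims=True)
js = (1 - (p - 2) / norms2) * x                  # James-Stein estimator

# Monte Carlo risk: average squared-error loss over repeated sampling.
risk_mle = np.mean(np.sum((mle - theta)**2, axis=1))
risk_js = np.mean(np.sum((js - theta)**2, axis=1))

print(risk_mle, risk_js)
```

The MLE's risk comes out near p, the James-Stein risk strictly below it; the same dominance holds for any choice of `theta`, which is the decision-theoretic point being made above.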

> (1) I can fit any model by making up parameter values. What makes the Bayesian fit better than mine?

I'm talking about the ability to estimate the model. With complex models that have hierarchical structure, varying dimension of the parameter space, etc., it is very hard to actually produce sampling distributions, which often leads frequentists to fit simplistic models that make stronger assumptions. In general, this is less of an issue for Bayesians, because obtaining the posterior distribution of parameters in complex models is much easier.
> Short story is, Bayes is better if you don't care about the frequentist properties of your statistical procedures. If you do, then you have to use methods with, well, good frequentist properties; namely, frequentist procedures. Sometimes Bayesian procedures have good frequentist properties, but when they don't (e.g., when a 95% posterior interval has close to 0 coverage), what is their rationale?

There is no guarantee that estimators derived from frequentist procedures actually have good frequentist properties, and Bayesian estimators often do better. The classic example is inference for binomial proportions, where the credible interval derived under the Jeffreys prior has better coverage properties than most frequentist intervals. Yes, everyone should worry about the performance of estimators under repeated sampling, but that by itself is no reason to use frequentist procedures. Frequentism isn't just a matter of worrying about repeated sampling/coverage/etc. (which many Bayesians also care about); it's a commitment to a whole host of dubious baggage, like the idea that estimators/intervals should be minimax, and so on.
Examples where 95% posterior intervals have close to zero coverage when a sensible noninformative prior has been used tend to be pathological and don't arise often. Frequentist confidence intervals can break down in pathological situations too, so that isn't really an argument.
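The binomial coverage claim above can be checked directly. A sketch (numpy only, so the Beta quantiles are estimated from random draws rather than an exact inverse CDF; n = 20 and p = 0.05 are arbitrary choices): exact coverage of the 95% Wald interval versus the equal-tailed credible interval from the Jeffreys prior, Beta(x + 1/2, n - x + 1/2), summing over all possible outcomes x weighted by the binomial pmf.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
n, p, z = 20, 0.05, 1.96

def covers(lo, hi):
    return lo <= p <= hi

wald_cov = jeff_cov = 0.0
for x in range(n + 1):
    # Exact binomial pmf at the true p, so coverage is not Monte Carlo'd.
    pmf = comb(n, x) * p**x * (1 - p)**(n - x)

    # Wald interval: collapses to a point at x = 0 or x = n.
    ph = x / n
    se = np.sqrt(ph * (1 - ph) / n)
    wald_cov += pmf * covers(ph - z * se, ph + z * se)

    # Jeffreys: equal-tailed 95% interval of Beta(x + 1/2, n - x + 1/2),
    # with quantiles estimated from beta draws to stay numpy-only.
    draws = rng.beta(x + 0.5, n - x + 0.5, size=20000)
    lo, hi = np.percentile(draws, [2.5, 97.5])
    jeff_cov += pmf * covers(lo, hi)

print(wald_cov, jeff_cov)
```

At these values the Wald interval's actual coverage falls well short of the nominal 95%, while the Jeffreys interval's coverage stays close to it; the driver is the x = 0 outcome, where the Wald interval degenerates to a point.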