A common challenge for Bayesian approaches to modeling perceptual behavior is that the two fundamental Bayesian components, the prior belief and the likelihood function, are formally unconstrained. Here we argue that a neural system that emulates Bayesian inference is naturally constrained by the way it represents sensory information in populations of neurons. More specifically, we apply an efficient coding principle that creates a direct link between prior and likelihood based on the underlying stimulus distribution. The resulting Bayesian estimates can show biases away from the peaks of the prior distribution, a behavior seemingly at odds with the traditional view of Bayesian estimates yet one that has been reported for human perception of visual orientation. We demonstrate that our framework correctly predicts these repulsive biases and show that the efficient encoding characteristics of the model match the reported orientation tuning properties of neurons in primary visual cortex. Our results suggest that efficient coding is a promising hypothesis for constraining neural implementations of Bayesian inference.
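The core mechanism can be illustrated with a short simulation. The sketch below (our illustrative construction, with hypothetical parameter choices, not fitted values from the paper) implements one common reading of the efficient coding principle: the stimulus is mapped through the cumulative distribution function of the prior, so that coding resources are allocated in proportion to how often each stimulus occurs. Because the prior is then uniform in the encoded space, the posterior there is approximately Gaussian, and mapping back through the inverse CDF, which is convex wherever the prior falls off, skews posterior-mean estimates away from high-density regions of the prior.

```python
import numpy as np

# Illustrative sketch, not the paper's implementation: a 1-D observer whose
# encoding follows an efficient-coding principle (stimulus mapped through the
# CDF of the prior), with Gaussian noise that is constant in the encoded space.

theta = np.linspace(-np.pi / 2, np.pi / 2, 2001)   # orientation-like variable
dtheta = theta[1] - theta[0]

# Hypothetical prior peaked at 0 and +/- pi/2 (a stand-in for cardinal orientations)
prior = 3.0 + 2.0 * np.cos(4.0 * theta)
prior /= prior.sum() * dtheta                      # normalize to a density

# Efficient encoding: internal variable u = F(theta) is approximately uniform
F = np.cumsum(prior) * dtheta

sigma = 0.1                                        # sensory noise in u-space (assumed)
rng = np.random.default_rng(0)

def posterior_mean_estimates(theta0, n_trials=2000):
    """Simulate noisy encodings of theta0 and decode each with the posterior mean."""
    m = np.interp(theta0, theta, F) + sigma * rng.standard_normal(n_trials)
    # Likelihood of each theta for each measurement (Gaussian in the encoded space)
    like = np.exp(-(m[:, None] - F[None, :]) ** 2 / (2.0 * sigma ** 2))
    post = like * prior[None, :]
    post /= post.sum(axis=1, keepdims=True)
    return post @ theta                            # one estimate per trial

theta0 = 0.3                                       # test stimulus slightly off the prior peak at 0
bias = posterior_mean_estimates(theta0).mean() - theta0
print(f"mean bias at theta0={theta0}: {bias:+.4f}")
```

In this construction a positive mean bias at `theta0 = 0.3` indicates repulsion away from the prior peak at 0; by the symmetry of the assumed prior, a stimulus at `-0.3` would be biased in the opposite direction. The repulsion follows from Jensen's inequality: where the prior is decreasing, the inverse CDF is convex, so the expected decoded value exceeds the decoded expected value.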