Categorization is an important cognitive process. However, correctly categorizing a stimulus is often challenging because categories can have overlapping boundaries. Whereas perceptual categorization has been extensively studied in vision, the analogous phenomenon in audition has yet to be systematically explored. Here, we test whether and how human subjects learn to use category distributions and prior probabilities, and whether subjects employ an optimal decision strategy when making auditory-category decisions. We asked subjects to classify a tone burst into one of two overlapping, uniform categories according to its perceived frequency. We systematically varied the prior probability of presenting a tone burst with a frequency drawn from one versus the other category. Most subjects learned these changes in prior probabilities early in testing and used this information to influence categorization. We also measured each subject's frequency-discrimination thresholds (i.e., their sensory uncertainty levels). We tested each subject's average behavior against variants of a Bayesian model that led to either optimal or sub-optimal decision behavior (i.e., probability matching). In both predicting and fitting each subject's average behavior, we found that probability matching provided a better account of human decision behavior. The model fits confirmed that subjects were able to learn category prior probabilities and approximate forms of the category distributions. Finally, we systematically explored the potential ways that additional noise sources could influence categorization behavior. We found that an optimal decision strategy can produce probability-matching behavior if it utilizes non-stationary category distributions and prior probabilities formed over a short stimulus history.
Our work extends previous findings into the auditory domain and reformulates the issue of categorization in a manner that can help to interpret the results of previous research within a generative framework.
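The model comparison described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' fitted model: it assumes hypothetical category frequency ranges, a Gaussian sensory-noise standard deviation standing in for a subject's frequency-discrimination threshold, and contrasts an optimal (maximum-a-posteriori) rule with probability matching, where the response is sampled in proportion to the posterior.

```python
import math
import random

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def likelihood(x, lo, hi, sigma):
    # p(x | category): a uniform category distribution on [lo, hi] Hz
    # convolved with Gaussian sensory noise of s.d. sigma.
    return (norm_cdf((hi - x) / sigma) - norm_cdf((lo - x) / sigma)) / (hi - lo)

def posterior_cat1(x, prior1,
                   cat1=(500.0, 1500.0),   # hypothetical category-1 range (Hz)
                   cat2=(1000.0, 2000.0),  # hypothetical, overlapping category-2 range
                   sigma=100.0):           # stand-in for sensory uncertainty
    # Posterior probability that the noisy percept x came from category 1.
    p1 = likelihood(x, *cat1, sigma) * prior1
    p2 = likelihood(x, *cat2, sigma) * (1.0 - prior1)
    return p1 / (p1 + p2)

def decide(x, prior1, rule="matching", rng=random.random):
    # "optimal": pick the category with the higher posterior (MAP).
    # "matching": respond category 1 with probability equal to its
    # posterior, i.e. probability matching.
    p1 = posterior_cat1(x, prior1)
    if rule == "optimal":
        return 1 if p1 >= 0.5 else 2
    return 1 if rng() < p1 else 2
```

With equal priors, percepts well inside one category's range are assigned to it under the optimal rule, while probability matching produces graded, stochastic responses inside the overlap region; shifting `prior1` away from 0.5 biases both rules toward the more probable category.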