Correctly assessing the consequences of a sequence of events is essential for successful interaction with the world. Such assessment requires not only a causal understanding of the structure of the world but also the ability to distinguish whether a given event is the result of an agent's own action (intervention) or simply a consequence of the world unfolding on its own (observation). Previous studies have shown that humans can learn causal structures and that they can distinguish interventions from observations. However, these studies almost exclusively considered causal structures in which interventions led to a simple forward conditional inference problem, where the outcome was conditioned only on the intervention itself (e.g., the "causal chain" or "common cause" problem). Thus, it remains unclear whether humans' ability to correctly interpret interventions generalizes to more complex causal structures that require integration over hidden causes. We tested human subjects in a prediction game with a fully connected "common cause" structure. This structure represents the simplest instantiation of the general class of models with hidden causes. Employing a betting structure, we were able to directly monitor subjects' beliefs in the game on a trial-by-trial basis. Given appropriate feedback, all subjects learned the conditional probabilities over the course of the game. Once these were learned, all but one subject were immediately able to correctly predict the causal effects of their own interventions. Subjects' beliefs were quantitatively very close to the values predicted by optimal causal reasoning. Our results suggest that humans' ability to distinguish interventions from observations extends to the general class of structures with hidden common causes.
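The distinction between observation and intervention in a hidden-common-cause structure can be made concrete with a minimal numerical sketch. Assume a hidden cause H that drives two observable variables A and B (all probability values below are illustrative and not taken from the experiment): observing A = 1 provides evidence about H and hence about B, whereas intervening to set A = 1 severs the link from H to A, so beliefs about B must revert to the prior obtained by marginalizing over H.

```python
# Hidden common cause H drives observables A and B (illustrative numbers).
p_h = {0: 0.5, 1: 0.5}                 # prior over hidden cause H
p_a_given_h = {0: 0.1, 1: 0.9}         # P(A=1 | H=h)
p_b_given_h = {0: 0.2, 1: 0.8}         # P(B=1 | H=h)

# Observation: P(B=1 | A=1) = sum_h P(H=h | A=1) * P(B=1 | H=h)
joint_a1 = {h: p_h[h] * p_a_given_h[h] for h in p_h}   # P(H=h, A=1)
norm = sum(joint_a1.values())                          # P(A=1)
p_b1_obs = sum(joint_a1[h] / norm * p_b_given_h[h] for h in p_h)

# Intervention: do(A=1) cuts the H -> A link, so A carries no evidence
# about H, and P(B=1 | do(A=1)) = sum_h P(H=h) * P(B=1 | H=h).
p_b1_do = sum(p_h[h] * p_b_given_h[h] for h in p_h)

print(round(p_b1_obs, 3))  # → 0.74  (observing A=1 raises belief in B)
print(round(p_b1_do, 3))   # → 0.5   (intervening leaves B at its prior)
```

An optimal causal reasoner therefore predicts different bets after seeing A = 1 than after setting A = 1, which is the quantitative signature the betting paradigm described above is able to monitor trial by trial.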