Decision makers must often infer patterns, causal processes, and other algorithms from data. For instance, they must use data to extract models of how other people behave (habits, norms, and strategies in games) and, more generally, of how natural processes, organizations, and social systems unfold. Even when these inference problems are deterministic, they can be difficult because of the underlying complexity of the process. We report experiments that identify the features of algorithms that make them difficult for people to infer from data. Subjects in our experiment are given strings of “inputs” and “outputs” generated by a rule (a finite state machine) and are asked to infer the rule. We introduce new methods to elicit these inferences directly, and we use structural models to infer them from choices. With these data, we compare several complexity metrics, identifying those that best predict both which models people are able to extract and which they tend to select.
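To make the task concrete, the kind of rule subjects face can be sketched as a small Mealy-style finite state machine that maps input strings to output strings. The machine below is an invented illustration, not one used in the experiments:

```python
# Hypothetical two-state machine over the binary alphabet. Subjects
# would observe (input string, output string) pairs produced by such
# a rule and attempt to infer the transition and output tables.

TRANSITIONS = {  # (state, input symbol) -> next state
    ("A", "0"): "A",
    ("A", "1"): "B",
    ("B", "0"): "B",
    ("B", "1"): "A",
}

OUTPUTS = {  # (state, input symbol) -> output symbol
    ("A", "0"): "0",
    ("A", "1"): "1",
    ("B", "0"): "1",
    ("B", "1"): "0",
}

def run_machine(inputs: str, start: str = "A") -> str:
    """Feed an input string through the machine; return the output string."""
    state, out = start, []
    for symbol in inputs:
        out.append(OUTPUTS[(state, symbol)])
        state = TRANSITIONS[(state, symbol)]
    return "".join(out)

print(run_machine("0110"))  # -> "0100"
```

An inference problem of this kind is deterministic: each input string has exactly one output string, yet recovering the tables from examples can still be hard when the machine has many states or intricate transitions.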