Many policies allocate scarce resources, such as welfare payments, university admissions, and priority medical care. This project develops a method to audit the values implicit in such policies. The key insight is to compare how an allocation ranks different people with how much they benefit from it, using new machine learning methods to estimate heterogeneous benefits. The project develops a theoretical framework and demonstrates it by auditing the preferences implicit in the algorithmic targeting of Mexico's PROGRESA antipoverty program. Although indigenous households were ranked higher by the program, estimates suggest that they benefit so much more that the policy implicitly assigns them no higher welfare weight. The preferences implicit in the program are similar to those reported by Mexican residents in a household survey. The framework demonstrates a way to close the loop between societal debate and algorithmic implementation.
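The core comparison can be illustrated with a toy calculation. This is a minimal sketch, not the paper's estimator: it assumes the implicit welfare weight on a person is the ratio of their priority score under the policy's ranking to their estimated benefit, so a group that is ranked higher but also benefits proportionally more receives no higher implied weight. The function name and the example numbers are hypothetical.

```python
import numpy as np

def implicit_weights(priority, benefit):
    """Implied welfare weight under the toy assumption that the
    policy ranks people by (weight x benefit): weight = priority / benefit."""
    priority = np.asarray(priority, dtype=float)
    benefit = np.asarray(benefit, dtype=float)
    return priority / benefit

# Group B is ranked twice as high as group A, but is estimated to
# benefit twice as much -- so both groups carry the same implied weight.
w = implicit_weights(priority=[1.0, 2.0], benefit=[1.0, 2.0])
print(w)  # both entries equal 1.0
```

Under this stylized accounting, a higher rank alone does not imply a higher welfare weight; the weight only exceeds one when a person's priority outstrips their estimated benefit.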