Designing high-quality prediction models while maintaining social equity (with respect to ethnicity, gender, age, etc.) is critical in today’s world. Most recent research in algorithmic fairness focuses on developing fair machine learning algorithms, such as fair classification, fair regression, or fair clustering. Nevertheless, it can sometimes be more useful to simply preprocess the data so as to “remove” sensitive information from the input feature space, thereby minimizing potential discrimination in subsequent prediction tasks. We call this a “fair representation” of the data. A key advantage of a fair data representation is that a practitioner can run any off-the-shelf algorithm on the preprocessed data and still maintain social equity, without building fairness constraints into the learning algorithm itself.
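To make the idea concrete, here is a minimal sketch of one simple fair-representation strategy: linearly residualizing every feature against the sensitive attribute, so the preprocessed data carries no linear trace of it, after which any standard learner can be applied unchanged. This is only an illustration, not the construction developed in this work; the helper name `fair_representation` and the synthetic data are purely hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fair_representation(X, s):
    """Regress each column of X on a one-hot encoding of the sensitive
    attribute s and keep only the residuals, so that no column of the
    returned representation has any linear correlation with s."""
    S = np.eye(s.max() + 1)[s]                        # one-hot encode s
    S = np.hstack([np.ones((len(s), 1)), S[:, :-1]])  # intercept; drop one level
    beta, *_ = np.linalg.lstsq(S, X, rcond=None)      # least-squares fit
    return X - S @ beta                               # residuals = "fair" features

# Usage: preprocess once, then run any off-the-shelf model unmodified.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=500)            # binary sensitive attribute
X = rng.normal(size=(500, 5)) + s[:, None]  # raw features leak s
y = (X.sum(axis=1) > 2.5).astype(int)

X_fair = fair_representation(X, s)
clf = LogisticRegression(max_iter=1000).fit(X_fair, y)  # no fairness logic needed here
```

Note that residualization only removes linear dependence on the sensitive attribute; stronger notions of fair representation target statistical independence, which motivates the more general treatment that follows.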