This is how pay data can get the wage gap wrong
When a company stays mum on pay equity, employees often fill the void by crowdsourcing and sharing their own pay data. This well-intentioned but misguided effort is dangerous because it often highlights issues that are not legitimate while masking the real problems.
Case in point: the New York Times published a piece on pay equity among professionals in journalism, advertising, and book publishing titled “On a Dry Spreadsheet, a Stark Difference: a $200,000 Pay Gap.” The article focused in part on a public spreadsheet called “Real Media Salaries,” in which people in those industries self-reported their pay along with factors like race, gender, and years of experience. The headline referred to a single comparison: “a white, male freelance creative director in New York with 28 years of experience reported a salary of $300,000. A Latino man with the same job description in New Jersey and 25 years of experience said he made $95,000.”
The intention was worthy, but the conclusions were just plain wrong.
It is too easy to come away from self-reported data with erroneous or misleading conclusions, as may have happened with the well-intentioned New York Times reporters. We reached out to them for comment and have not heard back. While the headline cites a $200,000 pay gap, when our data science team applied sound methodology to the spreadsheet’s data, we found no evidence of a gap even close to that.
There are several reasons for this, starting with the fact that only 69% of the records contributed to the spreadsheet are actually usable. While that percentage is more favorable than in most crowdsourced pay spreadsheets, the data contains 505 unique job titles with varying responsibilities. Some of our favorite responses were creative entries like “pretend ABC didn’t cover up Epstein’s crimes,” “babysitting my co-host,” and “making sh*t up.”
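To make the “usable records” problem concrete, here is a minimal sketch of the kind of validation a crowdsourced salary spreadsheet requires before any analysis. The field names and thresholds are assumptions for illustration, not the spreadsheet’s actual schema or our actual pipeline:

```python
# Hypothetical sketch: filtering crowdsourced salary records before analysis.
# Field names ("title", "salary") and the plausibility range are assumptions.
records = [
    {"title": "Copy Editor", "salary": "62000"},
    {"title": "Reporter", "salary": "$85,000"},
    {"title": "babysitting my co-host", "salary": "lol"},  # joke entry
    {"title": "", "salary": "70000"},                      # missing job title
]

def parse_salary(raw):
    """Return a numeric annual salary, or None if the entry is not usable."""
    cleaned = raw.replace("$", "").replace(",", "").strip()
    try:
        value = float(cleaned)
    except ValueError:
        return None
    # Reject implausible values (hourly rates, typos, joke entries).
    return value if 10_000 <= value <= 2_000_000 else None

usable = [
    r for r in records
    if r["title"].strip() and parse_salary(r["salary"]) is not None
]
print(f"{len(usable)} of {len(records)} records survive basic validation")
```

Even this toy filter discards half of the sample records, and it says nothing about the harder problem of reconciling hundreds of free-text job titles.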
In a pay equity analysis, one of the biggest challenges is grouping employees for review. There is no standard grouping schema or template for employers to follow. If the job roles are truly unique and cannot be grouped as “similar work,” you’ve essentially disaggregated to the point where comparisons are not possible.
The headline also rested on a single binary comparison. This is misleading, too, because a careful statistical analysis evaluates differences across and within groups systematically, not between two cherry-picked records.
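The grouping point can be sketched in a few lines. This toy example, with invented numbers and placeholder group labels ("A"/"B"), shows why within-group averages are the meaningful unit of comparison, while a single cherry-picked pair can produce almost any headline figure:

```python
# Illustrative sketch (invented data) of within-group pay comparisons
# versus a single cherry-picked pairwise comparison.
from collections import defaultdict
from statistics import mean

# (job_group, comparison_group, salary) — "A"/"B" are placeholder labels.
employees = [
    ("editor", "A", 90_000), ("editor", "A", 95_000), ("editor", "B", 88_000),
    ("designer", "A", 70_000), ("designer", "B", 69_000), ("designer", "B", 67_000),
]

by_group = defaultdict(lambda: defaultdict(list))
for job, group, salary in employees:
    by_group[job][group].append(salary)

# Within-group gaps: compare like with like, across every group.
for job, groups in by_group.items():
    gap = mean(groups["A"]) - mean(groups["B"])
    print(f"{job}: average A-vs-B gap = ${gap:,.0f}")

# By contrast, one hand-picked pair across different jobs tells us
# almost nothing about systematic pay practices.
cherry_picked = 95_000 - 67_000
print(f"cherry-picked 'gap': ${cherry_picked:,.0f}")
```

A real pay equity analysis would go further (regression controls for experience, location, and role), but the structural point is the same: the comparison must be made within groups doing similar work, across all such groups at once.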
There is a right way to analyze pay equity data carefully and effectively. Our data science team, led by Zev Eigen, JD, PhD, ran “Real Media Salaries” through our PayEQ™ software. Here is what we found: no evidence of a pay gap anywhere near the $200,000 figure in the headline.
The ease of drawing (mostly incorrect) conclusions from self-reported data is a cautionary tale for companies that stay silent on internal pay equity.
Those of us who work on pay equity full-time know that most companies have pay disparities related to gender or race. However, the existence of pay disparities should not inspire fear. Leaders should accept this reality and view it as a call to action to run ongoing pay equity analyses. Only by looking under the hood and conducting regular analyses to find and fix problems can a company identify the underlying behaviors, policies, and practices that led to disparities in the first place.
Employees will self-report compensation if employers fail to review compensation practices carefully, systematically, and on an ongoing basis. Employers are wise to heed the call and regularly review compensation policies and practices with the best available technology. This is one of the many advantages of software: regular review makes deciding how and when to talk publicly about pay equity significantly easier.
Maria Colacurcio is the CEO, and Zev Eigen is the founder and chief data scientist, of Syndio, an HR analytics SaaS company committed to eradicating unlawful workplace pay disparities.