Data and technology are the new frontier in the struggle for civil rights, and out on the frontier, a lot can go wrong. Millions of people find their homes or learn about jobs through ads, but what once took place in the pages of newspapers now happens on digital platforms. Laws such as the Fair Housing Act and Title VII of the Civil Rights Act of 1964 have long helped hold newspapers and their advertisers accountable for discriminatory marketing. But today, digital platforms—which deliver exponentially more ads than their newsprint predecessors—are making these core civil-rights laws increasingly challenging to enforce. The opacity of the digital-ad ecosystem is a major barrier to ensuring justice and equal opportunity.
Lawyers and other advocates who seek to protect civil rights in digital marketing have made some progress, but there is still so much we don’t know.
Concerns about targeted marketing often start with Facebook, which has massive market share and access to enormous amounts of personal data. It took in-depth reporting from ProPublica to reveal that the social-media giant allowed its advertisers to exclude people from seeing housing ads based on their “ethnic affinity.” That led to two years of concerted litigation and advocacy that culminated last month, when civil-rights and labor groups reached historic legal settlements with Facebook. Under the terms of those settlements, Facebook will stop allowing landlords, employers, creditors, and similar advertisers to explicitly target — or exclude — people based on age, gender, zip code, and hundreds of other sensitive targeting categories, including those relating to race and ethnicity. These changes were a meaningful victory. They show that major internet companies can be pressured into better protecting people’s civil rights.
Before settling these cases, Facebook argued that it was immune from liability under antidiscrimination laws because of the broad protection granted by Section 230 of the Communications Decency Act. This provision, enacted to protect free and open expression on a nascent internet, shields tech platforms from liability arising from their users' content. In essence, Facebook argued that its advertisers were entirely to blame for any discriminatory outcomes.
But that argument may be wearing thin. Even when advertisers do nothing wrong, Facebook can still perpetuate discrimination in housing, credit, and employment in deeper and more systematic ways. After an advertiser chooses its target audience, Facebook then decides which of those users will actually see the ad. It's in those decisions, made automatically by Facebook millions of times a day, that discrimination can quietly creep back in.
A recent study led by researchers at Northeastern University and the University of Southern California shows that, given a large group of people who might be eligible to see an advertisement, Facebook will pick among them based on its own profit-maximizing calculations, sometimes serving ads to audiences that are skewed heavily by race and gender. (Full disclosure: One of us was a member of the research team.) In these experiments, Facebook delivered ads for jobs in the lumber industry to an audience that was approximately 70 percent white and 90 percent male, and ads for supermarket-cashier positions to an audience that was approximately 85 percent female. Home-sale ads, meanwhile, were delivered to an audience that was approximately 75 percent white, while ads for rentals were shown to a more racially balanced group. These experiments are limited and have yet to be replicated, but they point to a distressing pattern.
The study’s results show digital advertising working exactly as designed—and exactly in ways that can perpetuate the types of harms that civil-rights laws are meant to address. Simply put, ad platforms such as Facebook make money when people click on ads. But an individual’s tendency to click on certain types of ads (and not others) often reflects deep-seated social inequities: the neighborhood they live in, where they went to school, how much money they have. An ad system that is designed to maximize clicks, and to maximize profits for Facebook, will naturally reinforce these social inequities and so serve as a barrier to equal opportunity.
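To make that mechanism concrete, here is a minimal sketch, in Python, of a click-maximizing delivery policy. It is not Facebook's actual system, whose internals are proprietary; the group labels, click rates, and sample sizes are hypothetical. The point is that even when the eligible audience is split evenly between two groups, a policy that chases the highest expected click rate will deliver the ad almost exclusively to the group that historically clicked more.

```python
import random

# Hypothetical eligible audience, split 50/50 between two demographic groups.
# The group labels and estimated click rates are illustrative assumptions.
AUDIENCE = [("group_a", 0.030)] * 500 + [("group_b", 0.012)] * 500

def deliver(audience, impressions, epsilon=0.05):
    """Toy click-maximizing delivery: for most impressions, show the ad to
    the highest-estimated-click-rate user in a random shortlist; occasionally
    explore at random."""
    shown = []
    for _ in range(impressions):
        if random.random() < epsilon:
            user = random.choice(audience)              # rare exploration
        else:
            candidates = random.sample(audience, 50)    # auction-like shortlist
            user = max(candidates, key=lambda u: u[1])  # pick the likeliest clicker
        shown.append(user[0])
    return shown

random.seed(0)
shown = deliver(AUDIENCE, impressions=10_000)
print(f"group_a share of delivered ads: {shown.count('group_a') / len(shown):.0%}")
# ~97 percent, even though the eligible pool was 50/50
```

Notice that the policy never reads anyone's demographic label; the skew emerges entirely from click rates that are correlated with group membership, which is precisely why rules aimed only at intentional discrimination have trouble reaching it.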
These dynamics are a perfect illustration of why the “disparate impact” doctrine—a bedrock principle of civil-rights law—is such an important tool in the era of algorithms. Under disparate impact, even unintentional actions can amount to illegal discrimination if they have an adverse impact on protected groups. Without this doctrine, opaque, machine-driven predictions are effectively above the law, as long as they don’t directly consider data indicating that a user belongs to a protected class.
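How might a regulator detect that kind of skew? One conventional screen, borrowed from the employment context, is the EEOC's "four-fifths rule": if one group's selection rate is less than 80 percent of the most-favored group's, the disparity is treated as evidence of adverse impact. The sketch below applies that screen to hypothetical ad-delivery numbers; the figures are invented, and applying an employment-selection guideline to ad delivery is an illustration, not settled law.

```python
# A minimal disparate-impact screen modeled on the EEOC's "four-fifths rule."
# All figures are hypothetical; real analyses also test statistical significance.

def selection_rate(shown: int, eligible: int) -> float:
    """Fraction of an eligible group that actually received the opportunity."""
    return shown / eligible

# Hypothetical outcome of delivering a job ad to equally sized eligible groups.
rate_men = selection_rate(shown=900, eligible=1_000)    # 0.90
rate_women = selection_rate(shown=150, eligible=1_000)  # 0.15

ratio = min(rate_men, rate_women) / max(rate_men, rate_women)
print(f"adverse-impact ratio: {ratio:.2f}")        # 0.17
print("flag for disparate impact:", ratio < 0.8)   # True: well under four-fifths
```

The screen looks only at outcomes. Whether the delivery system ever read a "gender" field is irrelevant to the calculation, which is exactly the gap the disparate-impact doctrine is designed to close.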
Fortunately, the Facebook-ad settlements anticipated these complexities and enshrined a commitment by the social-media giant to study the issue of bias in ad delivery, to share the status of its efforts, and to consider “feasible reforms.” The Department of Housing and Urban Development added fuel to the fire last month when it charged Facebook with violating the Fair Housing Act. Ironically and disturbingly, HUD is seeking to weaken disparate impact by undoing the regulatory framework supporting it, even as the agency prosecutes cutting-edge cases that might ultimately require it.
Facebook must redouble its efforts to address all facets of potential discrimination in its ad system. As part of that, the company should give the public far more detail about how its advertising system works, including aggregate demographic statistics about the audiences that ultimately saw particular ads. Facebook has taken some small steps in this direction, mostly limited to political ads and the commissioning of a civil-rights audit, but there is much more to do.
The Facebook settlements were a critical step toward guaranteeing that advertisers can’t intentionally exclude certain communities from housing, employment, and credit opportunities. Yet advocates and researchers are still in the early stages of understanding the mechanics of digital advertising, and the extent to which they produce biased outcomes. As internet marketing plays a growing role in shaping people’s life opportunities, it’s vital for Congress, the courts, and tech companies to tackle these issues head-on. Digital discrimination may look different from the discrimination of decades past, but it is no less real or urgent.