Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment

Edelman, Benjamin, Michael Luca, and Daniel Svirsky. “Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment.” American Economic Journal: Applied Economics 9, no. 2 (April 2017): 1-22.

In an experiment on Airbnb, we find that applications from guests with distinctively African-American names are 16% less likely to be accepted relative to identical guests with distinctively White names. Discrimination occurs among landlords of all sizes, including small landlords sharing the property and larger landlords with multiple properties. It is most pronounced among hosts who have never had an African-American guest, suggesting only a subset of hosts discriminate. While rental markets have achieved significant reductions in discrimination in recent decades, our results suggest that Airbnb’s current design choices facilitate discrimination and raise the possibility of erasing some of these civil rights gains.

English Translation of FAS Russia Decision in Yandex v. Google

In September 2015, the Russian Federal Antimonopoly Service announced its decision that Google had violated Russian law by tying its mobile apps to Google Play and setting additional restrictions on mobile device manufacturers, including limiting what other apps they install and how they configure those apps and devices. These topics are of great interest to me since I was the first to publicly distribute the Mobile Application Distribution Agreements, and because I explored related questions at length in my 2015 article Does Google Leverage Market Power Through Tying and Bundling? and more recently my working paper Android and Competition Law: Exploring and Assessing Google’s Practices in Mobile (with Damien Geradin).

For those who wish to understand the reasoning and conclusions of Russia’s FAS, one key limitation is that the September 2015 decision is available only in Russian. While the case document library summarizes key facts, allegations, and procedural developments, that’s no substitute for the full primary source documents.

In the course of expanding my Android and Competition Law paper, I recently obtained an English translation of the September 2015 decision. The translation is unofficial but, as best I can tell, accurate and reliable. It contains redactions, but so does the original Russian decision. I offer it here to anyone interested:

Yandex v. Google – Resolution on Case No. 1-14-21/00-11-15 – resolution of September 18, 2015 – unofficial English translation

Response to Airbnb’s Report on Discrimination

This month Airbnb released a report investigating discrimination by its hosts against guests (including racial minorities and others), assessing the evidence of the problem and evaluating proposed solutions. The accompanying announcement offers lofty principles—"creating a world where anyone can belong anywhere."

In contrast to the company’s prior denials, Airbnb now admits the problem is urgent: "discrimination must be addressed"; "minorities struggle more than others to book a listing"; "some members of the community did not receive the timely, compassionate response they expected and deserved when they reported instances of discrimination"; Airbnb’s nondiscrimination policy was not widely known, within or outside the company. This much is beyond dispute.

While Airbnb’s report is a step in the right direction, it does little to address the crucial subject of how to actually fix the problem of discrimination. Indeed, the report proposes actions of uncertain or unproven effectiveness.  At the same time, the report quickly dismisses a simpler alternative response—removing guest photos and names from booking requests—which would be far more likely to succeed. Meanwhile, the report completely fails to defend the legal gamesmanship by which Airbnb avoids litigation on the merits when consumers complain about Airbnb, and the report equally fails to defend Airbnb’s continued prohibition on users conducting research to uncover and measure discrimination for themselves.

This article offers my critique.

Airbnb’s bottom line

What exactly did Airbnb commit to change?

  1. Airbnb plans to increase the number of "Instant Book" properties, with a stated goal of one million by January 2017. Instant Book offers a potential mechanism to reduce discrimination: When hosts pre-promise to accept any interested guest who agrees to pay, they cannot screen each individual guest’s request, leaving much less opportunity to discriminate.

    Nonetheless, Instant Book addresses only a portion of the problem. If disfavored guests are limited to Instant Book properties, they are left with a reduced set of properties, typically meaning an inferior match with their preferences as well as higher price and less flexibility. Furthermore, hosts can use cancellations—permitted, in certain quantities, under Airbnb’s rules—to undo the anti-discrimination benefits of Instant Book.

    Moreover, Airbnb’s million-property goal is at best ambiguous. Airbnb doesn’t say how many Instant Book properties are already available, so it’s hard to assess how big an increase that would be or how realistic it is. Nor does Airbnb indicate the methods to be used to achieve the increase. If January comes and the objective has not been reached, what then?

  2. Airbnb says it will "experiment with reducing the prominence of guest photos in the booking process." Photos have certainly been prominent, and to its credit Airbnb now agrees they have been excessively so, given the limited information they actually convey and the superiority of other information (like objective verifications).

    Notably, Airbnb’s plan to experiment with reduced photo size has been misreported in the press. For example, the Wall Street Journal reported that Airbnb "is planning to reduce the prominence of guests’ photos," and Diversity Inc. reported that Airbnb’s changes "include displaying photos … less prominently." But Airbnb’s actual promise wasn’t to reduce photo prominence. Rather, the company promised only to run an experiment, of unspecified duration and scope, with no commitment whatsoever as to subsequent changes. With so many caveats, it’s hard to put much weight on this response.

  3. Effective November 1, Airbnb will require users to accept a stronger and more detailed nondiscrimination policy: "By joining this community, you commit to treat all fellow members of this community, regardless of race, religion, national origin, disability, sex, gender identity, sexual orientation or age, with respect, and without judgment or bias."

    Airbnb’s policy offers commendable principles. But the company’s existing policy already included the same substance as the new wording. Will a compulsory screen or checkbox actually prevent hosts from continuing to discriminate? Airbnb offers no evidence that a restated policy will make a difference.  Indeed, experience from other types of discrimination suggests that those who seek to discriminate will change only slowly and under significant pressure.

  4. Sometime during the first half of 2017, Airbnb promises to modify its site to prevent a host from telling one guest that a property is unavailable, then later accepting another guest’s request for the same nights. This change seeks to penalize hosts who falsely claim a property is unavailable: If a host rejects a guest due to a false claim of unavailability, the host won’t be able to keep the property listed for others, making the pretextual rejection more costly.

    Unfortunately, there’s little reason to think this approach will operate as claimed.  Instead, strategic hosts will switch to other reasons for rejecting guests—reasons guests can less readily prove to be pretextual.  An "unavailable" response would prevent the host from booking the property to someone else, but the host can instead specify some other reason for declining the request.  So there’s little reason to think this change will stop those who want to discriminate.

    This change will also hinder guests’ efforts to demonstrate discrimination. After rejection, a concerned guest may ask a friend to inquire (or create a test account to do so), thereby proving that the rejection was due to guest identity and not genuine unavailability. This second inquiry provides a valuable verification, checking whether the host can keep his story straight. By substituting automatic software logic for fallible hosts, Airbnb makes it more difficult to catch a host in a lie.

    Finally, Airbnb offers no explanation why the company needs four to ten months to build this feature. Airbnb’s site already offers multiple categorizations for each night including available, unavailable, and booked. By all indications, this architecture could easily be extended to offer the promised feature—for example, a new status called "booked-locked" that a host cannot change back to available. If this feature is important and useful, why wait?
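The extension sketched above is simple to state precisely: a per-night status that a host can never change back to available. The following is a minimal illustration of that idea, assuming (hypothetically) Airbnb's three existing statuses plus the proposed fourth; the status names, transition rules, and the "booked-locked" label are my own illustration, not Airbnb's actual design:

```python
# Sketch of a per-night availability state machine with a locked status.
# All names and transition rules here are illustrative assumptions,
# not a description of Airbnb's actual implementation.

AVAILABLE, UNAVAILABLE, BOOKED, BOOKED_LOCKED = (
    "available", "unavailable", "booked", "booked-locked")

# Allowed host-initiated transitions; note BOOKED_LOCKED has no way out.
ALLOWED = {
    AVAILABLE:     {UNAVAILABLE, BOOKED},
    UNAVAILABLE:   {AVAILABLE},
    BOOKED:        {AVAILABLE},      # e.g., after a permitted cancellation
    BOOKED_LOCKED: set(),            # terminal: the night cannot be relisted
}

def set_status(current, new):
    """Apply a host's status change, rejecting disallowed transitions."""
    if new not in ALLOWED[current]:
        raise ValueError(f"cannot change {current} to {new}")
    return new

def decline_as_unavailable(current):
    """When a host declines a request by claiming the night is unavailable,
    lock the night so it cannot later be offered to another guest."""
    return BOOKED_LOCKED if current == AVAILABLE else current
```

Under this sketch, a host who rejects a guest with a false claim of unavailability loses the night for everyone, which is exactly the cost the proposed change is meant to impose.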

The proposed changes share common weaknesses.  They all rely on unproven and indeed unstated assumptions about how hosts will respond.  And each change leaves ample room to question whether it will help at all.  One might hope the combination would be more than the sum of its parts. But when each change falls so far short, it’s hard to be optimistic. These are not the "powerful systemic changes" Airbnb promises.

The better policy: remove guest names and photos

In sharp contrast to the indirect changes Airbnb proposes, a much simpler adjustment would more directly prevent discrimination: remove guest photos and names from the information a host sees when evaluating a guest’s request.  If a host could not see a guest’s photo and name before accepting a booking, the host would have no way to determine the guest’s race, age, gender or other characteristics. Even a host who wanted to discriminate would not have the information to make a decision based on the improper factor.

Removing photos and names is particularly compelling in light of best practice developed over decades in other contexts. In 1952, the Boston Symphony Orchestra began to audition musicians behind a screen. The result was a sharp rise in the number of female musicians—widely interpreted to be a decrease in arbitrary and improper discrimination. Similarly, Massachusetts landlords cannot ask about race, national origin, sexual orientation, age, religion, and myriad other factors—for there is little proper reason to make such inquiries, and if landlords have this information, many will struggle not to use it improperly.

Airbnb admits that "some have asked Airbnb to remove profile photos from the platform." But oddly Airbnb offers zero discussion of the benefits of that approach. If Airbnb thinks removing photos would not reduce discrimination, the company offers no statement of its reasoning. And Airbnb says nothing of the other contexts where landlords, employers, and others have elected to conceal information to reduce discrimination. Meanwhile, Airbnb says absolutely nothing of my proposal that hosts see guests’ pseudonyms, not actual names, when considering a request. Instead, Airbnb focuses on three supposed benefits of continuing to show photos before booking.

  • First, Airbnb argues that photos are "an important security feature: hosts and guests want to know who they will be meeting when a stay begins." Here, Airbnb’s report echoes April 2016 comments from Airbnb’s David King, Director of Diversity and Belonging, who told NPR: "The photos are on the platform for a reason. … You want to make sure that the guest that shows up at your door is the person that you’ve been communicating with."

    This reasoning is particularly unconvincing. No doubt, hosts and guests should eventually see each other’s photos—but after a booking is confirmed.

  • Second, Airbnb says "profile photos are an important feature that help build relationships and allow hosts and guests to get to know one another before a booking begins."

    Airbnb’s reasoning ignores the competing interests at hand. Perhaps profile photos help some guests and hosts get to know each other.  But they also impede bookings by certain guests—notably including victims of longstanding and multifaceted discrimination. How should we weigh a small benefit to many, versus a large cost to a smaller (but important) group? At the same time, the "relationship" benefit is particularly shallow when the host offers an entire property, often with keys handed off via a doorman or lockbox, so the guest and host may never even meet in person. When there is little or no "relationship" to build, Airbnb’s reasoning provides particularly poor support for a policy that harms disadvantaged guests.

    Meanwhile, Airbnb’s "get to know one another" principle is undercut by the company’s actions in other contexts—calling into question whether this factor should be taken at face value. Notably, Airbnb prevents hosts and guests from sharing email addresses and phone numbers before confirming a booking, running software to scan every message for prohibited material. Airbnb does not impose these restrictions because they in some way help build relationships between users; quite the contrary, these restrictions prevent users from talking to each other directly, by email or phone, in the ways they find convenient. Rather, Airbnb imposes these restrictions to protect its business interests—preventing guests from booking directly and circumventing the company’s fees. Airbnb’s "get to know one another" purpose thus seems conveniently self-serving—invoked when it supports Airbnb’s favored approach, but quickly discarded when costly.

  • Third, Airbnb proposes that "guests should not be asked or required to hide behind curtains of anonymity when trying to find a place to stay." The report continues: "technology shouldn’t ask us to hide who we are. Instead, we should be implementing new, creative solutions to fight discrimination and promote understanding."

    Here too, Airbnb’s approach favors the preferences of the many over the needs of those who face discrimination. While Airbnb frames its approach as not asking guests to be anonymous, in fact Airbnb does much more than that: Airbnb affirmatively prohibits guests from being anonymous, including requiring that guests register using their real names, validating names against government IDs and credit records, and presenting each guest’s real name, not a pseudonym, for host approval. Far from giving guests more choice to reveal or withhold information, Airbnb requires guests to reveal information—even when research makes clear that the information facilitates discrimination.

    If there were reason to think Airbnb’s other changes would actually, completely, and promptly end discrimination by hosts, guests might feel confident in sharing sensitive information such as race. But given the likelihood that discrimination will continue despite the changes Airbnb promises, disfavored guests have every reason to want to conceal information that could facilitate discrimination. Airbnb’s policy continues to disallow them from doing so.

Some might counter that Airbnb hosts are informal and ought to have more information than hotels or ordinary landlords. Indeed, one might imagine a policy that distinguished between classes of hosts. If a host occasionally offers a shared room or a portion of a property with the host on site, the relationship feels informal, and some might argue that anti-discrimination rules are overkill in such situations. Conversely, if the host is off-site and the guest uses the property exclusively, the arms-length relationship looks more like a hotel. By all indications, the latter is vastly more common than the former (whether measured in nights booked or, especially, revenue). Airbnb’s report could have considered policies that vary based on the property type—perhaps retaining photos and names for the most informal hosts, where personal interaction between guests and hosts is a realistic possibility, while removing them for hosts offering arms-length rental of entire properties. But Airbnb devoted not a single sentence to this possibility.

Airbnb report’s silence on compulsory arbitration—and the prior positions of Airbnb’s distinguished consultant-advisors

Airbnb’s report is completely silent on the company’s requirement that users arbitrate their disputes. Nowhere mentioned in the report, despite criticism in media discussions of discrimination at Airbnb, the company’s Terms of Service make arbitration compulsory for all complaints users may have about any aspect of Airbnb’s service. Airbnb drafted the arbitration requirement and does not allow users to negotiate its provisions (or anything else). The arbitration policy requires that any concerned user bring a claim on an individual basis, not in any kind of group with others who have similar concerns. The predictable—and by all indications, intended—effect of Airbnb’s arbitration requirement is that users cannot obtain meaningful relief for a broad set of complaints they may have against Airbnb.

Distinguished consumer organizations have widely criticized arbitration as improper for consumers’ disputes with companies. But for Airbnb, arbitration offers both procedural and substantive benefits.  No individual consumer could bring a compelling arbitration case against Airbnb: There would be no economic rationale for an attorney to accept such a case, as even the most favorable resolution of the dispute would bring a small recovery and correspondingly limited funds to pay for the attorney’s time. Only representing hundreds or thousands of consumers, en masse, would justify the time and talent of top attorneys. But Airbnb’s arbitration clause specifically disallows class actions and other group suits.

Moreover, arbitrators are chosen through processes that bias them against consumers’ claims.  One lawyer recently explained why arbitration tends to favor companies over consumers: "I use the same arbitrators over and over, and they get paid when I pick them. They know where their bread and butter comes from."

In the context of discrimination at Airbnb, new and novel questions make arbitration particularly inappropriate. Arbitration lacks an appeal process where different judges evaluate an initial decision—the proper way to develop policy in new areas of law. And with arbitration results secret, even if one consumer prevailed in arbitration, others would not learn about it. Nor would other arbitrators be bound by a prior dispute’s conclusions even in identical circumstances.

Tellingly, Airbnb is at this moment invoking its arbitration requirement to attempt to dispose of class action litigation alleging discrimination. In May 2016, Virginia resident Gregory Selden filed a class action complaint, alleging that discrimination on Airbnb violated the Civil Rights Act of 1964, 42 USC 1981, and the Fair Housing Act. In response, Airbnb did not dispute that Selden had faced discrimination, nor did Airbnb explore the question of whether the host, Airbnb, or both should be responsible. Rather, Airbnb merely invoked the arbitration provision, arguing that the court could not hear the dispute because Airbnb required all users to agree to arbitrate, and thereby required all users to promise not to sue. As of September 2016, the court has yet to rule. (Case docket.)

Throughout its report, Airbnb cloaks itself in the names and resumes of distinguished advisors (including a seven-page personal introduction from the author and additional listing of experts consulted). But Airbnb’s advisors have taken tough positions against compulsory arbitration—positions which undercut Airbnb’s attempt to invoke arbitration to avoid judicial scrutiny of its practices.

Consider the position of the ACLU during the time when Airbnb consultant Laura Murphy, the report’s sole author, was ACLU’s Legislative Director. In a 2013 letter to senators considering the Arbitration Fairness Act of 2013, the ACLU explained the importance of "end[ing] the growing predatory practice of forcing … consumers to sign away their Constitutional rights to legal protections and access to federal and state courts by making pre-dispute binding mandatory arbitration … clauses unenforceable in civil rights, employment, antitrust, and consumer disputes." The letter continued: "Forced arbitration erodes traditional legal safeguards as well as substantive civil and employment rights and antitrust and consumer protection laws." Similarly, the ACLU’s 2010 letter also noted that "[f]orced arbitration particularly disadvantages the most vulnerable consumers." Murphy’s position at the ACLU entailed overseeing and communicating ACLU’s positions on proposed federal legislation including this very issue.  Murphy’s failure to object to Airbnb’s ongoing use of arbitration clauses, which similarly eliminate both procedural safeguards and substantive rights for historically-disadvantaged groups, is particularly striking in the context of the ACLU positions she previously led.

Airbnb’s other consultant-advisors have similarly taken positions against compulsory arbitration. Consider Eric Holder, former US Attorney General, now a partner at the prestigious law firm Covington & Burling. Under Holder’s leadership, the US Department of Justice repeatedly opposed compulsory arbitration, including filing a Supreme Court amicus brief critiquing American Express’s attempt to impose arbitration on merchants. DOJ there argued that "the practical effect of [the arbitration agreement] would be to foreclose [merchants] from effectively vindicating their Sherman Act [antitrust] claims." DOJ went on to explain that the arbitration agreement makes an "impermissible prospective waiver" of unknown claims, and DOJ noted that private enforcement of federal statutes is important to the overall regulatory scheme—an objective that would be undermined if parties could be forced to effectively waive their claims through compulsory arbitration. Holder’s failure to object to Airbnb’s ongoing use of arbitration clauses, similarly designed to escape federal claims that cannot otherwise be pursued, is a sharp change from the DOJ positions he previously oversaw. I credit that Murphy is the sole author, and Holder and Airbnb’s other consultant-advisors have taken no public position to endorse the report or Airbnb’s approach. But when Airbnb’s consultant-advisors allow Airbnb to use their names and credentials, both in the report body and in repeated statements to the press, they necessarily lend their reputations to the company’s approach.

Facing scrutiny of its arbitration provisions in June 2016, Airbnb told the New York Times that "these [arbitration] provisions are common." Certainly many companies require that customers arbitrate their disputes. But Airbnb’s service raises persistent and widespread concerns about discrimination (among other issues). Even if arbitration were appropriate for credit cards and cell phone plans (which, to be sure, many consumer advocates dispute), it may not be appropriate for questions of race, equality, and justice. Moreover, as Airbnb seeks public approval for its anti-discrimination efforts, it ought not reject the standard dispute resolution procedures provided by law. It is particularly galling to see Airbnb’s report totally silent on dispute resolution despite prior attention in public discussions and the press. Airbnb should remove its arbitration clause, if not for all disputes, then at least for claims of discrimination.

Airbnb’s insistence that consumers arbitrate, without the benefit of courts or court procedures, is a stark contrast to Airbnb’s approach to protect its own interests. Airbnb does not hesitate to file lawsuits when lawsuits are in the company’s interest. Most notably, Airbnb recently sued San Francisco, Santa Monica, and Anaheim over laws it disliked. Whatever the merits of Airbnb’s claims (and on that point, see my July critique with Nancy Leong), Airbnb is happy to take its preferred disputes to court—while specifically prohibiting consumers from doing so.

Testing and data

Airbnb says fixing discrimination is a "top priority," and this month’s report repeatedly echoes that claim. In that context, one might expect the company to welcome academic research to help investigate. Instead, Airbnb specifically bans it.

Consider the limits of research to date. My 2015 paper (with Mike Luca and Dan Svirsky) analyzes experiences of rookie Airbnb guests in five cities. Much is left to be studied. How about outcomes in other cities? What happens when a guest gets a favorable review from a prior host? Do hosts tend to prefer a white rookie guest with no reviews, or a black guest with a single five-star review? How about guests who have verified their Airbnb credentials by linking Facebook and LinkedIn accounts or completing other profile verifications? Do outcomes change after Airbnb’s September email to users and (planned) November policy change? Via the same methodology demonstrated in our paper, other researchers could test these questions and more. But Airbnb specifically prohibits such research via Terms of Service provisions. For one, Airbnb disallows the use of software to study its site (TOS section 14.2) looking for patterns that indicate discrimination. Moreover, Airbnb specifically prohibits creating multiple accounts (14.14), an approach widely used (by us and others) to compare the way hosts respond to guests of varying race.
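The paired-testing methodology referenced above reduces to a simple comparison: send otherwise-identical inquiries from test accounts that differ only in the guest's apparent race, then compare acceptance rates across groups. A minimal sketch of the tabulation step, with hypothetical field names and invented illustrative data (not results from any actual study):

```python
# Sketch of tabulating an audit experiment: each record is one inquiry,
# labeled with the test account's signaled group and whether the host
# accepted. Field names and sample data are hypothetical illustrations.
from collections import defaultdict

def acceptance_rates(inquiries):
    """Return the acceptance rate per group from (group, accepted) records."""
    sent = defaultdict(int)
    accepted = defaultdict(int)
    for group, ok in inquiries:
        sent[group] += 1
        if ok:
            accepted[group] += 1
    return {g: accepted[g] / sent[g] for g in sent}

# Invented data for illustration only.
sample = [("white", True), ("white", True), ("white", False), ("white", True),
          ("black", True), ("black", False), ("black", False), ("black", True)]
rates = acceptance_rates(sample)
gap = rates["white"] - rates["black"]  # the measured acceptance-rate gap
```

Each of the open questions listed above (prior reviews, verifications, before/after a policy change) is just a different way of partitioning the same kind of records before computing these rates.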

These problems have a particularly clear solution. Airbnb should revise its Terms of Service to allow bona fide testing.

Airbnb’s report also offers a series of claims grounded in data ("The company’s analysis has found…", "Airbnb’s research and data show…") as well as promises to collect additional data and run additional analyses. But Airbnb asks the public to accept its analyses and conclusions without access to the data supporting its conclusions. Airbnb should offer more to counter skepticism, particularly after widely-discussed allegations of misleading or incomplete data. Certainly Airbnb could provide the interested public with aggregate data measuring discrimination and showing the differential outcomes experienced by white versus black users. If Airbnb now has mechanisms to measure discrimination internally, as the report suggests, it’s all the more important that the company explain its approach and detail its methodology and numerical findings—so past outcomes can be compared with future measurements.

Airbnb’s response in context

In my June article proposing methods to prevent discrimination at Airbnb, I called for concealing guest photos when guests apply to hosts, similarly concealing guest names, normalizing dispute resolution, and allowing testing. I didn’t expect Airbnb to do all of this. But I was disappointed that Airbnb’s response discusses only the first of these four changes, and even that only superficially. Based on Airbnb’s promise of a "comprehensive review," I hoped for more.

Airbnb argues that "there is no single solution" to discrimination and that "no one product change, policy or modification … can eliminate bias and discrimination." Certainly one might imagine multiple policies that would help make a difference, and the combination of multiple policies might be more effective than a single policy alone. But temporarily concealing photos and names, as guests apply to hosts, is the simplest and most promising solution by far. The bar is high for Airbnb to reject this natural and well-established approach; Airbnb’s report offers little to convince a skeptical reader that appropriate concealment, so widely used in other contexts, would not work at Airbnb. Nor does Airbnb’s report make any serious effort to establish that Airbnb’s alternatives will be effective. In both these respects, the concerned public should demand more.

Exploring and Assessing Google’s Practices in Mobile

Since its launch in 2007, Android has become the dominant mobile device operating system worldwide. In light of this commercial success and certain disputed business practices, Android has come under substantial attention from competition authorities. In a paper Damien Geradin and I posted this week, we present key aspects of Google’s strategy in mobile, focusing on Android-related practices that may have exclusionary effects. We then assess Google’s practices under competition law and, where appropriate, suggest remedies to right the violations we uncover.

Many of Google’s key practices in mobile are implemented through Mobile Application Distribution Agreements, confidential contracts that became available to the public through Oracle litigation and are available, to this day, only on my site. But we also evaluate Google restrictions embodied in other documents including Google’s Anti-Fragmentation Agreement as well as supplemental contracts with device manufacturers and mobile carriers providing for exclusive preinstallation of Google search.

If one accepts our conclusion that certain Google practices violate competition laws, it’s important to turn to the question of remedies–what changes Google must make. The natural starting point is to end Google’s contractual ties, allowing device manufacturers to install Google apps in whatever configurations they find convenient and in whatever way they believe the market will value. One might expect to see low-cost devices that feature Yahoo Search, MapQuest maps, and other apps that vendors are willing to pay to distribute. Other manufacturers would retain a “pure Google” experience, forgoing such payments from competing app makers but offering apps from a single vendor, which some users may prefer.

Beyond that, remedies might seek to affirmatively restore competition. Because much of Google’s dominance in mobile seems to come from its powerful app store, Google Play, an intervention might seek to shore up other app stores–for example, letting them copy in Google’s APKs so that they can offer Google apps to users who so choose. A full remedy would also attempt to restore competition for key apps. Just as Europe previously required Microsoft to show a screen promoting five different web browsers when a user booted Windows for the first time, a similar screen could provide users with a genuine choice of Android apps in each area where Google has favored its own offering. We suspect some users would favor a more privacy-protecting location service if that were prominent and easily available. Other users would probably find competing local services, such as TripAdvisor and Yelp, more trustworthy than Google’s offerings. These developments would increase choices for both users and advertisers, reduce the sphere of Google’s dominance, and begin to restore a competitive marketplace in fundamental mobile apps.

Our working paper:

Android and Competition Law: Exploring and Assessing Google’s Practices in Mobile

(Updated October 26, 2016: This article, as revised, is forthcoming in the European Competition Journal.)

Preventing Discrimination at Airbnb

In January 2014, Mike Luca and I posted a study finding that black hosts on Airbnb face discrimination: by all indications, guests are less willing to stay at their properties, forcing them to lower their prices to attract guests. More recently, Mike Luca, Dan Svirsky, and I contacted hosts using test guest accounts that were white and black, male and female, showing that black guests are less likely to be accepted by hosts. Both findings are troubling: The Internet has the power to make markets fairer and more inclusive, but Airbnb designed its platform to make race needlessly prominent, all but inviting discrimination.

Initially Airbnb responded to our research by framing discrimination as a problem that has "plagued societies for centuries" and emphasizing that the company "can’t control all the biases" of its users. After a barrage of media coverage, Airbnb CEO Brian Chesky this month admitted that discrimination is a "huge issue" and said the company "will be revisiting the design of our site from end to end to see how we can create a more inclusive platform." Indeed, today Airbnb convenes an invitation-only summit in Washington to discuss the situation and, perhaps, design improvements.

While I applaud Airbnb’s new interest in fighting discrimination on its platform, I can’t agree with Chesky’s subsequent claim that preventing discrimination on Airbnb is "really, really hard." Quite the contrary, the solution is apparent and has been known for years. In today’s piece, I renew my longstanding proposal that would substantially fix the problem, then offer two smaller adjustments that are appropriate under the circumstances.

The solution: Limit the distribution of irrelevant information that facilitates discrimination

Online environments make it easy to limit the information that guests and hosts reveal. If certain information causes discrimination, the natural remedy is to conceal that information so customers do not consider it when making booking and acceptance decisions.

Names and photos typically indicate the races of Airbnb guests and hosts. But names and photos are not necessary for guests and hosts to do business. Hosts and guests can amply assess one another’s trustworthiness using the significant other information Airbnb already collects and presents. For these reasons, I contend that the Airbnb site should not reveal sensitive race information until a transaction is confirmed. If guests and hosts don’t see names and photos in advance, they simply won’t be able to discriminate on that basis.

The proposal is a smaller change than it might at first appear. In fact, Airbnb has long limited information flows to advance its business interests. Airbnb’s revenue depends not just on introducing hosts to guests, and vice versa, but on transactions actually flowing through Airbnb’s platform so Airbnb can charge booking fees. Airbnb’s revenue would drop if guests and hosts could contact each other directly and agree to pay via PayPal, cash on arrival, or similar. To prevent hosts and guests from going around Airbnb, the company withholds email addresses and phone numbers while the users are still discussing a possible stay — letting each user examine the other’s profile, reviews, verifications, and more, but intentionally withholding the two pieces of information that would undermine Airbnb’s revenue. But Airbnb doesn’t just limit information presented in profiles and request templates; it also affirmatively filters messages between guests and hosts. Airbnb provides a messaging system to let users ask questions and share more details, but it doesn’t want the messages to be used to cut Airbnb out of the transaction. So Airbnb’s servers examine each message and block anything that looks like an email address or a phone number. In short, there’s ample precedent for Airbnb withholding information — when it wants to.
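The kind of server-side filtering described above can be sketched in a few lines. This is a hypothetical illustration, not Airbnb’s actual implementation; the patterns are simplified stand-ins for whatever rules Airbnb’s servers actually apply:

```python
import re

# Simplified, illustrative patterns; a production system would use
# broader rules to catch obfuscated contact information.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d(?:[\s.-]?\d){6,14}")  # 7-15 digits with separators

def redact_contact_info(message: str, placeholder: str = "[removed]") -> str:
    """Replace anything that looks like an email address or phone number."""
    message = EMAIL_RE.sub(placeholder, message)
    message = PHONE_RE.sub(placeholder, message)
    return message
```

For example, `redact_contact_info("call 555-123-4567 anytime")` would deliver the message with the phone number replaced by the placeholder, while leaving ordinary text untouched.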

I anticipate several objections to the proposed change:

  • Trust and safety. Certainly trust issues are central to Airbnb’s business — letting strangers stay in your home when you’re not there to supervise; staying in a stranger’s home when you’re at your most vulnerable (asleep). In that context, some might argue that every possible piece of additional information is important and should be provided. But I question what information the names and photos truly add. Airbnb already validates users’ identities, including checking driver’s license photographs and asking identity verification questions that impostors would struggle to answer. Airbnb reports linked Facebook and LinkedIn profiles, and Airbnb presents and tabulates reviews from prior transactions. Plus, all Airbnb transactions are prepaid. As a result, a host considering a guest’s request learns that the prospective guest has (say) a verified phone number and email, verified identity, verified Facebook and LinkedIn profiles with two hundred connections in total, and five favorable reviews from other Airbnb stays within the last year. A name and photo reveal race, gender, and age, but they provide little information about genuine trustworthiness. Whatever information the name and photo provide, that information comes more from (potentially inaccurate) stereotypes than from the name and photo themselves.
  • Perhaps users would be confused by communications without names. But "Airbnb Guest" and "Airbnb Host" make fine placeholders. Alternatively, Airbnb could move to a username system, letting users choose their own nicknames to use before a transaction is confirmed. eBay follows exactly that approach, and it works just fine. Airbnb would still organize a host’s messages with a given guest into a single page, and vice versa.
  • Some hosts want to see guest photos before accepting a request, and vice versa for guests choosing hosts. But if a homeowner is particularly concerned about strangers visiting his property, perhaps he shouldn’t be running a de facto hotel.
  • Photos help guests and hosts recognize each other. Airbnb’s David King, Director of Diversity and Belonging, in April 2016 told NPR: "The photos are on the platform for a reason. … You want to make sure that that guest that shows up at your door is the person that you’ve been communicating with." But nothing in the proposed change would prevent a host and guest from recognizing each other. After a booking is confirmed, they’d still see names and photos just as they do now. The change is to timing — revealing names and photos only after the booking is final.

In principle, Airbnb’s approach to names and photos could vary based on the type of listing. For example, names and photos might be viewed with special skepticism when a guest is renting the entire property. After all, when a guest occupies a property exclusively, the guest and host will have limited interaction. Indeed, they may never meet in person at all, exchanging keys via a doorman or lockbox. Then the specific identities of guest and host are all the less important. Conversely, it’s somewhat easier to see a proper role for photos and names when a host is truly "sharing" a property with a guest, jointly using common property facilities (perhaps a shared kitchen) or otherwise interacting with each other.
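The core of the proposal (always show trust signals such as verifications and review counts, but withhold name and photo until the booking is final) reduces to a simple disclosure rule. A minimal sketch, with hypothetical field names chosen for illustration:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    photo_url: str
    review_count: int
    identity_verified: bool

def visible_profile(profile: Profile, booking_confirmed: bool) -> dict:
    """Return the fields shown to the counterparty at a given stage.

    Trust signals are always visible; name and photo appear only after
    the booking is confirmed, per the proposal in the text.
    """
    visible = {
        "review_count": profile.review_count,
        "identity_verified": profile.identity_verified,
    }
    if booking_confirmed:
        visible["name"] = profile.name
        visible["photo_url"] = profile.photo_url
    else:
        visible["name"] = "Airbnb Guest"  # neutral placeholder before confirmation
    return visible
```

The design choice is purely one of timing: no information is destroyed, only deferred until discrimination on that basis is no longer possible.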

Allow testing

After our study, several users ran their own tests to assess possible discrimination. For example, after being rejected by a host, a black guest might create a new account with a "white-sounding" name and a photo of a white person — then use that new account to reapply to the same host. Multiple users have tested for discrimination via this methodology. (Examples: 1, 2)

Crucially, Airbnb prohibits such testing. Airbnb’s compulsory Terms of Service forbids all test accounts, instead requiring that users "agree to provide accurate, current and complete information during the registration process." Furthermore, Airbnb specifically insists that each user "may not have more than one (1) active Airbnb Account," and Airbnb claims the right to entirely eject any person who creates multiple accounts. But if each user is limited to a single account, in the user’s true name, most testing procedures are unworkable.

Airbnb rigorously enforces its prohibitions on test accounts and follows through on its threats to exclude users with multiple accounts. Indeed, during the testing for my second article about Airbnb discrimination, the company closed my personal account — even removing my reputation and reviews accumulated from prior stays. I requested that Airbnb reopen my account but have received no reply, and my account remains disabled to this day.

Airbnb has no proper reason to penalize those who test its services. It would be intolerable for a car dealership to post a sign saying "No testers allowed" in hopes of concealing prices that vary based on a buyer’s race. Nor should Airbnb be able to conceal discrimination on its platform through contractual restrictions and penalties.

Normalize dispute resolution

The standard American approach to dispute resolution is the legal system — judge and jury. Dissatisfied hosts and guests — such as those who think they’ve faced discrimination — ordinarily could go to court to explain the problem and try to articulate a violation of applicable law. But Airbnb specifically prohibits users from filing a standard lawsuit. In particular, Airbnb’s Terms of Service instead demand that "any dispute, claim or controversy arising out of or relating to … the Services or use of the Site … will be settled by binding arbitration."

For guests and hosts, arbitration presents several key problems. For one, Airbnb chose the arbitration service, so anyone pursuing a grievance might naturally worry that the arbitrator will favor Airbnb. Furthermore, arbitration results are confidential — so even if some users prevail against Airbnb, the interested public would never find out. Arbitration also provides little to no ability for complainants to get the documents they need to prove their case, for example by searching Airbnb’s records to learn what the company knew, who else complained, and what alternatives it considered. Nor does arbitration provide an appeals process appropriate for exploring new or complex questions of law. Most of all, Airbnb requires that each dispute proceed solely on an individual basis, and "the arbitrator may not consolidate more than one person’s claims." But individual complaints are inevitably small, with value too low to justify the top-notch attorneys and experts who would be needed to prove discrimination and explore Airbnb’s responsibility.

Courts, not arbitration, are the proper way to resolve these important disputes. The appearance of legitimacy would be much improved if decisions were rendered by a judge and jury chosen through formal government processes. With access to Airbnb documents and records as provided by law, aggrieved users would be able to search for information to support their claims — giving them a fair chance to assemble the evidence they need. With appeals, a series of judges would assess the situation and correct any errors in legal reasoning. And with a single case assessing the rights of a group of similar users, economies of scale would allow users to get the specialized assistance they need to have a real shot.

I don’t know that aggrieved guests or hosts would prevail against Airbnb on matters of discrimination. In some respects, such a case would break new ground, and Airbnb would forcefully argue that it is individual hosts, not Airbnb itself, whose actions are out of line. But Airbnb should face such complaints on the merits, not hide behind a contract that prevents users from bringing suit.

Airbnb told the New York Times that "these [arbitration] provisions are common." Certainly many companies require that customers arbitrate any disputes. But Airbnb’s service is special, raising persistent and serious concerns about discrimination (among other issues). These heightened sensitivities call for a different approach to dispute resolution — and a dispute system that works for credit cards and cell phone plans may not be right for questions of race, equality, and justice.

Airbnb also told the Times that "we believe ours [our arbitration agreement] is balanced and protects consumers." But it’s hard to see the balance in a contract that only takes rights away from users. Were the contract silent on dispute resolution, users could sue in any court authorized by law to hear the dispute, but instead Airbnb insists that not a single court, in any jurisdiction nationwide, can adjudicate any dispute pertaining to users’ relationships with Airbnb.

The incomplete benefits of Instant Book

Airbnb hosts are able to discriminate against disfavored guests because Airbnb’s standard process gives hosts discretion as to which guests to accept. Airbnb’s Instant Book feature thus offers a potential mechanism to reduce discrimination: When hosts pre-promise to accept any interested guest who agrees to pay, there’s much less opportunity to discriminate. Nonetheless, Instant Book addresses only a portion of the problem.

The biggest weakness of Instant Book is that only a fraction of properties offer this feature. A guest who wants to use Instant Book to avoid discrimination is thus accepting a much narrower range of properties. Airbnb periodically encourages hosts to try Instant Book, and it seems the proportion of properties with Instant Book is increasing, but slowly. That’s unsatisfactory: Guests shouldn’t have to choose between fair treatment and a full range of properties.

A second problem with Instant Book is that it serves only to protect guests from discrimination by hosts, but not to protect hosts from discrimination by guests. My research indicates that both problems are real, and Instant Book does nothing about the second.

A final concern is the prospect of cancellations that undo the anti-discrimination benefits of Instant Book. In principle a host could invoke Airbnb’s cancellation feature to reject a disfavored guest whose request was automatically confirmed by Instant Book. That’s exceptionally disruptive to the guest, taking away a booking which had already been paid in full and presented, to the guest and in Airbnb records, as confirmed. These problems are more than speculative; among others, the racist North Carolina host who canceled a black guest’s confirmed reservation had initially accepted that reservation via Instant Book.

To its credit, Airbnb requires a host to provide a reason when cancelling an Instant Book reservation, and Airbnb limits such cancellations to three per year. But the cancellations are penalty-free without any charge to hosts or any indication on a host’s profile page. (In contrast, when non-Instant Book hosts cancel reservations, they are charged fees and are penalized in profiles and on search results.) Moreover, three cancellations may suffice for many hosts to implement discriminatory preferences. If a host receives only occasional requests from the guests it seeks to discriminate against, the host could cancel those guests’ Instant Book stays with relative impunity.

Looking forward

Airbnb co-founder Joe Gebbia recently explained the site’s widespread use of photos of guests and hosts: "[A]nytime we could show a face in our service, we would … – in search results, on profiles, on the actual homepage.” Certainly photos are visually appealing and add some information. But the risk of discrimination appears to be unavoidable when photos are presented before a transaction is confirmed. No one would tolerate a hotel that required prospective guests to submit photos — nor a traditional bed-and-breakfast that did so. Nor should this approach be used online. The risk of discrimination is just too great relative to any benefit the photos may provide.

In January 2014, Mike Luca and I suggested the changes Airbnb should make to prevent discrimination:

[T]here is no fundamental reason why a guest needs to see a host’s picture in advance of making a booking — nor does a guest necessarily even need to know a host’s name (from which race may be inferred…). Indeed, Airbnb itself prohibits (and runs software to prevent) hosts and guests from sharing email addresses or phone numbers before a booking is made, lest this information exchange let parties contract directly and avoid Airbnb fees. Given Airbnb’s careful consideration of what information is available to guests and hosts, Airbnb might consider eliminating or reducing the prominence of host photos: It is not immediately obvious what beneficial information these photos provide, while they risk facilitating discrimination by guests. Particularly when a guest will be renting an entire property, the guest’s interaction with the host will be quite limited, and we see no real need for Airbnb to highlight the host’s picture.

Two and a half years later, the proposal remains appropriate, easy, and effective. Airbnb CEO Brian Chesky says the company "needs help solving" discrimination on Airbnb. Perhaps our longstanding proposal can be of assistance.

Android and Competition Law: Exploring and Assessing Google’s Practices in Mobile

Edelman, Benjamin, and Damien Geradin. “Android and Competition Law: Exploring and Assessing Google’s Practices in Mobile.” European Competition Journal 12, nos. 2-3 (2016): 159-194.

Since its launch in 2007, Android has become the dominant mobile device operating system worldwide. In light of this commercial success and certain disputed business practices, Android has come under substantial attention from competition authorities. We present key aspects of Google’s strategy in mobile, focusing on Android-related practices that may have exclusionary effects. We then assess Google’s practices under competition law and, where appropriate, suggest remedies to right the violations we uncover.

Discrimination Against Airbnb Guests with Michael Luca and Dan Svirsky

To facilitate trust, many online platforms encourage sellers to provide personal profiles and even to post pictures of themselves. However, these features may also facilitate discrimination based on sellers’ race, gender, age, or other characteristics.

In an article posted today, Michael Luca, Dan Svirsky, and I present results of a field experiment on Airbnb. Using guest accounts that are identical save for names indicating varying races, we submitted requests to more than 6,000 hosts. Requests from guests with distinctively African-American names are roughly 16% less likely to be accepted than requests from identical guests with distinctively White names. The difference persists whether the host is African American or White, male or female. The difference also persists whether the host shares the property with the guest or not, and whether the property is cheap or expensive.
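For clarity, the 16% figure is a relative gap in acceptance rates. Using the approximate magnitudes reported in the paper (acceptance of roughly 50% for distinctively white names versus roughly 42% for distinctively African-American names; exact figures are in the article), the arithmetic is:

```python
# Approximate acceptance rates (illustrative magnitudes from the paper)
white_rate = 0.50   # requests from guests with distinctively white names
black_rate = 0.42   # requests from guests with distinctively African-American names

# Relative gap: black-name guests are about 16% less likely to be accepted
relative_gap = (white_rate - black_rate) / white_rate
print(f"{relative_gap:.0%}")
```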

Discrimination is costly for hosts who indulge in it. Hosts who reject African-American guests are able to find a replacement guest only 35% of the time.

On the whole, our analysis suggests a need for caution. While information can facilitate transactions, it also facilitates discrimination. Airbnb’s site carefully shrouds information Airbnb wants to conceal, such as hosts’ email addresses and phone numbers, so guests can’t contact hosts directly and circumvent Airbnb’s fees. But when it comes to information that facilitates discrimination, including name and photo, Airbnb offers no such precaution.

Our working paper:

Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment.

What to do? Our draft suggests several ways Airbnb could change its site to reduce or prevent discrimination, including concealing guest names, concealing or deprioritizing guest photos, and increasing instant bookings. In the short run, we’re offering a browser plugin to let interested Airbnb users experience the site without information that may facilitate discrimination. Using our plugin, a host can view a guest’s request without having to see the guest’s face or name. Our plugin:

Debias Yourself.

This article extends and continues the research in my January 2014 working paper (with Michael Luca) as to discrimination against Airbnb hosts: Digital Discrimination: The Case of Airbnb.com.

Beyond the FTC Memorandum: Comparing Google’s Internal Discussions with Its Public Claims

Disclosure: I serve as a consultant to various companies that compete with Google. That work is ongoing and covers varied subjects, most commonly advertising fraud. I write on my own—not at the suggestion or request of any client, without approval or payment from any client.

Through a FOIA request, the Wall Street Journal recently obtained–and generously provided to the public–never-before-seen documents from the FTC’s 2011-2012 investigation of Google for antitrust violations. The Journal’s initial report (Inside the U.S. Antitrust Probe of Google) examined the divergence between the staff’s recommendation and the FTC commissioners’ ultimate decision, while search engine guru Danny Sullivan later highlighted 64 notable quotes from the documents.

In this piece, I compare the available materials (particularly the staff memorandum’s primary source quotations from internal Google emails) with the company’s public statements on the same subjects. The comparison is revealing: Google’s public statements typically emphasize a lofty focus on others’ interests, such as giving users the most relevant results and paying publishers as much as possible. Yet internal Google documents reveal managers who are primarily focused on advancing the company’s own interests, including through concealed tactics that contradict the company’s public commitments.

About the Document

In a 169-page memorandum dated August 8, 2012, the FTC’s Bureau of Competition staff examined Google’s conduct in search and search advertising. Through a Freedom of Information Act (FOIA) request, the WSJ sought copies of FTC records pertaining to Google. It seems the FTC intended to withhold this memorandum from its FOIA response, as it probably could have pursuant to FOIA Exemption 5 (the deliberative process privilege). Nonetheless, the FTC inadvertently produced the memorandum — or, more precisely, approximately half the pages of the memorandum. In particular, the FTC produced the pages with even numbers.

To ease readers’ analysis of the memorandum, I have improved the PDF file posted by the WSJ. Key enhancements: I used optical character recognition to index the file’s text (facilitating users’ full-text search within the file and allowing search engines to index its contents). I deskewed the file (straightening crooked scans), corrected PDF page numbering (to match the document’s original numbering), created hyperlinks to access footnotes, and added a PDF navigation panel with the document’s table of contents. The resulting document: FTC Bureau of Competition Memorandum about Google — August 8, 2012.

AdWords API restrictions impeding competition

In my June 2008 PPC Platform Competition and Google’s "May Not Copy" Restriction and July 2008 congressional testimony about competition in online search, it seems I was the first to alert policy-makers to brazen restrictions in Google’s AdWords API Terms and Conditions. The AdWords API provided full-featured access to advertisers’ AdWords campaigns. With both read and write capabilities, the AdWords API provided a straightforward facility for toolmakers to copy advertisers’ campaigns from AdWords to competing services, optimize campaigns across multiple services, and consolidate reporting across services. Instead, Google inserted contractual restrictions banning all of these functions. (Among other restrictions: "[T]he AdWords API Client may not offer a functionality that copies data from a non-AdWords account into an AdWords account or from an AdWords account to a non-AdWords account.")

Large advertisers could build their own tools to escape the restrictions. But for small to midsized advertisers, it would be unduly costly to make such tools on their own — requiring more up-front expenditure on tools than the resulting cost-savings would warrant. Crucially, Google prohibited software developers from writing the tools once and providing them to everyone interested — a much more efficient approach that would have saved small advertisers the trouble and expense of making their own tools. It was a brazen restriction with no plausible procompetitive purpose. The restriction caused clear harms: Small to midsized advertisers disproportionately used only Google AdWords, although Microsoft, Yahoo, and others could have provided a portion of the desired traffic at lower cost, reducing advertisers’ overall expense.

Historically, Google staff disputed these effects. For example, when I explained the situation in 2008, AdWords API product manager Doug Raymond told me in a personal email in March 2008 that the restrictions were intended to prevent "inaccurate comparisons of data [that] make it difficult for the end advertiser to understand the performance of AdWords relative to other products."

But internal discussions among Google staff confirm the effects I alleged. For example, in internal email, Google director of product management Richard Holden affirmed that many advertisers "don’t bother running campaigns on [Microsoft] or Yahoo because [of] the additional overhead needed to manage these other networks [in light of] the small amount of additional traffic" (staff memo at p.48, citing GOOGWOJC-000044501-05). Holden indicated that removing AdWords API restrictions would pave the way to more advertisers using more ad platforms, which he called a "significant boost to … competitors" (id.). He further confirmed that the change would bring cost savings to advertisers, noting that Microsoft and Yahoo "have lower average CPAs" (cost per acquisition, a key measure of price) (id.), meaning that advertisers would be receptive to using those platforms if they could easily do so. Indeed, Google had known these effects all along. In a 2006 document not attributed to a specific author, the FTC quotes Google planning to "fight commoditization of search networks by enforcing AdWords API T&Cs" (footnote 546, citing GOOGKAMA-0000015528), indicating that AdWords API restrictions allowed Google to avoid competing on the merits.

The FTC staff report reveals that, even within Google, the AdWords API restrictions were controversial. Holden ultimately sought "to eliminate this requirement" (key AdWords API restrictions) because the removal would be "better for customers and the industry as a whole" since it would "[r]educe friction" and make processes more "efficient" by avoiding time-consuming and error-prone manual work. Holden’s proposal prompted (in his own words) "debate" and significant opposition. Indeed, Google co-founder Larry Page seems to have disapproved. (See staff report p.50, summarizing the staff’s understanding, as well as footnote 280 as to documents presented to Page for approval in relaxing AdWords API restrictions; footnote 281 reporting that "Larry was OK with" a revised proposal that retained "the status quo" and thus cancelled the proposed loosening of restrictions.) Hal Varian, Google’s chief economist, also sought to retain the restrictions: "We’re the dominant incumbent in this industry; the folks pushing us to develop our API will be the underdogs trying to unseat us" (footnote 547, citing GOOGVARI-0000069-60R). Ultimately Holden’s proposal was rejected, and Google kept the restrictions in place until FTC and EC pressure compelled their removal.

From one perspective, the story ends well: In due course, the FTC, EC investigators, and others came to recognize the impropriety of these restrictions. Google removed the offending provisions as part of its 2013 commitments to FTC (section II) and proposed commitments to the EC (section III). Yet advertisers have never received refunds of the amounts they overpaid as a result of Google’s improper impediments to using competing tools. If advertisers incurred extra costs to build their own tools, Google never reimbursed them. And Google’s tactics suppressed the growth of competing search engines (including their recruitment of advertisers to increase revenue and improve advertising relevance), thereby accelerating Google’s dominance. Finally, until the recent release of the FTC staff report, it was always difficult to prove what we now know: That Google’s longstanding statements about the purpose of the restrictions were pretextual, and that Google’s own product managers knew the restrictions were in place not to improve the information available to advertisers (as Raymond suggested), but rather to block competitors and preserve high revenue from advertisers that used only Google.

Specialized search and favoring Google’s own services: benefiting users or Google?

For nearly a decade, competitors and others have questioned Google’s practice of featuring its own services in its search results. The core concern is that Google grants its own services favored and certain placement, preferred format, and other benefits unavailable to competitors — giving Google a significant advantage as it enters new sectors. Indeed, anticipating Google’s entry and advantages, prospective competitors might reasonably seek other opportunities. As a result, users end up with fewer choices of service providers, and advertisers with less ability to find alternatives if Google’s offerings are too costly or otherwise undesirable.

Against this backdrop, Google historically claimed its new search results were "quicker and less hassle" than alternatives, and that the old "ten blue links" format was outdated. "[W]e built Google for users," the company claimed, arguing that the design changes benefit users. In a widely-read 2008 post, Google Fellow Amit Singhal explained Google’s emphasis on "the most relevant results" and the methods used to assure result relevance. Google’s "Ten things we know to be true" principles begin with "focus on the user," claiming that Google’s services "will ultimately serve you [users], rather than our own internal goal or bottom line."

With access to internal Google discussions, FTC staff paint quite a different picture of Google’s motivations. Far from assessing what would most benefit users, Google staff examine the "threat" (footnote 102, citing GOOG-ITA-04-0004120-46) and "challenge" of "aggregators" which would cause "loss of query volumes" to competing sites and which also offer a "better advertiser proposition" through "cheaper, lower-risk" pricing (FTC staff report p.20 and footnote 102, citing GOOG-Texas-1486928-29). The documents continue at length: "the power of these brands [competing services] and risk to our monetizable traffic" (footnote 102, citing GOOG-ITA-05-0012603-16), with "merchants increasing % of spend on" competing services (footnote 102, citing GOOG-ITA-04-0004120-46). Bill Brougher, a Google product manager, assessed the risks:

[W]hat is the real threat if we don’t execute on verticals? (a) loss of traffic from Google.com because folks search elsewhere for some queries; (b) related revenue loss for high spend verticals like travel; (c) missing opty if someone else creates the platform to build verticals; (d) if one of our big competitors builds a constellation of high quality verticals, we are hurt badly

(footnote 102, citing GOOG-ITA-06-0021809-13) Notice Brougher’s sole focus on Google’s business interests, with not a word spent on what is best for users.

Moreover, the staff report documents Google’s willingness to worsen search results in order to advance the company’s strategic interests. Google’s John Hanke (then Vice President of Product Management for Geo) explained that "we want to win [in local] and we are willing to take some hits [i.e. trigger incorrectly sometimes]" (footnote 121, citing GOOG-Texas-0909676-77, emphasis added). Google also proved willing to sacrifice user experience in its efforts to demote competing services, particularly in the competitive sector of comparison shopping services. Google used human "raters" to compare product listings, but in 2006 experiments the raters repeatedly criticized Google’s proposed changes because they favored competing comparison shopping services: "We had moderate losses [in raters’ assessments of quality when Google made proposed changes] because the raters thought this was worse than a bizrate or nextag page" (footnote 154, citing GOOGSING-000014116-17). Rather than accept raters’ assessment that competitors had high-quality offerings that should remain in search results, Google changed raters’ criteria twice, finally imposing a set of criteria in which competitors’ services were no longer ranked favorably (footnote 154, citing GOOGEC-0168014-27, GOOGEC-0148152-56, GOOGC-0014649).

Specialized search and favoring Google’s own services: targeting bad sites or solid competitors?

In public statements, Google often claimed that sites were rightly deprioritized in search results, indicating that demotions targeted "low quality," "shallow" sites with "duplicate, overlapping, or redundant" content that is "mass-produced by or outsourced to a large number of creators … so that individual pages or sites don’t get as much attention or care." Google Senior Vice President Jonathan Rosenberg chose the colorful phrase "faceless scribes of drivel" to describe sites Google would demote "to the back of the arena."

But when it came to the competing shopping services Google staff sought to relegate, Google’s internal assessments were quite different. "The bizrate/nextag/epinions pages are decently good results. They are usually well-format[t]ed, rarely broken, load quickly and usually on-topic. Raters tend to like them. … [R]aters like the variety of choices the meta-shopping site[s] seem… to give" (footnote 154, citing GOOGSING-000014375).

Here too, Google’s senior leaders approved the decision to favor Google’s services. Google co-founder Larry Page personally reviewed the prominence of Google’s services and, indeed, sought to make Google services more prominent. For example: "Larry thought product [Google’s shopping service] should get more exposure" (footnote 120, citing GOOG-Texas-1004148). Product managers agreed, calling it "strategic" to "dial up" Google Shopping (footnote 120, citing GOOG-Texas-0197424). Others noted the competitive importance: Preferred placement of Google’s specialized search services was deemed important to avoid "ced[ing] recent share gains to competitors" (footnote 121, citing GOOG-Texas-0191859) or indeed essential: "most of us on geo [Google Local] think we won’t win unless we can inject a lot more of local directly into google results" (footnote 121, citing GOOGEC-0069974). Assessing "Google’s key strengths" in launching product search, one manager flagged Google’s control over "Google.com real estate for the ~70MM of product queries/day in US/UK/De alone" (footnote 121, citing GOOG-Texas-0199909), a unique advantage that competing services could not match.

Specialized search and favoring Google’s own services: algorithms versus human decisions

A separate divergence from Google’s public statements comes in the use of staff decisions versus algorithms to select results. Amit Singhal’s 2008 post presented the company’s (supposed) insistence on "no manual intervention":

In our view, the web is built by people. You are the ones creating pages and linking to pages. We are using all this human contribution through our algorithms. The final ordering of the results is decided by our algorithms using the contributions of the greater Internet community, not manually by us. We believe that the subjective judgment of any individual is, well … subjective, and information distilled by our algorithms from the vast amount of human knowledge encoded in the web pages and their links is better than individual subjectivity.

2011 testimony from Google Chairman Eric Schmidt (written responses to the Senate Committee on the Judiciary Subcommittee on Antitrust, Competition Policy, and Consumer Rights) made similar claims: "The decision whether to display a onebox is determined based on Google’s assessment of user intent" (p.2). Schmidt further claimed that Google displayed its own services because they "are responsive to what users are looking for," in order to "enhanc[e] user satisfaction" (p.2).

The FTC’s memorandum quotes ample internal discussions to the contrary. For one, Google repeatedly changed the instructions for raters until raters assessed Google’s services favorably (the practice discussed above, citing and quoting from footnote 154). Similarly, Page called for "more exposure" for Google services, and staff wanted "a lot more of local directly into search results" (cited above). In each instance, Google managers and staff substituted their judgment for algorithms and user preferences as embodied in click-through rates. Furthermore, Google modified search algorithms to show Google’s services whenever a "blessed site" (key competitor) appeared. Google staff explained the process: "Product universal top promotion based on shopping comparison [site] presence" (footnote 136, citing GOOGLR-00161978) and "add[ing] a ‘concurring sites’ signal to bias ourselves toward triggering [display of a Google local service] when a local-oriented aggregator site (i.e. Citysearch) shows up in the web results" (footnote 136, citing GOOGLR-00297666). Whether implemented by hand or through human-directed changes to algorithms, Google sought to put its own services first, contrary to its prior commitments to evenhandedness.
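The "concurring sites" logic described in these quotes can be illustrated with a toy sketch. This is my own reconstruction of the mechanism as the documents describe it, not Google's code; the function name and the site list are illustrative.

```python
# Toy illustration (my reconstruction, not Google's actual code) of a
# "concurring sites" trigger: if a "blessed" competitor appears in the
# organic web results, bias toward inserting the in-house vertical.

BLESSED_SITES = {"citysearch.com", "yelp.com"}  # illustrative list

def should_insert_local_onebox(organic_result_domains):
    """Return True when a local-oriented aggregator shows up in results."""
    return any(d in BLESSED_SITES for d in organic_result_domains)

print(should_insert_local_onebox(["example.com", "citysearch.com"]))  # True
print(should_insert_local_onebox(["example.com"]))                    # False
```

The notable feature, as the FTC memorandum describes it, is that the trigger keys off competitors' presence rather than any independent measure of the in-house service's quality.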

At the same time, Google systematically applied lesser standards to its own services. Examining Google’s launch report for a 2008 algorithm change, FTC staff said that Google elected to show its product search OneBox "regardless of the quality" of that result (footnote 119, citing GOOGLR-00330279-80) and despite "pretty terribly embarrassing failures" in returning low-quality results (footnote 170, citing GOOGWRIG-000041022). Indeed, Google’s product search service apparently failed Google’s standard criteria for being indexed by Google search (p.80 and footnote 461), yet Google nonetheless put the service in top positions (p.30 and footnote 170, citing GOOG-Texas-0199877-906).

The FTC’s documents also call into question Eric Schmidt’s 2011 claim (in written responses to a Senate committee) that "universal search results are our search service — they are not some separate ‘Google product or service’ that can be ‘favored.’" The quotes in the preceding paragraph indicate that Google staff knew they could give Google’s own services "more exposure" by "inject[ing] a lot more of [the services] into google results." Whether or not these are "separate" services, they certainly can be made more or less prominent, as Google’s Page and staff recognized but Schmidt’s testimony denies. Meanwhile, in oral testimony, Schmidt said "I’m not aware of any unnecessary or strange boosts or biases." But consider Google’s "concurring sites" feature, which caused Google services to appear whenever key competitors’ services were shown (footnote 136, citing GOOGLR-00297666). This was surely not genuinely "necessary" in the sense that search could not function without it; indeed, Google’s own raters seemed to think search would be better without it. And these insertions were surely "strange" in the sense that they were unknown outside Google until the FTC memorandum became available last week. In response to a question from Senator Lee, asking whether Google "cooked it" to make its results always appear in a particular position, Schmidt responded "I can assure you, we’ve not cooked anything." But in fact the "concurring sites" feature guaranteed precisely that Google’s service would appear, and Google staff deliberated at length over the position in which Google services would appear (footnote 138).

All in all, Google’s internal discussions show a company acutely aware of its special advantage: Google could increase the chance of its new services succeeding by making them prominent. Users might dislike the changes, but Google managers were plainly willing to take actions their own raters considered undesirable in order to increase the uptake of the company’s new services. Schmidt denied that such tampering was possible or even logically coherent, but in fact it was widespread.

Payments to publishers: as much as possible, or just enough to meet waning competition?

In public statements, Google touts its efforts to "help… online publishers … earn the most advertising revenue possible." I’ve always found this a strange claim: Google could easily cut its fees so that publishers retain more of advertisers’ payments. Instead, publishers have long reported — and the FTC’s documents now explicitly confirm — that Google has raised its fees and thus cut payments to publishers. The FTC memorandum quotes Google co-founder Sergey Brin: "Our general philosophy with renewals has been to reduce TAC across the board" (footnote 517, citing GOOGBRIN-000025680). Google staff confirm an "overall goal [of] better AFS economics" through "stricter AFS Direct revenue-share tiering guidelines" (footnote 517, citing GOOGBRAD-000012890) — that is, lower payments to publishers. The FTC even released revenue share tiers for a representative publisher, reporting a drop from 80%, 85%, and 87.5% to 73%, 75%, and 77% (footnote 320, citing GOOG-AFS-000000327), increasing Google’s fees to the publisher by as much as 84%. (Methodology: divide Google’s new fee by its old fee, e.g. (1-0.875)/(1-0.77)=1.84.)
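The arithmetic behind the 84% figure can be checked with a short script. The only assumption is the one stated in the methodology note: Google's fee on each tier is one minus the publisher's revenue share.

```python
# Check of the fee-increase arithmetic for the revenue-share tiers quoted
# from FTC footnote 320. Google's fee is one minus the publisher's share;
# the increase divides the new fee by the old fee.

old_shares = [0.80, 0.85, 0.875]   # publisher's old revenue-share tiers
new_shares = [0.73, 0.75, 0.77]    # publisher's new revenue-share tiers

for old, new in zip(old_shares, new_shares):
    old_fee, new_fee = 1 - old, 1 - new
    increase = new_fee / old_fee - 1
    print(f"share {old:.1%} -> {new:.0%}: Google's fee rises {increase:.0%}")
```

The top tier (87.5% falling to 77%) produces the 84% fee increase the memorandum's figures imply; the lower tiers yield increases of 35% and 67%.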

The FTC’s investigation revealed why Google was able to impose these payment reductions and fee increases: Google does not face effective competition for small to midsized publishers. The FTC memorandum quotes no documents in which Google managers worry about Microsoft (or others) aggressively recruiting Google’s small to midsized publishers. Indeed, FTC staff report that Microsoft largely ceased attempts in this vein. (Assessing Microsoft’s withdrawal, the FTC staff note Google contract provisions preventing a competing advertising service from bidding only on those searches and pages where it has superior ads. Thus, Microsoft had little ability to bid on certain terms but not others. See memorandum p.106.)

The FTC notes Microsoft continuing to pursue some large Google publishers, but with limited success. A notable example is AOL, which Google staff knew Microsoft "aggressively woo[ed] … with large guarantees" (p.108). An internal Google analysis showed little concern about losing AOL but significant concern about Microsoft growing: "AOL holds marginal search share but represents scale gains for a Microsoft + Yahoo! Partnership… AOL/Microsoft combination has modest impact on market dynamics, but material increase in scale of Microsoft’s search & ads platform" (p.108). Google had historically withheld many features from AOL, whereas AOL CEO Tim Armstrong sought more. (WSJ reported: "Armstrong want[ed] AOL to get access to the search innovation pipeline at Google, rather than just receive a more basic product.") By all indications Google accepted AOL’s request only due to pressure from Microsoft: "[E]ven if we make AOL a bit more competitive relative to Google, that seems preferable to growing Bing" (p.108). As usual, Google’s public statements contradicted its private discussions; despite calling AOL’s size "marginal" in internal discussions (p.108), a joint press release quotes Google’s Eric Schmidt praising "AOL’s strength."

A Critical Perspective

The WSJ also recently flagged Google’s "close ties to White House," noting large campaign contributions, more than 230 meetings at the White House, high lobbying expenditures, and ex-Google staff serving in senior staff positions. In an unusual press release, the FTC denied that improper factors affected the Commission’s decision. Google’s Rachel Whetstone, SVP Communications and Policy, responded by shifting focus to WSJ owner Rupert Murdoch personally, then explaining that some of the meetings were industry associations and other matters unrelated to Google’s competition practices.

Without records confirming discussion topics or how decisions were made, it is difficult to reach firm conclusions about the process that led the FTC not to pursue claims against Google. It is also difficult to rule out the WSJ’s conclusion of political influence. Indeed, Google used exactly this reasoning in critiquing the WSJ’s analysis: "We understand that what was sent to the Wall Street Journal represents 50% of one document written by 50% of the FTC case teams." Senator Mike Lee this week confirmed that the Senate Committee on the Judiciary will investigate the possibility of improper influence, and perhaps that investigation will yield further insight. But even the incomplete FTC memorandum reproduces scores of quotes from Google documents, and these quotes offer an unusual opportunity to compare Google’s internal statements with its public claims. Google’s broadest claims of lofty motivations and Internet-wide benefits were always suspect, and Google’s public statements fall further into question when compared with frank internal discussions.

There’s plenty more to explore in the FTC’s report. I will post the rest of the document if a further FOIA request or other development makes more of it available.

Discrimination at Airbnb with Michael Luca

Online marketplaces often contain information not only about products, but also about the people selling the products. In an effort to facilitate trust, many platforms encourage sellers to provide personal profiles and even to post pictures of themselves. However, these features may also facilitate discrimination based on sellers’ race, gender, age, or other characteristics.

Last week Michael Luca and I posted Digital Discrimination: The Case of Airbnb.com, in which we test for racial discrimination against landlords in the online rental marketplace Airbnb.com. We collected information about all Airbnb hosts in New York City, including their rental prices and the quality of their properties. We find that non-black hosts charge approximately 12% more than black hosts for the equivalent rental. These effects are robust when controlling for all information visible in the Airbnb marketplace, including even property photos.
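The estimation approach described here, a regression of log price on a host-race indicator plus controls for visible listing information, can be sketched on simulated data. Everything below, from the variable names to the planted -0.12 coefficient, is illustrative rather than the paper's actual data or specification; it simply shows how such a regression recovers a percentage price gap.

```python
import numpy as np

# Illustrative sketch (not the paper's code or data): a hedonic regression
# of log(price) on an indicator for black hosts plus listing controls.
# With log prices, a coefficient near -0.12 on the indicator corresponds
# to roughly the 12% gap reported in the paper.

rng = np.random.default_rng(0)
n = 5000
black_host = rng.integers(0, 2, n)       # 1 if host is black (simulated)
bedrooms = rng.integers(1, 4, n)         # listing controls (simulated)
review_score = rng.normal(4.5, 0.3, n)

log_price = (4.0 + 0.3 * bedrooms + 0.1 * review_score
             - 0.12 * black_host + rng.normal(0, 0.2, n))

# OLS via least squares; columns: [const, black_host, bedrooms, score]
X = np.column_stack([np.ones(n), black_host, bedrooms, review_score])
beta, *_ = np.linalg.lstsq(X, log_price, rcond=None)
print(f"estimated host-race coefficient: {beta[1]:.3f}")  # near -0.12
```

The key point the sketch mirrors is the paper's robustness claim: the race coefficient is estimated while holding the other observable listing characteristics constant.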

Our findings highlight the risk of discrimination in online marketplaces, suggesting an important unintended consequence of a seemingly routine mechanism for building trust. There is no fundamental reason why a guest needs to see a host’s picture in advance of making a booking — nor does a guest necessarily even need to know a host’s name (from which race may sometimes be inferred). In other respects, Airbnb has been quite sophisticated in limiting the information available to hosts and guests on its platform — for example, Airbnb prohibits (and runs software to prevent) hosts and guests from sharing email addresses or phone numbers before a booking is made, lest this information exchange let parties contract directly and avoid Airbnb fees. Given Airbnb’s careful consideration of what information is available to guests and hosts, Airbnb might consider eliminating or reducing the prominence of host photos: It is not immediately obvious what beneficial information these photos provide, while they risk facilitating discrimination by guests.

Digital Discrimination: The Case of Airbnb.com

Edelman, Benjamin, and Michael Luca. “Digital Discrimination: The Case of Airbnb.com.” Harvard Business School Working Paper, No. 14-054, January 2014.

Online marketplaces often contain information not only about products, but also about the people selling the products. In an effort to facilitate trust, many platforms encourage sellers to provide personal profiles and even to post pictures of themselves. However, these features may also facilitate discrimination based on sellers’ race, gender, age, or other aspects of appearance. In this paper, we test for racial discrimination against landlords in the online rental marketplace Airbnb.com. Using a new data set combining pictures of all New York City landlords on Airbnb with their rental prices and information about quality of the rentals, we show that non-black hosts charge approximately 12% more than black hosts for the equivalent rental. These effects are robust when controlling for all information visible in the Airbnb marketplace. These findings highlight the prevalence of discrimination in online marketplaces, suggesting an important unintended consequence of a seemingly-routine mechanism for building trust.