Edge Shopping Stand-Down Violations
December 8, 2025 | https://www.benedelman.org/edge-shopping-standdown/

Affiliate network rules require shopping plugins to “stand down”—not present their affiliate links, not even highlight their buttons—when another publisher has already referred a user to a given merchant.  Inexplicably, Microsoft Shopping often does no such thing.

The basic bargain of affiliate marketing is that a publisher presents a link to a user, who (the publisher hopes) clicks, browses, and buys.  But if a publisher can put reminder software on a user’s computer or otherwise present messages within a user’s browser, it gets an extraordinary opportunity for its link to be clicked last, even if another publisher actually referred the user.  To preserve balance and give regular publishers a fair shot, affiliate networks imposed a stand-down rule: If another publisher already referred the user, a publisher with software must not show its notification.  This isn’t just an industry norm; it is embodied in contracts between publishers, networks, and merchants.  (Terms and links below.)

In 2021, Microsoft added shopping features to its Edge web browser.  If a user browses an ecommerce site participating in Microsoft Cashback, Edge Shopping opens a notification encouraging the user to click.  Under affiliate network stand-down rules, this notification must not be shown if another publisher already referred that user to that merchant.  Yet in dozens of tests over two months, I found the stand-down logic just isn’t working.  Edge Shopping systematically ignores stand-down.  It pops open.  Time.  After.  Time.

This is a blatant violation of affiliate network rules.  From a $3 trillion company, with ample developers, product managers, and lawyers to get it right.  As to a product users didn’t even ask for.  (Edge Shopping is preinstalled in Edge, which is of course preinstalled in Windows.)  Edge Shopping used to stand down when required, and that’s what I saw in testing several years ago.  But later, something went terribly wrong.  At best, a dev changed a setting and no one noticed.  Even then, where are the testers?  As a sometimes-fanboy (my first long-distance call was reporting a bug to Microsoft tech support!) and from 2018 to 2024 an employee (details below), I want better.  The publishers whose commissions were taken—their earnings hang in the balance, and not only do they want better, they are suing to try to get it.  (Again, more below.)

Contract provisions require stand-down

Above, I mentioned that stand-down rules are embodied in contract.  I wrote up some of these contract terms in January (there, remarking on the Honey violations shown in a much-watched video by MegaLag).  Restating with a focus on what’s most relevant here (with emphasis added):

Commission Junction Publisher Service Agreement: “Software-based activity must honor the CJ Affiliate Software Publishers Policy requirements… including … (iv) requirements prohibiting usurpation of a Transaction that might otherwise result in a Payout to another Publisher… and (v) non-interference with competing advertiser/ publisher referrals.”

Rakuten Advertising Policies: “Software Publishers must recognize and Stand-down on publisher-driven traffic… ‘Stand-down’ means the software may not activate and redirect the end user to the advertiser site with their Supplier Affiliate link for the duration of the browser session.  … The [software] must stand-down and not display any forms of sliders or pop-ups to prompt activation if another publisher has already referred an end user.”  Stand down must be complete: In a stand-down situation, the publisher’s software “may not operate.”

Impact “Stand-Down Policy Explained”: Prohibits publishers “using browser extensions, toolbars, or in-cart solutions … from interfering with the shopping experience if another click has already been recorded from another partner.”  These rules appear within an advertiser’s “Contracts” “General Terms”, affirming that they are contractual in nature.  Impact’s Master Program Agreement is also on point, prohibiting any effort to “interfere with referrals of End Users by another Partner.”

Awin Publisher Code of Conduct: “Publishers only utilise browser extensions, adware and toolbars that meet applicable standards and must follow “stand-down” rules. … must recognise instances of activities by other Awin Publishers and “stand-down” if the user was referred to the Advertiser site by another Awin Publisher. By standing-down, the Publisher agrees that the browser extension, adware or toolbar will not display any form of overlays or pop-ups or attempt to overwrite the original affiliate tracking while on the Advertiser website.”

Edge does not stand down

In test after test, I found that Edge Shopping does not stand down.

In a representative video, from testing on November 28, 2025, I requested the VPN and security site surfshark.com via a standard CJ affiliate link.

Address bar showing affiliate link as start of navigation (from video at 0:01)

CJ redirected me to Surfshark with a URL referencing cjdata, cjevent, aff_click_id, utm_source=cj, and sf_cs=cj.  Each of those parameters indicated that this was, yes, an affiliate redirect from CJ to Surfshark.

Arriving at surfshark.com (from video at 0:04)

Then Microsoft Shopping popped up its large notification box with a blue button that, when clicked, invokes an affiliate link and sets affiliate cookies.

Edge Shopping pops open its window (from video at 0:08)

Notice the sequence: I begin at another publisher’s CJ affiliate link, the merchant’s site loads, and Edge Shopping does not stand down.  This is squarely within the prohibition of CJ’s rules.

Edge sends detailed telemetry from the browser to Microsoft’s servers reporting what it did, and to a large extent why.  Here, Edge simultaneously reports the Surfshark URL (with cjdata=, cjevent=, aff_click_id=, utm_source=cj, and sf_cs=cj parameters, each indicating a referral from CJ) and shouldStandDown set to 0 (denoting false/no, i.e. Edge deciding not to stand down).

POST https://www.bing.com/api/shopping/v1/savings/clientRequests/handleRequest HTTP/1.1 
...
{"anid":"","request_body":"{\"serviceName\":\"NotificationTriggering\",\"methodName\":\"SelectNotification\",\"requestBody\":\"{\\\"autoOpenData\\\":{\\\"extractedData\\\":{\\\"paneState\\\":{\\\"copilotVisible\\\":false,\\\"shoppingVisible\\\":false}},\\\"localData\\\":{\\\"isRebatesEnabled\\\":true,\\\"isEdgeProfileRebatesUser\\\":true,\\\"shouldStandDown\\\":0,\\\"lastShownData\\\":null,\\\"domainLevelCooldownData\\\":[],\\\"currentUrl\\\":\\\"https://surfshark.com/?cjdata=MXxOfDB8WXww&cjevent=cb8b45c0cc8e11f0814803900a1eba24&PID=101264606&aff_click_id=cb8b45c0cc8e11f0814803900a1eba24&utm_source=cj&utm_medium=6831850&sf_cs=cj&sf_cm=6831850\\\" ...
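
For anyone reproducing this with a proxy, Edge’s decision is easy to pull out of such a capture programmatically.  The request_body field is a JSON string that itself wraps another JSON string, so two extra decoding passes expose shouldStandDown and currentUrl.  A minimal Python sketch, my own illustration for handling a capture like the one above rather than anything from Microsoft’s code:

import json

def edge_stand_down_decision(captured_post_body: str):
    # Outer layer: {"anid": "...", "request_body": "<escaped JSON>"}
    outer = json.loads(captured_post_body)
    # Middle layer: {"serviceName": "NotificationTriggering", ..., "requestBody": "<escaped JSON>"}
    request = json.loads(outer["request_body"])
    # Inner layer: the autoOpenData payload shown above
    local_data = json.loads(request["requestBody"])["autoOpenData"]["localData"]
    return local_data["shouldStandDown"], local_data["currentUrl"]

Run against the full (untruncated) body of a request like the one above, this returns 0 alongside a currentUrl packed with CJ referral parameters.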

With a standard CJ affiliate link, and with multiple references to “cj” right in the URL, I struggle to see why Edge failed to realize this is another affiliate’s referral. If I were writing stand-down code, I would first watch for affiliate links (as in the first screenshot above), but surely I’d also check the landing page URL for significant strings such as source=cj.  Both methods would have called for standing down.
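
Here is a minimal sketch of that landing-page check, offered as my own illustration rather than a claim about how Edge’s code is organized.  It looks only for the CJ indicators visible in this test; a production version would also cover other networks’ parameters and would watch the navigation itself.

from urllib.parse import urlparse, parse_qs

# Referral indicators seen in the Surfshark landing URL above (CJ only, for illustration)
CJ_PARAMS = {"cjevent", "cjdata", "aff_click_id"}

def should_stand_down(landing_url: str) -> bool:
    query = parse_qs(urlparse(landing_url).query)
    if CJ_PARAMS & query.keys():
        return True
    # e.g. utm_source=cj, as in the captured URL
    return "cj" in [value.lower() for value in query.get("utm_source", [])]

Applied to the captured Surfshark URL, this returns True on both checks, meaning the notification should never have appeared.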

Another notable detail in Edge’s telemetry is that by collecting the exact Surfshark landing page URL, including the PID= parameter, Microsoft receives information about which other publisher’s commission it is taking.  Were litigation to require Microsoft to pay damages to the publishers whose commissions it took, these records would give direct evidence about who was displaced and how much, without needing to consult affiliate network logs.  This method doesn’t always work—some advertisers track affiliates only through cookies, not URL parameters; others redirect away the URL parameters in a fraction of a second.  But when it works, more than half the time in my experience, it’s delightfully straightforward.
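
A sketch of how mechanical that accounting could be, again my own illustration and applicable only when the advertiser keeps the referral parameters in the landing URL:

from urllib.parse import urlparse, parse_qs

def displaced_publisher(landing_url: str):
    # CJ's PID parameter names the publisher whose referral preceded Edge's offer
    query = parse_qs(urlparse(landing_url).query)
    return query.get("PID", [None])[0]

For the Surfshark URL in the telemetry above, this returns the CJ publisher ID 101264606.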

Additional observations

Had I observed this problem only once, I might have ignored it as an outlier.  But no.  Over the past three weeks, I tested a dozen-plus mainstream merchants from CJ, Rakuten Advertising, Impact, and Awin, in 25+ test sessions, all with screen recording.  In each test, I began by pasting another publisher’s affiliate link into the Edge address bar.  Time after time, Edge Shopping did not stand down, and presented its offer despite the other affiliate link.  Usually Edge Shopping’s offer appeared in a popup as shown above.  The main variation was whether this popup appeared immediately upon my arrival at the merchant’s home page (as in the Surfshark example above), versus when I reached the shopping cart (as in the Newegg example below).

In a minority of instances, Edge Shopping presented its icon in Edge’s Address Bar rather than opening a popup.  While this is less intrusive than a popup, it still violates the contract provisions (“non-interference”, “may not activate”, “may not operate”, may not “interfere”, all as quoted above).  The icon turns blue to attract a user’s attention, inviting the user to open Edge Shopping and click its link, causing Microsoft to claim commission that would otherwise flow to another publisher.  That’s exactly what “non-interference” rules out.  “May not operate” means do nothing, not even change appearance in the Address Bar.  Sidenote: At Awin, uniquely, this seems to be allowed.  See Publisher Code of Conduct, Rule 4, guidance 4.2.  For Awin merchants, I count a violation only if Edge Shopping auto-opened its popup, not if it merely appeared in the Address Bar.

Historically, some stand-down violations were attributed to tricky redirects.  A publisher might create a redirect link like https://www.nytimes.com/wirecutter/out/link/53437/186063/4/153497/?merchant=Lego which redirects (directly or via additional steps) to an affiliate link and on to the merchant (in this case, Lego).  Some shopping plugins had trouble recognizing an affiliate link when it occurred in the middle of a redirect chain.  This was a genuine concern when first raised twenty-plus years ago (!), when Internet Explorer 6’s API limited how shopping plugins could monitor browser navigation.  After two decades of improvements in browser and plugin architecture, this problem is in the past.  (Plus, for better or worse, the contracts require shopping plugins to get it right—no matter the supposed difficulty.)  Nonetheless, I didn’t want redirects to complicate interpretation of my findings.  So all my tests used the simplest possible approach: Navigate directly to an affiliate link, as shown above.  With redirects ruled out, the conclusion is straightforward: Edge Shopping ignores stand-down even in the most basic conditions.
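
For what it’s worth, spotting an affiliate hop inside a redirect chain is also straightforward with modern tooling.  A minimal sketch, my own illustration, assuming the hops are ordinary HTTP redirects (JavaScript or meta-refresh hops would require a real browser to observe):

import requests
from urllib.parse import urlparse, parse_qs

CJ_PARAMS = {"cjevent", "cjdata", "aff_click_id"}

def affiliate_hops(publisher_link: str):
    # Follow the chain server-side and keep every intermediate URL
    response = requests.get(publisher_link, allow_redirects=True, timeout=30)
    hops = [r.url for r in response.history] + [response.url]
    # Flag any hop whose query string carries CJ referral parameters
    return [url for url in hops if CJ_PARAMS & parse_qs(urlparse(url).query).keys()]

A shopping plugin sees the same chain, hop by hop, through the browser’s own navigation events.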

I mentioned above that I have dozens of examples.  Posting many feels excessive.  But here’s a second, as to Newegg, from testing on December 5, 2025.

Litigation ongoing

Edge’s stand-down violations are particularly important because publishers have pending litigation about Edge claiming commissions that should have flowed to them.  After MegaLag’s famous December 2024 video, publishers filed class action litigation against Honey, Capital One, and Microsoft.  (Links open the respective dockets.)

I have no role in the case against Microsoft and haven’t been in touch with plaintiffs or their lawyers.  If I had been involved, I might have written the complaint and Opposition to Motion to Dismiss differently.  I would certainly have used the term “stand-down” and would have emphasized the governing contracts—facts for some reason missing from plaintiffs’ complaint.

Microsoft’s Motion to Dismiss was fully briefed as of September 2, and the court is likely to issue its decision soon.

Microsoft’s briefing emphasizes that it was the last click in each scenario plaintiffs describe, and claims that being the last click makes it “entitled to the purchase attribution under last-click attribution.”  Microsoft ignores the stand-down requirements laid out above.  Had Microsoft honored stand-down, it would have opened no popup and presented no affiliate link—so the referring publisher would have been the last click, and commission would have flowed as plaintiffs say it should have.

Microsoft then remarks on plaintiffs not showing a “causal chain” from Microsoft Shopping to plaintiffs losing commission, and criticizes plaintiffs’ causal analysis as “too weak.”  Microsoft emphasizes the many uncertainties: customers might not purchase, other shopping plugins might take credit, networks might reallocate commission for some other reason.  Here too, Microsoft misses the mark.  Of course the world is complicated, and nothing is guaranteed.  But Microsoft needed only to do what the contracts require: stand down when another publisher already referred that user in that shopping session.

Later, Microsoft argues that its conduct cannot be tortious interference because plaintiffs did not identify what makes Microsoft’s conduct “improper.”  Let me leave no doubt.  As a publisher participating in affiliate networks, Microsoft was bound by networks’ contracts including the stand-down terms quoted above.  Microsoft dishonored those contracts to its benefit and to publishers’ detriment, contrary to the exact purpose of those provisions and contrary to their plain language.  That is the “improper” behavior which plaintiffs complain about.  In a puzzling twist, Microsoft then argues that it couldn’t “reasonably know[]” about the contracts of affiliate marketing.  But Microsoft didn’t need to know anything difficult or obscure; it just needed to do what it had, through contract, already promised.

Microsoft continues: “In each of Plaintiffs’ examples, a consumer must affirmatively activate Microsoft Shopping and complete a purchase for Microsoft to receive a commission, making Microsoft the rightful commission recipient if it is the last click in that consumer’s purchase journey.”  It is as if Microsoft’s lawyers have never heard of stand-down.  There is nothing “rightful” about Microsoft collecting a commission by presenting its affiliate link in situations prohibited by the governing contracts.

Microsoft might or might not be right that its conduct is acceptable in the abstract.  But the governing contracts plainly rule out Microsoft’s tactics.  In due course, plaintiffs may file an amended complaint, and perhaps that will take an approach closer to what I envision.  In any event, whatever the complaint, Microsoft’s motion-to-dismiss arguments seem to me simply wrong because Microsoft was required by contract to stand down—and it provably did not.

***

In June 2025, news coverage remarked on Microsoft removing the coupons feature from Edge (a different shopping feature that recommended discount codes to use at checkout) and hypothesized that this removal was a response to ongoing litigation.  But if Microsoft wanted to reduce its litigation exposure, removing the coupons feature wasn’t the answer.  The basis of litigation isn’t that Microsoft Shopping offers (offered) coupons to users.  The problem is that Microsoft Shopping presents its affiliate link when applicable contracts say it must not.

Catching affiliate abuse

I’ve been testing for affiliate abuse since 2004.  From 2004 to 2018, I ran an affiliate fraud consultancy, which caught all manner of abuse—including shopping plugins (what that page calls “loyalty programs”), adware, and cookie-stuffing.  My work in that period included detecting the activity that led to the 2008 civil litigation and criminal charges against Brian Dunning and Shawn Hogan (a fact I can only reveal because an FBI agent’s declaration credited me).  I paused this work from 2018 to 2024, but resumed it this year as Chief Scientist of Visible Performance Technologies, which provides automation to detect stand-down violations, adware, low-intention traffic, and related abuses.  As you’d expect, VPT has long been reporting Edge stand-down violations to clients that contract for monitoring of shopping plugins.

My time from 2018 to 2024, as an employee of Microsoft, is relevant context.  I proposed Bing Cashback and led its product management and business development through launch.  Bing Cashback put affiliate links into Bing search results, letting users earn rebates without resorting to shopping plugins or reminders, and avoiding the policy complexities and contractual restrictions on affiliate software.  Meanwhile, Bing Cashback provided a genuine reason for users to choose Bing over Google.  Several years later, others added cashback to Edge, but I wasn’t involved in that.  Later I helped improve the coupons feature in Edge Shopping.  In this period, I never saw Edge Shopping violate stand-down rules.

I ended work with Bing and Edge in 2022, after which I pursued AI projects until I resigned in 2024.  I don’t have inside knowledge about Edge Shopping stand-down or other aspects of Microsoft Cashback in Edge.  If I had such information, I would not be able to share it.  Fortunately the testing above requires no special information, and anyone with Edge and a screen-recorder can reproduce what I report.

Impact of GitHub Copilot on code quality
November 19, 2024 | https://www.benedelman.org/github-copilot-code-quality/

Jared Bauer summarizes results of a study I suggested this spring.  202 developers were randomly assigned to use GitHub Copilot, while the others were instructed not to use AI tools.  The participants were asked to complete a coding task.  Developers with GitHub Copilot had a 56% greater likelihood of passing all unit tests.  Other developers then evaluated the submitted code to assess quality and readability.  Code from developers with GitHub Copilot was rated better on readability, maintainability, and conciseness.  All these differences were statistically significant.

The Effect of Microsoft Copilot in a Multi-lingual Context
August 1, 2024 | https://www.benedelman.org/the-effect-of-copilot-in-a-multi-lingual-context/

We tested Microsoft Copilot in multilingual contexts, examining how Copilot can facilitate collaboration between colleagues with different native languages.

First, we asked 77 native Japanese speakers to review a meeting recorded in English. Half the participants had to watch and listen to the video. The other half could use Copilot Meeting Recap, which gave them an AI meeting summary as well as a chatbot to answer questions about the meeting.

Then, we asked 83 other native Japanese speakers to review a similar meeting, following the same script, but this time held in Japanese by native Japanese speakers. Again, half of participants had access to Copilot.

For the meeting in English, participants with Copilot answered 16.4% more multiple-choice questions about the meeting correctly, and they were more than twice as likely to get a perfect score.  Moreover, in comparing accuracy between the two scenarios, people listening to a meeting in English with Copilot achieved 97.5% accuracy, slightly more accurate than people listening to a meeting in their native Japanese using standard tools (94.8%). This is a statistically significant difference (p<.05). The changes are small in percentage point terms because the baseline accuracy is so high, but Copilot closed 38.5% of the gap to perfect accuracy for those working in their native language (p<0.10) and closed 84.6% of the gap for those working in (non-native) English (p<.05).

Summary from Jaffe et al., Generative AI in Real-World Workplaces, July 2024.

Impact of M365 Copilot on Legal Work at Microsoft
May 24, 2024 | https://www.benedelman.org/impact-of-m365-copilot-on-legal/

Teams at Microsoft often reflect on how Copilot helps.  I try to help these teams both by measuring Copilot usage in the field (as they do their ordinary work) and in lab experiments (idealized versions of their tasks in environments where I can better isolate cause and effect).  This month I ran an experiment with CELA, Microsoft’s in-house legal department.  Hossein Nowbar, Chief Legal Officer and Corporate Vice President, summarized the findings in a post at LinkedIn:

Recently, we ran a controlled experiment with Microsoft’s Office of the Chief Economist, and the results are groundbreaking. In this experiment, we asked legal professional volunteers on our team to complete three realistic legal tasks and randomly granted Copilot to some participants. Individuals with Copilot completed the tasks 32% faster and with 20.3% greater accuracy!

Copilot isn’t just a tool; it’s a game-changer, empowering our team to focus on what truly matters by enhancing productivity, elevating work quality, and, most importantly, reclaiming time.

All findings statistically significant at P<0.05.

Full results.

Early LLM-based Tools for Enterprise Information Workers Likely Provide Meaningful Boosts to Productivity
December 5, 2023 | https://www.benedelman.org/early-llm-based-tools-for-enterprise-information-workers-likely-provide-meaningful-boosts-to-productivity/

Early LLM-based Tools for Enterprise Information Workers Likely Provide Meaningful Boosts to Productivity. Microsoft Research Report – AI and Productivity Team. With Alexia Cambon, Brent Hecht, Donald Ngwe, Sonia Jaffe, Amy Heger, Mihaela Vorvoreanu, Sida Peng, Jake Hofman, Alex Farach, Margarita Bermejo-Cano, Eric Knudsen, James Bono, Hardik Sanghavi, Sofia Spatharioti, David Rothschild, Daniel G. Goldstein, Eirini Kalliamvakou, Peter Cihon, Mert Demirer, Michael Schwarz, and Jaime Teevan.

This report presents the initial findings of Microsoft’s research initiative on “AI and Productivity”, which seeks to measure and accelerate the productivity gains created by LLM-powered productivity tools like Microsoft’s Copilot. The many studies summarized in this report, the initiative’s first, focus on common enterprise information worker tasks for which LLMs are most likely to provide significant value. Results from the studies support the hypothesis that the first versions of Copilot tools substantially increase productivity on these tasks. This productivity boost usually appeared in the studies as a meaningful increase in speed of execution without a significant decrease in quality. Furthermore, we observed that the willingness-to-pay for LLM-based tools is higher for people who have used the tools than those who have not, suggesting that the tools provide value above initial expectations. The report also highlights future directions for the AI and Productivity initiative, including an emphasis on approaches that capture a wider range of tasks and roles.

Studies I led that are included within this report:

Randomized Controlled Trials for Microsoft Copilot for Security
December 5, 2023 | https://www.benedelman.org/randomized-controlled-trial-for-microsoft-security-copilot/

Randomized Controlled Trials for Microsoft Copilot for Security. SSRN Working Paper 4648700. With James Bono, Sida Peng, Roberto Rodriguez, and Sandra Ho.

We conducted randomized controlled trials (RCTs) to measure the efficiency gains from using Security Copilot, including speed and quality improvements. External experimental subjects logged into an M365 Defender instance created for this experiment and performed four tasks: Incident Summarization, Script Analyzer, Incident Report, and Guided Response. We found that Security Copilot delivered large improvements on both speed and accuracy. Copilot brought improvements for both novices and security professionals.

(Also summarized in What Can Copilot’s Earliest Users Teach Us About Generative AI at Work? at “Role-specific pain points and opportunities: Security.” Also summarized in AI and Productivity Report at “M365 Defender Security Copilot study.”)

Sound Like Me: Findings from a Randomized Experiment
December 5, 2023 | https://www.benedelman.org/sound-like-me-findings-from-a-randomized-experiment/

Sound Like Me: Findings from a Randomized Experiment. SSRN Working Paper 4648689. With Donald Ngwe.

A new version of Copilot for Microsoft 365 includes a feature to let Outlook draft messages that “Sound Like Me” (SLM) based on training from messages in a user’s Sent Items folder. We sought to evaluate whether SLM lives up to its name. We find that it does, and more. Users widely and systematically praise SLM-generated messages as being more clear, more concise, and more “couldn’t have said it better myself”. When presented with a human-written message versus a SLM rewrite, users say they’d rather receive the SLM rewrite. All these findings are statistically significant. Furthermore, when presented with human and SLM messages, users struggle to tell the difference, in one specification doing worse than random.

(Also summarized in What Can Copilot’s Earliest Users Teach Us About Generative AI at Work? at “Email effectiveness.” Also summarized in AI and Productivity Report at “Outlook Email Study.”)

Measuring the Impact of AI on Information Worker Productivity
December 5, 2023 | https://www.benedelman.org/measuring-the-impact-of-ai-on-information-worker-productivity/

Measuring the Impact of AI on Information Worker Productivity. SSRN Working Paper 4648686. With Donald Ngwe and Sida Peng.

This paper reports the results of two randomized controlled trials evaluating the performance and user satisfaction of a new AI product in the context of common information worker tasks. We designed workplace scenarios to test common information worker tasks: retrieving information from files, emails, and calendar; catching up after a missed online meeting; and drafting prose. We assigned these tasks to 310 subjects, who were asked to find relevant information, answer multiple choice questions about what they found, and write marketing content. In both studies, users with the AI tool were statistically significantly faster, a difference that holds both on its own and when controlling for accuracy/quality. Furthermore, users who tried the AI tool reported higher willingness to pay relative to users who merely heard about it but didn’t get to try it, indicating that the product exceeded expectations.

(Also summarized in What Can Copilot’s Earliest Users Teach Us About Generative AI at Work? at “A day in the life” and “The strain of searching.” Also summarized in AI and Productivity Report at “Copilot Common Tasks Study” and “Copilot Information Retrieval Study.”)

An Introduction to the Competition Law and Economics of “Free”
October 1, 2018 | https://www.benedelman.org/an-introduction-to-the-competition-law-and-economics-of-free/

Benjamin Edelman and Damien Geradin. An Introduction to the Competition Law and Economics of ‘Free’.  Antitrust Chronicle, Competition Policy International.  August 2018.

Many of the largest and most successful businesses today rely on providing services at no charge to at least a portion of their users. Consider companies as diverse as Dropbox, Facebook, Google, LinkedIn, The Guardian, Wikipedia, and the Yellow Pages.

For consumers, it is easy to celebrate free service. At least in the short term, free services are often high quality, and users find a zero price virtually irresistible.

But long-term assessments could differ, particularly if the free service reduces quality and consumer choice. In this short paper, we examine these concerns.  Some highlights:

First, “free” service tends to be free only in terms of currency.  Consumers typically pay in other ways, such as seeing advertising and providing data, though these payments tend to be more difficult to measure.

Second, free service sometimes exacerbates market concentration.  Most notably, free service impedes a natural strategy for entrants: offer a similar product or service at a lower price.  Entrants usually can’t pay users to accept their service.  (That would tend to attract undesirable users who might even discard the product without trying it.)  As a result, prices are stuck at zero and entry becomes more difficult, effectively shielding incumbents.

In this short paper, we examine the competition economics of “free” — how competition works in affected markets, what role competition policy might have and what approach it should take, and finally how competitors and prospective competitors can compete with “free.” Our bottom line: While free service has undeniable appeal for consumers, it can also impede competition, and especially entry. Competition authorities should be correspondingly attuned to allegations arising out of “free” service and should, at least, enforce existing doctrines strictly in affected markets.

In Accusing Microsoft, Google Doth Protest Too Much
February 3, 2011 | https://www.benedelman.org/in-accusing-microsoft-google-doth-protest-too-much/

In Accusing Microsoft, Google Doth Protest Too Much. HBR Online. February 3, 2011.

Google this week sparked a media uproar by alleging that Microsoft Bing “copies” Google results. But is that actually the best characterization of what happened? In fact Google’s engineers intentionally clicked bogus listings they had previously inserted into Google’s results, and they did this on computers where they had specifically authorized Microsoft to examine their browsing in order to improve Bing.

Strikingly, Google’s own Matt Cutts previously endorsed the use of Toolbar and similar data to improve search results — calling this approach “a good idea.” And Google’s own Toolbar Privacy Policy allows Google to perform the same analysis Bing used. So I don’t have much sympathy for Google’s allegations of impropriety. Quite the contrary: With Bing’s small market share, this data is important in improving Bing search results and building a viable competitor to Google’s dominant search offering.

Details, including what exactly happened, Google’s prior statements, and Google’s widespread use of others’ intellectual property:

In Accusing Microsoft, Google Doth Protest Too Much
