"High for You, Low for Your Neighbor" — Are Pricing Algorithms Really "Evil"? : Economics of Distinguishing "Good Data" from "Bad Data"

"High for You, Low for Your Neighbor" — Are Pricing Algorithms Really "Evil"? : Economics of Distinguishing "Good Data" from "Bad Data"

"Higher than when I saw it yesterday" — such experiences might not just be your imagination but rather a result of "design" in today's era. Online retail, ride-sharing, travel, subscriptions. We might think we're buying the same products or services, but we might not be seeing the "same price." Companies use clues like location data, browsing history, device information, past purchases, and time spent to estimate how much we are willing to pay (willingness to pay) and adjust prices accordingly. This is known as "price discrimination" or "personalized pricing."


This practice intuitively feels unsettling. It seems as if those holding the information can see through our financial situation and desire to buy, and sell at the highest price we will bear. Indeed, it is often treated as a problem from the perspectives of consumer protection and fair trade. In economics, however, there has long been a counterargument that "price discrimination is not always bad": lowering prices for some segments can bring in buyers who would otherwise be priced out, ultimately expanding supply or making it possible to sustain a service at all.


So, in today's world, where data and algorithms are the premise, is price discrimination ultimately beneficial or detrimental to society? The NBER working paper "Good Data and Bad Data: The Welfare Effects of Price Discrimination" directly tackles this question. The article reported by Phys.org on March 4, 2026, introduces the essence of the research to the general public while connecting it to the realistic question of how regulators can oversee "complex algorithmic pricing."



Assuming "Big Data can't predict perfectly"

A crucial starting point of this research, one built directly into its model, is the reality that even companies with vast amounts of data cannot perfectly predict each consumer's willingness to pay. Data is not omnipotent; there is always residual uncertainty. Companies therefore group people with similar characteristics into "segments" and set an optimal price for each segment, rather than pricing each individual precisely.


In other words, the issue is not perfect individual pricing, but how prices and welfare (society's overall benefit) move when information changes the market segmentation. The NBER summary states that, focusing on "market segmentation" and "residual uncertainty," the paper characterizes the cases where data use monotonically increases welfare, monotonically decreases it, or has an indeterminate effect.
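To make the segmentation logic concrete, here is a minimal sketch (my illustration, not the paper's model; all willingness-to-pay numbers are invented) of a zero-marginal-cost monopolist facing a high-WTP segment and a low-WTP segment, first without data (one uniform price) and then with perfect segmentation:

```python
# Hypothetical example: a zero-cost monopolist, two consumer segments.
# Without data it must charge everyone one price; with data it can set
# a separate price per segment. All numbers are invented for illustration.

def best_price(wtps):
    """Revenue-maximizing single price over a list of willingness-to-pay values."""
    candidates = sorted(set(wtps))
    return max(candidates, key=lambda p: p * sum(1 for w in wtps if w >= p))

def welfare(wtps, price):
    """(consumer surplus, producer surplus) at a given price."""
    buyers = [w for w in wtps if w >= price]
    cs = sum(w - price for w in buyers)  # value received minus price paid
    ps = price * len(buyers)             # revenue (marginal cost is zero)
    return cs, ps

high = [10, 10, 10]    # high-WTP segment
low = [6, 4, 4, 4]     # low-WTP segment

# (a) No data: one uniform price over the pooled market.
pooled = high + low
p_uniform = best_price(pooled)
cs_u, ps_u = welfare(pooled, p_uniform)

# (b) Segmentation: a separate revenue-maximizing price per segment.
cs_s = ps_s = 0
for seg in (high, low):
    cs, ps = welfare(seg, best_price(seg))
    cs_s += cs
    ps_s += ps

print(f"uniform price {p_uniform}: CS={cs_u}, PS={ps_u}, total={cs_u + ps_u}")
print(f"segmented prices:  CS={cs_s}, PS={ps_s}, total={cs_s + ps_s}")
```

In this toy case the uniform price excludes the low-WTP segment entirely; segmentation lets that segment be served at all, so total surplus rises from 30 to 48, though most of the gain goes to the firm. This is why the distributional accounting, not intuition, has to do the work.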



The "three pathways" that move welfare: Why intuition fails

Phys.org (and a similar article by CMU Tepper) breaks down the routes through which information affects consumer welfare into three parts.
This is also why the research does not settle for a simple split between "price discrimination = bad" and "price discrimination = efficient."

① Price dispersion within the same type (within-type price change)

Even among the same type (people with similar demand), prices "scatter" as information increases. Some people pay less, while others pay more. The eeriness of personalized pricing primarily stems from this effect.


However, from a societal perspective, if more people pay less, transaction volume may increase, potentially increasing total surplus. Conversely, if price increases dominate, consumer surplus may be reduced.


② Asymmetric price changes across types (cross-type price change)

With more information, price reductions may occur "biased towards certain segments." For example, significantly lowering prices for segments with high price elasticity (those likely to hesitate to buy) to increase quantity, while raising prices for segments less likely to leave.


In this case, whether society gains or loses overall hinges on whose prices fall, by how much, and how much new demand those reductions unlock.
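A small numerical sketch of this cross-type effect (illustrative only: two linear demand curves q_i(p) = a_i − b_i·p, zero cost, invented parameters) shows the typical pattern: once the firm can tell the segments apart, the elastic segment's price falls and its quantity expands, while the inelastic segment's price rises.

```python
# Hypothetical example: two segments with linear demand q_i(p) = a_i - b_i*p
# and zero marginal cost. With no data the firm sets one uniform price;
# with data it prices each segment separately. Parameters are invented.

def q(a, b, p):
    """Quantity demanded at price p for a linear demand curve (floored at 0)."""
    return max(a - b * p, 0.0)

# (a, b) per segment: the "elastic" segment reacts much more strongly to price.
segments = {"inelastic": (10.0, 0.5), "elastic": (30.0, 2.0)}

# Uniform price: maximize total revenue by grid search over [0, 20].
grid = [i / 100 for i in range(0, 2001)]
def total_revenue(p):
    return p * sum(q(a, b, p) for a, b in segments.values())
p_uniform = max(grid, key=total_revenue)

# With segmentation, each linear segment's revenue-maximizing price is a/(2b).
for name, (a, b) in segments.items():
    p_seg = a / (2 * b)
    move = "falls" if p_seg < p_uniform else "rises"
    print(f"{name}: price {p_uniform:.2f} -> {p_seg:.2f} ({move}), "
          f"quantity {q(a, b, p_uniform):.1f} -> {q(a, b, p_seg):.1f}")
```

With these made-up numbers the uniform revenue-maximizing price is 8; segmentation raises the inelastic segment's price to 10 (quantity 6 to 5) and lowers the elastic segment's price to 7.5 (quantity 14 to 15). The net welfare effect depends on exactly this trade-off.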


③ The magnitudes of price increases and decreases do not match (price curvature)

Intuitively, one might think, "If some prices go up and others go down, isn't it a wash?" but reality is not that simple. The extent of price increases and decreases may not be the same, and the impact of the same additional information can vary depending on the "curvature" of demand and revenue curves.


The research suggests that this "curvature" element complicates conclusions and simultaneously indicates the need for quantitative measures.
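The asymmetry is easy to see with a worked example (my illustration, with an assumed linear demand curve, not a calculation from the paper). For q(p) = a − b·p, consumer surplus is CS(p) = q(p)²/(2b), which is convex in price, so an equal-sized cut and hike do not offset:

```python
# Hypothetical example: under linear demand q(p) = a - b*p, consumer
# surplus is CS(p) = q(p)^2 / (2b), a convex function of price. A cut of
# size d and a hike of size d therefore change CS by different amounts.
# All parameters are invented for illustration.

a, b = 10.0, 1.0

def cs(p):
    """Consumer surplus at price p: the triangle under demand above price."""
    quantity = max(a - b * p, 0.0)
    return quantity * quantity / (2 * b)

p0, d = 5.0, 2.0
gain_from_cut = cs(p0 - d) - cs(p0)    # group whose price falls by d
loss_from_hike = cs(p0) - cs(p0 + d)   # group whose price rises by d

print(f"CS gain from cut:  {gain_from_cut:.1f}")
print(f"CS loss from hike: {loss_from_hike:.1f}")
print(f"net CS change:     {gain_from_cut - loss_from_hike:+.1f}")
```

Here the gain (12.0) exceeds the loss (8.0), so the symmetric spread happens to raise consumer surplus on net; with a different demand shape the sign can flip. That is the sense in which curvature, not the count of winners and losers, determines the outcome.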



Cases where "data is always good/bad"—the key is the "shape of demand"

The NBER page summarizes that there are conditions where data use "monotonically increases (monotonically good)" or "monotonically decreases (monotonically bad)" welfare, and there are "non-monotonic" cases. Furthermore, in non-monotonic cases, it discusses providing "tight bounds" on welfare impacts and the "best local direction" for additional information.


Interestingly, the paper shows (as cited in the Phys.org article) that there are market conditions under which the mere fact that companies are collecting data is enough to determine whether the outcome is good or bad. In other words, in some cases it may be possible to identify "dangerous markets" from the demand structure alone, without examining what kind of data is involved.



Regulatory discussions become more realistic: "Drawing lines like merger reviews" for pricing algorithms

Policy discussions on data utilization often tend to swing to extremes.

"Comprehensive regulation due to privacy invasion" or "laissez-faire to avoid stifling innovation."


However, real-world regulators struggle in the middle. This is because algorithms are complex and difficult to see from the outside. Moreover, companies claim "price optimization," while consumers feel "exploitation," leading to divided value judgments. The Phys.org article emphasizes that the research attempts to bridge this conflict not through "winning or losing arguments," but by providing a "quantitative framework with thresholds." Like merger review guidelines, it measures potential harm and benefits, applying strict scrutiny or prohibition to high-risk methods while allowing those with significant benefits and minimal harm.


This "line-drawing" approach is also important as a technology policy. A complete ban is straightforward but can lead to avoidance and loopholes. On the other hand, allowing everything means action can only be taken after harm occurs. A quantitative "caution line" allows companies to mitigate risks from the design stage and enables authorities to narrow down their monitoring targets.



Thinking with concrete examples: Different outcomes from the same "discrimination"

Here, let's consider some typical scenarios that align with consumer intuition (the following explanations are based on the research framework and do not assert specific examples of individual companies).


Scenario A: Price reductions reach those who "couldn't buy"

Like student discounts, offering lower prices to those with low purchasing power increases transaction volume. Total surplus tends to increase, making it socially acceptable. If data is used for "expanding access," it is likely to be relatively permissible from a regulatory standpoint.


Scenario B: Those less likely to leave face higher prices

When those with high necessity, few alternatives, or difficult cancellation processes are offered high prices, consumer resentment intensifies. Welfare-wise, if the losses on the price increase side are significant, it tends to turn negative. This is the area where authorities are most vigilant.


Scenario C: The increase is large, and the decrease is small

Superficially, "some prices go up, and some go down," but the curvature effect can result in an overall loss. This is where "invisible losses" occur, making quantitative evaluation effective.



Social media reactions: Limited spread, but a perennial flashpoint

The Phys.org article itself appears to have not gone viral, with the share count on the public page showing "0."

 
However, the theme itself (price discrimination using personal data) is a "spark" that tends to ignite repeatedly on social media.


In fact, on social media, discussions generally split into the following three patterns whenever similar topics arise (not specific to this article).

  1. "Isn't that exploitation?" camp
    Personalized pricing leads to intuitive backlash, as weaker positions tend to be disadvantaged. When linked to topics like price changes by device or region, anger tends to amplify.

  2. "It's beneficial if discounts increase" camp
    Seen as an extension of coupons or dynamic pricing, this stance is pragmatic, emphasizing the benefits of price reductions.

  3. "Lack of transparency is the problem" camp
    Concerns about the lack of transparency and accountability in pricing mechanisms. This area tends to unite beyond pros and cons, leading to calls for "at least notifications" or "at least audits."


What makes this research interesting is that it acknowledges these "emotionally divisive points" by recognizing that outcomes can be good or bad depending on the situation and offers tools (estimating harms and benefits, setting danger lines) to advance the discussion.

 
In social media terms, it shifts the agenda from "complete denial" or "complete support" to "So, where is the line?"



Considering in the context of Japan: The focus is on "operation" rather than "regulation"

In Japan, dynamic pricing and recommendation optimization are rapidly becoming mainstream. Here, what truly matters is the design of operations rather than the ideology of "support/opposition."

  • On the company side: Running algorithms solely for short-term profit maximization can lead to long-term losses due to backlash or stricter regulations. Therefore, guardrails are needed in the design stage to ensure that disadvantages do not concentrate on consumers.

  • On the administrative side: Auditing all algorithms manually is impossible. Therefore, prioritizing high-risk areas (essential goods, markets with few alternatives, markets with high cancellation costs) and using thresholds to narrow down monitoring is a realistic approach.


Ultimately, societal distrust over data price discrimination arises not only from "gain or loss" but also from the anxiety of "not knowing which side you're on." The "measuring framework" presented by the research provides a common language for discussion in response to that anxiety.


The future of pricing requires more than just becoming smarter. The question is whether that intelligence becomes "good data" or "bad data" for society. It may be the design of the rules that allow it, rather than the algorithm itself, that is being questioned.



Sources