November 29, 2018
The Promise and Peril of Personalization
Google, Amazon, and many other digital tech companies celebrate their ability to deliver personalized services. Netflix aims to provide personalized entertainment. Advertising companies suck up data so they can deliver personalized ads. Financial services, insurance, and health care companies seek to use data to personalize their services. Faith in personalization is so strong that some legal scholars now advocate for personalized law. And why not?
Personalization makes everyone feel good, like you’re being catered to. Heck, who really wants a standardized product sold for a mass market? We’ll tell you who.
Governments, social engineers, educators, administrators, advertisers. They want YOU to be standardized. That’s the paradox. In this new world, everything seems personalized to you, and at the same time a new, standardized thing is being made. Wake up. You’re the product.
You’re participating in what economists call a “multi-sided” market. Stop and think about what’s being made and what’s being sold to each side. In these markets, there are buyers, sellers, and market-making intermediaries that sell something to everyone. One of the things the intermediary sells is you.
I. Understanding Personalization
Like most things, personalization can mean different things, take many forms, and be good or bad for different folks. It’s helpful to run through a few different examples, starting simple and gradually increasing the complexity.
Let’s start with a classic example: the tailor. Imagine getting a custom, tailored dress or suit. Someone takes your measurements, perhaps more than 20 of them. They pay attention to how your body differs from everyone else’s. They know and appreciate that you are unique. Next, they send the data off to folks who cut and sew your outfit. Then they call you back to the store to make sure the fit is correct and adjust it if need be. In the end, you have not just a new dress or suit; you have an outfit that moves with you and makes you look good, and you probably feel great each time you see how well it fits. Your outfit is personalized.
To generalize, this example involves A sharing personal information with B so that B can use the information to customize a product or service to satisfy A’s needs (preferences). Another decent example along these lines is the conventional doctor-patient relationship. Patients provide doctors with personal information that enables the doctor to tailor diagnosis, advice, treatment, and so on. For both the tailor-customer and doctor-patient examples, personal data is an input used to improve an output (dress, suit, medical treatment) such that the improvement directly serves the interests of the person whose information is being used.
Now, let’s consider a different form of personalization: price discrimination. Price discrimination occurs when B sells the same product to different people at different prices. B uses personal information about customers to personalize prices, and this business practice allows B to extract more money from consumers. Economists debate the net welfare effects of price discrimination, since some consumers may be better off and others worse off. But that is not the point of this discussion.
The point is that B uses personal information to customize prices only to the extent that doing so furthers B’s interests, namely getting A to pay the most A is willing to pay. In general, B uses personal information about A to customize something, but primarily to further B’s own interest. In short, the benefits of personalization go to B.
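To make the arithmetic concrete, here is a minimal sketch in Python, with invented numbers; the buyers, valuations, and pricing rules are all hypothetical, not drawn from any real market. It compares a single uniform price with personalized prices set at each buyer’s inferred willingness to pay.

```python
# Hypothetical illustration of price discrimination; all numbers are invented.
# Each value is the willingness to pay (WTP) that B has inferred about a buyer A.
willingness_to_pay = {"A1": 10.0, "A2": 6.0, "A3": 8.0}

def uniform_revenue(price, wtp):
    """Revenue at a single posted price: only buyers with WTP >= price buy."""
    return price * sum(1 for v in wtp.values() if v >= price)

# Uniform pricing: B picks the one price that maximizes revenue.
candidates = set(willingness_to_pay.values())
best_price = max(candidates, key=lambda p: uniform_revenue(p, willingness_to_pay))

# Personalized pricing (first-degree price discrimination): B charges each
# buyer exactly their inferred WTP, capturing the entire consumer surplus.
personalized_revenue = sum(willingness_to_pay.values())

print(f"Best uniform price: {best_price}, revenue: "
      f"{uniform_revenue(best_price, willingness_to_pay)}")
print(f"Personalized revenue: {personalized_revenue}")
```

With these invented numbers, the best uniform price is 6 (revenue 18, leaving buyers A1 and A3 a combined surplus of 6), while personalized pricing yields 24: the entire surplus moves from A to B.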
Now imagine personalization designed to serve a social goal, for example, encouraging donations after a disaster. People may have different motivations to act: one person may be moved to donate by public health concerns, while another may be moved by the thought of people starving. B may send messages customized to each person’s different motivations. In this example, and in others like nudging to promote voting or environmental protection, personalization primarily benefits a third party: neither A nor B, but C.
These models show that personalization is just a tool, a means, and that we need to pay attention to what is being personalized and to what and whose ends.
II. Personalization of and for Whom?
In many cases, it is very difficult to judge personalization because it can fit any of the models we’ve discussed. For example, let’s look at nudging, an ascendant form of social engineering. Nudge designers (called “choice architects”) aim to improve decision-making in contexts where humans tend to act irrationally or contrary to their own welfare. Leveraging insights from behavioral science, choice architects use low-cost interventions to help people make better choices in important policy areas like personal finance and health care.
Not all nudges work the same way. Some involve personalization; others do not.
A standard example of a non-personalized nudge involves retirement planning. An employer could (i) leave it to employees to set up their 401(k) plans and decide how much to save or (ii) set up the plans by default so that a predetermined amount is saved automatically, allowing employees to make adjustments. Saving by default is an architected choice that relies on two facts: first, people often fail to set up a retirement plan, which is a social problem; and second, people tend to stick with default rules. Thus, by choosing option (ii), the choice architect nudges people to start with the better position for themselves and for society.
Personalized nudging, however, might work differently. Imagine choice architects gain access to treasure troves of personal data to customize the nudges they design. This could mean one of two things. Corresponding with the first model, it could mean that choice architect B uses data about A’s personal preferences and values to shape the nudge’s objective. For example, B might personalize the default saving rule for various employees, perhaps by better matching the initial savings amount to individuals’ personal profiles (e.g., based on age, personal discount rates, health, etc.) or even by customizing default investment allocations (e.g., based on predicted risk tolerances). The idea is simple. Like our custom tailor, the choice architect B could help A achieve outcomes that B knows A wants. Such intimate knowledge of A’s mind can be quite powerful, but frankly (and thankfully in our minds), it’s incredibly difficult to obtain. But this is not, to our knowledge, the usual meaning.
To the contrary, the underlying objective of personalized nudging is still to induce rational behavior. What constitutes rational behavior is not itself personalized. It remains a general, seemingly objective standard. Personalization helps B identify and overcome the specific impediments and obstacles to rationality faced by A. B can custom-fit the nudge (the stimuli that shape the choices perceived by A) to engineer a rational response.
Personalization empowers choice architects, not the human beings subject to nudging. Personalized choice architecture means using personalized stimuli to achieve the same end: rational responses. In contrast with the price-discriminating suppliers of our second model, choice architects do not pursue their own individual interests; instead, they pursue an idealized conception of rational actors and what those actors, in theory, should want. In a sense, this blends the second and third models, because choice architects pursue their vision of a social good.
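To make the contrast concrete, here is a minimal Python sketch under invented assumptions (the profile fields, rates, and message wording are all hypothetical, not drawn from any real system). In the first function, personal data shapes the objective itself, the default saving rate; in the second, the objective is a fixed, general standard, and personal data shapes only the stimulus used to push A toward it.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    age: int
    risk_tolerance: float  # 0.0 (averse) to 1.0 (tolerant); an invented scale
    loss_averse: bool      # a behavioral trait B has inferred about A

# Model one: personal data shapes the OBJECTIVE itself. The default saving
# rate is tailored to what B knows about A's circumstances and preferences.
def personalized_default_rate(p: Profile) -> float:
    rate = 0.06                          # invented baseline contribution rate
    rate += 0.002 * max(0, p.age - 30)   # older employees default to saving more
    rate += 0.02 * p.risk_tolerance      # higher tolerance, higher default rate
    return round(min(rate, 0.15), 3)

# Personalized nudging: the objective is a FIXED, general standard; personal
# data shapes only the stimulus used to move A toward it.
TARGET_RATE = 0.10  # one "rational" saving rate for everyone (invented)

def personalized_nudge_message(p: Profile) -> str:
    if p.loss_averse:
        # Loss framing for loss-averse employees, gain framing otherwise.
        return f"You lose money every month you save less than {TARGET_RATE:.0%}."
    return f"Employees like you get ahead by saving {TARGET_RATE:.0%}."

alice = Profile(age=45, risk_tolerance=0.3, loss_averse=True)
print(personalized_default_rate(alice))   # the goal varies with Alice's profile
print(personalized_nudge_message(alice))  # the goal is fixed; only framing varies
```

The second function mirrors the point above: what counts as the rational outcome is standardized, and only the framing is personalized.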
Another complex example is personalized education. This has been a hot trend for many years. The idea of tailoring educational services to the needs of different populations and individual students seems a laudable goal. It can help overcome significant distributional inequities. Yet, as Neal Stephenson’s 1995 science fiction classic, The Diamond Age, shows, personalized education can have more than one goal. In the novel, there is a special book, the Young Lady’s Illustrated Primer, that provides a personalized education. Bethanie Maples describes the Primer as “an artificial intelligence agent for human learning/cog dev. Kind of the silver bullet for education.” We’re nowhere near that ideal. But we’re headed in that direction, and it’s worth reconsidering whether that’s the path we should be on.
Even Stephenson’s fictional Primer had another, hidden agenda. As Professor Meryl Alper sums up, the Primer’s overall design was to teach the student the designer’s view of how the world should be. Each student’s personalized education was geared to encourage a little rebellion against society, but only as a means of getting them to return to their tribe (Stephenson, 1995, p. 365). The customization served the designer’s ends, not necessarily the students’.
As the tech sector has infiltrated education, it promises data-driven, tech-enabled personalization tools. We must ask, however, what is being personalized and to what and whose ends. The market dynamics are quite difficult to unpack. It often seems that, in addition to or despite the interests of schoolchildren, technology companies leverage personalization in their own interests, whether by collecting data, testing proprietary algorithms, or simply extending their brands to the next generation of impressionable consumers. When you dig into personalized education, you realize that it’s not so easy to determine what’s being personalized or for whom....