
Won’t Somebody Think of the Children?! Examining Privacy Behaviors of Mobile Apps at Scale

October 17, 2018

 
Serge Egelman, PhD

Research Director, Usable Security and Privacy Group, International Computer Science Institute; Director, Berkeley Laboratory for Usable and Experimental Studies (BLUES), Department of Electrical Engineering and Computer Sciences, University of California, Berkeley

 

Overview

Background.  Every device through which we access digital content has persistent identifiers: unique numbers that allow companies to connect your activities to your device.  Some are based on the hardware in your hands, while others are based on things like your cellular service, Wi-Fi interface, and other aspects of your digital identity.  Together, these identifiers facilitate the collection, exchange, sale, integration, cross-referencing, and aggregation of user data to compile detailed, actionable user profiles.  Data brokers play an important role in correlating data from many sources to assemble these profiles.  The net effect of this information gathering and exploitation system is that as you use apps (including games), companies learn what you do, where you do it, by what means, who else is monitoring your behavior, what else you do (online and off), and so on.
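
To make this concrete, the following Kotlin sketch (a hypothetical helper, not code from the talk) reads two of the non-resettable identifiers available to an Android app; the function name and map keys are illustrative.

    import android.annotation.SuppressLint
    import android.content.Context
    import android.provider.Settings
    import android.telephony.TelephonyManager

    // Illustrative sketch: two non-resettable identifiers an Android app can
    // read. Reading the IMEI requires the READ_PHONE_STATE permission;
    // without it, the call throws SecurityException.
    @SuppressLint("HardwareIds", "MissingPermission")
    fun persistentIdentifiers(context: Context): Map<String, String?> {
        val telephony =
            context.getSystemService(Context.TELEPHONY_SERVICE) as TelephonyManager
        return mapOf(
            // Survives an ad-ID reset; changes only on factory reset.
            "android_id" to Settings.Secure.getString(
                context.contentResolver, Settings.Secure.ANDROID_ID),
            // Hardware identifier tied to the cellular radio.
            "imei" to telephony.deviceId
        )
    }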

To mitigate the resultant privacy problems and preempt regulation, Google and Apple created user-resettable advertising identifiers (“ad IDs”) for the Android and iOS platforms, respectively.  For roughly the past five years, use of the ad ID has been mandatory for all tracking and advertising purposes.  Theoretically, being able to reset this identifier would prevent advertising companies from correlating your past and future online activities.  However, problems arise when apps transmit other, permanent identifiers alongside the user-resettable one.
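
As a minimal sketch (assuming the Google Play Services ads-identifier library; the helper name is illustrative), this is roughly how an Android app obtains the resettable ad ID:

    import android.content.Context
    import com.google.android.gms.ads.identifier.AdvertisingIdClient

    // Must run off the main thread: getAdvertisingIdInfo performs blocking I/O.
    fun fetchAdvertisingId(context: Context): String? {
        val info = AdvertisingIdClient.getAdvertisingIdInfo(context)
        // Honor the user's opt-out: if set, the ID must not be used for tracking.
        return if (info.isLimitAdTrackingEnabled) null else info.id
    }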

Games and other apps contact advertisers and other data aggregators, transmitting persistent identifiers from each player’s device.  Those companies use the identifiers to gather information about users and to construct profiles of them.  They also provide the identifiers to data brokers, who use them to gather and integrate user information from diverse sources.  This process yields detailed user profiles (containing device serial numbers, physical location data, IP addresses, digital media habits, purchasing history, and lifestyle attributes) that users are likely unaware of, and certainly cannot consent to.

Privacy Policies and Permission Screens.  Although consumers might think that they can rely on digital media companies’ privacy policies for guidance and protection, that is unrealistic.  Those policies are too dense for a typical reader to understand, too full of qualifications and exceptions, and useful only to the extent that the apps and websites governed by them actually comply with their terms.  As it turns out, that compliance is problematically incomplete.  As for permission screens, they tend to describe information that an app might obtain from a user, but not how it will be used, with whom it might be shared, or how such information gathering combines with persistent identifiers and user tracking to compromise privacy.

Regulatory Landscape.  In contrast to every other developed country, the US does not have a comprehensive privacy law.  There is, however, a law governing the collection of data from children under age 13: the Children’s Online Privacy Protection Act (“COPPA”).  It governs the use of persistent identifiers and the collection of personally identifiable information, such as locations and email addresses, from this audience.  Behavioral advertising to children is prohibited without verifiable parental consent, and “reasonable security measures” (which at a minimum should include encrypting children’s data in transit) are required.
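
As a minimal sketch of one such measure (the helper and its error message are hypothetical), an app can simply refuse to transmit a child’s data over anything other than TLS:

    import java.net.URL
    import javax.net.ssl.HttpsURLConnection

    // Reject plaintext HTTP endpoints before any of a child's data is sent.
    fun openSecureConnection(endpoint: String): HttpsURLConnection {
        val url = URL(endpoint)
        require(url.protocol == "https") {
            "Children's personal information must be sent over TLS, not: $endpoint"
        }
        return url.openConnection() as HttpsURLConnection
    }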

Product and Service Landscape.  The Google Play store has a “Designed for Families” category.  Google does not vet the apps offered there for age-appropriate content; the companies selling them self-designate them for the category.  In doing so, those companies undertake certain specific obligations.  For example, they affirm to Google that the apps comply with COPPA.

Researchers investigated the COPPA compliance of almost 6,000 unique Android apps designated for children.  They found that a majority of those apps appear not to comply with the applicable child protection requirements.  Almost 5% of the children’s apps surveyed collected prohibited personal information about users without parental consent.  Almost 40% transmitted non-resettable identifiers that permit long-term user tracking, in violation of Google’s policies.  Potentially non-compliant services received personal information from 19% of the tested apps, a consequence of the apps’ use of third-party software development kits (“SDKs,” such as those that interface with advertising platforms) that incorporate tracking and behavioral advertising functions permitted in the adult marketplace.  These children’s apps either fail to disable those functions or configure them incorrectly.  (Some of these third-party SDKs explicitly prohibit their use in children’s products, yet appear in them frequently anyway.)  Finally, about 40% of the apps failed to encrypt users’ personal information as it was transmitted across the Web.
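
For illustration, correctly configuring one widely used SDK might look like the sketch below (assuming the Google Mobile Ads/AdMob SDK as it existed around 2018; the helper name is illustrative).  Unless the developer sets this flag, the SDK behaves exactly as it would in an adult-oriented app.

    import com.google.android.gms.ads.AdRequest

    // Flag an ad request as child-directed so the ad network must apply its
    // COPPA handling (no behavioral targeting) to this request.
    fun childDirectedAdRequest(): AdRequest =
        AdRequest.Builder()
            .tagForChildDirectedTreatment(true)
            .build()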

Non-compliance and Non-enforcement.  Some of this non-compliance may stem from developers’ failure to understand advertisers’ behaviors within their own products and services.  For example, according to researchers, “75% of the time when an app accesses the [resettable] advertising ID, it does so in conjunction with sending other persistent identifiers that are not resettable.” This not only negates the privacy-preserving characteristics of the ad ID, but also violates Google’s own terms of use.  Google’s vigorous enforcement of those terms could mitigate the risk to children. Google should be able to make the same findings as the outside researchers investigating these matters, and act on those findings. Unfortunately, platforms such as Google Play are failing to enforce their own child protection policies. (Even Google-owned advertising network DoubleClick is not 100% compliant, and Google-owned Crashlytics’ debugging and crash-reporting SDK appears in child-oriented apps even though its own terms of use prohibit such uses.)
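
A hypothetical sketch of the anti-pattern the researchers measured: because the non-resettable identifier survives a reset, a tracker receiving both values can re-link a user’s old and new ad IDs, negating the reset.  (The payload keys are illustrative.)

    // Anti-pattern: bundling the resettable ad ID with a permanent identifier.
    fun buildTrackingPayload(adId: String, androidId: String): Map<String, String> =
        mapOf(
            "ad_id" to adId,           // resettable by the user
            "android_id" to androidId  // survives the reset; re-links profiles
        )
    // Compliant behavior would transmit only the resettable ad ID.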

Safe Harbor Providers.  COPPA designates seven Safe Harbor providers certified by the Federal Trade Commission.  Each of these organizations promulgates an industry self-regulatory framework for establishing products’ COPPA compliance, and certifies that compliance.  In practice, content certified by these Safe Harbor providers has not been found to be more consistently COPPA compliant than other apps.  In this sense, the Safe Harbor scheme protects content producers and distributors, but not consumers (including children).

Recommendations.  It is within Google’s and Apple’s technological capabilities to do what academic researchers are doing: identify products and services that use their platforms in ways that jeopardize children’s privacy.  These companies publicly promote consumer protection policies but do not effectively enforce those policies against content creators; regulatory pressure may move them to do so.  App developers can be more careful about what code they incorporate from third-party sources, reducing the number of unintentionally invasive products and online services.  Parents, unfortunately, lack the technical ability to vet their children’s games, apps, and other digital content for safety.

Questions and Answers

Audience Question: Does the recent European legislation in this area go far enough?

Answer: This issue is primarily a market failure arising from information asymmetry.  There’s no reasonable way for consumers to figure out what happens to their data when they use apps or any other online service.  The European GDPR is a step forward because it requires companies to disclose what data is collected and with whom it is shared, and further requires user consent if the collected data is used for other purposes.  How well it will be enforced remains to be seen.  As in the US, if there’s no enforcement, there’s no improvement.

Audience Question: What is the legal framework around university admissions officers and future employers purchasing digital dossiers from data brokers?

Answer: The data brokerage industry is, with narrow exceptions, unregulated.  Brokers are not allowed to make credit decisions, so they cover themselves with disclaimers in that regard (even though their customers may be using the brokers’ data precisely for that prohibited purpose).

Audience Question: Who are the regulators?

Answer: In the case of COPPA, the FTC and state Attorneys General have the right to bring civil suits.  The FTC is under-resourced and over-burdened, and unlikely to bring more than one enforcement action for any particular violation.  In addition, the agency only acts on a consumer complaint (the risk of which is small so long as consumers can’t actually figure out when and how they’re being harmed).  As a result, such actions are having little deterrent effect. A private right of action under COPPA, and the possibility of class action suits, might change that.

Audience Question: Who runs the Children’s Advertising Review Unit (“CARU”), one of the seven Safe Harbor organizations under COPPA?

Answer: The Better Business Bureau runs CARU.  The economic literature on self-regulation indicates that it is a mechanism of adverse selection: trustworthy actors comply with the law and don’t spend money paying others to certify that compliance, so those who take advantage of regulatory safe harbors tend to be the bad actors.  As for industry figures who claim that the self-regulatory standards are more demanding than any government regulation would be: if that were so, why would they protest the adoption of such comparatively lax government standards?

Audience Question: Does COPPA basically allow companies to do anything they want with children’s data if parents give permission?

Answer: Yes.

Audience Question: Are there any screening processes for who can buy children’s data from data brokers?

Answer: It varies.  Some data brokers are more careful than others about whom they sell to.  Researchers investigating this system were never questioned about their reasons for buying data.
