Android security trade-offs 0: Ecosystem complexity
The Android ecosystem is highly diverse and complex, with many stakeholders who are typically not in the limelight. Consequently, making decisions about features in the platform itself, what we call AOSP (the Android Open Source Project), is hard, and often in surprising ways. A little over a year and a half ago, I came to Google as the new Director of Android Platform Security. Even though my research group had been working on Android security for over 7 years, many of these complexities were completely new to me. In other words, I had to learn a lot about why some of our previous proposals to improve Android security, which seemed completely reasonable from an academic point of view, would not have worked across the ecosystem. There are many non-obvious trade-offs we have to consider in Android, particularly in terms of security and privacy.
In one of my recent conversations with a colleague from academia (thanks to Dan Boneh for planting this idea in my head), it turned out that there may be value in describing such trade-offs and the reasons behind decisions made within the AOSP design space. In a series of posts, I intend to walk through some of them and will try to argue both sides of a decision point before explaining why it was (currently) settled a particular way.
Caveat lector: In these posts, I do not officially speak for Google, although they include information I learned through my role at Google. Some of the material in this post is a shorter summary of our recent Android Platform Security Model paper.
This first post is intended to outline the ecosystem constraints under which most of these decisions need to be made. These are, of course, value calls, based on principles that have been shaped over the first 10 years of Android development:
- Android is an end-user focused operating system: the specific needs of, e.g., developers and power users are considered only as long as they do not endanger the broad range of end users. When in doubt, an option or settings flag that can easily be abused or is ripe for social engineering attacks may not be worth the risk to a large population.
- Nonetheless, “side-loading” apps from other sources (e.g. through adb or different app stores) is possible and is one of the distinguishing factors over other mobile device operating systems. Additionally, many devices allow their bootloaders to be unlocked to install a completely custom build of an Android system image (often referred to as a ROM). However, such images are not necessarily considered Android in the sense of fulfilling all requirements laid out in the CDD (Compatibility Definition Document) and tested through the CTS (Compatibility Test Suite). Rooted devices are also not considered to be Android, as they typically violate CDD rules.
That is, Android is intentionally an open ecosystem that can be extended by many stakeholders, not only through contributing to AOSP or publishing apps through the Google Play Store. However, if customizations break CDD, then apps may not run.
- Apps can be written in any language. If they support the standard Android app lifecycle (e.g. with a small wrapper written in Java), code executed by the app is not restricted to a specific programming language or runtime. However, only those APIs defined in the respective platform API reference or NDK (Native Development Kit) are guaranteed to be supported. While native code may directly issue syscalls to the underlying Linux kernel within the confines of the sandbox (and subject to permissions), such undocumented calls may change between major platform releases.
- The main security barrier is the process boundary. While some mitigations may protect against exploitation of bugs within a process, the primary focus is on compartmentalization of — necessarily untrusted — apps from each other and the OS itself.
- Factory reset returns the phone to a known-good state. While the system image may have been updated after factory provisioning and is not expected to be rolled back (to a potentially insecure version), no user data should be retained through such a reset.
These principles, while successful in creating an immense ecosystem of more than a thousand device makers and countless¹ app developers, create the following restrictions and trade-offs when trying to make changes to the core platform:
- App compatibility is a major concern. That is, any API changes that break existing applications need to be considered very carefully. Not only do such breaking changes cause effort and cost for app developers, but users may rely on apps that are no longer being updated. While breaking changes will occasionally be necessary to improve security and privacy for end users, this is always a difficult call, and the costs of change need to be weighed against the risks of not changing/fixing a discovered issue.
- There is a huge range of devices with very diverse hardware, starting at the low end with 512 MB of RAM, slow CPUs, and no cryptographic acceleration hardware. Price points span close to two orders of magnitude, with correspondingly different generations of chipsets and hardware capabilities. Security measures need to be considered in this context. When, e.g., full on-device encryption makes a device so slow, or shortens its battery runtime so much, that it becomes unusable in practice, it cannot be required across the whole ecosystem, even though it would be easier not to have such a choice (which is one of the reasons why Android Q introduces Adiantum to support faster software-only encryption on such devices).
- End users, power users, developers, or enterprise administrators may have wildly different requirements and desires in how to use an Android device. These are sometimes conflicting. One major aspect of this conflict is that the majority of end users tend not to benefit from “yet another config option”. That is, default settings matter more, and sometimes even having an option/settings flag can be detrimental to security because it could be abused through social engineering.
Finally, sometimes even the threat model of a single user can have conflicting requirements. That is, some features have both security and privacy aspects, and strengthening privacy protections may impact some threats from a security point of view. These questions are the hardest to answer, because they are not balancing performance, usability, and other aspects against security mitigations (in which case hardware improvements might solve the conflict over time), but need a decision that is sometimes non-technical in nature. This is one of the reasons why an open discourse is important: both to understand what all the aspects are, and to be able to modify decisions when the environment changes. This (intended) series of blog posts aims to foster such a public discourse by describing what I learned about the hard balances Android often has to find.
The intention of these blog posts is not to make value calls or to decide on the importance of different stakeholders. I am primarily trying to document the different sides of a trade-off and why a decision has been made a certain way. Readers may or may not agree with these decisions (and, from a pure developer or power-user point of view, I myself may sometimes wish for a different trade-off), but I will try to put them in the context of the diverse and complex ecosystem outlined above, to hopefully help readers understand why each decision was made the way it was.
¹ More correctly, the term is uncountable: because apps do not have to go through a centralized vetting point, it is impossible to know how many developers, apps, and users are out there.