Android security trade-offs: Lock states
I know it has been far too long since my last post in the “series” (not much of a series if it takes over a year between posts) on Android security trade-offs. For reference, the previous ones are:
- Ecosystem complexity (a meta-post to explain why the Android ecosystem is as complex as it is and creates difficult trade-offs)
- Root access
This one is a bit tricky, because a running (or even powered-off) Android device can have many different “lock” states, many of which are orthogonal to each other. While some of them include inherent trade-offs, others are simply a necessity due to balancing the needs of different stakeholders in the ecosystem. While this topic is therefore not fully in the trade-off category, I still keep it within the post series and will try to walk through the different kinds of lock states:
1. Screen lock a.k.a. the Android Lockscreen
The most obvious and most important one is the Android lockscreen itself. Its purpose is to authenticate the legitimate device user and therefore keep data on the device secure against unauthorized actors with physical access to the device. In terms of the threat model in our Android platform security model, the lockscreen is a mitigation against the physical attack vectors.
There is an inherent and well-known trade-off in having a lockscreen: on the one hand, it gets in the way of performing the actual task at hand – short (10-250 seconds) interactions about 50 times per day on average and even up to 200 times in exceptional cases according to one of our recent studies – and needs to be as usable as possible; on the other hand, without a secure lockscreen, abuse of devices under physical control is trivial. In their current form, lockscreens on mobile devices largely enforce a binary model — either the whole phone is accessible, or the majority of functions (especially all security or privacy sensitive ones) are locked [1]. We know from many academic studies (cf. another recent survey of mobile phone authentication) that long alphanumeric passwords don’t work from a usability point of view, while slide-to-unlock screens offer no security whatsoever. Android therefore needs to offer a (configurable) trade-off.
Technical detail: Lockscreen bound keys in Keymaster
Keys stored in Android Keystore can be marked to be authentication bound.
Gatekeeper implements verification of user lock screen factors (PIN/password/pattern) in TEE (Trusted Execution Environment, e.g. ARM TrustZone) and, upon successful authentication, communicates this to Keymaster for releasing access to authentication bound keys.
Weaver implements the same functionality in TRH (Tamper Resistant Hardware, a separate, intentionally simpler piece of hardware) and communicates with Strongbox. Specified for Android 9.0 and initially implemented on the Google Pixel 2, Weaver/Strongbox on these and newer phones also adds a property called “Insider Attack Resistance” (IAR): without knowledge of the user’s lock screen factor, an upgrade to the Weaver/Strongbox code running in TRH will wipe the secrets used for on-device encryption. That is, even with access to internal code signing keys, existing data cannot be exfiltrated without the user’s cooperation.
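To make the Gatekeeper/Keymaster interaction concrete, here is a minimal Python sketch of the flow: Gatekeeper checks the credential and issues a signed auth token, and Keymaster releases auth-bound keys only for valid, sufficiently fresh tokens. The shared HMAC key, the plain SHA-256 credential hash, and all names are illustrative stand-ins — real devices derive the token key inside the TEE/TRH and use a throttled, hardware-backed credential check.

```python
import hashlib
import hmac

# Illustrative stand-in for the key shared between Gatekeeper and
# Keymaster; on real devices it never leaves secure hardware.
SHARED_KEY = b"tee-internal-shared-secret"

def gatekeeper_verify(stored_hash: bytes, pin: str, now: float):
    """Return a signed auth token if the credential matches, else None."""
    if hashlib.sha256(pin.encode()).digest() != stored_hash:
        return None
    payload = f"user-authenticated:{now}".encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    return (now, payload, tag)

def keymaster_release_key(token, now: float, auth_timeout_s: int) -> bool:
    """Release an auth-bound key only for a valid, sufficiently fresh token."""
    if token is None:
        return False
    ts, payload, tag = token
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False
    return (now - ts) <= auth_timeout_s

stored = hashlib.sha256(b"1234").digest()
token = gatekeeper_verify(stored, "1234", now=1000.0)
print(keymaster_release_key(token, now=1010.0, auth_timeout_s=30))  # fresh token -> True
print(keymaster_release_key(token, now=1120.0, auth_timeout_s=30))  # stale token -> False
```

The important structural point is that Keymaster never sees the credential itself, only an authenticated statement that verification succeeded and when.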
Resolving the trade-off
Recent Android releases use a tiered authentication model where a secure knowledge-factor based authentication mechanism can be backed by convenience modalities that are functionally constrained based on the level of security they provide. The added convenience afforded by such a model helps drive lockscreen adoption and allows more users to benefit both from the immediate security benefits of a lockscreen and from features such as file-based encryption that rely on the presence of an underlying user-supplied credential. As an example of how this helps drive lockscreen adoption, starting with Android 7.x we see that an additional 23 percentage points of devices have a secure lockscreen enabled if they also have a fingerprint sensor [2].
As of Android 10, the tiered authentication model splits modalities into three tiers:
- Primary Authentication modalities are restricted to knowledge-factors and by default include password, PIN, and pattern. Primary authentication provides access to all functions on the phone. A knowledge-factor is still considered a trust anchor for device security and therefore the only one able to unlock a device from a previously fully locked state (e.g. from being powered off).
- Secondary Authentication modalities are biometrics — which offer easier, but potentially less secure (than Primary Authentication), access into a user’s device. Secondary modalities are themselves split into three sub-tiers based on how secure they are, depending on spoofability and the security of the sensor pipeline. Secondary modalities are also prevented from performing some actions — for example, they do not decrypt file-based or full-disk encrypted user data partitions (such as on first boot) and are required to fall back to primary authentication once every 72 hours. If a weak biometric does not meet both criteria (spoofing resistance and a secure pipeline), it cannot unlock Keymaster auth-bound keys and has a shorter fallback period.
- Tertiary Authentication modalities are alternate modalities such as unlocking when paired with a trusted Bluetooth device, or unlocking at trusted locations. Tertiary modalities are subject to all the constraints of secondary modalities. Additionally, like the weaker secondary modalities, tertiary modalities are also restricted from granting access to Keymaster auth-bound keys (such as those required for payments) and also require a fallback to primary authentication after any 4-hour idle period.
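The fallback rules in this tiered model boil down to a small policy function over timestamps. A Python sketch, with the constants taken from the tiers above (the structure is illustrative, not the AOSP implementation):

```python
from dataclasses import dataclass

HOURS = 3600
# Fallback windows from the tiered model described above.
STRONG_BIOMETRIC_FALLBACK_S = 72 * HOURS  # secondary -> primary fallback
TERTIARY_IDLE_FALLBACK_S = 4 * HOURS      # tertiary idle limit

@dataclass
class LockscreenState:
    last_primary_auth: float  # last PIN/password/pattern unlock
    last_use: float           # last device interaction

def can_unlock(state: LockscreenState, modality: str, now: float) -> bool:
    if modality == "primary":
        return True  # knowledge factor always works
    if modality == "secondary_strong":
        return now - state.last_primary_auth <= STRONG_BIOMETRIC_FALLBACK_S
    if modality == "tertiary":
        return now - state.last_use <= TERTIARY_IDLE_FALLBACK_S
    raise ValueError(f"unknown modality: {modality}")

s = LockscreenState(last_primary_auth=0.0, last_use=0.0)
print(can_unlock(s, "secondary_strong", 24 * HOURS))  # within 72h -> True
print(can_unlock(s, "secondary_strong", 80 * HOURS))  # past 72h -> False
print(can_unlock(s, "tertiary", 5 * HOURS))           # past 4h idle -> False
```

Weak secondary modalities would slot in with a shorter window than `STRONG_BIOMETRIC_FALLBACK_S`, per the sub-tier rules above.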
Secondary and tertiary authentication options can be temporarily disabled with Lockdown mode in situations where the trade-off should be shifted more towards security, especially when physical attacks on the device are expected e.g. during border crossing or other intrusive searches.
More details on the tiered lockscreen can be found in the updated Android platform security model.
2. Reset lock a.k.a. Factory Reset Prevention (FRP)
The second lock is implemented by a combination of Android user space (setup wizard) and state stored through PersistentDataBlockService in a special flash block protected by OEM-specific (typically TrustZone based) mechanisms. It provides a mitigation against physical device theft with the aim of hardware resale (i.e. it is not aimed at the data stored on the device) and is even mandated in some countries. The trade-off is that, if the previous owner doesn’t properly factory reset their device (through the Android Settings UI) before a genuine resale or give-away, the new owner cannot do so without support from the respective OEM (unless there are implementation bugs that lead to TrustZone being exploitable). The significant benefit is that this lock tends to reduce device theft (including violent robbery) as (potential) thieves learn about the security feature.
This lock is automatically set when an account or screen lock is configured on a modern Android device and removed by properly resetting a device to factory defaults through the Android UI (in contrast to wiping e.g. in recovery mode or through bootloader).
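The life cycle just described can be sketched as a small state machine (names and structure are illustrative, not the actual PersistentDataBlockService API):

```python
class FactoryResetProtection:
    """Toy model of the FRP persistent-block life cycle described above."""

    def __init__(self):
        self.frp_armed = False  # models the persistent flash block

    def add_account_or_screen_lock(self):
        # Set automatically once the device has an owner.
        self.frp_armed = True

    def factory_reset_via_settings(self):
        # Only the in-UI reset path (which requires an unlocked device)
        # clears the persistent block.
        self.frp_armed = False

    def wipe_via_recovery_or_bootloader(self):
        # User data is gone, but the FRP block survives: the setup
        # wizard will demand the previous owner's credentials.
        pass

    def setup_wizard_allows_new_owner(self) -> bool:
        return not self.frp_armed

d = FactoryResetProtection()
d.add_account_or_screen_lock()
d.wipe_via_recovery_or_bootloader()
print(d.setup_wizard_allows_new_owner())  # False: wiped stolen device stays locked
d.factory_reset_via_settings()
print(d.setup_wizard_allows_new_owner())  # True: proper resale path
```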
3. Bootloader lock
The third lock works on a lower layer than the previous two: while lockscreen and factory reset prevention are mostly implemented in Android user space (on top of the Linux kernel), the bootloader itself also keeps a lock state before the kernel is even loaded. While this is highly dependent on the chipset vendor and OEM, many modern Android devices use a method similar to Android Verified Boot 2.0 to verify all contents of the system image before loading it. That is, the bootloader has embedded (public) keys used to check the cryptographic signatures on the boot image (which contains the Linux kernel), which in turn will verify the (dm-verity) signatures of read-only system partitions (system, vendor, product, etc.) for run-time access. A locked bootloader will explicitly fail to load and execute boot partition contents that are unsigned, carry an invalid signature (i.e. have been modified after the signature was created), or have been (correctly) signed by an unknown key.
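The dm-verity part of this chain can be illustrated with a toy hash-tree check: per-block hashes chain up to a root hash which, in AVB, is covered by the signed vbmeta image. A deliberately simplified Python sketch (single-level “tree”, tiny blocks, eager instead of per-read verification):

```python
import hashlib

BLOCK_SIZE = 4  # tiny blocks for the example; real dm-verity uses 4 KiB

def block_hashes(data: bytes) -> list:
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [hashlib.sha256(b).digest() for b in blocks]

def root_hash(data: bytes) -> bytes:
    # Single-level "tree": hash over the concatenated block hashes.
    return hashlib.sha256(b"".join(block_hashes(data))).digest()

def verified_read(data: bytes, index: int, trusted_root: bytes) -> bytes:
    """Return block `index` only if the partition matches the trusted root."""
    if root_hash(data) != trusted_root:
        raise ValueError("partition does not match trusted root hash")
    return data[index * BLOCK_SIZE:(index + 1) * BLOCK_SIZE]

image = b"good-system-partition"
root = root_hash(image)  # in AVB, this root is protected by the signed vbmeta
print(verified_read(image, 0, root))  # first block reads fine
try:
    verified_read(b"evil-system-partition", 0, root)
except ValueError as e:
    print("rejected:", e)
```

The real implementation verifies each read lazily against the stored tree, so tampering is detected on access rather than up front; the trust anchoring is the same.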
Unlocking the bootloader will disable this security check and allow booting images without a correct signature. However, modern Android devices should make such a state clear to the end user through adequate warning notices during bootup and record this bootloader state for device attestation (which I will discuss in more detail in a future post).
The advantage of bootloader lock is clear: without such low-level security measures, adversaries could simply flash a different Linux kernel that violated higher-level security controls (e.g. not implementing SELinux policy, leaking the user data encryption key after successful derivation, etc.). While user data encryption is still bound to a user secret (the lockscreen knowledge factor), physical access adversaries could flash a tampered system image and break security guarantees starting with the next reboot.
The disadvantage is equally clear: a bootloader lock with hard-coded verification keys will prevent users from flashing custom system images even when they explicitly want to do so.
Resolving the trade-off
Google Pixel and other devices require the “OEM unlocking” developer mode setting to be enabled to allow the bootloader to be unlocked and will securely wipe the user data partition when unlocking a previously locked bootloader. Both requirements form an important mitigation against physical access adversaries — by requiring lockscreen unlock before the first bootloader unlock can be triggered, adversaries without the lockscreen factor cannot bypass other lower-layer security controls, and by wiping data, they can’t break confidentiality when an unsigned boot image is flashed.
However, that simply removes this security check (while warning the user, but still) and therefore doesn’t necessarily resolve the trade-off for all cases. Pixel phones therefore support the next step of installing a user-defined root of trust to support self-signing custom system images with a new keypair and re-locking the bootloader. After installing user-defined public keys, a re-locked bootloader will then accept images signed with these keys in addition to the OEM-loaded keys, resolving the trade-off fully.
More details on bootloader lock states can be found in the updated Android platform security model.
4. ADB lock a.k.a. developer mode
Developer mode is probably the most well-known and consistently supported lock after the lockscreen. Android, as an open ecosystem, actually requires devices to support developer mode in the CDD (Compatibility Definition Document). After enabling developer mode (tap the build number at least 7 times in a row), advanced settings become available, including (among many others):
- “OEM unlocking” needs to be explicitly enabled so that the bootloader can even be unlocked – this ensures that a user has explicitly consented to disabling some of the bootloader verifications.
- Certain quick settings tiles that are either purely for development or help in testing corner cases or potentially upcoming changes ahead of time (I’d like to point out the “Sensors Off” tile to simulate standard sensors being unavailable during app runtime).
- Many options for enhanced displays of runtime information useful for developing apps.
- Most importantly, the option to enable ADB, the Android Debug Bridge.
ADB is highly capable and malicious USB (or WiFi, if enabled) access to this debug interface would give adversaries many options to eavesdrop on or manipulate the device. Therefore, access to ADB needs to be guarded through authentication, and attacks through malicious devices disguised as simple chargers have already happened in practice (a.k.a. Juice jacking). However, explicitly asking the user for every single access would make active development highly inefficient.
The current solution to locking developer access is based on public/private key authentication: on first use, the adb tool (more precisely, the adb server) on the developer’s host creates a keypair for authentication to adbd running on an Android device while developer mode is enabled. The public key can be registered for automatic authentication without further user interaction – however, recent Android versions will by default enforce a timeout of 7 days: adb keys that have not been used within that window are revoked, to better protect users from forgetting that they might have acknowledged some key access in the past.
Some OEMs (e.g. Xiaomi) gate enabling developer mode behind creating an account with that OEM, presumably to add more friction and make it harder for users to be tricked into enabling USB debugging against their own interest.
5. MNO/Carrier lock a.k.a. SIM lock
This lock doesn’t actually offer a security benefit to the device user, but is an economic necessity when the device has been bought at a discount from an MNO (mobile network operator / carrier) with a minimum contract duration. Devices under those conditions will often only work on the respective MNO’s network and will fail to accept SIM cards from other MNOs until this lock is removed. The specific implementation is OEM and MNO specific, but ideally verified within the radio firmware.
6. Device lock
The same economic constraints are the motivation for the last lock implemented by Device Lock Controller: “Device Lock Controller enables device management for credit providers. Your provider can remotely restrict access to your device if you don’t make payments. If your device is restricted, basic functionality, such as emergency calling and access to settings, will still be available.”
The main difference to SIM lock is that the device is not necessarily bound to a contract with the MNO, but could be (co-)financed by arbitrary parties.
[1] There is another trade-off in the definition of security- and privacy-critical services, i.e. which functions must be protected by the binary lockscreen and which can/should work on a locked device (at least partially). Android makes some of these configurable where it makes sense, and different people or contexts will have different optimal settings. One such example is notifications: users can define if they want to show all notifications on the lockscreen, hide sensitive ones, or not show any. In other cases, OEMs make that choice, for example about the possibility to cleanly shut down the device and/or turn off network connectivity while locked.
It could be argued that powering off or disabling network access is a security critical function, as it implies disabling remote tracking and wipe. These are effective mitigations against accidental loss of a device and naive theft. On the other hand, there are other trade-offs around power, including safety issues. Radio frequency emitters near emergency rooms, in planes etc. are still problematic. Phones can also malfunction and e.g. overheat. It is a safety feature for devices to have an off button that anybody can use. Additionally, leaving a phone powered on doesn’t guarantee that it can be tracked. Faraday bags are easy to get and capable thieves will know about them. Thieves (or law enforcement) interested in the data will certainly leave it on for the chance of recovering keys from memory. That is, attackers who would turn off a phone to prevent tracking for financial or other gain are most probably not hampered by not being able to turn it off.
OEMs define this part of the lockscreen behavior for now; e.g. Google Pixel devices allow powering off and disabling network access on locked phones (and will enable airplane mode when booting in safe mode) while Samsung phones do not. There is no easy or always-correct answer to these sub-trade-offs, and the threat balance changes with different regions and contexts.