The academic review process in one example


The academic peer review process can often be frustrating — not only for junior members of the research community. This post describes the process finally leading to the publication of our article “The Android Platform Security Model” in ACM Transactions on Privacy and Security in April 2021.

This article took a long time to reach its current form. In fact, we (Chad Brubaker, Jeffrey Vander Stoep, Nick Kralevich, and myself) started writing the first lines in June 2018 internally at Google in the Android Platform Security team (when we could still sit together in a room for a few hours without thinking about it), nearly 3 years before final publication. The reason I can give a fairly detailed account is the Git commit log – initially started as a project in Overleaf, the whole collaboration happened in Git repositories. Most of the following is based on just this log as well as email from various academic review systems.

Who this post is for

Anybody who is interested in the academic publication process ;)

More seriously, I hope this report will be useful particularly to PhD students and junior academic researchers who feel frustrated by repeated rejections in the publications-wheel-of-life. I have used this particular case as an example in our institute PhD seminar to tell our PhD students not to give up too easily, and to take (even negative) reviews as a chance to improve rather than as a personal rejection.

Additionally, I believe the academic review process itself is in constant need of improvement, and as a matter of transparency, we document this lengthy process as an example.

Motivation for the publication

Why did we even write this paper? After nearly 10 years of work on the open source core of Android (specifically AOSP, the Android Open Source Project), the underlying security model still had not been formally published. This was a shame, as it pioneered both abstract concepts (such as the interaction of multiple forms of consent) and many specific implementation details (UIDs as basic sandboxing separation, per-app process separation on top of a JVM runtime, fine-grained SELinux policies, vertical app permissions, etc.). While many of the different aspects had long been published in the form of Android developer documentation or blog posts, as well as analyzed in depth in academic papers and other security publications (including some books), the underlying mental models tying all the aspects together were still only known to a few people who initially defined and implemented the mechanisms.

In short, we hoped the paper would act as a future reference for academic publications on how the Android platform security model is intended to work, and therefore help with comparing specific research findings to its underlying mental model and abstract concepts. If researchers find that an actual implementation does not match this meta model, there is room for in-depth analysis and potential future changes.

On a personal level, in my research group(s) in Vienna, Hagenberg, and Linz (in that order), we had been working on Android right from the start since 2008 and on its security properties since 2012, yet it took me a year as head of the Android Platform Security team at headquarters in Mountain View to finally understand and appreciate many of the finer details and how the different concepts are meant to fit together. Another way to phrase the motivation is therefore to document the security model of core Android as I would have liked to read it many years ago, and to help other researchers understand the mental model behind the designs.

The process

Version 1: IEEE Security and Privacy (S&P) 2019

After the first kick-off in the middle of June 2018, we managed a fairly intense writing phase and submitted the first draft (15 pages in IEEE Transactions 2-column style) on August 2, 2018 for review at IEEE S&P 2019 as a SoK (Systematization of Knowledge) type paper. This first round of double-blind peer review finished on November 13, 2018 with a resounding reject (2 weak rejects, 1 strong reject):

Review 1: weak reject

Brief paper summary (2-3 sentences):

Today, the Android operating system is used in a wide variety of devices, from phones, tablets, game consoles to car multimedia systems and home appliances. Given its ubiquity, a significant amount of research has been devoted to studying the security policies and protection mechanisms implemented within Android. This paper surveys the design principles, discusses the threat model for Android devices, and outlines the evolution of the security mechanisms implementation through several versions of the Android OS.


Paper strengths:

  • The paper is easy to follow, and presents material in a logical fashion.
  • The discussion of the Android design principles, a specific threat model, and the evolution of security mechanisms is informative.
  • The discussion on the multi-party consent to perform actions is an interesting framing for the evaluation of the platform security. By having an explicit definition of the permissible action (i.e., actions that can be performed if, and only if, the developer permits it), it is easier to assess the different system design decisions.


Paper weaknesses:

  • The biggest weakness of the paper is that it does not rise to the level of what one expects from a SoK paper. SoK papers are more than mere surveys, and in that regard, the paper falls short. For instance, despite presenting a clear threat taxonomy and describing the evolution of the defenses introduced in the Android Open Source Project (AOSP), there are few insights into the contributions of the scientific community. It would have been more insightful if, for each of the threats listed, the authors presented the academic research in that area, explained what has worked, what has failed, and what mechanisms first suggested by the academic community were then later integrated into the mainstream OS.
  • The paper aims to educate readers on the security model of the Android platform, but it is unclear who the target audience of this paper is. For an end-user of the platform, there are no clear takeaways regarding what can be done to run the OS in a secure fashion. For a developer, there are no guidelines on how to best leverage the security features of the platform. And probably most important for S&P, for a security researcher, there is no guidance on available research areas.
  • Another significant issue is that there are many contributions to the Android security model coming from the research community that are not acknowledged in this paper. The related work section does not mention related SoK and survey papers [R1,R2,R3] nor does it discuss well established papers such as [R4,R5]. This is a major oversight. Furthermore, notable books [B1-B3] on Android security are also omitted.

Detailed comments for the author(s):

As noted above, the paper falls short in meeting the requirements for a SoK paper. Specifically, the authors fail to provide a new viewpoint on an established, major research area, there is no support or challenge of long-held beliefs in such an area with compelling evidence, and worse, the presented taxonomy does not provide much beyond existing work (e.g., [9]). Overall, while the information compiled in the paper is well presented and useful, after reading the paper, the reader is left with more questions than answers. The paper would have been more appealing if it addressed:

(i) how research in the academic community has influenced security features in Android,

(ii) the current state of the art attacks against Android security measures spanning some well-defined time frame,

(iii) the current state of the art regarding available defenses, even if these are add-ons to the Android platform. Also important for an SoK would be insights into what areas of research need more attention after having completed a systematic study, and who the various stakeholders are for those areas that need the most attention.


  • Footnotes are used excessively, breaking the flow of the paper. Some of the footnotes (e.g., 1, 2) should be replaced with citations; many others should be rolled into the paper.
  • On page seven, statistics on usage of the lock-screen mechanism are presented. What is the source of the presented data?


Reviewer references:

[R1] Spensky et al., SoK: Privacy on Mobile Devices - It’s Complicated

[R2] Acar et al., SoK: Lessons Learned from Android Security Research for Appified Software Platforms

[R3] Faruki et al., Android Security: A Survey of Issues, Malware Penetration, and Defenses

[R4] Enck et al., Understanding Android Security

[R5] Heuser et al., ASM: A Programmable Interface for Extending Android Security


[B1] Anmol Misra and Abhishek Dubey, Android Security Attacks and Defenses

[B2] Nikolay Elenkov, Android Security Internals An in-Depth Guide to Android’s Security Architecture

[B3] Jeff Six, Application Security for the Android Platform: Processes, Permissions, and Other Safeguards

Review 2: weak reject

Brief paper summary (2-3 sentences):

This paper summarizes and documents the Android platform security model, whose unique aspect is the “multi-party consent” framework. The authors first describe the security principles and threat model of Android, then discuss the security model and how it has been implemented in historical Android versions and how it can defend the threats. Some weaknesses and exceptions of the model implementation are also covered.


Paper strengths:

  • The paper makes a first try to summarize the Android platform security model.
  • Detailed discussions about the implementation of the security model in historical Android versions.
  • Some thoughts on leveraging the model to identify security problems and gaps.


Paper weaknesses:

  • The paper does not give sufficient insights about the space. It is heavy on summarizing existing knowledge and light on how the knowledge can be leveraged for future research.
  • The proposed security model is too abstract to be used to guide researchers to find implementation flaws.
  • The security model is not formally and precisely defined.

Detailed comments for the author(s):

To me (and as the authors mentioned), the value of the Android security model proposed in this paper should be that it can help researchers identify discrepancies and flaws in current and future versions of Android (and beyond). Section 5 (weaknesses and exceptions) serves this purpose but it is very short. I’d really like to see more application of the model and how others can benefit from the model. Further, these weaknesses are ad-hoc and read as disconnected from the model. It is unclear how a researcher can come up with these weaknesses after reading the first 4 sections of the paper (btw some of the problems are known to me already).

The paper overall has many disconnections. It is unclear to me how the implementation analysis in section 4 is guided by the model defined in section 3 – it seems to be guided and driven by some specific threats (threat model in section 2) and related security measures. As mentioned above, section 5 is also disconnected from the previous sections.

The proposed Android security model is not formally and precisely defined. I’m not sure whether those descriptions in section 3 can define something like a “model”, because it reads like a list of observations — the two main unique aspects of Android (compared to PCs) are (1) users are involved in the picture because of permissions and (2) apps that do not trust each other can control the sharing of their own data. However, this casual way of defining a model is unlikely to guarantee anything (e.g., whether the model is complete and self-contained with respect to a certain scope)? Without a clear definition, it will also be difficult for researchers to compare the model with the actual implementation, partly explaining why section 4 (implementation) is not guided by section 3.

It will be interesting if the paper can also contrast the security model with a traditional PC’s security model a bit more and explain why the security model hasn’t been developed/adopted for traditional desktop systems. From the paper’s description many principles and contexts are shared between Android and desktop systems, e.g. they are both end-user systems, desktop software can also be developed in arbitrary languages and distributed from multiple sources (no central vetting), desktop systems also have actors of users, developers and platform. Besides, the threat models are also highly similar.

At the beginning of section 3, the paper mentions “the Android security model balances security and privacy requirements of users with security requirements of applications and the platform itself”. It sounds like there are some conflicts between the security requirements of different stakeholders, but what are these conflicts? Can you make it more clear?

Review 3: reject

Brief paper summary (2-3 sentences):

The paper aims to document and model the security mechanisms of the Android OS. It centers around the notion of a three-party consent model (i.e., a model that requires consent from users, developers, and content/server providers). The paper essentially iterates through the key security features/designs of Android.


Paper strengths:

  • The paper is easy to follow.


Paper weaknesses:

  • The paper doesn’t really present a systemizing of knowledge, but rather an iteration of already documented knowledge (i.e., the key security features/designs of Android).

  • The paper reads more like a documentation of some Android security features, which already exists in a more comprehensive form on the Android website.

  • It’s unclear what exactly is the technical or research contribution of the paper.

  • The knowledge points discussed in the paper are rather ad-hoc.

Detailed comments for the author(s):

The motivation of the work, as stated in the paper, is that “the Android security model has previously not been published or analyzed”. I can’t agree with this statement. Android security model and design are well understood and documented by both Google and 3rd-parties.

The paper claims the key contribution of the work is to “define the Android security model based on security principles”. It’s unclear what this really means. What the paper presents is essentially some abstract and well-known security principles (e.g., DAC, MAC, defense in depth, safe by design, etc.) and a list of security features adopted by Android.

The so-called “three-party consent model” (user, developer, service/content provider) is not unique to Android and has been used in Web and other platforms. Furthermore, not all security enforcement in Android requires consent from all three parties or any party on that list. I don’t think this model is universally followed in the Android OS.

I had a hard time identifying the technical or research contribution of the work. As a SoK submission, I expect the paper to not just reiterate existing knowledge but to systematize the knowledge in a way that future research can benefit or be inspired from. However, I don’t feel this paper goes beyond documenting the existing knowledge or repeating the existing documentation.

Lessons learned from version 1 reviews

First, S&P reviews continue to be thorough and reflect that all reviewers really have read the paper in detail and were very familiar with the topic. While a rejection is certainly not the desired outcome, S&P is known for its relatively low acceptance rate and harsh reviewers; therefore, getting a reject verdict from S&P should not be taken personally but as a great chance to improve with (usually) high-quality feedback on open issues to address.

On the positive side, all reviewers seem to have liked the textual presentation, both in structure and in style — all because of my fabulous co-authors of course, who are both native English speakers and well practiced in precise, concise writing. Another positive takeaway is that the many hours spent together in a single room to filter, sort, and structure our material before writing a single sentence clearly helped in making the paper easier to read. If you can, I can heartily recommend (physically or virtually) sitting together with your co-authors in live sessions instead of only bouncing text drafts back and forth via email. Ideally, this should happen before starting to actually write the paper, to establish both a shared understanding of what exactly the paper is supposed to convey and how to structure the different parts.

On the negative side, SoK was clearly the wrong paper category (even though reviewer 2 mentioned that it was `heavy on summarizing existing knowledge'). The reviewers were completely right that our draft did not fulfil the expectations of a good SoK paper, as it did not review the majority of academic literature on the topic and did not contribute to structuring those previous publications that we did refer to. While this wasn’t what we intended the paper to be (we wanted to systematically document the thought that went into the design of the Android platform security model, not the other academic literature written about it), it was completely my fault for thinking it could be seen as an SoK paper. Not having written one before, my expectations differed from those of the reviewers representing the research community, and it was a beginner’s mistake on SoK papers. The takeaway is to be clear about the type of paper you are writing, and to check other papers in that category to make sure yours meets the standard expectations.

Additionally, it became clear that the different parts of our paper (threat model, definition of security model, implementation, historical development, and open issues) were not sufficiently connected and that we needed to do a better job of explaining how they fit together. The takeaway was to make sure the different parts of the paper clearly refer to and fit with each other, even when written by different co-authors, for readers who do not already share the same mental model.

One consistent complaint – which we have not been able to address even in the final version – was already raised for the first version: that we don’t formally specify the security model, but only describe it in abstract (and therefore imprecise) prose. This proved the hardest aspect to address, as the Android security model is more of a meta model spanning many layers than a consistent model within a single layer that can be used to reason precisely about and within. My best takeaway is that certain expectations come with single words in the title, and `model' was clearly such a trigger word. Unfortunately, short of using a different word, we were unable to really address this complaint. Or, as we sometimes say in security, my (threat) model is not your (threat) model.

Finally (and quite expectedly), we got a list of recommendations for additional academic and other standard literature to cite. This was the easiest part to fix.

Version 2: USENIX Security 2019

It was clear that the first version left a lot to be desired, and we tried to improve the manuscript by taking into account as many of the – generally very helpful – recommendations as we reasonably could in a very short time frame. At the same time, we updated the draft to reflect changes made to the security model implementation in Android 9.0 (Pie), which had been released while the first version was under review.

One specific change was to use tables for historical changes across Android releases (moved to an appendix because of page length limitations) instead of in-text lists, to improve clarity of presentation through visual structure. Another was reordering some sections for (hopefully) better cross-referencing and logical flow. As the core part of the paper, we also introduced explicit numbering of the security meta model rules to make it easier to refer to them from other parts. Within the time frame for this iteration, this last one was our most important improvement: to strengthen internal consistency and help readers connect the different aspects through specific references to model rules from the security mechanisms implementation section.

This second draft (19 pages in USENIX 2-column style) was submitted for review at USENIX Security 2019 fall round on November 15, 2018. This round of double-blind peer review was finished on December 14, 2018 with the recommendation to advance to round 2 of USENIX Security 2019 and the option to respond to the reviews:

Review 1: major revision

Overall merit: 2. Top 50% but not top 25% of submitted papers

Writing quality: 4. Well-written

Reviewer expertise: 4. I know a lot about this area

Paper summary:

This paper provides a comprehensive overview of Android’s platform security model. The paper begins with 4 security principles, 15 threats, and 4 security rules. The remainder of the paper is a discussion of how Android’s platform security model addresses the threats, frequently referring to the security rules, and when possible, describing the evolution of the platform security model that address different threats.


Strengths:

The paper is fairly comprehensive. It appears to be written by someone highly involved with (if not part of) the Google AOSP security team. The paper is written with confidence and contains many facts that would be useful to researchers or students seeking to understand Android’s platform security model.

The paper identifies 15 threats, and describes how they relate to Android. When appropriate, the paper describes how the threats differ from threats to traditional desktop and server platforms.


Weaknesses:

The paper reads more like a whitepaper than a research paper. This is due to two main factors:

  1. The paper skirts around some of the more delicate issues (e.g., SDcard, OEMS), painting Android mostly in a positive light, highlighting all of the great features and work that have been done over the years. Don’t get me wrong, it has been great work. However, at some points I felt like I was reading the equivalent of Apple’s iOS security whitepaper.

  2. The evaluation criteria (4 security rules, 15 threats) are only weakly integrated into the discussion, with many valuable examples in the appendix. The paper would benefit from making the use of the evaluation criteria to evaluate Android a more central focus of the “evaluation.” For example, center the discussion on the threats, and list the features that mitigate them, rather than centering the discussion on the features and listing the threats (though, this may not be the best approach).

Detailed comments for authors:

It is difficult to draw the line between an application security problem and a platform security problem. My perspective is that the platform exists to provide security guarantees and primitives for applications. Some of the threats (e.g., T11 - mimicking apps; T13 - injecting input events into other apps) suggest that such considerations are in scope. Some (but not all) of the following topics fall within this gray area.

Additional Topics that May Warrant Discussion:

  • Security of the (emulated) SDcard is absent from the paper. While the SDcard is arguably user data, protection of this user data should be part of the platform security model. Security Rule 1 explicitly states that all three parties must agree for access; however, this isn’t the case for the SDcard. The SDcard is problematic for legacy reasons: a FAT file system. Android has slowly been trying to fix the problem by allowing apps to have an “app” directory on the SDcard that can be accessed without permissions, but it will take time until apps are really weaned off of the external storage permission.
  • Fragmentation and security issues introduced by OEMs are an important consideration for Android. Section 4.8 gets at this a little, but it doesn’t really go into Treble and how it is designed to address the issue with OEMs, or how the design provides better isolation. Likewise, moving WebViews to an app in the Play Store is relegated to a table in the appendix.
  • Where do third-party libraries fit within the platform security model? Technically this is an application problem, but the OS should provide primitives for applications to be developed securely. Section 4.1 discusses consent from different parties, including the developer. App studies have shown that ad libraries opportunistically use permissions requested by apps. This has led to significant privacy concerns in Android applications. Solutions such as LayerCake are an example of how a platform could provide a separate ad environment as a primitive.
  • There isn’t explicit discussion of the accessibility services. To some extent this comes under the user consent, but it has been abused. Android is largely addressing this from the Play Store. It is also providing new APIs such as the autofill framework, which provides a primitive for password managers.
  • Along the lines of the autofill framework, the recent CCS'18 (Aonzo et al.) about phishing attacks discusses how mobile password managers cannot identify which password goes with which app. This isn’t directly a “platform” security issue, but it is something that a platform could attempt to facilitate.
  • There is some scattered discussion of UI attacks (clickjacking) in T11 and T13, as well as protections for the system “protected confirmation” (Section 4.3). However, this could be brought out more directly. There was a functionality vs. security design decision to expose APIs for manipulating the z-axis of the rendered display. This is what has led to many of these attacks. Personally, I blame it all on floating chat-heads.
  • The discussion of user consent (Section 4.1) is fairly hand-wavy and does not really discuss the technical apparatuses used to provide consent. Rather, it lists more guiding principles. One might argue that Android still offloads hard problems to the user (runtime permissions are better, but still hard for users).
  • Rooting devices is clearly out of scope, but there have been various root exploits discussed over the years. In terms of providing more of an evaluation of Android, it might be interesting to have a table showing how security architectural features mitigate these exploits.

Writing Comments:

  • The paper presents four security principles, then later presents four security rules. This is confusing for the reader. It would be better to have a single set of “good properties”.
  • The paper frequently states that some design “addresses” a specific threat. In most cases, the design does not completely address the threat (there are a wide range of theoretical attacks). Saying “partially addresses” would make the design sound weak; “mitigates” might be a better word. This includes the tables in the appendix.
  • The paper would be best read by having a separate print out of the 4 security rules and 15 threats. The paper would be more reader friendly if it had a short name for each security rule and threat and then could state that name followed by the rule or threat identifier. Currently, the reader must constantly flip back and forth to know which rule or threat the paper is referring to (on top of already doing this for the 93 citations).
  • The end of Section 3 (Security Rules) is a discussion of how Android removes the ambient authority present in desktop OSes. Visually, it appears to be part of the “(4) Factory reset restores the device to a safe state” discussion. It may be more appropriate to add a fifth security rule: “(5) Applications are security principals”.

Trivial Comments:

  • Section 2.1 indicates that Android does not currently support non-Java APIs for the basic process lifecycle control. What about the native-activity interface?
  • Section 4.7: for updatable system apps, why not check every boot? Section 5.1 mentions the weakness for app code integrity, but it is more general for apps. There is an opportunity to enhance the security of updatable system apps.


I went back and forth between “major revision” and “reject and resubmit”. On the negative side, the research contribution of the paper is limited: the paper simply describes an existing system. While it is very valuable, I am having trouble classifying it as research.

That said, I think there is a potential path for the paper if the authors are able to refocus the paper such that the evaluation of Android based on the set of evaluation criteria is a central point of the work. There is also precedent for authors of “famous” systems to publish a paper on that system after real world experience has been obtained.

Questions for author response:

What is in scope vs. out of scope for a paper about Android platform security model?

How hard would it be to refocus the discussion on evaluating Android based on your criteria (or threats)?

Review 2: reject

Overall merit: 1. Bottom 50% of submitted papers

Writing quality: 4. Well-written

Reviewer expertise: 5. This is my area

Paper summary:

The paper defines a threat model for Android’s platform security, considering the wider context of Android’s ecosystem (e.g., open ecosystem with side-loading, vendor customization, multiple stakeholders, etc.). Based on this threat model, the authors explain how Android’s platform security model addresses those threats, and at the very end the authors conclude with some ideas on where the current model has gaps and a call for action.


Strengths:

  1. Summarizing Android’s platform security model and defining the security model of Android as a multi-party consent model
  2. Definition of threat model and explaining how it is addressed in the design of Android’s platform security


Weaknesses:

  1. While a good summary of the existing model, I find it lacks a clear research contribution
  2. Cut-short and incomplete related work section, with references consisting to too large an extent of online sources, like blog entries or Google’s documentation
  3. Tables in Appendix mostly derived from that documentation

Detailed comments for authors:

The paper provides a good summary of many of the threats to Android’s platform security and an easy-to-follow summary of how Android’s platform security model addresses those threats. This is a nice-to-read summary and can provide an easy entry to Android’s platform security. For instance, I see many explanations in the paper that would be great for teaching a class on Android security. Thus, I think such work that summarizes the status quo definitely has its value, and I hope to see this work published at an appropriate venue or journal.

However, for USENIX, I find this work does not fit very well. While it greatly summarizes aspects of Android security, its novel aspects are very limited, nor does it question existing works or design decisions of the model. The current paper could form a good foundation to be extended into a survey or an SoK paper, if the authors go beyond summarizing the current status from mainly the official Android documentation and changelogs, extend their work to more academic work in a structured way, and view the platform model more critically.

For instance:

  • Paragraph “User(s)” in Section 4.2: Android has seen fundamental change in this aspect during its evolution, and there are clear connections to various academic works, such as works by Porter Felt [R1,R2] and recent results like [R3] or [R4]; Android developers are still adapting to the most recent changes, and part of a systematic evaluation of the status quo should also consider recent recommendations and how research is reflected in them, e.g. [R5]
  • Section 4.3: Something that has been worked on in the academic community for a few years is the granularity of the sandbox. Advertisement libraries [R6-R9] and webviews (e.g. [R10,R11]), for instance, have been identified as threats to user privacy, but are not explicitly addressed in the standard Android platform security model. Thus, it is also unfortunate that they are missing among the exceptions mentioned in Section 5.
  • Section 4.5 mentions the work by Georgiev [57]; however, more closely related work by Fahl [R12,R13] on SSL misuse by Android developers directly led to the development of the declarative network security configuration in Android 7+.
  • Section 4.6: Although Android’s exploit mitigation has been constantly improved, problems have been identified in the past [R14] that stem directly from Android’s application life-cycle (i.e., forking app processes from zygote) and that undermined exploit mitigations
  • Section 4.8: In addition to platform support, which by itself takes a long time to reach end-user devices, different works (e.g., [R15]) have looked at ways for patching that are independent of the platform and app developers
  • General: How does the current model support or incentivize app developers in programming defensively? How can users be better supported in giving their consent?

The conclusion indeed raises some interesting problems, however, this section is too limited to make a strong enough contribution alone. Regarding the W^X idea, the authors might consider works like [R16].

Reviewer references:

[R1] A. Porter Felt, E. Ha, S. Egelman, A. Haney, E. Chin, and D. Wagner, “Android permissions: User attention, comprehension, and behavior,” in Proc. 8th Symposium on Usable Privacy and Security (SOUPS’12), ACM, 2012.

[R2] A. Porter Felt, S. Egelman, M. Finifter, D. Akhawe, and D. Wagner, “How to ask for permission,” in Proceedings of the 7th USENIX Conference on Hot Topics in Security (HotSec’12), USENIX Association, 2012.

[R3] F. Roesner, T. Kohno, A. Moshchuk, B. Parno, H. J. Wang, and C. Cowan, “User-driven access control: Rethinking permission granting in modern operating systems,” in Proc. 33rd IEEE Symposium on Security and Privacy (SP’12), IEEE Computer Society, 2012.

[R4] P. Wijesekera, A. Baokar, A. Hosseini, S. Egelman, D. Wagner, and K. Beznosov, “Android permissions remystified: A field study on contextual integrity,” in Proc. 24th USENIX Security Symposium (SEC’15), USENIX Association, 2015.


[R6] M. Grace, W. Zhou, X. Jiang, and A.-R. Sadeghi, “Unsafe exposure analysis of mobile in-app advertisements,” in Proc. 5th ACM conference on Security and Privacy in Wireless and Mobile Networks (WISEC’12), ACM, 2012.

[R7] R. Stevens, C. Gibler, J. Crussell, J. Erickson, and H. Chen, “Investigating user privacy in android ad libraries,” in Proc. 2012 Mobile Security Technologies Workshop (MoST’12), IEEE Computer Society, 2012.

[R8] S. Demetriou, W. Merrill, W. Yang, A. Zhang, and C. A. Gunter, “Free for all! assessing user data exposure to advertising libraries on android,” in Proc. 23rd Annual Network & Distributed System Security Symposium (NDSS ’16), The Internet Society, 2016.

[R9] S. Son, D. Kim, and V. Shmatikov, “What mobile ads know about mobile users,” in Proc. 23rd Annual Network & Distributed System Security Symposium (NDSS ’16), The Internet Society, 2016.

[R10] T. Luo, H. Hao, W. Du, Y. Wang, and H. Yin, “Attacks on WebView in the Android system,” in Proc. 27th Annual Computer Security Applications Conference (ACSAC’11), ACM, 2011.

[R11] M. Georgiev, S. Jana, and V. Shmatikov, “Breaking and fixing origin-based access control in hybrid web/mobile application frameworks,” in Proc. 21st Annual Network and Distributed System Security Symposium (NDSS’14), The Internet Society, 2014.

[R12] S. Fahl, M. Harbach, T. Muders, L. Baumgärtner, B. Freisleben, and M. Smith, “Why Eve and Mallory love Android: An analysis of Android SSL (in)security,” in Proc. 19th ACM Conference on Computer and Communication Security (CCS ’12), ACM, 2012.

[R13] S. Fahl, M. Harbach, H. Perl, M. Koetter, and M. Smith, “Rethinking SSL development in an appified world,” in Proc. 20th ACM Conference on Computer and Communication Security (CCS ’13), ACM, 2013.

[R14] B. Lee, L. Lu, T. Wang, T. Kim, and W. Lee, “From zygote to morula: Fortifying weakened ASLR on Android,” in Proc. 35th IEEE Symposium on Security and Privacy (SP’14), IEEE Computer Society, 2014.

[R15] C. Mulliner, J. Oberheide, W. Robertson, and E. Kirda, “PatchDroid: Scalable third-party security patches for Android devices,” in Proc. 29th Annual Computer Security Applications Conference (ACSAC’13), ACM, 2013.

[R16] M. Backes, T. Holz, B. Kollenda, P. Koppe, S. Nürnberger, and J. Pewny, “You can run but you can’t read: Preventing disclosure exploits in executable code,” in Proc. 21st ACM Conference on Computer and Communications Security (CCS ’14), ACM, 2014.


  • p.6: “user consent through the user selecting a target app in the share dialog;” this gives the impression the user is always a consenting party in data sharing, but in Android’s open model, this is not necessarily the case (see, e.g., colluding applications)


On page 2 you mention that ref [2-5] are related work, which is misleading because they are all blog entries.

Page 3: [11] previously introduced a layered threat model -> Recent work [11] previously introduced a layered threat model

Page 6: “The OS must be opinionated and not just offload hard problem onto the user”. Please provide an example here.

Page 8: as well as request users grant them during use -> as well as request users to grant them during use

Page 10: components above.FBE allows -> components above. FBE allows

On page 10 you reference Table 4 as if it was part of the main paper and not the appendix.

For consistency define HAL on page 12 as you did for other abbreviations.

Table 3 and 4 in the appendix have the same title.

Questions for author response:

What exactly should the community take away from this paper?
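As an aside to the quoted review: the declarative network security configuration (introduced with Android 7.0) that the reviewer credits to the Fahl line of work is declared as an XML resource referenced from the app manifest. A minimal sketch follows; the domain name and the pin digest value are placeholders, not taken from any real deployment:

```xml
<!-- res/xml/network_security_config.xml, referenced from AndroidManifest.xml via
     android:networkSecurityConfig="@xml/network_security_config" -->
<network-security-config>
    <!-- Disallow cleartext HTTP for this domain and its subdomains -->
    <domain-config cleartextTrafficPermitted="false">
        <domain includeSubdomains="true">example.com</domain>
        <!-- Certificate pinning: SHA-256 hash of the server's SPKI
             (the digest below is a placeholder value) -->
        <pin-set expiration="2026-01-01">
            <pin digest="SHA-256">AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=</pin>
        </pin-set>
    </domain-config>
</network-security-config>
```

This lets developers express TLS policy declaratively instead of writing custom `TrustManager` code, which is exactly the class of SSL misuse the cited Fahl papers documented.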

Response to round 1 reviews for version 2

Wow! Excellent, extremely helpful reviews within a month — this is one reason why USENIX Security is by far my favorite of the “big-N” security conferences (the others being that it is highly supportive of systems design and implementation papers, which is one major area we keep working on, and that it has been publishing all papers as open access for a long time). Our response, submitted on December 18, 2018:

We greatly appreciate the helpful and thorough feedback provided by the reviewers and we believe that most of the weaknesses identified can be addressed either directly or by more clearly scoping the subject, with perhaps one major exception.

Both reviewers called out the lack of fundamental research. We agree. The intent of this paper is not to present fundamental research, but for authors involved in developing a popular operating system to document motivations and design constraints of its security model and its implementation. As such, we see it as an applied study of a system that has been in the field and developed for 10 years rather than new theoretical concepts yet to be tried at scale. By clearly stating the security model and its reference implementation we hope to (a) provide both the research community and (very intentionally) industry practitioners with a systematic description and background on the Android security model, and in turn, we selfishly hope to benefit from (b) triggering further research that demonstrates flaws in both the implementation as well as the model itself. Additionally, we hope that this paper will be (c) useful as an academic reference for some of the design principles and implementation details that so far need to be cited from web sources - which is both the reason why we currently have to refer to online Google sources and the reason for targeting USENIX as a prime venue to offer a new academic source.

That said, we agree with both reviewers that the evaluation of the security model and its implementation based on a threat model that we consider relevant for mobile phone users should take a more central focus in the paper. Specifically, we consider the threat model, security model, and an evaluation of how well the current reference implementation (AOSP) mitigates these threats in scope of this paper. We consider app security and the diversity of the OEM ecosystem mostly out of scope unless the platform provides mechanisms that mitigate common mistakes based on practical experience (e.g. TLS certificate pinning). We believe that it would be sufficiently easy to pivot the discussion towards evaluating AOSP based on our threat model, as it would mostly shift the structure from mitigations to threats, but the associations between those two points of view are still the same. That is, most of the work has been done internally, and we would primarily restructure the paper.

We are very thankful to both reviewers for identifying points that we did not yet sufficiently address in our evaluation, in terms of both limitations and related work we had missed discussing. In the spirit of full disclosure, there are unfortunately some aspects that the authors cannot at this point publicly discuss, as they are still under development and will only become public towards the end of 2019.

Some detailed changes we plan to make:

  • We agree that the SD card, accessibility services and autofill, and the issue of SDKs/libraries embedded in apps are incorrectly omitted. This will be addressed in the next revision along with relevant citations to backing research. Some of these issues are indeed still unsolved and will be explicitly called out as topics for future work.
  • Changing the flow to better evaluate the implementation of the security model will result in much of the appendix moving into the main matter. We will also clarify the security principles vs. rules.
  • Thanks for the suggestion on specifically analyzing mitigations to rooting exploits. We will try to either create a new table or integrate this with the existing structure.

This list is not exhaustive (also because all authors are currently on holidays or otherwise unavailable), and we will try to address the other suggestions as well (within the limitations of what can be publicly disclosed at the time of writing).

On January 18, 2019, the second review round resulted in another reject verdict with two more reviews added:

Review 3: reject

Overall merit: 1. Bottom 50% of submitted papers

Writing quality: 3. Adequate

Reviewer expertise: 4. I know a lot about this area

Paper summary:

The paper documents the Android security model. The paper describes Android’s security model as a multi-stakeholder model that actively seeks to satisfy multiple (i.e., 3) stakeholders, and defines a trust and threat model in that respect. Moreover, the paper describes how Android enforces access control at each layer of the OS stack, in order to uphold guarantees provided to different stakeholders. Finally, the paper identifies gaps and open problems.

Strengths:
  • The paper discusses the perspectives of various stakeholders and is generally well written.
  • The paper discusses certain implementation-level details of Android 9.0 which may be well known, but not discussed in academic literature as of now (e.g., end of Sec. 4.3).

Weaknesses:
  • The paper presents no new insights or results. It only documents existing literature on Android’s security model, from within and outside academic publication venues. As a result, the novelty of this paper is minimal (even for an SoK submission, as described below).
  • The set of stakeholders seems to be incomplete without adding the “enterprise” stakeholder to represent the growing use of Android in BYOD scenarios.
  • The threat model completely misses information stealing (i.e., either directly or via transitive access).
  • Most of the contents of the paper have been discussed in prior work. The related work section of the paper is grossly insufficient and should consider discussing work that has targeted the various topics discussed in the paper.

Detailed comments for authors:

This paper clearly belongs in an SoK category, if there is one. However, major changes would be needed, even if this was pitched as an SoK paper. Consider the following recommendations:

  1. The proposed analysis with 3 stakeholders seems limited. Consider other stakeholders, such as the enterprise, or other users (in case the device is shared).
  2. The threat model is currently incomplete and should consider important threats such as private information theft. Take a look at this recent SoK on Android security by Acar et al. [1].
  3. The paper generally presents known content and insightful comparison together. It would really help if the insights in the writing were drawn out or highlighted.
  4. The related work is clearly insufficient. Most of the content of this paper can be found in prior work, which needs to be cited. As of now, the paper contains a significant number of references (93), very few of which are actually cited in the paper.

Finally, a major concern is that the main pitch of the paper seems unclear. Is it describing the Android security model as is? Or is it contrasting the model with other Desktop OSes like Windows/Linux, and other modern operating systems like iOS? The latter would be really interesting, and the paper seems to do so once in a while. However, a consistent pitch, throughout the paper, would make it really interesting.


[1] Y. Acar, M. Backes, S. Bugiel, S. Fahl, P. McDaniel, and M. Smith, “SoK: Lessons learned from Android security research for appified software platforms,” in Proc. 37th IEEE Symposium on Security and Privacy (S&P ’16), IEEE, 2016.

Review 4: major revision

Overall merit: 2. Top 50% but not top 25% of submitted papers

Writing quality: 4. Well-written

Reviewer expertise: 3. I know the material, but am not an expert

Paper summary:

The paper summarizes the Android ecosystem, its stakeholders (users, developers, and the platform), and the general threat model on mobile devices. It then describes the security mechanisms implemented since Android 4.1 in the AOSP.

Strengths:
  • Description of Android ecosystem complexity and the resulting challenges and threats
  • Summary of security-relevant changes over the last Android versions

Weaknesses:
  • Unclear which parts are novel
  • Limited comparison to other OSes (only brief discussion of threat model on desktop OSes, no comparison to iOS)
  • Limited discussion of evolution of security measures

Detailed comments for authors:

I appreciate the overall idea of the paper, and I especially like the discussion of the complexity of the ecosystem and the tensions between interests of different stakeholders.

The threat model also nicely distinguishes the increased security threats on mobile devices compared to desktops, but overall a comparison to the evolution of security mechanisms in other OSes, especially iOS, would have been nice.

Furthermore, while the paper acknowledges that it is a partial systematization of existing knowledge (that was never formally published), it is not clear to me which parts are novel contributions of this paper.

As it is, I feel there are limited new insights, and the most interesting parts to me are the tables buried in the appendix: I would be curious about a more historic analysis of which security measures were introduced in different Android versions and whether they addressed security threats already seen in the wild (or reported in research). On this topic, I am also wondering why the discussion only starts with Android 4.1.

Finally, while the paper focuses on AOSP, some integral parts of Android as it is deployed in practice are out of scope: app vetting through the Google Play Store and the Verify Apps functionality are an important part of Google’s security model for Android, yet explicitly out of scope. I would like to have seen more justification as to why. App store vetting is only briefly mentioned in the conclusion, and the related work cited as covering this functionality only consists of Google’s blogposts about them.

On a minor note, codenames and numbers for Android versions are used interchangeably; for consistency I’d stick to one or the other (or use both together).

Lessons learned from version 2 reviews

First, please scroll back upwards and (re-)read review 1. To whoever wrote that review — many thanks for your time and effort that went into this insightful, thoughtful, and highly helpful review! I can only repeat that such stellar reviews are a distinct feature of USENIX Security, and submitting a sufficiently mature piece of work for review there is highly rewarding in terms of potential for improving a paper draft. Needless to say, we implemented nearly all of reviewer 1’s suggestions more-or-less verbatim in the next iteration of the paper. The first takeaway from this round is that insightful reviewers who work hard to come up with specific recommendations for improvements provide tremendous service to authors despite a rejection.

As was evident in those reviews, this paper was difficult to explain and/or not ideally suited as a typical conference paper because it doesn’t have a clear focus on a single, novel scientific contribution. Instead, it was always meant to be a systematic summary of the high-level intentions and designs that shaped the design and implementation of the Android security model over many years. However, as clearly seen with version 1, neither is it a typical SoK type paper (although it is quite funny that reviewer 3 also thought it might be an SoK paper, which had been our initial impulse and turned out to be sub-optimal). It was hard to find an actionable takeaway from this major point; the best lesson is probably to define the scope of a paper and set expectations as clearly as possible and, if it doesn’t fit a typical category, state that outright, which we tried to improve upon in further iterations.

Another interesting takeaway for the core model was reviewer 3’s mention of “enterprise” stakeholders, which helped us realize that the three-party consent is really a multi-party consent model with 2-4 parties depending on the context. Future iterations of the paper have adapted the descriptions accordingly. If the review process weren’t blinded, we would heartily acknowledge the reviewer’s part in shaping this detail of the discussion.

In these 2 rounds of reviews of version 2, more specific shortcomings were identified, such as lack of discussion of external app storage (external or virtual SD card), abusable APIs (such as accessibility), or abstract/loose treatment of user consent. We tried to fix those where possible — unfortunately with some restrictions of what could be documented publicly at the time, while some internal developments were still ongoing.

Lastly, the reviewers mentioned an even longer list of additional academic publications to cite. Again, this was the easiest part to address.

(Non-reviewed) Version 3: arXiv 2019

We took about 3 months of concerted effort and multiple rounds of internal reviews and language-level polishing (thanks to the many Google colleagues helping out with reviewing!) to improve the paper based on these excellent recommendations. While there were some parts that we still didn’t know how to fix (e.g., the lack of a formal model specification first mentioned in the IEEE S&P reviews and the lack of a single, novel scientific contribution mentioned for both versions), we were proud of this third version and were already being approached by colleagues from academia about when the paper might become publicly available. Therefore, we decided to self-publish this interim state while continuing to work on the manuscript under full peer review.

The result was this version 3 first published as arXiv:1904.05572 on April 19, 2019.

Version 4: ACM Transactions on Privacy and Security (TOPS) 2020

After realizing that this paper would be a difficult fit for typical security conferences – including its quickly growing length, partially due to the number of references – we started looking at appropriate journals. The first intention was to submit to IEEE Transactions on Mobile Computing (TMC), also because I had previous experience with publications in this journal. However, since proper scope and fit had already turned out to be a critical aspect in the whole process, we actively reached out to a few colleagues in security academia for their informed opinions on the best possible publication venues. We explicitly thank Patrick Traynor for encouraging us to submit to ACM Transactions on Privacy and Security (TOPS). In August 2019, we started preparing a submission to ACM TOPS, at the same time updating the content to reflect changes in Android 10.

While other projects took our time and attention, additions to this iteration include an overview figure on mediaserver security sandboxing and hardening, moving the historic changes tables from the appendix back into the main matter, additional enhancements introduced with Android 10 and later (e.g., Identity Credential, CFI and SCS, tri-state permissions, and Adiantum for file-based encryption without hardware acceleration), and a completely new section with details on different system image signing keys and how these layers of code signing fit together. Additionally, we more clearly define the scope to be AOSP and that, while important for overall device security, non-AOSP services like GMS are considered out of scope for this paper.

After additional rounds of internal reviews (again, thanks to the additional Google colleagues supporting us with their time and dedication), we submitted this fourth version (32 pages in ACM 1-column style) to ACM TOPS on May 13, 2020. This round of single-blind peer review was finished on August 23, 2020 with a verdict of major revision with this summary by the Associate Editor:

Both reviewers provided detailed comments on how to improve the paper - need to have more insights, clarity, justifications, logical flow of the presentation; and in particular a more coherent, systematic and rigorous treatment of the security model, rather than simply presenting the 5 disconnected rules.

Review 1: minor revision


I really enjoyed reading this paper (for the second time). I previously reviewed an earlier version of this paper in an anonymized form. I have been aware of its arXiv version and frequently refer people to it. This is the most comprehensive description of the Android platform security model. It is written in a way that is geared towards security researchers, reflecting on concepts that are well understood in academia. Most other descriptions of Android’s security model are geared towards industry and simply do not provide the insight needed by security researchers.

The difficulty with the paper is that it does not fit the mold of a traditional research paper. That is, it does not define a problem, propose a solution, and evaluate that solution. Nor does it provide a quantitative empirical evaluation of some subject or population. However, if I take a step back and consider the broader picture of scientific methodology, I would classify this paper as a case study on possibly the most important computing platform of this century. Case studies are an important scientific methodology used in many fields, though the security community often looks down upon them. Thus, in this light, I view this paper as containing important scientific research that is valuable to the security community.

In addition to simply documenting the Android platform security model, this paper provides a historical perspective of how the model has required adaptation over time. These are key lessons for future systems and there is significant value in documenting the process.

All of this said, I do have some disappointment with the paper in its current form. This disappointment primarily stems from the space constraints of the paper. Section 4, which provides the core description of the Android platform security model, covers topics at varying depth. There does not seem to be rhyme or reason as to why the authors spend one paragraph or more than a page on a topic (other than the need to fall within the page requirement). Honestly, I would love to see this document turned into a living book that is updated as Android matures. Each subsection of Section 4 could easily be turned into one or more chapters.

As a journal paper, I’m not sure what I would de-emphasize in order to add description to areas where I might personally want to see more detail. Others may have a different desired emphasis, and I think it is good that this paper provides emphasis on topics that are not as well covered by the background sections of other security literature. With this consideration in mind, I don’t think major revision to adjust scoping would be particularly productive.

That said, there are a number of minor items that would benefit from revision.

  1. The writing in Section 4 frequently lacks insight and discussion of the evolution of changes. I found the tables in the appendix much more informative than the text in the corresponding sections.

  2. I felt that the paper was not particularly forthcoming about the details when Android’s initial design decisions were not good, and in some cases failed to meet the security rules (which are fundamentally hard to meet in all cases all the time). Some examples:

  • Section 4.1 (Consent) in general is fairly weak with respect to this concern. It is largely philosophical in terms of what should be done, but gives no history of how different design considerations have changed, or possibly how these goals have changed over time.
  • Section 4.1.1 (Developer Consent): the second last paragraph discusses developers of varying skill levels and being safe by default. However, the original Android components were public by default. This was changed rather early in Android’s history, but it is a good anecdote.
  • Section 4.1.1 (Developer Consent): the last paragraph discusses signing keys, but it neglects to discuss the difficulty of migrating apps from one key to another, and the limited expressiveness of the policy for updating an application. Barrera’s Baton paper at WiSec'14 has some nice context on the problem. I’m not sure what the current upgrade policy is. The Baton paper has a great anecdote of Google having to create a whole new app to migrate to a new key (I think for the Google Authenticator app, if memory serves). This is a great opportunity to discuss how the goals sometimes collide and how Android chose to resolve them, and why.
  • Section 4.1.3 (User Consent): the discussion provides nice high level statements and design philosophy, but it doesn’t get into any of the “failures” and solutions that have occurred over time.
  • Section 4.2 (Authentication) doesn’t discuss the smear problem with pass patterns. Aviv has a WOOT paper on this, if I remember correctly. It also was not clear that the pattern did not fall into the “swipe-only lockscreens” classification mentioned in the second paragraph.
  3. I wonder if the subsection on consent should be pushed later in the section. It is probably the weakest in terms of technical content. However, if other content such as app permissions were discussed first, perhaps it could be more informative.
  4. Section 4.3.1 (Permissions): Special Access Permissions: can you give an example?
  5. Section 4.3.1 (Permissions): the five classes seem redundant with the protection levels. Is there any way to streamline these discussions into a single presentation?
  6. Section 4.3.2 (Application Sandbox): There is no discussion of how UIDs are used to also encode multiple physical users. Table 1 indicates SEAndroid policy is also being used for that now, so maybe things have changed since I last looked? In general, there is a lot of detail in Table 1 that I’d love to see deeper explanations of. How does scoped storage work? How is SEAndroid being used for physical user separation and per-app sandboxes? What were the /proc and /sys considerations? How does seccomp vs SELinux compare for protecting the kernel? Similarly, Sections 4.3.3 and 4.3.4 don’t go into any of the interesting evolution of protections that is listed in Tables 2 and 3.
  7. Section 4.8 (patching): it would also be nice to go into more detail on Treble. Does Treble also impact the security isolation? It was not mentioned there.

Overall, there isn’t a clear narrative to Section 4. It jumps around to different topics.

Minor Writing Considerations:

  • The 15 threats are unwieldy to reference purely by number. The reader has to constantly “flip” back and forth … which is fine if the paper is printed, but on a tablet computer, it means swiping back and forth a lot, and at some point, I just gave up. I highly recommend embedding more semantics into the threat names. You already group them by type. For example, T{1-4} -> T.P.{1-4}, where the “P” stands for physical access. The other groupings could have similar names, e.g., T.N.{1-2} for network, etc. When doing this, it would be nice if either the lowest or highest number represents the most powerful adversary, and the other end of the spectrum represents the least powerful adversary, strictly ordering the threats by adversarial capability when possible. This would make referring to the threats later in the text super intuitive.
  • When referring to the goals at the beginning of Section 4, it would be helpful to have a parenthetical short name for each goal, e.g., (4) (safe reset). This would remind the reader of the goal without them needing to flip back and forth. Something similar could be done with the threats, but the number scheme mentioned above might be sufficient.
  • Consider putting the examples at the end of Section 4.1 into the signpost. They might provide good context through which the subsection topics can be discussed.
  • Section 4.2 should have a 4.2.1 and 4.2.2. Simply number the earlier discussion so there isn’t a hanging .1

Additional Questions:

Does the paper present innovative ideas or material?: Yes

In what ways does this paper advance the field?: The paper presents how the Android platform has innovated security protections over the past decade.

Is the information in the paper sound, factual, and accurate?: Yes

If not, please explain why.:

Does this paper cite and use appropriate references?: Yes

If not, what important references are missing?:

Is the treatment of the subject complete?: Yes

If not, What important details / ideas/ analyses are missing?:

Should anything be deleted from or condensed in the paper?: No

If so, please explain.:

Please help ACM create a more efficient time-to-publication process: Using your best judgment, what amount of copy editing do you think this paper needs?: None

Most ACM journal papers are researcher-oriented. Is this paper of potential interest to developers and engineers?: Yes

Review 2: major revision



This paper defines the threat and security models for Android and describes how the past and current implementations of the Android Open Source Project (AOSP) enforce the security model with multiple interacting security measures on different layers, mainly including consent, (user) authentication, isolation and containment, data-at-rest encryption, exploit mitigation, and system integrity. The paper is not meant to propose new techniques or a systematization of knowledge over existing work on a research topic. It sheds light on how Android is (and has been) designed and implemented from the security perspective. Although the paper is quite informative, it is not clear how a security researcher benefits from it. Most of the issues discussed are already known to academic researchers and not treated in depth. The article reveals Google’s security design philosophy and practice, but does not elevate it to a higher and more generic level from which a researcher could benefit from Google’s experience, practice, and insights in Android security.

Detailed comments.

  • The security model presented in the paper seems to be geared for mobile devices with frequent human interaction. As the multiple-party consent underpins the philosophy of Android security, the involvement of users seems indispensable. This may leave out those Android devices (e.g., IoT devices in cyberphysical systems) which operate with few user interactions and more specialized applications. How would the current security model deal with those devices?
  • It is far-fetched to define the Android platform security model simply with five apparently disconnected rules: multiparty consent, open access, compatibility requirement, factory reset and application being principal. A reader would expect a “security model” to present a more coherent, systematic and rigorous treatment. Perhaps it is better to claim that these are five essential design guidelines or principles for Android security. Since some of the rules are not established by the necessity of security from the technology perspective, it would be interesting to know how Google identified them out of business or other practical incentives.
  • The paper only briefly touches exploit mitigation when describing Android implementations (Section 4). The length is disproportionally short as compared with its importance, as most attacks endusers encounter are exploits from malicious apps. For example, malware may use repackaging or abuse Android UI-related APIs (e.g., clickjacking) to attack the users. How does Android security cope with such threats?
  • Section 4.3.4 “Sandboxing the kernel”. The problems here refer to rogue or vulnerable device drivers in the kernel space. However, this subsection does not provide sufficient information about how the drivers are sandboxed within the kernel (if any).
  • The notion of tamper resistant hardware (TRH) is not clear enough. According to the description, it differs from TEE which usually refers to the environment inside the ARM processor’s Secure World. If TRH refers to a secure co-processor, how prevalent is it available in commodity Android devices?

Additional Questions:

Does the paper present innovative ideas or material?: No

In what ways does this paper advance the field?: The paper does not advance the field. It reviews the security model and implementation in Android

Is the information in the paper sound, factual, and accurate?: Yes

If not, please explain why.:

Does this paper cite and use appropriate references?: Yes

If not, what important references are missing?:

Is the treatment of the subject complete?: No

If not, What important details / ideas/ analyses are missing?: The manuscript may cover more on Android’s exploit mitigation techniques, especially upon user interface attacks.

Should anything be deleted from or condensed in the paper?: No

If so, please explain.:

Please help ACM create a more efficient time-to-publication process: Using your best judgment, what amount of copy editing do you think this paper needs?: None

Most ACM journal papers are researcher-oriented. Is this paper of potential interest to developers and engineers?: Yes

Version 5: ACM Transactions on Privacy and Security (TOPS), major revision 2020

Again, our great thanks go to anonymous reviewer 1! In addition to pointing out additional references that fit well into the current draft and specific areas to describe in more detail, this review also suggested some editorial changes such as renumbering the threats or including short names for rules for easier cross-referencing. Reviewer 2 also pointed out some avenues for improvement, including sandboxing of kernel components, use of TRH, and UI attacks.

After asking for a deadline extension for submitting the revision (we wanted to work on most of the feedback and were slowed down by the ongoing pandemic), we submitted our major revision (I’ll call it paper version 5) on November 11, 2020.

Our major revision had a few components: updating to Android 11, a completely new first draft of a formalization, additional details on lessons learned throughout the years, comments on UI attacks and more details on project Treble, as well as the removal of redundant descriptions that had crept in over previous iterations. All of this was explained in much more detail in a review response submitted alongside the major revision, together with a full diff w.r.t. the original TOPS submission.

On January 5, 2021, we received the result of our major revision submission: a verdict of minor revision, with this summary by the Associate Editor:

Even though reviewer 1 recommended accept, the reviewer has some concerns with Appendix A. Reviewer 2 has similar concerns on Appendix A, in addition to several other comments. Please revise the paper accordingly.

Review 1: accept


I am largely happy with the changes made to the manuscript, and I feel that it is in an acceptable form.

However, I found the appendix addition a bit odd. The end of section 3 states that Appendix A is an “first, albeit incomplete, formalization of the access control properties of these rules”. This phrasing completely undercuts the value of the appendix and makes me think it should be excluded from the paper.

While I’m not going to require it, I feel like the content of the appendix should be integrated with Section 3. Just call it what it is: a high level formalization. You can then use this formalization as appropriate in the remainder of the paper to where appropriate.

Along these lines, I don’t think you need to include a formalization for Rule 3 (compatibility). Given its placement in the appendix, I wonder if it should also be listed last in Section 3. To some extent, it isn’t a traditional security model property, but I agree with the authors that treating security as a “compatibility” requirement is important in order to make vendors take it seriously.

Minor Notes:

  • please take a grammatical pass at the new text. E.g, “Goodle Play” in the blue text on Page 26 of the -diff PDF.

Additional Questions:

Does the paper present innovative ideas or material?: Yes

In what ways does this paper advance the field?:

Is the information in the paper sound, factual, and accurate?: Yes

If not, please explain why.:

Does this paper cite and use appropriate references?: Yes

If not, what important references are missing?:

Is the treatment of the subject complete?: Yes

If not, What important details / ideas/ analyses are missing?:

Should anything be deleted from or condensed in the paper?: No

If so, please explain.:

Please help ACM create a more efficient time-to-publication process: Using your best judgment, what amount of copy editing do you think this paper needs?: None

Most ACM journal papers are researcher-oriented. Is this paper of potential interest to developers and engineers?: Yes

Review 2: minor revision


Thank authors for addressing my concerns in the first round review. Below are my comments upon the revised manuscript.

  • It is appreciated that the authors made the efforts to formalize the Android platform security model as presented in Appendix A. Nonetheless, since the revision states in the end of Section 3 that the rules in the model “evolved from practical experience instead of a top-down theoretical design”, the formalization is unneeded and inappropriate to some extent. The purpose of formalization is not to introduce notations, but to have a top-down theoretic design or analysis, which is exactly the current model does not have. It is perfectly fine and fair for the authors to make a summary of important rules from Android’s design practice. There is no need to force through a formalization. In my opinion, the authors can drop Appendix A and perhaps add a caveat in Section 3 explaining that the meaning of “model” in the paper is somehow different from its conventional use so that readers would have a proper expectation when reading subsequent sections.
  • Line 215. (Section 2.2). The chosen ciphertext attacks such as CCA and CPA are defined under the context of formalizing the security strength of encryption ciphers. They are not applicable for network communications models.
  • Page 13, the paper argues that biometric authentication is less secure than primary authentication using password or PIN. As far as I know, biometrics has much larger entropy than passwords or PINs can offer. Any citation to support your observation? The paper also categorize secondary authentication modalities into three sub-tiers. However, there is no description of these sub-tiers. Please elaborate them.
  • Page 14. Section 4.2.2 and Section 4.2.3. Authenticating to third parties should not be considered as the authentication issue AOSP is concerned. These issues are orthogonal to Android authentication.
  • Page 21. Section 4.3.5. Why are user space components init, uneventd and vold part of the TCB below the kernel?
  • Page 21. Section 4.3.5. Are “Android key store” and “Android keystore” the same?
  • Page 22. Please clarify the security assurance of protected confirmation. The claim that “even a full compromised kernel cannot lead to creating these signed confirmation” is misleading, though it is true under the assumptions. It gives readers the wrong impression that it is secure against a compromised kernel. In fact, the malicious kernel can directly manipulate the app’s authentication logic to bypass the protected confirmation or force it to return TRUE.
  • Section 4.8. I guess HAL stands for Hardware Abstraction Layer. Please use the full name before using its abbreviation.

Some writing issues.

  • Reading of the paper can be more enjoyable if there is less use of parentheses. It is quite annoying and distracting to embedded many supplementary explanations and examples in the lines, especially when they are relatively long.
  • It is a common practice to add a comma after “e.g.” and “i.e.”.
  • Please use “Section 4.2” instead of “section 4.2”. The same applies to references to tables and figures.
  • Line 597, “or should surfing” should be “or shoulder surfing”
  • Line 608, " … kernel compromise confer the ability …" should be “… kernel compromise confers the …”
  • Line 770, “…xml). to be …” should be “….xml) to be”

Additional Questions:

Does the paper present innovative ideas or material?: No

In what ways does this paper advance the field?: The paper does not advance the field. It reviews the security model and implementations in Android.

Is the information in the paper sound, factual, and accurate?: Yes

If not, please explain why.:

Does this paper cite and use appropriate references?: Yes

If not, what important references are missing?:

Is the treatment of the subject complete?: Yes

If not, What important details / ideas/ analyses are missing?:

Should anything be deleted from or condensed in the paper?: No

If so, please explain.: I would recommend the authors to remove Appendix A. The security model is derived from practice. Representing 5 disconnected rules with notations does not serve the purpose of systemizing and generalizing practices to a theoretic model with abstraction.

Please help ACM create a more efficient time-to-publication process: Using your best judgment, what amount of copy editing do you think this paper needs?: Light

Most ACM journal papers are researcher-oriented. Is this paper of potential interest to developers and engineers?: Yes

Version 5a: arXiv 2020

This version (the only one published with the draft of a model rules formalization) was also uploaded – with only minor formatting changes – to arXiv as an update to the previously published version 3 as arXiv:1904.05572v2 (having grown to 38 pages in 1-column style) on December 14, 2020.

Version 6: ACM Transactions on Privacy and Security (TOPS), minor revision 2021

In this final, minor revision, the main change was to remove the incomplete formalization of platform security model rules again (bringing the paper back down to the limit of 35 pages), alongside minor additions and clarifications, e.g., on the different biometric authentication classes and other unlock mechanisms, or on what is considered part of the TCB. This version 6 was submitted on January 23, 2021 and accepted on January 29, 2021.

We submitted the ‘camera-ready’ version (our last one compiled locally) on February 5, 2021 and received a first proof from ACM production editors on April 1, 2021, giving us 5 days (over the Easter holidays…) to mark any potential issues. As seems to be usual at this stage, the editing introduced a number of minor to medium errors (mostly through some, erm, opinionated but unfortunately inconsistent changes concerning acronyms), and the extremely short response time (after the editors had taken 2 months for their editing) left us a bit … flustered. Another round of reviewing editing proofs later (with no further feedback on, or acknowledgment of, the errors we pointed out again in the second proof), the final publication went online on April 28, 2021 under open access.


While it has taken nearly 3 years from starting to work on this paper until final online publication and the whole process was quite frustrating at times, we are very happy about the final result.

The main lessons are that

  • iterations help the quality of a paper: Both internal (among your own group) and external review rounds turn up recommendations for changes that generally improve a paper, but only up to a certain number of rounds, as we saw, e.g., with the addition and later removal of the formalization, or with the need to hunt down redundancies that had crept in after a few iterations.
  • persistence can pay off: Don’t give up after the first (or second, or third) rejection, but try to find room for improvement. Of course, some papers will never make it into the ‘tier A’ venues of a given field, and deciding whether another iteration will improve the work enough to make it publishable is never easy.
  • good reviews are rare, but when you get them, they are tremendously helpful: Pick the venues you submit to based on the best possible reviews you can expect; good reviews are highly valuable even if they come with a reject verdict. Targeting the wrong academic venue or paper category wastes time and effort without helping to improve the final outcome. For systems security work, USENIX Security is very hard to top in terms of review quality.
  • for some types of papers, journals may be a better fit than conferences: Even though conferences are the main publication venues in security, and journals often seem too slow for fast-paced research and development, some kinds of work – like our summary/systematic-overview type of paper – may have a higher chance of acceptance at journals, both because they tend to accept longer, more ‘matured’ manuscripts and because of the built-in multiple stages of the review process. It is an excellent development that many of the main security conferences are now adopting a revision model instead of binary accept/reject decisions, giving authors the chance to address (major or minor) points instead of having to start over from scratch. However, most conferences only allow a single revision, while for journals the succession of major revision → minor revision → accept is quite common.
René Mayrhofer
Professor of Networks and Security & Director of Engineering at Android Platform Security; pacifist, privacy fan, recovering hypocrite; generally here to question and learn