We see several possible future directions for this work:
– Type inference: we plan to investigate how type inference could be automated or partially automated for the span type system. We will study typing by constraint generation and explore whether existing off-the-shelf solvers or new procedures could solve these constraints. Preliminary results (see [5]) show that the case of work is manageable, and it generates a set of constraints close to the one in [4]. However, the case of span could require more sophisticated reasoning because of the strong distinction between servers and channels as time advances.
– We have mentioned that our type system for span is not suited to analysing some concurrent systems, such as the simple semaphore example (Sect. 4.2). However, we believe that a type system based on an adaptation of usages [27,30,29] could be promising for this purpose.
– It would be challenging to examine whether similar type systems could be developed to account for some other complexity properties, for instance to extract the number of parallel processes needed to achieve the span.
Acknowledgements We are grateful to Naoki Kobayashi for suggesting the definition of annotated processes and their reduction that we use in this paper.
This work was supported by the LABEX MILYON (ANR-10-LABX-0070) of Université de Lyon.
References
1. Akl, S.G.: Encyclopedia of Parallel Computing, chap. Bitonic Sort, pp. 139–146.
Springer US, Boston, MA (2011)
2. Albert, E., Correas, J., Johnsen, E.B., Román-Díez, G.: Parallel cost analysis of distributed systems. In: Static Analysis - 22nd International Symposium, SAS 2015, Saint-Malo, France, September 9-11, 2015, Proceedings. Lecture Notes in Computer Science, vol. 9291, pp. 275–292. Springer (2015)
3. Albert, E., Flores-Montoya, A., Genaim, S., Martin-Martin, E.: Rely-guarantee termination and cost analyses of loops with concurrent interleavings. Journal of Automated Reasoning 59(1), 47–85 (2017)
4. Avanzini, M., Dal Lago, U.: Automating sized-type inference for complexity analysis. Proceedings of the ACM on Programming Languages 1(ICFP), 43 (2017)
5. Baillot, P., Ghyselen, A.: Types for Complexity of Parallel Computation in Pi-calculus (Technical Report) (Oct 2020), https://hal.archives-ouvertes.fr/hal-02961427, working paper or preprint
6. Dal Lago, U., Gaboardi, M.: Linear dependent types and relative completeness. In:
Logic in Computer Science (LICS), 2011 26th Annual IEEE Symposium on. pp.
133–142. IEEE (2011)
7. Dal Lago, U., Martini, S., Sangiorgi, D.: Light logics and higher-order processes.
Mathematical Structures in Computer Science 26(6), 969–992 (2016)
8. Dardha, O., Giachino, E., Sangiorgi, D.: Session types revisited. Information and Computation 256, 253–286 (2017)
9. Das, A., Hoffmann, J., Pfenning, F.: Parallel complexity analysis with temporal session types. Proc. ACM Program. Lang. 2(ICFP), 91:1–91:30 (2018)
10. Das, A., Hoffmann, J., Pfenning, F.: Work analysis with resource-aware session types. In: Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2018, Oxford, UK, July 09-12, 2018. pp. 305–314. ACM (2018)
11. Degano, P., Gadducci, F., Priami, C.: Causality and replication in concurrent processes. In: Perspectives of System Informatics. pp. 307–318. Springer Berlin Heidelberg (2003)
12. Degano, P., Priami, C.: Causality for mobile processes. In: Automata, Languages and Programming. pp. 660–671. Springer Berlin Heidelberg (1995)
13. Demangeon, R., Yoshida, N.: Causal computational complexity of distributed processes. In: Proceedings of the 33rd Annual ACM/IEEE Symposium on Logic in Computer Science. pp. 344–353. LICS '18, ACM (2018)
14. Di Giamberardino, P., Dal Lago, U.: On session types and polynomial time. Mathematical Structures in Computer Science (2015)
15. Gaboardi, M., Marion, J., Rocca, S.R.D.: A logical account of pspace. In: Proceedings of the 35th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2008, San Francisco, California, USA, January 7-12, 2008.
pp. 121–131. ACM (2008)
16. Giachino, E., Johnsen, E.B., Laneve, C., Pun, K.I.: Time complexity of concurrent programs - a technique based on behavioural types. In: Formal Aspects of Component Software - 12th International Conference, FACS 2015, Niterói, Brazil, October 14-16, 2015, Revised Selected Papers. Lecture Notes in Computer Science, vol. 9539, pp. 199–216. Springer (2016)
17. Gimenez, S., Moser, G.: The complexity of interaction. In: Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL 2016, St. Petersburg, FL, USA, January 20-22, 2016. pp. 243–255 (2016)
18. Hainry, E., Marion, J.Y., Péchoux, R.: Type-based complexity analysis for fork processes. In: Foundations of Software Science and Computation Structures - 16th International Conference, FOSSACS 2013, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013, Rome, Italy, March 16-24, 2013. Proceedings. Lecture Notes in Computer Science, vol. 7794, pp. 305–320. Springer (2013)
19. Harper, R.: Practical Foundations for Programming Languages. Cambridge University Press (2012)
20. Hoffmann, J., Aehlig, K., Hofmann, M.: Multivariate amortized resource analysis.
ACM Trans. Program. Lang. Syst. 34(3), 14:1–14:62 (2012)
21. Hoffmann, J., Aehlig, K., Hofmann, M.: Resource aware ML. In: Computer Aided Verification - 24th International Conference, CAV 2012, Berkeley, CA, USA, July 7-13, 2012, Proceedings. Lecture Notes in Computer Science, vol. 7358, pp. 781–786.
Springer (2012)
22. Hoffmann, J., Hofmann, M.: Amortized resource analysis with polynomial potential. In: Programming Languages and Systems, 19th European Symposium on Programming, ESOP 2010, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2010, Paphos, Cyprus, March 20-28, 2010. Proceedings. Lecture Notes in Computer Science, vol. 6012, pp. 287–306. Springer (2010)
23. Hoffmann, J., Shao, Z.: Automatic static cost analysis for parallel programs. In:
Vitek, J. (ed.) Programming Languages and Systems. pp. 132–157. Springer Berlin Heidelberg, Berlin, Heidelberg (2015)
24. Hofmann, M.: Linear types and non-size-increasing polynomial time computation.
Information and Computation 183(1), 57–85 (2003)
25. Hofmann, M., Jost, S.: Static prediction of heap space usage for first-order functional programs. In: Conference Record of POPL 2003: The 30th SIGPLAN-SIGACT Symposium on Principles of Programming Languages, New Orleans, Louisiana, USA, January 15-17, 2003. pp. 185–197. ACM (2003)
26. Hughes, J., Pareto, L., Sabry, A.: Proving the correctness of reactive systems using sized types. In: Proceedings of the 23rd ACM SIGPLAN-SIGACT symposium on Principles of programming languages. pp. 410–423. ACM (1996)
27. Kobayashi, N.: A type system for lock-free processes. Information and Computation 177(2), 122–159 (2002)
28. Kobayashi, N.: Type systems for concurrent programs. In: Formal Methods at the Crossroads. From Panacea to Foundational Support, pp. 439–453. Springer (2003)
29. Kobayashi, N.: Type-based information flow analysis for the π-calculus. Acta Informatica 42(4-5), 291–347 (2005)
30. Kobayashi, N.: A new type system for deadlock-free processes. In: International Conference on Concurrency Theory. pp. 233–247. Springer (2006)
31. Kobayashi, N., Pierce, B.C., Turner, D.N.: Linearity and the pi-calculus. ACM Trans. Program. Lang. Syst. 21(5), 914–947 (1999)
32. Madet, A., Amadio, R.M.: An elementary affine λ-calculus with multithreading and side effects. In: Typed Lambda Calculi and Applications - 10th International Conference, TLCA 2011, Novi Sad, Serbia, June 1-3, 2011. Proceedings. Lecture Notes in Computer Science, vol. 6690, pp. 138–152. Springer (2011)
33. Marion, J.Y.: A type system for complexity flow analysis. In: Proceedings of the 26th Annual IEEE Symposium on Logic in Computer Science, LICS 2011, June 21- 24, 2011, Toronto, Ontario, Canada. pp. 123–132. IEEE Computer Society (2011)
34. Sangiorgi, D., Walker, D.: The Pi-Calculus: A Theory of Mobile Processes. Cambridge University Press (2003)
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Transactional Consistency Models
Sidi Mohamed Beillahi, Ahmed Bouajjani, and Constantin Enea
Université de Paris, IRIF, CNRS, Paris, France, {beillahi,abou,cenea}@irif.fr
Abstract. Concurrent accesses to databases are typically encapsulated in transactions in order to enable isolation from other concurrent computations and resilience to failures. Modern databases provide transactions with various semantics corresponding to different trade-offs between consistency and availability. Since a weaker consistency model provides better performance, an important issue is investigating the weakest level of consistency needed by a given program (to satisfy its specification). As a way of dealing with this issue, we investigate the problem of checking whether a given program has the same set of behaviors when replacing a consistency model with a weaker one. This property, known as robustness, generally implies that any specification of the program is preserved when weakening the consistency. We focus on the robustness problem for consistency models which are weaker than standard serializability, namely, causal consistency, prefix consistency, and snapshot isolation. We show that checking robustness between these models is polynomial-time reducible to a state reachability problem under serializability. We use this reduction to also derive a pragmatic proof technique based on Lipton's reduction theory that allows proving programs robust. We have applied our techniques to several challenging applications drawn from the literature of distributed systems and databases.
Keywords: Transactional databases · Weak consistency · Program verification
1 Introduction
Concurrent accesses to databases are typically encapsulated in transactions in order to enable isolation from other concurrent computations and resilience to failures. Modern databases provide transactions with various semantics corresponding to different trade-offs between consistency and availability. The strongest consistency level is achieved with serializable transactions [42], whose outcome in concurrent executions is the same as if the transactions were executed atomically in some order. Since serializability (SER) carries a significant penalty on availability, modern databases often provide weaker consistency models, e.g.,
This work is supported in part by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 678177).
©The Author(s) 2021
N. Yoshida (Ed.): ESOP 2021, LNCS 12648, pp. 87–117, 2021.
https://doi.org/10.1007/978-3-030-72019-3_4
causal consistency (CC) [38], prefix consistency (PC) [22, 25], and snapshot isolation (SI) [12]. Causal consistency requires that if a transaction t1 "affects" another transaction t2, e.g., t1 executes before t2 in the same session or t2 reads a value written by t1, then the updates in these two transactions are observed by any other transaction in this order. Concurrent transactions, which are not causally related to each other, can be observed in different orders, leading to behaviors that are not possible under SER. Prefix consistency requires that there is a total commit order between all the transactions such that each transaction observes all the updates in a prefix of this sequence (PC is stronger than CC). Two transactions can observe the same prefix, which leads to behaviors that are not admitted by SER. Snapshot isolation further requires that two different transactions observe different prefixes if they both write to a common variable.
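The difference between these models can be made concrete with the classic lost-update program, sketched here as a toy Python simulation (an illustration of ours, not taken from the paper): two transactions each read x and write x+1. Under SER the increments compose; under PC both transactions may observe the same prefix, so one update is lost; SI forbids this outcome because both transactions write the common variable x.

```python
def increment(snapshot):
    """Transaction: read x from its snapshot, write x + 1 (as a write set)."""
    return {"x": snapshot["x"] + 1}

def run_serial(store):
    # SER-like execution: each transaction observes the state left by
    # the previous one, so both increments take effect.
    for _ in range(2):
        store.update(increment(dict(store)))
    return store

def run_same_prefix(store):
    # PC-admitted anomaly: both transactions observe the same prefix
    # (here, the initial state), so one increment is lost. SI rules this
    # out because both transactions write the common variable x.
    snap = dict(store)
    w1, w2 = increment(snap), increment(snap)
    store.update(w1)
    store.update(w2)
    return store

print(run_serial({"x": 0}))       # {'x': 2}
print(run_same_prefix({"x": 0}))  # {'x': 1} -- the lost update
```

This kind of small litmus program is exactly what a robustness analysis must classify: the outcome x = 1 witnesses non-robustness against substituting SI with PC.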
Since a weaker consistency model provides better performance, an important issue is identifying the weakest level of consistency needed by a program (to satisfy its specification). One way to tackle this issue is checking whether a program P designed under a consistency model S has the same behaviors when run under a weaker consistency model W. This property of a program is generally known as robustness against substituting S with W. It implies that any specification of P is preserved when weakening the consistency model (from S to W). Preserving any specification is convenient since specifications are rarely present in practice.
The problem of checking robustness for a given program has been investigated in several recent works, but only when the stronger model (S) is SER, e.g., [9, 10, 19, 26, 13, 40], or sequential consistency in the non-transactional case, e.g., [36, 15, 29]. However, there is a large class of specifications that can be implemented even in the presence of "anomalies", i.e., behaviors which are not admitted under SER (see [46] for a discussion). In this context, an important question is whether a certain implementation (program) is robust against substituting a weak consistency model, e.g., SI, with a weaker one, e.g., CC.
In this paper, we consider the sequence of increasingly strong consistency models mentioned above, CC, PC, and SI, and investigate the problem of checking robustness for a given program against weakening the consistency model to one in this range. We study the asymptotic complexity of this problem and propose effective techniques for establishing robustness based on abstraction. There are two important cases to consider: robustness against substituting SI with PC and PC with CC, respectively. Robustness against substituting SI with CC can be obtained as the conjunction of these two cases.
In the first case (SI vs PC), checking robustness for a program P is reduced to a reachability (assertion checking) problem in a composition of P under PC with a monitor that checks whether a PC behavior is an "anomaly", i.e., admitted by P under PC, but not under SI. This approach raises two non-trivial challenges: (1) defining a monitor for detecting PC vs SI anomalies that uses a minimal amount of auxiliary memory (to remember past events), and (2) determining the complexity of checking if the composition of P with the monitor reaches a specific control location¹ under the (weaker) model PC. Interestingly enough,
1 We assume that the monitor goes to an error location when detecting an anomaly.
we address these two challenges by studying the relationship between these two weak consistency models, PC and SI, and serializability. The construction of the monitor is based on the fact that the PC vs SI anomalies can be defined as, roughly, the difference between the PC vs SER and SI vs SER anomalies (investigated in previous work [13]), and we show that the reachability problem under PC can be reduced to a reachability problem under SER. These results lead to a polynomial-time reduction of this robustness problem (for arbitrary programs) to a reachability problem under SER, which is important from a practical point of view since the SER semantics (as opposed to the PC or SI semantics) can be encoded easily in existing verification tools (using locks to guard the isolation of transactions). These results also enable a precise characterization of the complexity class of this problem.
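The remark about encoding the SER semantics with locks can be sketched as follows (a hypothetical illustration of ours, not the paper's actual tool encoding): a single global lock guarding every transaction body makes each transaction atomic, so every concurrent execution is equivalent to some serial order.

```python
import threading

class SerStore:
    """A store whose transactions run under a SER-like semantics:
    one global lock makes each transaction body atomic."""

    def __init__(self):
        self._lock = threading.Lock()
        self.data = {}

    def transaction(self, body):
        # Holding the lock for the whole body guarantees that any
        # interleaving of transactions is equivalent to a serial order.
        with self._lock:
            body(self.data)

def t_incr(data):
    # Transaction: read x, write x + 1 (a lost update would be
    # possible if the body were not guarded by the lock).
    data["x"] = data.get("x", 0) + 1

store = SerStore()
threads = [threading.Thread(target=store.transaction, args=(t_incr,))
           for _ in range(10)]
for th in threads:
    th.start()
for th in threads:
    th.join()
print(store.data["x"])  # 10: all increments survive under the SER encoding
```

The same pattern is why SER is convenient for off-the-shelf verification tools: a reachability query only has to consider executions where transaction bodies never interleave.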
Checking robustness against substituting PC with CC is reduced to the problem of checking robustness against substituting SER with CC. The latter has been shown to be polynomial-time reducible to reachability under SER in [10]. This surprising result relies on the reduction from PC reachability to SER reachability mentioned above. This reduction shows that a given program P reaches a certain control location under PC iff a transformed program P′, where essentially, each transaction is split in two parts, one part containing all the reads, and one part containing all the writes, reaches the same control location under SER. Since this reduction preserves the structure of the program, CC vs PC anomalies of a program P correspond to CC vs SER anomalies of the transformed program P′.
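The read/write split underlying this reduction can be sketched as a simple program transformation (a hypothetical modelling of ours: transactions are lists of ('r', var) and ('w', var, val) operations; the paper's actual reduction also handles control flow and values):

```python
def split_transaction(txn):
    """Separate a transaction into its read operations and its write
    operations, preserving their relative order within each part."""
    reads = [op for op in txn if op[0] == "r"]
    writes = [op for op in txn if op[0] == "w"]
    return reads, writes

def split_program(program):
    """Replace every transaction by two transactions: its read part
    followed by its write part, keeping the program structure."""
    transformed = []
    for txn in program:
        reads, writes = split_transaction(txn)
        transformed.extend([reads, writes])
    return transformed

p = [[("r", "x"), ("w", "y", 1)],
     [("r", "y"), ("w", "x", 2)]]
print(split_program(p))
# [[('r', 'x')], [('w', 'y', 1)], [('r', 'y')], [('w', 'x', 2)]]
```

Because each original transaction maps to an adjacent read/write pair, control locations of P carry over directly to the transformed program, which is what makes the reachability questions under PC and SER correspond.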
Beyond enabling these reductions, the characterization of classes of anomalies and the reduction from the PC semantics to the SER semantics are also important for a better understanding of these weak consistency models and the differences between them. We believe that these results can find applications beyond robustness checking, e.g., verifying conformance to given specifications.
As a more pragmatic approach for establishing robustness, which avoids a non-reachability proof under SER, we have introduced a proof methodology that builds on Lipton's reduction theory [39] and the concept of commutativity dependency graph introduced in [9], which represents mover-type dependencies between the transactions in a program. We give sufficient conditions for robustness in all the cases mentioned above, which characterize the commutativity dependency graph associated with a given program.
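As a rough illustration of the idea (a much-simplified sketch of ours; the mover analysis of [9, 39] is finer-grained), a commutativity dependency graph can be approximated by connecting transactions whose read/write sets conflict:

```python
def accesses(txn):
    """Collect the read set and write set of a transaction given as a
    list of ('r', var) / ('w', var, val) operations."""
    reads = {op[1] for op in txn if op[0] == "r"}
    writes = {op[1] for op in txn if op[0] == "w"}
    return reads, writes

def commute(t1, t2):
    # Coarse approximation: two transactions commute unless one writes
    # a variable that the other reads or writes.
    r1, w1 = accesses(t1)
    r2, w2 = accesses(t2)
    return not (w1 & (r2 | w2) or w2 & r1)

def dependency_graph(program):
    """Nodes are transaction indices; an edge links every pair of
    transactions that fail the commutativity check."""
    edges = set()
    for i, t1 in enumerate(program):
        for j in range(i + 1, len(program)):
            if not commute(t1, program[j]):
                edges.add((i, j))
    return edges

p = [[("r", "x"), ("w", "y", 1)],  # T0
     [("r", "y")],                 # T1: reads what T0 writes
     [("w", "z", 3)]]              # T2: independent of T0 and T1
print(dependency_graph(p))  # {(0, 1)}
```

Sufficient conditions for robustness can then be phrased as structural properties of this graph, e.g., the absence of certain cycles through non-commuting transactions.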
We tested the applicability of these verification techniques on a benchmark containing seven challenging applications extracted from previous work [30, 34, 19]. These techniques are precise enough for proving or disproving the robustness of all these applications, for all combinations of the consistency models.
Complete proofs and more details can be found in [11].