Consistency is one of the fundamental issues of distributed computing. There are many competing consistency models, with subtly different power in principle. In practice, the well-known Consistency-Availability-Partition Tolerance trade-off translates into difficult choices between fault tolerance, performance, and programmability. The issues and trade-offs are particularly vexing at scale, with a large number of processes or a large shared database, and in the presence of high latency and failure-prone networks. It is clear that there is no single universally best solution.

Possible approaches cover the whole spectrum between strong and eventual consistency. Strong consistency (total ordering via, for example, linearizability or serializability) provides familiar and intuitive semantics but requires slower and, in some contexts, fragile coordination. The unlimited parallelism allowed by weaker models such as eventual consistency promises high performance, but divergence and conflicts make it difficult to ensure useful application invariants, and metadata is hard to keep in check. The research and development communities are actively exploring intermediate models (replicated data types, monotonic programming, CRDTs, LVars, causal consistency, red-blue consistency, invariant- and proof-based systems, etc.), designed to improve efficiency, programmability, and overall operation without negatively impacting scalability.
This workshop aims to investigate the principles and practice of weak consistency models for large-scale, distributed shared-data systems. It brings together theoreticians and practitioners from different backgrounds: system development, distributed algorithms, concurrency, fault tolerance, databases, languages, and verification, spanning both academia and industry.
A CRDT is a data type specially designed to allow multiple instances to be replicated and modified without coordination, while providing an automatic mechanism to merge concurrent updates that guarantees eventual consistency. In this paper we present a ...
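As a minimal illustration of the CRDT concept described here (a sketch only, not the construction from the paper), a grow-only counter lets each replica update independently and merges states by taking the element-wise maximum of per-replica counts:

```python
# Minimal G-Counter CRDT sketch: each replica increments only its own slot,
# and merge takes the element-wise maximum, so all replicas converge to the
# same total once they have exchanged state (eventual consistency).
class GCounter:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}          # replica id -> increments originating there

    def increment(self, n=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max is commutative, associative, and idempotent,
        # so concurrent updates merge without coordination.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)

# Two replicas update concurrently and then merge in either order.
a, b = GCounter("a"), GCounter("b")
a.increment(); b.increment(); b.increment()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3
```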
Causal consistency is the strongest consistency model under which low latency and high availability can be achieved. In the past few years, many causally consistent storage systems have been developed. The long-term goal of this initial work is to ...
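One common mechanism behind such systems (a generic sketch, not necessarily the one used in this work) tags each write with the vector clock of its origin replica and delays applying a remote write until all of its causal dependencies are visible locally:

```python
# Causal-delivery check: a remote write is applied only once every
# write it causally depends on has already been applied here.
def causally_ready(local_vc, write_vc, origin):
    """local_vc/write_vc map replica id -> number of writes applied/observed."""
    for rid, count in write_vc.items():
        if rid == origin:
            # The write must be the next one from its origin replica.
            if count != local_vc.get(rid, 0) + 1:
                return False
        elif count > local_vc.get(rid, 0):
            # A dependency from another replica has not been applied yet.
            return False
    return True

# A write from replica "b" that depends on one earlier write from "a":
write = {"a": 1, "b": 1}
assert causally_ready({"a": 1, "b": 0}, write, origin="b")       # deliverable
assert not causally_ready({"a": 0, "b": 0}, write, origin="b")   # dependency missing
```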
The amount of data processed in Data Centres (DCs) keeps growing at such a rate that full replication may become impractical. Replication between DCs is used to increase data availability in the presence of site ...
Out of the many NoSQL databases in use today, some that provide simple data structures for records, such as Redis and MongoDB, are now becoming popular. Building applications out of these complex data types provides a way to communicate intent to the ...
The use cases for Conflict-free Replicated Data Types (CRDTs) that are studied in the literature are limited to collaborative editing applications and data stores. The communication protocols used to distribute replica updates in these scenarios are ...
Modern internet applications require scalability to millions of clients, response times in the tens of milliseconds, and availability in the presence of partitions, hardware faults, and even disasters. To meet these requirements, applications are ...
We propose Lasp, a novel programming model aimed at simplifying correct, large-scale, distributed programming. Lasp leverages ideas from distributed dataflow programming extended with convergent data types. This provides support for computations where not ...
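A generic sketch of the dataflow-over-convergent-data-types idea (illustrative only, not Lasp's actual API): a derived collection is a pure function of a grow-only set, so its value depends only on the merged state, not on the order in which updates arrive:

```python
# Convergent grow-only set plus a "dataflow" derivation over it.
class GSet:
    def __init__(self):
        self.items = set()

    def add(self, x):
        self.items.add(x)

    def merge(self, other):
        self.items |= other.items      # set union: commutative, idempotent

def mapped(source, fn):
    # Derived value is recomputed from the current CRDT state, so any
    # replica with the same merged state observes the same result.
    return {fn(x) for x in source.items}

r1, r2 = GSet(), GSet()
r1.add(1); r2.add(2); r2.add(3)
r1.merge(r2); r2.merge(r1)
assert mapped(r1, lambda x: x * 10) == mapped(r2, lambda x: x * 10) == {10, 20, 30}
```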
Replication has been widely adopted to build highly scalable services, but this goal is often compromised by the coordination required to ensure application-specific properties such as state convergence and invariant preservation. In this paper, we ...
Several recent cloud-backed storage systems advocate the composition of a number of cloud services to improve performance and fault tolerance (e.g., [1, 3, 4]). An interesting aspect of these compositions is that the consistency guarantees they ...
In this paper, we introduce a technique that distributed transactional protocols can use to reduce the vulnerability window of transactions. For this purpose, we propose a so far unexplored (to the best of our knowledge) use of hybrid clocks. ...
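One widely used form of hybrid clock is the hybrid logical clock, which combines a physical component with a logical counter so that timestamps stay close to wall-clock time while still respecting causality across messages. The sketch below assumes millisecond physical timestamps and is not necessarily the variant used in this paper:

```python
import time

class HLC:
    def __init__(self):
        self.l = 0   # physical part of the last timestamp issued
        self.c = 0   # logical counter disambiguating equal physical parts

    def now(self):
        return int(time.time() * 1000)      # physical time in milliseconds

    def send(self):
        # Local event or message send: advance with physical time if possible,
        # otherwise bump the logical counter.
        pt = self.now()
        if pt > self.l:
            self.l, self.c = pt, 0
        else:
            self.c += 1
        return (self.l, self.c)

    def receive(self, ml, mc):
        # Message receive: take the maximum of local, remote, and physical
        # time, adjusting the counter so the new timestamp dominates both.
        pt = self.now()
        new_l = max(self.l, ml, pt)
        if new_l == self.l == ml:
            self.c = max(self.c, mc) + 1
        elif new_l == self.l:
            self.c += 1
        elif new_l == ml:
            self.c = mc + 1
        else:
            self.c = 0
        self.l = new_l
        return (self.l, self.c)

clk = HLC()
t1 = clk.send()
t2 = clk.receive(*t1)     # timestamps compare lexicographically and grow monotonically
assert t2 > t1
```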