Konrad Hinsen
11/28/2024, 9:29 AM

Duncan Cragg
11/28/2024, 11:35 AM

Konrad Hinsen
11/28/2024, 11:55 AM

Duncan Cragg
11/28/2024, 1:18 PM

Duncan Cragg
11/28/2024, 1:20 PM

Tom Larkworthy
11/28/2024, 5:05 PM

Konrad Hinsen
11/29/2024, 7:22 AM

Konrad Hinsen
11/30/2024, 8:07 AM

Tom Larkworthy
12/02/2024, 4:51 PM

shalabh
12/03/2024, 1:27 AM

As a counterpoint, the cost of global strict serializability in Spanner can be so high that you need caches and other denormalized stores for acceptable performance. Adding even one cache immediately defeats any consistency properties, because (Spanner + cache) does not have the same consistency as Spanner alone. You also can't run analytics on Spanner, so you need another copy in an analytics database. All these extensions introduce complexity into the system and need hand-written code to deal with consistency corner cases. In fact, I think the databases that "scale" are a great example of how strong determinism doesn't scale.

What I think would work better is to give up strict serializability across the system, but track the various inconsistencies. Maybe we can have a managed eventually consistent system rather than an ad-hoc one. One way of doing this might be to allow different versions of the same object to exist in different parts of the system, but use version ids or logical timestamps to track the history and the relation between versions. Allow divergence where needed, but use local rules to resolve it. This requires some core principles, like how to use logical timestamps across the entire system, but does not require the whole system itself to be serializable.
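The "track divergence with logical timestamps, resolve with local rules" idea can be sketched minimally with vector clocks. This is an illustrative sketch, not anything from the thread: the names (`Replica`, `Versioned`, `put`, `merge`) and the resolution rule are all invented for the example, and a real system would need tombstones, persistence, and anti-entropy exchange on top.

```python
# Sketch of "managed eventual consistency": each replica keeps its own copy
# of an object plus a vector clock, so divergence is tracked rather than
# prevented. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Versioned:
    value: object
    clock: dict = field(default_factory=dict)  # replica id -> logical timestamp

def dominates(a: dict, b: dict) -> bool:
    """True if clock a has seen everything clock b has."""
    return all(a.get(r, 0) >= t for r, t in b.items())

class Replica:
    def __init__(self, rid: str):
        self.rid = rid
        self.obj = Versioned(None)

    def put(self, value):
        # Local write: bump this replica's own entry in the vector clock.
        clock = dict(self.obj.clock)
        clock[self.rid] = clock.get(self.rid, 0) + 1
        self.obj = Versioned(value, clock)

    def merge(self, other: Versioned, resolve=lambda mine, theirs: theirs):
        # If one version dominates, take it; otherwise the histories have
        # diverged, and a *local* rule (resolve) picks the surviving value.
        if dominates(self.obj.clock, other.clock):
            return  # we already subsume the incoming version
        merged_clock = {r: max(self.obj.clock.get(r, 0), other.clock.get(r, 0))
                        for r in set(self.obj.clock) | set(other.clock)}
        if dominates(other.clock, self.obj.clock):
            value = other.value  # incoming version is strictly newer
        else:
            value = resolve(self.obj.value, other.value)  # concurrent: apply rule
        self.obj = Versioned(value, merged_clock)

# Two replicas diverge, then reconcile without any global serialization point.
a, b = Replica("a"), Replica("b")
a.put("x1")
b.merge(a.obj)                                   # b catches up: no conflict
b.put("y1"); a.put("x2")                         # concurrent writes -> divergence
a.merge(b.obj, resolve=lambda m, t: max(m, t))   # local rule resolves the fork
print(a.obj.value, a.obj.clock)                  # prints: y1 {'a': 2, 'b': 1}
```

The point of the sketch is that the system never blocks for agreement: the vector clocks make the divergence visible (neither clock dominates), and resolution is a local policy decision rather than a property the whole store must enforce.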