High level
- What is the assumed failure model?
- Critique the ZK system in terms of CAP
- What are the liveness and durability guarantees?
- Why is wait-freedom not enough for coordination?
Implementation details
- Why use full paths instead of handles to access znodes?
- Why can zk process reads locally at each replica? How is the stale-local-copy problem handled?
- Consider the case of a new leader changing the configuration: we must ensure that no partial config is ever read, even if the new leader fails in the middle. Why can the lock approach in Chubby not handle this? How can a solution be built with zk? (See the ready-znode sketch after this list.)
- ZK uses a WAL and fuzzy snapshots to ensure exactly-once processing. Why do fuzzy snapshots work, even though a snapshot may not capture the actual state at ANY given time?
- Why do the transactions being broadcast need to be idempotent? Why, in ARIES, do the operations inside the WAL not need to be idempotent? (See the replay sketch after this list.)
- How does the client use heartbeats to maintain its connection with the server?
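
For the configuration-change question above, the ZooKeeper paper's answer is the ready-znode pattern: the writer deletes a designated ready znode, rewrites the config znodes, then recreates ready; readers only trust the config while ready exists, and a watch on ready invalidates anything read concurrently with a change. Below is a minimal sketch using the Java client, assuming illustrative paths under /config and omitting retry and error handling.

```java
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ReadyZnodePattern {
    // Writer (e.g., a new leader): delete "ready", rewrite the config znodes,
    // then recreate "ready". If the writer crashes mid-way, "ready" stays absent
    // and no reader ever acts on the half-written configuration.
    static void publishConfig(ZooKeeper zk, byte[][] parts)
            throws KeeperException, InterruptedException {
        try {
            zk.delete("/config/ready", -1);              // invalidate readers first
        } catch (KeeperException.NoNodeException ignored) {
        }
        for (int i = 0; i < parts.length; i++) {
            String path = "/config/part-" + i;
            if (zk.exists(path, false) == null) {
                zk.create(path, parts[i], ZooDefs.Ids.OPEN_ACL_UNSAFE,
                        CreateMode.PERSISTENT);
            } else {
                zk.setData(path, parts[i], -1);
            }
        }
        zk.create("/config/ready", new byte[0],          // publish the new config
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    }

    // Reader: check "ready" with a watch before reading. If the watch fires while
    // (or after) we read, a change is in progress and the caller must retry.
    static boolean readConfig(ZooKeeper zk, List<String> paths, List<byte[]> out)
            throws KeeperException, InterruptedException {
        if (zk.exists("/config/ready", true) == null) {
            return false;                                // change in progress
        }
        for (String path : paths) {
            out.add(zk.getData(path, false, null));
        }
        return true;
    }
}
```

A lock, as in Chubby, only serializes writers; it cannot tell a reader that the writer died mid-update, whereas the absence of ready signals exactly that.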
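
For the fuzzy-snapshot and idempotency questions, the key point is that the leader turns each request into its resulting state change (e.g., "set this znode's data to X, new version V") before broadcasting it, so transactions are idempotent: replaying the log on top of a fuzzy snapshot converges to the correct state whether or not a given change was already captured by the snapshot. ARIES avoids this requirement differently, by stamping pages with an LSN and skipping redo of records the page already reflects. A toy illustration with made-up names:

```java
import java.util.HashMap;
import java.util.Map;

public class IdempotentReplay {
    // A transaction recorded as the resulting state, not as the operation that
    // produced it (hypothetical shape, for illustration only).
    record SetDataTxn(String path, byte[] data, int newVersion) {}

    static class DataNode {
        byte[] data;
        int version;
    }

    // Applying the same txn once or twice leaves the node in the same state, so
    // replaying the log over a fuzzy snapshot (which may or may not already
    // contain this change) always ends at the correct state.
    static void apply(Map<String, DataNode> tree, SetDataTxn txn) {
        DataNode node = tree.computeIfAbsent(txn.path(), p -> new DataNode());
        node.data = txn.data();
        node.version = txn.newVersion();
    }

    // A non-idempotent alternative, e.g. "increment the version by one", would
    // double-count any change the snapshot had already captured.
    public static void main(String[] args) {
        Map<String, DataNode> tree = new HashMap<>();
        SetDataTxn txn = new SetDataTxn("/app/config", "v2".getBytes(), 7);
        apply(tree, txn);
        apply(tree, txn);                                         // replay is safe
        System.out.println(tree.get("/app/config").version);      // 7, not 8
    }
}
```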
Implementation patterns
- Configuration
- Group membership
- Leader election (see the ephemeral-znode sketch after this list)
- Leader information
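
Group membership and leader election both build on ephemeral (sequential) znodes: every live process creates an ephemeral child under a well-known parent, the child disappears when its session ends, and the member holding the smallest sequence number is the leader. A minimal sketch with the Java client; the /election path is illustrative, and real code would re-run the check when the watch fires and handle session loss.

```java
import java.util.Collections;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class LeaderElection {
    // Returns true if this process is currently the leader. Each candidate creates
    // an ephemeral sequential znode; the smallest sequence number wins. Non-leaders
    // watch only their immediate predecessor to avoid a thundering herd.
    static boolean volunteer(ZooKeeper zk, String memberId)
            throws KeeperException, InterruptedException {
        String myPath = zk.create("/election/member-", memberId.getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String myNode = myPath.substring("/election/".length());

        List<String> members = zk.getChildren("/election", false);
        Collections.sort(members);               // zero-padded sequence numbers
        if (members.get(0).equals(myNode)) {
            return true;                         // smallest sequence: leader
        }

        // Watch the member just ahead of us; when its ephemeral znode vanishes
        // (crash or session expiry), re-check whether we are now the smallest.
        int myIndex = members.indexOf(myNode);
        String predecessor = "/election/" + members.get(myIndex - 1);
        Stat predStat = zk.exists(predecessor,
                event -> { /* on NodeDeleted, re-run the check */ });
        // If predStat is null, the predecessor vanished between the two calls;
        // a real implementation would loop and re-check immediately.
        return false;
    }
}
```

The same ephemeral children under a /members parent, read with getChildren plus a watch, give group membership; configuration and leader information are plain znodes that clients read and watch.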
Role of ZooKeeper in Kafka
- Map from partition to replica list
- As a comparison, in GFS it is the chunkserver that keeps track of chunk info; the master learns which chunk is on which server through heartbeats
- As is common in master/replication designs, this metadata carries a version/epoch to handle the stale-master/fail-recover problem (see the fencing sketch at the end of this section)
- For each partition, which replica is the leader and which replicas are in sync
- The leader epoch handles the stale-leader problem
- The version effectively serves as the membership view version
- What is the role of the controller? It is similar to the master server in GFS
- Broker information. This is the membership management store, along with the view number, as in active-passive replication
- Controller id and epoch
- Compare this with the master lease technique
- The controller forces the live members to elect a new primary/leader, much like the master in GFS
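
As a concrete illustration of the epoch/fencing idea in the items above, a broker can simply remember the highest controller epoch it has seen and ignore state-change requests carrying an older one, so a controller that was partitioned away and later recovers cannot overwrite newer decisions. The class and field names below are hypothetical, not Kafka's actual protocol types.

```java
// Hypothetical sketch of epoch-based fencing; not Kafka's actual request handling.
public class EpochFencing {
    private int highestControllerEpochSeen = 0;

    // Accept a leader/ISR change only if it comes from a controller whose epoch is
    // at least as new as any already seen; reject requests from stale controllers.
    synchronized boolean acceptFromController(int controllerEpoch) {
        if (controllerEpoch < highestControllerEpochSeen) {
            return false;                          // stale controller: fence it off
        }
        highestControllerEpochSeen = controllerEpoch;
        return true;
    }
}
```

The same idea, applied per partition with the leader epoch, lets replicas ignore a deposed partition leader; it plays a role comparable to the master lease technique mentioned above.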