Editorial: LC premium set 3

2020-12-13 00:00:00 +0000

Editorial: LC premium set 2

2020-12-06 00:00:00 +0000

Editorial: LC premium set 1

2020-12-01 00:00:00 +0000

On FFT

2020-11-15 00:00:00 +0000

Original video

  • Evaluate a polynomial A(x) at x0: Horner’s rule
  • Multiply two polynomials:
    • The closed form is O(n^2) naively -> O(n log n) is possible
    • Equivalent to the convolution of vectors A * reverse(B) - the inner products of all possible shifts
  • Polynomial representations:
    • Samples at n points with all distinct xs
      • Convert between this and coefficient form in O(n log n)
      • Sampling n points naively takes O(n^2) in total
    • A sequence of n roots
  • Evaluate A(x) for all x in X: divide and conquer by even/odd index i
    • A(x) = A_even(x^2) + x * A_odd(x^2)
    • Derive the recursion and cost => drawing the recursion tree shows it is still O(n^2), because squaring arbitrary points does not shrink X
    • Squaring a number on the unit circle keeps it on the unit circle, so choosing X to be the n-th roots of unity halves the evaluation set at each level
  • FFT is the D&C algorithm for DFT (coefficient form to sample form)
    • Fast polynomial multiplication: FFT(A) * FFT(B) pointwise, then inverse FFT (see the sketch after this list)
  • Similar techniques
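
A minimal Java sketch of the pipeline above (my own illustration, not from the video): Horner evaluation, a recursive radix-2 FFT, and FFT-based polynomial multiplication. It assumes integer coefficients and pads lengths to a power of two; the class and method names are mine.

final class Fft {

    // Horner’s rule: evaluate A(x0) in O(n); coeffs[i] is the coefficient of x^i.
    static double horner(double[] coeffs, double x0) {
        double acc = 0;
        for (int i = coeffs.length - 1; i >= 0; i--) {
            acc = acc * x0 + coeffs[i];
        }
        return acc;
    }

    // Recursive radix-2 FFT over (re, im); invert=true computes the inverse
    // transform (conjugate roots of unity, scaled by 1/2 per level => 1/n total).
    static void fft(double[] re, double[] im, boolean invert) {
        int n = re.length; // must be a power of two
        if (n == 1) return;
        double[] evenRe = new double[n / 2], evenIm = new double[n / 2];
        double[] oddRe = new double[n / 2], oddIm = new double[n / 2];
        for (int i = 0; i < n / 2; i++) {
            evenRe[i] = re[2 * i];     evenIm[i] = im[2 * i];
            oddRe[i]  = re[2 * i + 1]; oddIm[i]  = im[2 * i + 1];
        }
        fft(evenRe, evenIm, invert); // A_even at the squares of the roots
        fft(oddRe, oddIm, invert);   // A_odd likewise
        double ang = 2 * Math.PI / n * (invert ? -1 : 1);
        for (int k = 0; k < n / 2; k++) {
            double wRe = Math.cos(ang * k), wIm = Math.sin(ang * k);
            double tRe = wRe * oddRe[k] - wIm * oddIm[k]; // w^k * A_odd(w^2k)
            double tIm = wRe * oddIm[k] + wIm * oddRe[k];
            re[k] = evenRe[k] + tRe;         im[k] = evenIm[k] + tIm;
            re[k + n / 2] = evenRe[k] - tRe; im[k + n / 2] = evenIm[k] - tIm;
            if (invert) {
                re[k] /= 2; im[k] /= 2; re[k + n / 2] /= 2; im[k + n / 2] /= 2;
            }
        }
    }

    // Multiply: FFT both inputs, multiply pointwise, inverse FFT, round.
    static long[] multiply(long[] a, long[] b) {
        int n = 1;
        while (n < a.length + b.length) n <<= 1;
        double[] aRe = new double[n], aIm = new double[n];
        double[] bRe = new double[n], bIm = new double[n];
        for (int i = 0; i < a.length; i++) aRe[i] = a[i];
        for (int i = 0; i < b.length; i++) bRe[i] = b[i];
        fft(aRe, aIm, false);
        fft(bRe, bIm, false);
        for (int i = 0; i < n; i++) { // pointwise complex product
            double r = aRe[i] * bRe[i] - aIm[i] * bIm[i];
            double m = aRe[i] * bIm[i] + aIm[i] * bRe[i];
            aRe[i] = r;
            aIm[i] = m;
        }
        fft(aRe, aIm, true);
        long[] out = new long[n];
        for (int i = 0; i < n; i++) out[i] = Math.round(aRe[i]);
        return out; // coefficients of A*B, possibly with trailing zeros
    }
}

For example, multiply(new long[]{1, 2}, new long[]{3, 4}) returns {3, 10, 8, 0}, i.e., (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2.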

Editorial: Coding set 18

2020-10-25 00:00:00 +0000

Editorial: Coding set 17

2020-10-11 00:00:00 +0000

Editorial: Coding set 16

2020-09-28 00:00:00 +0000

Editorial: Coding set 15

2020-09-19 00:00:00 +0000

Editorial: Coding set 14

2020-09-03 00:00:00 +0000

Good for whiteboarding

Champagne Tower

  • My first attempt was wrong

Number of Subarrays with Bounded Maximum

  • Corner case

988. Smallest String Starting From Leaf

  • My optimization was wrong

Additive Number

  • First attempt missed a corner case

Super Pow

  • My gut feeling was wrong

Longest Word in Dictionary through Deleting

  • Official insight is cleaner than mine
  • First attempt missed a corner case

Editorial: Coding set 13

2020-08-22 00:00:00 +0000

Good for whiteboarding

Iterator for Combination

  • My optimization was incorrect

Editorial: Coding set 12

2020-08-17 00:00:00 +0000

Editorial: Coding set 11

2020-08-14 00:00:00 +0000

Good for whiteboarding

Student Attendance Record II

  • Official insight is cleaner than mine

Rectangle Area II

  • Took me too long to figure out the decomposition

Trapping Rain Water II

  • Unable to come up with the insight myself

Russian Doll Envelopes

  • My algo was incorrect

Editorial: Coding set 10

2020-08-02 00:00:00 +0000

Good for whiteboarding

Minimum Moves to Equal Array Elements II

  • Whiteboard version has bug

Editorial: Coding set 9

2020-07-27 00:00:00 +0000

Editorial: Coding set 8

2020-07-20 00:00:00 +0000

Maximum Length of Pair Chain

  • How to prove we can do greedy

Circular Array Loop

  • My gut feeling was wrong

Making File Names Unique

  • My optimization was wrong

Monotone Increasing Digits

  • Multiple solutions

Binary Subarrays With Sum

  • Multiple solutions

Matchsticks to Square

  • A lot trickier than it appears!

Editorial: Coding set 7

2020-07-16 00:00:00 +0000

Editorial: Coding set 6

2020-07-11 00:00:00 +0000

On Uber's Cadence

2020-07-07 00:00:00 +0000

Checked 10+ workflow engines. Here are my conclusions:

  • It is not designed for the combination of high traffic (> 100 requests/sec) and short-lived tasks (< 10 sec); at most one of the two requirements can be satisfied
  • However, a workflow engine should support many concurrently executing tasks, e.g., 100k tasks managed by the workflow engine
  • Evolution lineage
AWS Simple Workflow -> Uber Cadence -> Temporal
                    -> AWS Step Functions

Concepts

  • Workflow: similar to the coordinator in a saga. Its code is hosted on the workflow worker (WW), which is your process
    • The communication between the workflow worker and the Cadence service is encapsulated in a decision task (also called a workflow task), e.g., when an external event happens to the workflow, a decision task is created and dispatched to the WW
  • Activity: similar to the sub-transaction component in a saga. Its code is hosted on the activity worker (AW), which is your process and often the same process as the workflow worker (see the sketch after this list)
    • The communication between the activity worker and the Cadence service is encapsulated in an activity task, e.g., the WW sends a ScheduleActivityTask to Cadence, which dispatches a corresponding activity task to the AW
  • Execution history: a persistent log to support exactly-once, all-or-nothing semantics. All task data is persisted too, to support replay during recovery
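
A minimal sketch of the workflow/activity split, loosely following the Cadence Java client's hello-world samples; all names here (GreetingWorkflow, GreetingActivities, the task list, the timeout values) are my assumptions, not anything prescribed by Cadence.

import com.uber.cadence.activity.ActivityMethod;
import com.uber.cadence.workflow.Workflow;
import com.uber.cadence.workflow.WorkflowMethod;

// Activity interface: its implementation runs on the activity worker (AW).
public interface GreetingActivities {
    @ActivityMethod(scheduleToCloseTimeoutSeconds = 10)
    String composeGreeting(String greeting, String name);
}

// Workflow interface: its implementation runs on the workflow worker (WW).
public interface GreetingWorkflow {
    @WorkflowMethod(executionStartToCloseTimeoutSeconds = 60, taskList = "GreetingTaskList")
    String getGreeting(String name);
}

public class GreetingWorkflowImpl implements GreetingWorkflow {
    // The stub turns method calls into ScheduleActivityTask decisions sent to
    // the Cadence service, which dispatches activity tasks to the AW.
    private final GreetingActivities activities =
            Workflow.newActivityStub(GreetingActivities.class);

    @Override
    public String getGreeting(String name) {
        return activities.composeGreeting("Hello", name);
    }
}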

Architecture

  • Front end: API gateway
  • Matching service: task scheduling, dispatching, and task lists
    • backed by task storage
  • History service: workflow state, timer queue, and transfer queue
    • backed by event, workflow, and visibility storage

Editorial: Coding set 5

2020-06-19 00:00:00 +0000

How the Kafka consumer commits offsets

2020-06-17 00:00:00 +0000

Based on Kafka version 2.3.

Data structure


public class KafkaConsumer<K, V> implements Consumer<K, V> {

    private final ConsumerCoordinator coordinator;
    private final SubscriptionState subscriptions;
}

public class SubscriptionState {

    /* the partitions that are currently assigned, note that the order of partition matters (see FetchBuilder for more details) */
    private final PartitionStates<TopicPartitionState> assignment;
    /* the list of topics the user has requested */
    private Set<String> subscription;

    private static class TopicPartitionState {

        private FetchState fetchState;
        private FetchPosition position; // last consumed position
    }
}

public class PartitionStates<S> {

    private final LinkedHashMap<TopicPartition, S> map = new LinkedHashMap<>();
    private final Set<TopicPartition> partitionSetView = Collections.unmodifiableSet(map.keySet());

    public static class PartitionState<S> {
        private final TopicPartition topicPartition;
        private final S value;
    }
}

Inside ConsumerCoordinator

In org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - the consumer's local proxy for the broker-side group coordinator.

    private RequestFuture<Void> sendOffsetCommitRequest(final Map<TopicPartition, OffsetAndMetadata> offsets) {
        Node coordinator = checkAndGetCoordinator();

        // create the offset commit request
        Map<String, OffsetCommitRequestData.OffsetCommitRequestTopic> requestTopicDataMap = new HashMap<>();
        for (Map.Entry<TopicPartition, OffsetAndMetadata> entry : offsets.entrySet()) {
            TopicPartition topicPartition = entry.getKey();
            OffsetAndMetadata offsetAndMetadata = entry.getValue();
            OffsetCommitRequestData.OffsetCommitRequestTopic topic = requestTopicDataMap
                    .getOrDefault(topicPartition.topic(),
                            new OffsetCommitRequestData.OffsetCommitRequestTopic()
                                    .setName(topicPartition.topic())
                    );

            topic.partitions().add(new OffsetCommitRequestData.OffsetCommitRequestPartition()
                    .setPartitionIndex(topicPartition.partition())
                    .setCommittedOffset(offsetAndMetadata.offset())
                    .setCommittedLeaderEpoch(offsetAndMetadata.leaderEpoch().orElse(RecordBatch.NO_PARTITION_LEADER_EPOCH))
                    .setCommittedMetadata(offsetAndMetadata.metadata())
            );
            requestTopicDataMap.put(topicPartition.topic(), topic);
        }

        final Generation generation = generation();
        // if the generation is null, we are not part of an active group (and we expect to be).
        // the only thing we can do is fail the commit and let the user rejoin the group in poll()
        if (generation == null) {
            log.info("Failing OffsetCommit request since the consumer is not part of an active group");
            return RequestFuture.failure(new CommitFailedException());
        }

        OffsetCommitRequest.Builder builder = new OffsetCommitRequest.Builder(
                new OffsetCommitRequestData()
                        .setGroupId(this.groupId)
                        .setGenerationId(generation.generationId)
                        .setMemberId(generation.memberId)
                        .setGroupInstanceId(groupInstanceId.orElse(null))
                        .setTopics(new ArrayList<>(requestTopicDataMap.values()))
        );

        log.trace("Sending OffsetCommit request with {} to coordinator {}", offsets, coordinator);

        return client.send(coordinator, builder)
                .compose(new OffsetCommitResponseHandler(offsets));
    }
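
For context, here is a minimal usage sketch (mine, not from the Kafka codebase) of the public API that funnels into sendOffsetCommitRequest: a consumer with auto-commit disabled that calls commitSync after processing each record. The bootstrap server, topic, group id, and process() helper are hypothetical.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class ManualCommitExample {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // hypothetical broker
        props.put("group.id", "my-group");                  // hypothetical group
        props.put("enable.auto.commit", "false");           // commit manually instead
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);
                    // Commit the offset of the *next* record to consume; this call
                    // eventually reaches ConsumerCoordinator.sendOffsetCommitRequest above.
                    consumer.commitSync(Collections.singletonMap(
                            new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1)));
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("%s-%d@%d: %s%n", record.topic(), record.partition(),
                record.offset(), record.value());
    }
}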