TL;DR: The optimal solution uses a greedy mathematical approach based on the frequency of the most common task to calculate the minimum required "frame" of time slots, handling any remaining tasks by filling the gaps or extending the total length.
In the Task Scheduler problem (LeetCode 621), you are provided with a list of character-labeled tasks and a non-negative integer n. n represents the mandatory cooldown period between two identical tasks. The CPU can either execute a task or stay idle. The objective is to determine the minimum number of units of time required to finish all given tasks while respecting the cooling constraint.
This is a popular interview question because it tests the ability to recognize mathematical patterns within scheduling constraints rather than relying solely on simulation.
A naive approach attempts to simulate the CPU execution timeline step-by-step. In this method, we might maintain a timeline array and try to place tasks one by one. For each time slot, we iterate through the available tasks and pick one that hasn't been executed in the last n slots. If no task is valid, we mark the slot as idle.
Pseudo-code:
    time = 0
    while tasks remain:
        find a task with remaining count whose last_executed_time + n < time
        if such a task exists:
            execute it
            set its last_executed_time = time
            decrement its count
        else:
            leave this slot idle
        increment time
    return time

Time Complexity: The complexity is difficult to bound strictly, because every time slot requires a scan over the task types, so the runtime grows with the total length of the schedule rather than with the number of tasks. If the number of tasks is high and n is large, the simulation performs redundant checks for every time unit.
Why it fails: While this simulation can produce the correct answer, it is inefficient. It does not look ahead to optimize the arrangement globally. Instead, it makes local decisions that might result in a valid but non-minimal schedule. Furthermore, simulating the timeline creates unnecessary overhead when a mathematical calculation can derive the answer in linear time.
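For concreteness, here is a minimal runnable sketch of this simulation in Python. The function name least_interval_sim and the tie-break of preferring the task with the most remaining work are assumptions for illustration, not part of the original pseudo-code.

```python
from collections import Counter

def least_interval_sim(tasks: list[str], n: int) -> int:
    """Step-by-step simulation: at each time slot, run a valid task
    (cooldown expired) if one exists, otherwise stay idle."""
    counts = Counter(tasks)
    last_run = {t: -(n + 1) for t in counts}  # lets every task run at time 0
    remaining = len(tasks)
    time = 0
    while remaining > 0:
        # Tasks with work left whose cooldown has expired.
        valid = [t for t, c in counts.items() if c > 0 and last_run[t] + n < time]
        if valid:
            # Heuristic tie-break: prefer the task with the most remaining work.
            task = max(valid, key=lambda t: counts[t])
            counts[task] -= 1
            last_run[task] = time
            remaining -= 1
        # If no task is valid, this slot stays idle.
        time += 1
    return time

print(least_interval_sim(["A", "A", "A", "B", "B", "B"], 2))  # 8
```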
The key intuition for the LeetCode 621 solution is to recognize that the most frequent task acts as the bottleneck.
If the most frequent task A appears max_freq times, it requires max_freq - 1 cooling intervals between its executions. To minimize the total time, we should space out these As exactly n units apart. This creates max_freq - 1 groups (or chunks), where each group consists of one execution of A followed by n empty slots.
The final execution of A does not require a cooling period after it because the tasks are finished.
Visual Description:
Imagine constructing a table where the number of rows equals max_freq.
- max_freq - 1 rows have a length of exactly n + 1 (1 task + n wait time).
- The final row holds only the tasks that share the maximum frequency, with no trailing wait.
- Fill the empty slots (the n waiting periods) in the first max_freq - 1 rows with the remaining, less frequent tasks.

The invariant here is that if we can satisfy the spacing for the most frequent task, we can satisfy it for all less frequent tasks by simply filling the empty slots. If the empty slots are insufficient (i.e., we have too many tasks), we simply widen the schedule, which effectively means no idle time is needed.
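To make the table concrete, here is a small illustrative helper (the function name and layout are mine, not part of the reference solution) that prints the empty frame dictated by the most frequent task:

```python
def frame_rows(max_freq: int, n: int, label: str = "A") -> list[str]:
    """Build the empty 'frame' for the most frequent task.
    '_' marks a slot that other tasks (or idles) will fill."""
    rows = [[label] + ["_"] * n for _ in range(max_freq - 1)]
    rows.append([label])  # the final execution needs no trailing cooldown
    return [" ".join(row) for row in rows]

# For max_freq = 3 and n = 2 this prints:
#   A _ _
#   A _ _
#   A
print("\n".join(frame_rows(3, 2)))
```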

Let's trace the algorithm with tasks = ["A","A","A","B","B","B"], n = 2.
- max_freq = 3 (from A).
- num_max_tasks = 2 (A and B both appear 3 times).
- Number of full cooldown cycles: max_freq - 1 = 2.
- Length of each cycle: n + 1 = 3.
- Frame size: 2 * 3 = 6. This accounts for A _ _ A _ _.
- Add the final executions of the max-frequency tasks: 6 + num_max_tasks = 6 + 2 = 8. One valid schedule is A B _ A B _ A B.
- tasks.length = 6.
- Result: max(8, 6) = 8.

Now consider tasks = ["A","A","A","B","B","B"], n = 0.
- max_freq = 3.
- num_max_tasks = 2.
- Frame size: (3 - 1) * (0 + 1) + 2 = 2 * 1 + 2 = 4.
- tasks.length = 6.
- Result: max(4, 6) = 6, so no idle slots are needed.
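Both traces reduce to one line of arithmetic each; a quick standalone check:

```python
# Example 1: tasks = AAABBB, n = 2 -> frame of 8 beats the raw count of 6
print(max((3 - 1) * (2 + 1) + 2, 6))  # 8

# Example 2: tasks = AAABBB, n = 0 -> raw count of 6 beats the frame of 4
print(max((3 - 1) * (0 + 1) + 2, 6))  # 6
```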
The correctness relies on the Pigeonhole Principle and the greedy choice property.
The lower bound of the schedule is determined by the most frequent task. If task T appears k times, the schedule must span at least (k - 1) * (n + 1) + 1 slots to accommodate T. If multiple tasks share this maximum frequency, they are simply appended to the end of the sequence, increasing the "+ 1" term.
The formula (max_freq - 1) * (n + 1) + num_max_tasks strictly constructs a schedule with the minimum necessary idle slots to satisfy the bottleneck tasks.
If len(tasks) is greater than this calculated value, it means there are enough "other" tasks to fill all the idle slots generated by the bottleneck. We can insert these additional tasks into the gaps arbitrarily (or by widening the gaps beyond n). Since the constraint is a minimum gap of n, widening the gap is valid. Thus, if we have more tasks than the minimum frame, the answer is just the total number of tasks (zero idle time).
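An equivalent way to express this argument is to count the idle slots directly: start from the (max_freq - 1) * n gaps created by the bottleneck task and subtract what every other task can fill, never letting the count go negative. This reformulation is mine rather than part of the reference solutions below, but it yields the same result:

```python
from collections import Counter

def least_interval_by_idles(tasks: list[str], n: int) -> int:
    counts = sorted(Counter(tasks).values(), reverse=True)
    max_freq = counts[0]
    idle = (max_freq - 1) * n          # gaps created by the most frequent task
    for c in counts[1:]:
        # Each other task can occupy at most one slot per gap.
        idle -= min(c, max_freq - 1)
    return len(tasks) + max(0, idle)

print(least_interval_by_idles(["A", "A", "A", "B", "B", "B"], 2))  # 8
print(least_interval_by_idles(["A", "A", "A", "B", "B", "B"], 0))  # 6
```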
```cpp
#include <vector>
#include <algorithm>
#include <cmath>

class Solution {
public:
    int leastInterval(std::vector<char>& tasks, int n) {
        // Frequency array for A-Z
        std::vector<int> freqs(26, 0);
        for (char task : tasks) {
            freqs[task - 'A']++;
        }

        // Find the maximum frequency among all tasks
        int max_freq = 0;
        for (int f : freqs) {
            max_freq = std::max(max_freq, f);
        }

        // Count how many tasks have this max frequency
        int num_max_tasks = 0;
        for (int f : freqs) {
            if (f == max_freq) {
                num_max_tasks++;
            }
        }

        // Calculate the minimum length based on the most frequent task:
        // (max_freq - 1) groups of size (n + 1), plus the final execution of max-freq tasks
        int part_count = max_freq - 1;
        int part_length = n + 1;
        int min_length = part_count * part_length + num_max_tasks;

        // The answer is the max of the calculated frame or the actual number of tasks
        return std::max(min_length, (int)tasks.size());
    }
};
```
```java
class Solution {
    public int leastInterval(char[] tasks, int n) {
        // Frequency array for A-Z
        int[] freqs = new int[26];
        for (char task : tasks) {
            freqs[task - 'A']++;
        }

        // Find the maximum frequency
        int maxFreq = 0;
        for (int f : freqs) {
            maxFreq = Math.max(maxFreq, f);
        }

        // Count how many tasks have that maximum frequency
        int numMaxTasks = 0;
        for (int f : freqs) {
            if (f == maxFreq) {
                numMaxTasks++;
            }
        }

        // Calculate the theoretical minimum slots required:
        // (maxFreq - 1) full cycles of length (n + 1),
        // plus the tasks that need to be executed in the final cycle
        int minSlots = (maxFreq - 1) * (n + 1) + numMaxTasks;

        // Return the larger of the theoretical minimum or the raw task count
        return Math.max(minSlots, tasks.length);
    }
}
```
```python
from collections import Counter

class Solution:
    def leastInterval(self, tasks: list[str], n: int) -> int:
        # Count frequencies of each task
        counts = Counter(tasks)

        # Find the maximum frequency
        max_freq = max(counts.values())

        # Count how many tasks have this max_freq
        num_max_tasks = sum(1 for count in counts.values() if count == max_freq)

        # Calculate the minimum intervals required by the bottleneck:
        # (max_freq - 1) chunks of size (n + 1) + the final tasks
        part_count = max_freq - 1
        part_length = n + 1
        min_intervals = part_count * part_length + num_max_tasks

        # The result is constrained by the actual number of tasks if no idles are needed
        return max(min_intervals, len(tasks))
```
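A quick way to exercise the Python version locally (the driver line below is illustrative and not part of the LeetCode harness):

```python
print(Solution().leastInterval(["A", "A", "A", "B", "B", "B"], 2))  # expected: 8
```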
Time Complexity:
We iterate through the tasks array once to count frequencies, which takes O(N) time for N tasks. Finding the maximum frequency and the count of max-frequency tasks involves iterating over a fixed-size alphabet (26 characters), which is O(1). Thus, the overall complexity is linear with respect to the input size.
Space Complexity: We use a fixed-size array (size 26) or a hash map to store the frequencies of English uppercase letters. Since the alphabet size is constant, the space complexity is constant.
The greedy frequency-based strategy used in Task Scheduler applies directly to other arrangement problems:
Reorganize String (LeetCode 767), for example, is essentially the same problem with n = 1. Its logic checks whether the most frequent character exceeds half the string length, which is a variation of the "frame" validation (a sketch of that check follows below).
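A minimal sketch of that feasibility check, assuming Python and a hypothetical helper name (it only answers whether a valid arrangement exists, not how to build it):

```python
from collections import Counter

def can_reorganize(s: str) -> bool:
    # A rearrangement with no two equal adjacent characters exists
    # iff the most frequent character fits into every other position.
    max_freq = max(Counter(s).values())
    return max_freq <= (len(s) + 1) // 2

print(can_reorganize("aab"))   # True  ("aba")
print(can_reorganize("aaab"))  # False
```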
Why do I get the wrong answer when n is small?

A common cause is forgetting to take the maximum with tasks.length. The expression (max_freq - 1) * (n + 1) + num_max_tasks calculates the size of the frame assuming idle slots are needed. If n is small or there are many distinct tasks, we might fill all the gaps and still have tasks left over, so the raw task count becomes the answer.

Why is my simulation approach getting Time Limit Exceeded?
If n is large (the constraints in this problem are small, but in similar variations n can be large) or the logic involves repeatedly sorting the remaining tasks, the per-slot overhead becomes significant.

Why does num_max_tasks matter?
Because several tasks can tie for the maximum frequency, and each of them needs a slot in the final row of the frame. A common bug is writing + 1 at the end of the formula instead of + num_max_tasks, which undercounts exactly when such a tie exists (see the check below).
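For the tie example from the trace (tasks = AAABBB, n = 2), a quick comparison of the two formulas:

```python
tasks, n = ["A", "A", "A", "B", "B", "B"], 2
max_freq, num_max_tasks = 3, 2  # both A and B appear three times

buggy   = max((max_freq - 1) * (n + 1) + 1, len(tasks))              # 7: too short
correct = max((max_freq - 1) * (n + 1) + num_max_tasks, len(tasks))  # 8: A B _ A B _ A B
print(buggy, correct)
```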