TL;DR: The optimal solution uses dynamic programming to generate all bit counts in O(n) time by reusing the already-computed count of set bits of the number i / 2.
The LeetCode 338 problem, titled Counting Bits, asks us to compute the number of 1s (set bits) in the binary representation for every integer from 0 to n. Instead of returning a single value, we must return an array of length n + 1 where the index i stores the population count (Hamming weight) of the integer i.
This is a fundamental problem for understanding how bitwise operations can be combined with dynamic programming to optimize runtime from superlinear to linear.
The most straightforward way to solve this problem is to iterate through every number from 0 to n and count the bits for each number individually.
For a specific integer k, we can count bits by repeatedly checking the least significant bit (LSB) and right-shifting the number until it becomes 0.
Pseudo-code:
```text
function countBits(n):
    result = []
    for i from 0 to n:
        count = 0
        num = i
        while num > 0:
            if num is odd:
                count = count + 1
            num = num >> 1
        result.append(count)
    return result
```
Why this fails (or is suboptimal): While this approach produces the correct result, it performs redundant calculations. The time complexity is O(n log n) because counting the bits of a number k takes O(log k) time (proportional to the number of bits). The problem specifically requests a linear time solution, O(n). In a strict interview setting, the brute force approach demonstrates a lack of insight into the relationship between binary representations of consecutive or related numbers.
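The pseudo-code above can be sketched in runnable Python (a minimal sketch; the function name `count_bits_brute_force` is chosen here for illustration and is not LeetCode's signature):

```python
def count_bits_brute_force(n):
    # For each i, count set bits by checking the LSB and right-shifting,
    # exactly as in the pseudo-code above. O(n log n) overall.
    result = []
    for i in range(n + 1):
        count = 0
        num = i
        while num > 0:
            count += num & 1  # add 1 if the least significant bit is set
            num >>= 1         # drop the LSB
        result.append(count)
    return result

print(count_bits_brute_force(5))  # → [0, 1, 1, 2, 1, 2]
```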
The key intuition for the LeetCode 338 solution lies in observing how the binary representation changes as numbers increase. Specifically, consider the relationship between a number i and i / 2 (integer division).
In binary, performing integer division by 2 is equivalent to a right shift operation (>> 1).
Notice that i contains all the set bits of i >> 1, plus potentially one more bit at the Least Significant Bit (LSB) position.
This allows us to formulate a recurrence relation (transition equation) suitable for Dynamic Programming:

ans[i] = ans[i >> 1] + (i & 1)
Here, i >> 1 looks up the result for a smaller number that has already been computed, and i & 1 checks if the current number is odd (adding 1 if true, 0 if false).
Visual Description:
Imagine filling the array from left to right. To find the value for index 14 (1110), you look at index 7 (111). Since 14 is 7 shifted left with a 0 appended, they have the same bit count. To find the value for index 15 (1111), you look at index 7 (111) and add 1 because 15 is 7 shifted left with a 1 appended. The algorithm builds the solution based on this "prefix" property of binary numbers.
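This prefix property can be verified directly with Python's built-in bin (a quick sanity check, not part of the solution itself):

```python
# For every i >= 1, the bit count of i equals the bit count of i >> 1
# plus 1 if i is odd (i & 1), per the prefix property described above.
for i in range(1, 1000):
    assert bin(i).count("1") == bin(i >> 1).count("1") + (i & 1)

print(bin(14), bin(7))  # → 0b1110 0b111 (14 is 7 with a 0 appended)
```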

Let's trace the algorithm for n = 5. We initialize the ans array of size 6 with ans[0] = 0.

- i = 1 (binary 1): 1 >> 1 is 0; 1 & 1 is 1. ans[1] = ans[0] + 1 = 0 + 1 = 1. Array: [0, 1, ...]
- i = 2 (binary 10): 2 >> 1 is 1; 2 & 1 is 0. ans[2] = ans[1] + 0 = 1 + 0 = 1. Array: [0, 1, 1, ...]
- i = 3 (binary 11): 3 >> 1 is 1; 3 & 1 is 1. ans[3] = ans[1] + 1 = 1 + 1 = 2. Array: [0, 1, 1, 2, ...]
- i = 4 (binary 100): 4 >> 1 is 2; 4 & 1 is 0. ans[4] = ans[2] + 0 = 1 + 0 = 1. Array: [0, 1, 1, 2, 1, ...]
- i = 5 (binary 101): 5 >> 1 is 2; 5 & 1 is 1. ans[5] = ans[2] + 1 = 1 + 1 = 2. Array: [0, 1, 1, 2, 1, 2]

The final output is [0, 1, 1, 2, 1, 2].
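The trace above can be reproduced with a few lines of Python (a direct transcription of the DP recurrence, shown for n = 5):

```python
n = 5
ans = [0] * (n + 1)  # ans[0] = 0 is the base case
for i in range(1, n + 1):
    ans[i] = ans[i >> 1] + (i & 1)  # reuse the count computed for i // 2

print(ans)  # → [0, 1, 1, 2, 1, 2]
```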
The correctness relies on the inductive property of the integers.
Inductive hypothesis: for every k < i, ans[k] correctly contains the bit count of k.

Inductive step: i >> 1 (integer division by 2) is strictly less than i for all i >= 1. Therefore, ans[i >> 1] is already computed and correct by our assumption. The update ans[i >> 1] + (i & 1) captures this logic perfectly.

```cpp
class Solution {
public:
    vector<int> countBits(int n) {
        // Initialize vector of size n + 1 with 0s
        vector<int> ans(n + 1, 0);
        // Iterate from 1 to n to fill the DP table
        for (int i = 1; i <= n; ++i) {
            // Relation: ans[i] = ans[i / 2] + (i % 2),
            // implemented using bitwise operators for efficiency
            ans[i] = ans[i >> 1] + (i & 1);
        }
        return ans;
    }
};
```
```java
class Solution {
    public int[] countBits(int n) {
        // Initialize result array of size n + 1
        int[] ans = new int[n + 1];
        // ans[0] is implicitly 0, so we start from 1
        for (int i = 1; i <= n; i++) {
            // ans[i >> 1] retrieves the count for i / 2
            // (i & 1) checks if the current number is odd
            ans[i] = ans[i >> 1] + (i & 1);
        }
        return ans;
    }
}
```
```python
from typing import List

class Solution:
    def countBits(self, n: int) -> List[int]:
        # Initialize DP array with 0 for the base case
        ans = [0] * (n + 1)
        # Iterate from 1 to n
        for i in range(1, n + 1):
            # i >> 1 is equivalent to i // 2
            # i & 1 is equivalent to i % 2
            ans[i] = ans[i >> 1] + (i & 1)
        return ans
```
Time Complexity: O(n). We iterate through the numbers from 1 to n exactly once. Inside the loop, we perform constant-time bitwise operations (shift, AND, addition) and array accesses. Thus, the total time is linear.
Space Complexity: O(1) (Auxiliary)
We ignore the space required for the output array ans, as per standard complexity-analysis conventions for this type of problem. If the output array is counted, the space complexity is O(n). We do not use any additional data structures like recursion stacks or hashmaps.
The Bitwise DP pattern and the specific logic used here (relating a number to its shifted version) are useful in several other scenarios. While no other problems in the current curriculum map identically to this subpattern, mastering the i vs. i >> 1 relationship is foundational for advanced bit manipulation questions.

Why does my solution get Time Limit Exceeded (TLE)?
The usual cause is recounting the bits of every number from scratch, which costs O(n log n) overall. Switch to the DP recurrence so each entry is derived from an already-computed one in constant time.
Why is my array size incorrect?
A common mistake is allocating an array of size n instead of n + 1. The output must cover the integers 0 through n inclusive, which requires n + 1 indices (0 to n); otherwise writing to ans[n] goes out of bounds.

Why use i >> 1 instead of i - 1?
There is no simple rule relating ans[i] to ans[i-1] by a constant addition. While ans[i] is related to ans[i & (i-1)] (clearing the least significant set bit), it is not directly related to ans[i-1]. The i >> 1 relation is often easier to visualize as a "prefix" reuse.
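For completeness, the i & (i - 1) relation mentioned above also yields a valid O(n) DP. This is a sketch of that alternative (the function name `count_bits_lsb` is illustrative), not the approach used in the main solution:

```python
def count_bits_lsb(n):
    # i & (i - 1) clears the least significant set bit of i,
    # so i has exactly one more set bit than i & (i - 1).
    ans = [0] * (n + 1)
    for i in range(1, n + 1):
        ans[i] = ans[i & (i - 1)] + 1
    return ans

print(count_bits_lsb(5))  # → [0, 1, 1, 2, 1, 2]
```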