TL;DR: Iterate through the array using a stack-like simulation to cancel out adjacent equal pairs in a single pass (O(n)), then calculate the mean of the remaining elements.
The problem asks us to process an integer array nums by repeatedly removing adjacent pairs of equal values. This process simulates a chain reaction: removing a pair might bring two other equal values together, requiring further removals. For example, in [1, 2, 2, 1], removing the (2, 2) pair brings the two 1s together, and removing them leaves an empty array. Once the array is fully "compressed" (no adjacent equal pairs remain), we must calculate and return the mean (average) of the remaining elements. If the array becomes empty, the result is 0.0.
This is a popular interview question that tests your ability to optimize a recursive reduction process into a linear scan.
The naive approach directly simulates the problem statement instructions. We repeatedly scan the array to find the first index where nums[i] == nums[i+1]. When found, we create a new array excluding these two elements and restart the scan from the beginning. We repeat this until no adjacent equal pairs exist.

```python
# Pseudo-code for the brute force: rescan from the start after every removal
def brute_force(nums):
    while True:
        found_pair = False
        for i in range(len(nums) - 1):
            if nums[i] == nums[i + 1]:
                # Remove the pair and rebuild the array
                nums = nums[:i] + nums[i + 2:]
                found_pair = True
                break  # Restart the scan
        if not found_pair:
            break
    if not nums:
        return 0.0
    return sum(nums) / len(nums)
```
Time Complexity: O(n²). In the worst case (e.g., [1, 2, 3, ..., 3, 2, 1]), we remove one pair at a time. Removing elements from an array (rebuilding/shifting) takes O(n), and we might do this O(n) times.
Why it fails: The constraints for typical interview problems (n on the order of 10^5) make an O(n²) solution result in a Time Limit Exceeded (TLE) error.
The key intuition is that we don't need to restart the scan after every removal. The "compression" only affects the boundary between the processed prefix and the remaining suffix.
When we iterate through nums, we maintain a collection of "surviving" elements (the compressed array so far). For each new element in our iteration (the "sliding" pointer), we only need to look at the last surviving element.
There are two cases for each new element:

- If it equals the last surviving element, the two form an adjacent pair [x, x]. Both are annihilated: the last surviving element is popped, and the current element is ignored.
- Otherwise, the current element is appended to the survivors.

This logic enforces the invariant that the "surviving" list never contains adjacent duplicates. By maintaining this invariant incrementally, we avoid the re-scanning.
Visual Description: Imagine a window focused on the "gap" between the processed list and the unprocessed list. On the left is the compressed list of survivors; on the right is the portion of nums we are still processing.
Let's trace nums = [1, 2, 2, 1, 3].
Start with compressed = [].

1. Element 1: compressed is empty, so push 1. compressed = [1].
2. Element 2: window [1, 2] (compare last element 1 with current 2). 1 != 2, so push 2. compressed = [1, 2].
3. Element 2: window [2, 2] (compare last element 2 with current 2). 2 == 2. Match found! Pop 2 and ignore the current 2. compressed = [1].
4. Element 1: window [1, 1] (compare last element 1 with current 1). 1 == 1. Match found! Pop 1 and ignore the current 1. compressed = [].
5. Element 3: compressed is empty, so push 3. compressed = [3].

The final mean is 3 / 1 = 3.0.

The algorithm is correct because pair removal is a purely local operation: removing an adjacent pair [x, x] does not affect the relative order of the elements surrounding it. By processing left-to-right and canceling immediately upon detection, we ensure that any new adjacency created (like the 1, 1 pair in the example above) is detected as soon as the inner pair (2, 2) is removed. The compressed list always represents the fully reduced state of the prefix processed so far.
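If you want to convince yourself of this invariant empirically, here is a small sanity-check sketch (not part of the original editorial): it compares the single-pass stack reduction against the literal brute-force simulation on random inputs, and the two should always produce the identical reduced list.

```python
import random

def compress(nums):
    """Stack-based single-pass compression, as described above."""
    stack = []
    for num in nums:
        if stack and stack[-1] == num:
            stack.pop()          # Cancel the adjacent pair
        else:
            stack.append(num)    # Survivor (for now)
    return stack

def brute_force_reduce(nums):
    """Literal simulation: rescan from the start after every removal."""
    nums = list(nums)
    changed = True
    while changed:
        changed = False
        for i in range(len(nums) - 1):
            if nums[i] == nums[i + 1]:
                nums = nums[:i] + nums[i + 2:]
                changed = True
                break
    return nums

# Both reductions should agree on every random input.
random.seed(0)
for _ in range(1000):
    arr = [random.randint(1, 3) for _ in range(random.randint(0, 12))]
    assert compress(arr) == brute_force_reduce(arr), arr
print("stack-based compression matches brute-force simulation")
```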
```cpp
#include <vector>

class Solution {
public:
    double calculateCompressedMean(std::vector<int>& nums) {
        std::vector<int> compressed;
        for (int num : nums) {
            // Check fixed-size window: [back of compressed, current num]
            if (!compressed.empty() && compressed.back() == num) {
                compressed.pop_back();     // Remove the pair
            } else {
                compressed.push_back(num); // Add to survivors
            }
        }
        if (compressed.empty()) {
            return 0.0;
        }
        // Calculate mean using long long to prevent overflow during sum
        long long sum = 0;
        for (int val : compressed) {
            sum += val;
        }
        return (double)sum / compressed.size();
    }
};
```
```java
import java.util.ArrayList;
import java.util.List;

class Solution {
    public double calculateCompressedMean(int[] nums) {
        // Use an ArrayList as a stack for the compressed elements
        List<Integer> compressed = new ArrayList<>();
        for (int num : nums) {
            // Check fixed-size window: [last element, current num]
            int size = compressed.size();
            if (size > 0 && compressed.get(size - 1) == num) {
                compressed.remove(size - 1); // Remove the pair
            } else {
                compressed.add(num);         // Add to survivors
            }
        }
        if (compressed.isEmpty()) {
            return 0.0;
        }
        long sum = 0;
        for (int val : compressed) {
            sum += val;
        }
        return (double) sum / compressed.size();
    }
}
```
```python
class Solution:
    def calculateCompressedMean(self, nums: list[int]) -> float:
        compressed = []
        for num in nums:
            # Check fixed-size window: [compressed[-1], num]
            if compressed and compressed[-1] == num:
                compressed.pop()  # Pair detected, remove both
            else:
                compressed.append(num)
        if not compressed:
            return 0.0
        return sum(compressed) / len(compressed)
```
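A quick usage example (illustrative; the first input is the one traced earlier):

```python
solution = Solution()

print(solution.calculateCompressedMean([1, 2, 2, 1, 3]))  # 3.0 (only 3 survives)
print(solution.calculateCompressedMean([1, 2, 2, 1]))     # 0.0 (everything cancels)
print(solution.calculateCompressedMean([5, 7, 9]))        # 7.0 (nothing cancels)
```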
Time Complexity: O(n)
We iterate through the array nums exactly once. Each element is pushed onto the compressed list at most once and popped at most once, so all stack operations are amortized O(1).
Space Complexity: O(n)
In the worst case (e.g., no duplicates, like [1, 2, 3]), the compressed list will store all n elements.
The logic of maintaining a running state or window as we iterate is a fundamental pattern.
While LeetCode 2985 uses a "stack-like" frontier rather than a true sliding window, it shares the core "single pass with state maintenance" philosophy found in sliding window problems.
Why does the naive solution TLE? Re-scanning the array from index 0 after every removal creates O(n²) complexity. For n on the order of 10^5, this performs roughly 10^10 operations, far exceeding the typical limit of around 10^8.
Why is my mean calculation incorrect for large inputs?
A common mistake is using a 32-bit integer (int) to accumulate the sum. If the array contains many large values, the running sum can exceed the 32-bit limit of about 2.1 × 10^9 and overflow. Always use long long (C++) or long (Java) for the sum accumulator.
Why do I get a Division by Zero error?
If the input is [1, 1] or [2, 2, 3, 3], the compressed array becomes empty. You must explicitly check if compressed is empty before dividing by its length.
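A minimal regression check for this edge case (illustrative, reusing the Solution class from above):

```python
solution = Solution()

# Fully canceling inputs must return 0.0, not raise ZeroDivisionError.
assert solution.calculateCompressedMean([1, 1]) == 0.0
assert solution.calculateCompressedMean([2, 2, 3, 3]) == 0.0
```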