TL;DR: The optimal solution treats this as a shortest path problem using Dijkstra's Algorithm, but instead of minimizing the sum of edge weights using a Min-Heap, we maximize the product of edge probabilities using a Max-Heap.
In LeetCode 1514, "Path with Maximum Probability," you are provided with an undirected graph where nodes are connected by edges, and each edge has a specific success probability. Your objective is to find a path from a starting node to an ending node such that the total probability of success (calculated by multiplying the probabilities of all edges in the path) is maximized. If no path exists, the result is 0.
This problem is a variation of the classic shortest path problem. While standard algorithms usually look for the "shortest" distance (sum of weights), this problem asks for the "maximum" reliability (product of probabilities).
The naive approach to solving the Path with Maximum Probability problem involves exploring every possible simple path from the start node to the end node using Depth First Search (DFS).
- Use a visited set to avoid cycles.
- When the end node is reached, record the probability.

Pseudo-code:

```text
function dfs(node, current_prob, visited):
    if node == end:
        max_prob = max(max_prob, current_prob)
        return
    visited.add(node)
    for neighbor, weight in graph[node]:
        if neighbor not in visited:
            dfs(neighbor, current_prob * weight, visited)
    visited.remove(node)
```
Why this fails: The time complexity of this brute force approach is roughly O(N!) in the worst case (a complete graph), where N is the number of nodes. With constraints allowing up to 10^4 nodes, this factorial complexity will immediately result in a Time Limit Exceeded (TLE) error. We need a more efficient approach that avoids re-exploring suboptimal paths.
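For contrast, the pseudo-code above can be turned into a runnable Python sketch (the function name `max_prob_brute_force` and the tiny test graph are illustrative, not part of the problem statement):

```python
def max_prob_brute_force(n, edges, succ_prob, start, end):
    # Build an undirected adjacency list: node -> [(neighbor, probability)]
    graph = [[] for _ in range(n)]
    for (u, v), p in zip(edges, succ_prob):
        graph[u].append((v, p))
        graph[v].append((u, p))

    best = 0.0
    visited = set()

    def dfs(node, current_prob):
        nonlocal best
        if node == end:
            best = max(best, current_prob)
            return
        visited.add(node)
        for neighbor, weight in graph[node]:
            if neighbor not in visited:
                dfs(neighbor, current_prob * weight)
        visited.remove(node)  # backtrack so other simple paths can reuse this node

    dfs(start, 1.0)
    return best

print(max_prob_brute_force(3, [[0, 1], [1, 2], [0, 2]], [0.5, 0.5, 0.2], 0, 2))  # 0.25
```

This works on tiny graphs but enumerates every simple path, which is exactly the factorial blow-up described above.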
The key intuition for solving LeetCode 1514 efficiently lies in understanding how probabilities behave as path weights.
In a standard shortest path problem (like finding driving distance), edge weights are added, and we seek to minimize the total sum. In this problem, edge weights are probabilities between 0 and 1, and they are multiplied.
Crucially, multiplying by a factor p where 0 <= p <= 1 always results in a value less than or equal to the original. This property (monotonicity) is analogous to adding positive weights in standard Dijkstra. Because the total probability can only decrease (or stay the same) as we extend a path, we can apply a greedy strategy.
The Strategy: Instead of a Min-Heap (used to find the smallest distance), we use a Max-Heap (Priority Queue). The Max-Heap allows us to always expand the path with the currently highest known probability.
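Some standard libraries only ship a Min-Heap (Python's `heapq`, for example), so a common trick is to negate the stored values: the "smallest" negative key corresponds to the largest probability. A minimal sketch:

```python
import heapq

# Simulate a Max-Heap with a Min-Heap by storing negated probabilities.
pq = []
heapq.heappush(pq, (-0.5, 1))
heapq.heappush(pq, (-0.9, 2))
heapq.heappush(pq, (-0.2, 3))

# The pop returns the most negative key, i.e. the highest probability.
neg_prob, node = heapq.heappop(pq)
print(-neg_prob, node)  # 0.9 2
```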
Key Invariant:
When we extract a node u from the Max-Heap, we are guaranteed to have found the optimal (maximum probability) path to u. Any other path reaching u later would come from a node with a lower or equal probability and would be multiplied by a factor <= 1, resulting in a strictly worse or equal probability. This eliminates the need to visit u again.
Visual Description: Imagine the graph as a network of pipes. We start at the source with full pressure (probability 1.0). As we move through edges, the pressure drops based on the edge's constriction (probability). The algorithm functions like a flood fill that prioritizes the highest-pressure pipes first. The Max-Heap ensures that the "wavefront" of our search always advances from the node that currently retains the most probability, ensuring we don't waste time on paths that have already become too unlikely to be optimal.

We will implement a modified version of Dijkstra's Algorithm.
The steps:

1. Build the graph: convert the edges and succProb arrays into an adjacency list where each node points to a list of pairs (neighbor, probability).
2. Create an array probs of size n, initialized to 0.0, to store the maximum probability found so far to reach each node. Set probs[start] = 1.0 (100% chance to be at the start).
3. Initialize the Max-Heap with entries (current_probability, node). Initially, push (1.0, start).
4. Pop the highest-probability entry. If it is the end node, we have found the answer immediately.
5. If the popped probability is lower than probs for that node, skip it (this represents a stale/suboptimal entry).
6. Otherwise, for each neighbor, compute new_prob = current_prob * edge_prob.
7. If new_prob is greater than probs[neighbor], update probs[neighbor] and push (new_prob, neighbor) into the Max-Heap.
8. If the heap empties before reaching end, return 0.0 (path impossible).

Let's trace the algorithm with a simple example: start=0, end=2.
Edges: 0-1 (0.5), 1-2 (0.5), 0-2 (0.2).
Initialization:
- probs array: [1.0, 0.0, 0.0]
- Heap: [(1.0, 0)]

Step 1: Pop (1.0, 0). Current node is 0.
- Neighbor 1: new_prob = 1.0 * 0.5 = 0.5. Since 0.5 > probs[1], update probs[1] = 0.5 and push (0.5, 1).
- Neighbor 2: new_prob = 1.0 * 0.2 = 0.2. Since 0.2 > probs[2], update probs[2] = 0.2 and push (0.2, 2).
- Heap: [(0.5, 1), (0.2, 2)] (sorted by prob).

Step 2: Pop (0.5, 1). Current node is 1.
- Neighbor 0: 0.5 * 0.5 = 0.25 < probs[0] (which is 1.0). Ignore.
- Neighbor 2: new_prob = 0.5 * 0.5 = 0.25. Since 0.25 > probs[2] (currently 0.2), update probs[2] = 0.25 and push (0.25, 2).
- Heap: [(0.25, 2), (0.2, 2)].

Step 3: Pop (0.25, 2). Current node is 2.
- This is the end node. Return 0.25.

Note that the stale entry (0.2, 2) remaining in the heap would be ignored if processed later because its probability is lower than the recorded probs[2].
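The trace above can be reproduced with a compact standalone sketch of the algorithm (the function name `max_probability` is illustrative; the full class-based solutions appear below):

```python
import heapq

def max_probability(n, edges, succ_prob, start, end):
    # Build an undirected adjacency list: node -> [(neighbor, probability)]
    adj = [[] for _ in range(n)]
    for (u, v), p in zip(edges, succ_prob):
        adj[u].append((v, p))
        adj[v].append((u, p))

    best = [0.0] * n
    best[start] = 1.0
    pq = [(-1.0, start)]  # negate probabilities to simulate a Max-Heap

    while pq:
        neg, u = heapq.heappop(pq)
        prob = -neg
        if u == end:
            return prob
        if prob < best[u]:
            continue  # stale entry: a better path to u was already processed
        for v, p in adj[u]:
            if prob * p > best[v]:
                best[v] = prob * p
                heapq.heappush(pq, (-best[v], v))
    return 0.0

print(max_probability(3, [[0, 1], [1, 2], [0, 2]], [0.5, 0.5, 0.2], 0, 2))  # 0.25
print(max_probability(3, [[0, 1]], [0.5], 0, 2))  # 0.0
```

On the traced example it returns 0.25, and it falls back to 0.0 when no path exists.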
The correctness of this algorithm relies on the greedy choice property.
Let P(v) be the maximum probability to reach node v. When we extract a node u from the priority queue with probability p, we assert that p = P(u).
Suppose there exists a different path to u with probability p' > p. This path must pass through some node w currently in the priority queue (or already processed) such that the path is start -> w -> u. However, since edge weights are <= 1, the probability along a path is non-increasing. Thus, the probability at w must be greater than or equal to p'. Since we always extract the maximum value from the heap, we would have processed the path through w before extracting u with probability p. This contradiction proves that the first time we extract a node from the Max-Heap, we have found the optimal path.
```cpp
#include <vector>
#include <queue>
using namespace std;

class Solution {
public:
    double maxProbability(int n, vector<vector<int>>& edges, vector<double>& succProb,
                          int start_node, int end_node) {
        // Build adjacency list: node -> vector of {neighbor, probability}
        vector<vector<pair<int, double>>> adj(n);
        for (int i = 0; i < (int)edges.size(); ++i) {
            adj[edges[i][0]].push_back({edges[i][1], succProb[i]});
            adj[edges[i][1]].push_back({edges[i][0], succProb[i]});
        }

        // Max-Heap to store {current_prob, node}
        // By default, priority_queue is a Max-Heap in C++
        priority_queue<pair<double, int>> pq;
        pq.push({1.0, start_node});

        // Array to store the max probability to reach each node
        vector<double> max_prob(n, 0.0);
        max_prob[start_node] = 1.0;

        while (!pq.empty()) {
            auto [curr_prob, u] = pq.top();
            pq.pop();

            // If we reached the target, return the probability
            if (u == end_node) return curr_prob;

            // Optimization: skip stale entries that are worse than what we already found
            if (curr_prob < max_prob[u]) continue;

            for (auto& edge : adj[u]) {
                int v = edge.first;
                double edge_prob = edge.second;
                // If a better path is found
                if (curr_prob * edge_prob > max_prob[v]) {
                    max_prob[v] = curr_prob * edge_prob;
                    pq.push({max_prob[v], v});
                }
            }
        }
        return 0.0;
    }
};
```
```java
import java.util.*;

class Solution {
    class Pair {
        int node;
        double prob;
        Pair(int node, double prob) {
            this.node = node;
            this.prob = prob;
        }
    }

    public double maxProbability(int n, int[][] edges, double[] succProb,
                                 int start_node, int end_node) {
        // Build adjacency list
        List<List<Pair>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            adj.add(new ArrayList<>());
        }
        for (int i = 0; i < edges.length; i++) {
            int u = edges[i][0];
            int v = edges[i][1];
            double p = succProb[i];
            adj.get(u).add(new Pair(v, p));
            adj.get(v).add(new Pair(u, p));
        }

        // Max-Heap using a custom comparator (highest probability first)
        PriorityQueue<Pair> pq = new PriorityQueue<>((a, b) -> Double.compare(b.prob, a.prob));
        pq.offer(new Pair(start_node, 1.0));

        // Array to track the max probability found so far
        double[] maxProbs = new double[n];
        maxProbs[start_node] = 1.0;

        while (!pq.isEmpty()) {
            Pair curr = pq.poll();
            int u = curr.node;
            double prob = curr.prob;

            if (u == end_node) return prob;

            // Skip stale entries
            if (prob < maxProbs[u]) continue;

            for (Pair edge : adj.get(u)) {
                int v = edge.node;
                double newProb = prob * edge.prob;
                if (newProb > maxProbs[v]) {
                    maxProbs[v] = newProb;
                    pq.offer(new Pair(v, newProb));
                }
            }
        }
        return 0.0;
    }
}
```
```python
import heapq
from typing import List

class Solution:
    def maxProbability(self, n: int, edges: List[List[int]], succProb: List[float],
                       start_node: int, end_node: int) -> float:
        # Build adjacency list
        adj = [[] for _ in range(n)]
        for i, (u, v) in enumerate(edges):
            adj[u].append((v, succProb[i]))
            adj[v].append((u, succProb[i]))

        # Max-Heap simulated via negated values (heapq is a Min-Heap)
        # Store: (-probability, node)
        pq = [(-1.0, start_node)]

        # Track max probability for each node
        max_probs = [0.0] * n
        max_probs[start_node] = 1.0

        while pq:
            curr_prob_neg, u = heapq.heappop(pq)
            curr_prob = -curr_prob_neg

            if u == end_node:
                return curr_prob

            # Skip if we already found a better path
            if curr_prob < max_probs[u]:
                continue

            for v, edge_prob in adj[u]:
                new_prob = curr_prob * edge_prob
                if new_prob > max_probs[v]:
                    max_probs[v] = new_prob
                    heapq.heappush(pq, (-new_prob, v))

        return 0.0
```
Let V be the number of nodes (n) and E be the number of edges (edges.length).

Time Complexity: O(E log E), or equivalently O(E log V). Each successful relaxation pushes at most one entry into the heap, so the heap holds at most O(E) entries and every push or pop costs O(log E). Since E <= V^2, log E = O(log V).
The logic used in LeetCode 1514 applies to several other advanced graph problems. The core concept is adapting Dijkstra's greedy strategy to different "cost" functions (e.g., maximizing bottleneck capacity, minimizing effort).
Why did I get Time Limit Exceeded (TLE) with a simple queue?

A plain FIFO queue (BFS-style) processes nodes in insertion order, not in order of probability, so a node can be settled with a suboptimal probability and then re-relaxed many times. The Max-Heap guarantees each node is finalized the first time it is popped.

Why is my answer slightly wrong or 0?

Small inaccuracies usually come from floating-point precision (use double, not float). An answer of exactly 0 usually means the start probability was never initialized, or no path exists between start and end.

Why is the start node probability important?

Being at the start is a certain event, so its probability is 1.0. A common bug is initializing max_probs[start] to 0 instead of 1, which makes every product along the path 0.

Space Complexity: O(V + E).

The adjacency list takes O(V + E), the probs array takes O(V), and the heap can hold up to O(E) entries.

Why use an undirected graph?

The problem defines edges as bidirectional, so each edge is inserted into the adjacency list twice: once as u -> v and once in the v -> u direction.
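In code, handling the undirected edges simply means inserting each edge in both directions. A tiny sketch with an illustrative three-node graph:

```python
# Each undirected edge (u, v) appears in both adjacency rows.
edges = [[0, 1], [1, 2]]
succ_prob = [0.5, 0.5]

adj = [[] for _ in range(3)]
for (u, v), p in zip(edges, succ_prob):
    adj[u].append((v, p))  # u -> v
    adj[v].append((u, p))  # v -> u

print(adj)  # [[(1, 0.5)], [(0, 0.5), (2, 0.5)], [(1, 0.5)]]
```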