The Only DSA Guide
You’ll Actually Need
40 problems. 11 topics. Every trick, pattern, and mental model—explained the way no textbook does.
You Don’t Need to Solve 500 LeetCode Problems
Here’s the uncomfortable truth most people won’t tell you: DSA interviews test patterns, not memory. You don’t need to have solved every problem—you need to recognize which mental model applies in the first 60 seconds of reading a question.
This guide is built around that idea. For each topic, we give you the core trick—the one insight that unlocks 80% of the problems in that category. Master the pattern, and the individual problems solve themselves.
Arrays — The Foundation of Everything
Arrays are involved in nearly every interview problem, either directly or as supporting structure. The reason they’re so dominant: they support random access, work great with two pointers, and serve as the backbone for prefix sums and hash-based lookups.
The Problems & Their Tricks
- Find min & max in one pass: track both min and max, and update both in the same iteration.
- Reverse an array / two pointers: while (left < right) is the loop condition, not ≤.
- Maximum subarray (Kadane's): curSum = max(num, curSum + num). If the current sum goes negative, reset and start fresh. Track maxSum separately: curSum is your running total; maxSum is the answer so far.
- Two Sum with a HashMap: check whether target - num already exists in the map before inserting.
Linked Lists — All About Pointers
Linked list problems look intimidating until you internalize one truth: almost every question is solved with the slow + fast pointer technique. Two pointers moving at different speeds can detect cycles, find midpoints, and compute distances from the end—without knowing the list’s length.
Reversing a Linked List — The Code Template
The pattern is always: save next → reverse → advance. Draw it out on paper once, and it’ll never confuse you again.
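As a concrete sketch, the save → reverse → advance pattern might look like this in Python (the `ListNode` class and function name are illustrative, not from the original guide):

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def reverse_list(head):
    prev = None
    cur = head
    while cur:
        nxt = cur.next   # save next
        cur.next = prev  # reverse the pointer
        prev = cur       # advance prev
        cur = nxt        # advance cur
    return prev          # prev is the new head
```

Note that `prev` starts as `None`, which automatically becomes the new tail's `next`.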
Floyd’s Cycle Detection — The Full Insight
Most people know Phase 1: use slow/fast pointers to detect a cycle. But interviews often ask you to also find where the cycle starts. Here’s the trick: once slow and fast meet inside the cycle, reset slow to head. Now move both one step at a time. Where they meet again is the cycle entry point—provable with simple math.
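Both phases together might be sketched like this (names are illustrative):

```python
class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def detect_cycle_start(head):
    slow = fast = head
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:          # phase 1: pointers met inside the cycle
            slow = head           # phase 2: reset slow to head
            while slow is not fast:
                slow = slow.next  # both now move one step at a time
                fast = fast.next
            return slow           # the cycle entry point
    return None                   # no cycle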
Stack — Order, History, and “Nearest”
The stack is the data structure of recent memory. Whenever a problem asks about the nearest greater/smaller element, matching brackets, or tracking something “since the last time X happened”—reach for a stack.
Monotonic Stack for Next Greater Element
Key insight: store indices, not values. That way you can directly write answers into the result array.
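A minimal sketch of the index-storing version (function name is illustrative):

```python
def next_greater(nums):
    res = [-1] * len(nums)  # -1 means no greater element to the right
    stack = []              # indices still waiting for their next greater value
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            res[stack.pop()] = x  # x is the answer for that earlier index
        stack.append(i)
    return res
```

The stack stays monotonically decreasing in value, so every index is pushed and popped at most once: O(n) total.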
Queue & Sliding Window
The queue’s superpower is the monotonic deque—a double-ended queue that maintains a max or min across a moving window. The sliding window maximum problem is the canonical example, and it appears in many disguised forms.
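One way the sliding window maximum might be sketched with a monotonic deque (function name is illustrative):

```python
from collections import deque

def max_sliding_window(nums, k):
    dq = deque()  # holds indices; values decrease from front to back
    out = []
    for i, x in enumerate(nums):
        while dq and nums[dq[-1]] <= x:
            dq.pop()              # drop values that can never be a max again
        dq.append(i)
        if dq[0] <= i - k:
            dq.popleft()          # front index slid out of the window
        if i >= k - 1:
            out.append(nums[dq[0]])  # front of deque is the window max
    return out
```

Each index enters and leaves the deque at most once, so the whole pass is O(n).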
Recursion & Backtracking — The Template
Backtracking is just recursion with an undo step. Every backtracking problem follows the same three-part template— memorize this and you’ll never stare blankly at a “generate all X” problem again.
1. Make a choice — add an element to your current path or state
2. Recurse — explore all options from this new state
3. Undo the choice — remove the element and try the next option
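The three steps map directly onto code. As one illustration, here is the template applied to generating all subsets (a classic "generate all X" problem):

```python
def subsets(nums):
    res, path = [], []

    def backtrack(start):
        res.append(path[:])      # record the current state
        for i in range(start, len(nums)):
            path.append(nums[i])  # 1. make a choice
            backtrack(i + 1)      # 2. recurse from the new state
            path.pop()            # 3. undo the choice
    backtrack(0)
    return res
```

Swap the loop body and the recording condition and the same skeleton produces permutations, combinations, or N-Queens placements.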
Trees — The Most Asked Data Structure
Binary trees appear in more interview questions than almost any other structure. The good news: almost every tree problem is a variation of one insight—apply some logic to the root, then recurse on both children.
Two traversal strategies dominate: DFS (recursion, good for height/paths/LCA) and BFS (queue-based, good for level-order and shortest distance).
Balanced Tree Check — The Sentinel Trick
The -1 sentinel propagates upward automatically. Once any subtree is unbalanced, -1 flows up and kills the whole check immediately—no extra boolean needed.
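A sketch of the sentinel version (class and function names are illustrative):

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_balanced(root):
    def height(node):
        if not node:
            return 0
        lh = height(node.left)
        if lh == -1:
            return -1             # left subtree already unbalanced: bail out
        rh = height(node.right)
        if rh == -1:
            return -1             # right subtree already unbalanced
        if abs(lh - rh) > 1:
            return -1             # this node itself is unbalanced
        return 1 + max(lh, rh)    # otherwise report a real height
    return height(root) != -1
```

One recursive function returns either a height or the sentinel, so the whole check is a single O(n) pass.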
Level Order — The Snapshot Trick
When doing BFS level-by-level, take a snapshot of queue.size() at the start of each level.
Process exactly that many nodes, then increment the level counter. This cleanly separates levels without needing a null separator.
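The snapshot trick might look like this (names are illustrative):

```python
from collections import deque

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    if not root:
        return []
    levels, q = [], deque([root])
    while q:
        size = len(q)             # snapshot: exactly the nodes on this level
        level = []
        for _ in range(size):
            node = q.popleft()
            level.append(node.val)
            if node.left:
                q.append(node.left)
            if node.right:
                q.append(node.right)
        levels.append(level)      # one finished level per outer iteration
    return levels
```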
Binary Search Trees — Exploit the Property
A BST’s defining property—left < root < right—is both its strength and the thing many people forget to fully exploit. Every time you visit a node, you can eliminate an entire half of the tree based on whether the target is smaller or larger.
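Exploiting the property turns search into a one-line loop body, as in this sketch:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def search_bst(root, target):
    node = root
    while node:
        if target == node.val:
            return node
        # Discard the half of the tree that cannot contain target
        node = node.left if target < node.val else node.right
    return None
```

On a balanced BST each comparison halves the search space, giving O(log n); on a degenerate (linked-list-shaped) tree it degrades to O(n).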
Hashing — Frequency & Grouping
A HashMap gives O(1) average lookup—but its real power is in how you define the key. Sorted string as key → groups anagrams. Character frequency array as key → groups permutations. The trick isn’t using a HashMap; it’s choosing what to hash.
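The sorted-string-as-key idea, sketched for the classic group-anagrams problem:

```python
from collections import defaultdict

def group_anagrams(words):
    groups = defaultdict(list)
    for w in words:
        key = ''.join(sorted(w))   # canonical form: anagrams share this key
        groups[key].append(w)
    return list(groups.values())
```

Using a 26-slot character-count tuple as the key instead avoids the O(k log k) sort per word when strings are long.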
Graphs — Traversal Fundamentals
Graph problems look complex until you realize most of them reduce to: visit every reachable node exactly once. The two tools for this are BFS (shortest path, level order) and DFS (connected components, cycle detection, topological sort).
Topological Sort (Kahn’s Algorithm)
Used for ordering tasks with dependencies (think: build systems, course prerequisites). Compute in-degrees for all nodes. Enqueue all nodes with in-degree 0. Process: dequeue a node, reduce neighbor in-degrees, enqueue any that reach 0. If you don’t process all nodes, there’s a cycle.
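Those steps might be sketched as follows (node labels 0..n-1 and the edge format are assumptions for the example):

```python
from collections import deque

def topo_sort(n, edges):
    """n nodes labeled 0..n-1; each edge (u, v) means u must come before v."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n) if indeg[i] == 0)  # start with no-dependency nodes
    order = []
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:     # all of v's prerequisites are done
                q.append(v)
    return order if len(order) == n else None  # None signals a cycle
```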
Dynamic Programming — The Most Tested Topic
DP is the biggest gap in most people’s preparation. It’s also the most tested topic at mid and senior levels. The secret: DP is just recursion with a cache. If you can write the recursive solution, you can write the DP solution.
1. Define what dp[i] means in plain English.
2. Write the recurrence relation (how dp[i] relates to smaller subproblems).
3. Identify base cases (what dp[0] or dp[1] equals).
4. Choose top-down (memoization) or bottom-up (tabulation).
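Both flavors, side by side on Fibonacci (function names are illustrative):

```python
from functools import lru_cache

# Top-down: the plain recursion plus a cache (memoization)
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n                      # base cases: dp[0]=0, dp[1]=1
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: tabulation, with only the last two values kept → O(1) space
def fib_tab(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

The recurrence is identical in both; only the direction of evaluation changes.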
DP Problems & Their Core Recurrences
| Problem | Recurrence | Space Trick |
|---|---|---|
| Fibonacci | dp[i] = dp[i-1] + dp[i-2] | Only keep 2 variables → O(1) |
| 0/1 Knapsack | dp[w] = max(dp[w], dp[w-wt]+val) | Inner loop goes right→left |
| LCS | match: 1+dp[i-1][j-1], else max(skip) | Use 2 rows instead of full table |
| LIS | dp[i] = max(dp[j]+1) where arr[j]<arr[i] | Patience sorting → O(n log n) |
| Coin Change | dp[a] = min(dp[a-coin]+1) | Init with amount+1 as “infinity” |
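To see one of the space tricks in action, here is a sketch of 0/1 Knapsack with the 1-D array and the right-to-left inner loop from the table (function name is illustrative):

```python
def knapsack(weights, values, capacity):
    dp = [0] * (capacity + 1)  # dp[w] = best value achievable with capacity w
    for wt, val in zip(weights, values):
        # Right-to-left so dp[w - wt] still holds the previous item's row,
        # which guarantees each item is used at most once.
        for w in range(capacity, wt - 1, -1):
            dp[w] = max(dp[w], dp[w - wt] + val)
    return dp[capacity]
```

Looping left-to-right instead would let an item be reused, which is the unbounded-knapsack (coin change) variant, not 0/1.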
Heap / Priority Queue — Top-K Problems
The heap is the right tool whenever you need the K best elements from a stream or large dataset. The counterintuitive key insight is this: to find the K largest elements, use a min-heap of size K.
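A sketch of the min-heap-of-size-K idea using Python's `heapq` (function name is illustrative):

```python
import heapq

def k_largest(nums, k):
    heap = []  # min-heap holding the k largest elements seen so far
    for x in nums:
        if len(heap) < k:
            heapq.heappush(heap, x)
        elif x > heap[0]:                # beats the smallest of the current top k
            heapq.heapreplace(heap, x)   # pop the smallest, push x
    return sorted(heap, reverse=True)
```

Each element costs at most O(log k), so the whole stream is O(n log k) with O(k) memory, which is why the "wrong-direction" heap wins: the root is always the weakest member of the current top K, the only element you ever need to compare against.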
6 Algorithms Every Developer Must Know
Beyond data structure operations, these six algorithms appear repeatedly—sometimes as the main problem, often as a subtask inside a larger one. Know them cold.
| Algorithm | When to Use | Complexity | Key Gotcha |
|---|---|---|---|
| Binary Search | Sorted array, search for value or boundary | O(log n) | Use left + (right-left)/2 to avoid integer overflow |
| Merge Sort | Need stable sort; sorting linked lists | O(n log n) | Guaranteed O(n log n) even in the worst case, but requires O(n) extra space |
| Quick Sort | General purpose in-place sorting | O(n log n) avg | Sorted input → O(n²). Use random pivot or median-of-three. |
| BFS | Shortest path (unweighted), level order | O(V+E) | Mark visited on enqueue, not dequeue |
| DFS | Cycles, paths, connected components, topo sort | O(V+E) | Cycle detection needs in-stack tracking, not just visited[] |
| Dijkstra | Shortest path in weighted graph (no neg. weights) | O((V+E) log V) | init dist[start]=0, all others=∞. Always process min-dist node first. |
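As one worked example from the table, here is binary search with the overflow-safe midpoint (the form matters in C and Java, where `left + right` can overflow; Python integers don't overflow, so it is shown for the habit):

```python
def binary_search(nums, target):
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = left + (right - left) // 2  # overflow-safe midpoint
        if nums[mid] == target:
            return mid
        if nums[mid] < target:
            left = mid + 1                # target is in the right half
        else:
            right = mid - 1               # target is in the left half
    return -1                             # not found
```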
See a Problem → Know the Pattern Instantly
This is the table to study the night before an interview. Read the left column, recall the right column. If you can do that fluently, you’re ready.
| If the problem involves… | Reach for… |
|---|---|
| Find a pair/triplet with sum X | Two Pointers or HashMap |
| Max/min over a contiguous subarray | Sliding Window |
| Missing or duplicate in 1..N | XOR or Sum Formula |
| Sorted array + searching | Binary Search |
| All subsets / combinations / permutations | Backtracking |
| Nearest greater or smaller element | Monotonic Stack |
| Shortest path in unweighted graph/grid | BFS |
| Overlapping subproblems, optimal substructure | Dynamic Programming |
| Top K elements from stream or array | Heap / Priority Queue |
| Cycle detection (list or graph) | Slow+Fast Pointer / DFS w/ colors |
| Tree height, depth, or root-to-leaf path | DFS (Recursion) |
| Minimum depth / level-by-level processing | BFS with Queue |
| Character frequency or grouping strings | HashMap |
| Task ordering with dependencies | Topological Sort (Kahn’s) |
| Weighted shortest path (no negatives) | Dijkstra’s Algorithm |
Arrays → Linked Lists → Stack/Queue → Trees → Graphs → DP → Heap
Solve 2–3 problems per topic. Focus on recognizing patterns, not memorizing solutions.
The Honest Bottom Line
DSA isn’t about grinding 500 problems. It’s about building a clear mental map of which tool solves which type of problem. The developers who do well in interviews aren’t the ones who’ve seen every problem before—they’re the ones who can decompose an unfamiliar problem into a familiar pattern within 60 seconds.
Study the patterns above until they’re automatic. Implement each one once from scratch. Review this guide the week before any interview. That’s the formula—no magic required.
Good luck. You’ve got this. 🚀