My notes on Algorithms
Big O Notation
It is a metric for describing an algorithm's efficiency: an estimate of how the running time (or memory) of your code grows as the input grows.
Where is the metric used
Time Complexity
How long it takes to execute, in terms of computational steps.
Space Complexity
It measures the amount of memory your code uses.
Big O ⇒ It is the worst-case scenario of a code (like maximum time or maximum memory)
How do we measure if a code is good or bad?
Time? Nah, it can change from device to device and it is not consistent
Counting Operations? Yes
Example ⇒ Find sum of n numbers
 Direct Formula
function sumOfNumbers(n){
  return n * (n + 1) / 2;
}
addition = 1 operation
multiplication = 1 operation
division = 1 operation
Total = 3 operations => O(1)
 For Loop
function sumOfNumbers(n){
  let total = 0;
  for (let i = 0; i <= n; i++){
    total = total + i;
  }
  return total;
}
total assignment to 0 = 1 operation
total + i = 1 operation * n times
assignment to total = 1 operation * n times
Total = 1 + n + n = 2n + 1 => O(n)
 Constants don't matter
$$ O(2n) ⇒ O(n) $$
$$ O(5n^2 + 3) ⇒ O(n^2) $$
 Only care about the trends and not the detail
Big O Shorthands
 Arithmetic operations are constant
 Variable assignment is constant
 Accessing elements in an array (by index) or object (by key) is constant
 In a loop, the complexity is the length of the loop times the complexity of whatever happens inside of the loop
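That last rule is easiest to see with nested loops, where the work multiplies. A minimal sketch (the `countPairs` function below is just an illustration, not from the notes):

```javascript
// Hypothetical example: counts all index pairs (i, j) of an array.
// The outer loop runs n times and the inner loop runs n times for
// each outer iteration, so the total work is n * n => O(n^2).
function countPairs(arr) {
  let count = 0;
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) {
      count++;
    }
  }
  return count;
}

countPairs([1, 2, 3]); // 3 * 3 = 9 iterations
```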
Big O Cheatsheet
Space Complexity
It refers to the total amount of memory space used by an algorithm/program, including the space of input values for execution.
Rules of thumb
 Most primitives (booleans, numbers, undefined, null) are constant space
 Strings require O(n) space (where n is the string length)
 Reference types are generally O(n), where n is the length(for arrays) or the number of keys (for objects)
function sumOfNumbers(n){
  let total = 0;
  for (let i = 0; i <= n; i++){
    total = total + i;
  }
  return total;
}
total declaration 1
for loop i declaration 1
Total = 2 => O(1)
function double(arr){
  let newArr = [];
  for (let i = 0; i < arr.length; i++){
    newArr.push(2 * arr[i]);
  }
  return newArr;
}
newArr declaration O(n)
for loop i declaration 1
Total = O(n) + 1 => O(n)
Logarithms
It is the inverse of exponentiation.
$$ log_2(8) = 3 $$
$$ log_2(value) = e ⇒ 2^e = value $$
In Big O, the base of the logarithm doesn't matter:
$$ O(log_2(n)) = O(log_9(n)) = O(log(n)) $$
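One way to see where log comes from: repeatedly halving n until it reaches 1 takes about log_2(n) steps, which is exactly what binary search does with its search range. The `halvingSteps` helper below is a hypothetical sketch:

```javascript
// Counts how many times n can be halved before reaching 1.
// For n = 8: 8 -> 4 -> 2 -> 1, i.e. 3 steps = log2(8).
function halvingSteps(n) {
  let steps = 0;
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}

halvingSteps(8); // 3, matching Math.log2(8)
```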
Analysing Performance
Objects
Insertion ⇒ O(1)
Removal ⇒ O(1)
Searching ⇒ O(n)
Access ⇒ O(1)
Object Methods
Object.keys ⇒ O(n)
Object.values ⇒ O(n)
Object.entries ⇒ O(n)
Object.hasOwnProperty ⇒ O(1)
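A quick sketch of those complexities in action (the `scores` object is invented for illustration):

```javascript
const scores = { alice: 10, bob: 20 };

scores.carol = 30;                           // insertion by key: O(1)
delete scores.bob;                           // removal by key: O(1)
const a = scores.alice;                      // access by key: O(1)
const hasBob = scores.hasOwnProperty('bob'); // key check: O(1)

// Searching for a *value* still means scanning every entry: O(n)
const hasValue30 = Object.values(scores).includes(30);
```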
When to use objects?
 When you don’t need order.
 When you need fast access, insertion, and removal
Arrays
Insertion ⇒ It depends (O(1) at the end, O(n) at the beginning)
Removal ⇒ It depends (O(1) at the end, O(n) at the beginning)
Searching ⇒ O(n)
Access ⇒ O(1)
Array Methods
Array.push & Array.pop ⇒ O(1)
Array.shift & Array.unshift ⇒ O(n) (every remaining element has to be reindexed)
Array.concat ⇒ O(n) ([…arr1, …arr2]; the elements of arr2 need to be reindexed)
Array.slice ⇒ O(n) (returns a shallow copy of a portion of an array)
Array.splice ⇒ O(n) (changes an array by removing existing elements and/or adding new elements)
Array.sort ⇒ O(n log n)
forEach/map/filter/reduce ⇒ O(n)
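A sketch of why push is cheap and unshift is not (example array invented for illustration):

```javascript
const arr = [1, 2, 3];

arr.push(4);    // O(1): appends at the end, no reindexing needed
arr.unshift(0); // O(n): every existing element moves one index up

// arr is now [0, 1, 2, 3, 4]; the unshift forced the elements
// 1..4 to be reindexed, which is why shift/unshift are O(n).
```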
Problem Solving Approach
What is an Algorithm?
A process or set of steps to accomplish a certain task.
How do you improve?
 Devise a plan for solving problems
 Master common problem solving patterns
Problem Solving
 Understand the problem
 Explore concrete examples
 Break it down
 Solve/Simplify
 Look back and refactor
 Understand the problem
 Can I restate the problem in my own words?
 What are the inputs?
 What are the outputs that come from the solution?
 Can the outputs be determined from the inputs? In other words, do I have enough info to solve the problem?
 Exploring concrete examples
 Start with some examples
 Progress to more complex examples
 Explore examples with Empty Inputs
 Explore examples with Invalid inputs
 Break it down
 Explicitly write out the steps you need to take
 Simplify
 Find the core difficulty in what you’re trying to do
 Temporarily ignore that difficulty
 Write a simplified solution
 Then incorporate that difficulty back in
 Look back and refactor
 Can you check the result?
 Can you derive the result differently?
 Can you understand it at a glance?
 Can you use the result or method for some other problem?
 Can you improve the performance of your solution?
 Can you think of other ways to refactor?
 How have other people solved this problem?
Problem solving patterns

Frequency Counters
This pattern uses objects or sets to collect values/frequencies of values.
This can often avoid the need for nested loops or O(n^2) operations with arrays/strings.

Squared Array comparison problem
A naive solution ⇒ Time Complexity O(n^2)
function same(arr1, arr2){
  if (arr1.length !== arr2.length){
    return false;
  }
  for (let i = 0; i < arr1.length; i++){
    let correctIndex = arr2.indexOf(arr1[i] ** 2);
    if (correctIndex === -1){
      return false;
    }
    arr2.splice(correctIndex, 1);
  }
  return true;
}
A slightly better solution ⇒ Time Complexity O(n log n)
function same(arr1, arr2) {
  if (arr1.length !== arr2.length) {
    return false;
  }
  // The default sort is lexicographic, so a numeric comparator is needed
  arr1 = arr1.map(val => val ** 2).sort((a, b) => a - b);
  arr2 = [...arr2].sort((a, b) => a - b);
  for (let i = 0; i < arr1.length; i++) {
    if (arr1[i] !== arr2[i]) {
      return false;
    }
  }
  return true;
}
Frequency Counter Pattern ⇒ Time Complexity O(n)
function same(arr1, arr2) {
  if (arr1.length !== arr2.length) {
    return false;
  }
  let frequencyCounter1 = {};
  let frequencyCounter2 = {};
  for (let val of arr1) {
    frequencyCounter1[val] = (frequencyCounter1[val] || 0) + 1;
  }
  for (let val of arr2) {
    frequencyCounter2[val] = (frequencyCounter2[val] || 0) + 1;
  }
  for (let key in frequencyCounter1) {
    if (!(key ** 2 in frequencyCounter2)) {
      return false;
    }
    if (frequencyCounter2[key ** 2] !== frequencyCounter1[key]) {
      return false;
    }
  }
  return true;
}


Multiple Pointers
Create pointers or values that correspond to an index or position and move them towards the beginning, end, or middle based on a certain condition. This works on some sort of linear structure, like an array, string, or linked list, to search for a pair of values or for something that meets a condition. You use two reference points in the linear structure and work them toward the middle.

Sum Zero problem
Naive Solution ⇒ Time Complexity O(n^2)
function sumZero(arr){
  for (let i = 0; i < arr.length; i++){
    for (let j = i + 1; j < arr.length; j++){
      if (arr[i] + arr[j] === 0){
        return [arr[i], arr[j]];
      }
    }
  }
}
Pointer Solution ⇒ Time Complexity O(n) (assumes a sorted array)
function sumZero(arr){
  let left = 0;
  let right = arr.length - 1;
  while (left < right){
    let sum = arr[left] + arr[right];
    if (sum === 0){
      return [arr[left], arr[right]];
    } else if (sum > 0){
      right--;
    } else {
      left++;
    }
  }
}


Sliding Window
This pattern involves creating a window, which can be an array or a number, spanning from one position to another. Depending on a certain condition, the window either grows or shrinks. It is very useful for keeping track of a subset of data in an array/string.

Max Subarray sum problem
Naive Solution ⇒ Time Complexity O(n^2)
function maxSubArraySum(arr, num){
  if (num > arr.length){
    return null;
  }
  let total = -Infinity;
  for (let i = 0; i < arr.length - num + 1; i++){
    let temp = 0;
    for (let j = 0; j < num; j++){
      temp += arr[i + j];
    }
    if (temp > total){
      total = temp;
    }
  }
  return total;
}
Sliding Window Solution ⇒ Time Complexity O(n)
function maxSubarraySum(arr, num){
  let maxSum = 0;
  let tempSum = 0;
  if (arr.length < num) return null;
  for (let i = 0; i < num; i++) {
    maxSum += arr[i];
  }
  tempSum = maxSum;
  for (let i = num; i < arr.length; i++) {
    tempSum = tempSum - arr[i - num] + arr[i];
    maxSum = Math.max(maxSum, tempSum);
  }
  return maxSum;
}
maxSubarraySum([2,6,9,2,1,8,5,6,3], 3) // 19


Divide and Conquer
This pattern involves dividing a data set into smaller chunks and then repeating a process with a subset of data.

Binary Search Problem
Naive solution ⇒ Time Complexity O(n)
const search = (arr, val) => {
  // A plain loop; returning from inside .map does not return from search
  for (let i = 0; i < arr.length; i++){
    if (arr[i] === val){
      return i;
    }
  }
  return -1;
}
Divide and conquer ⇒ Time Complexity O(log n)
const search = (arr, val) => {
  let min = 0;
  let max = arr.length - 1;
  while (min <= max){
    let middle = Math.floor((min + max) / 2);
    if (arr[middle] < val){
      min = middle + 1;
    } else if (arr[middle] > val){
      max = middle - 1;
    } else {
      return middle;
    }
  }
  return -1;
}

Recursion
Recursion is a programming technique where a function calls itself to solve smaller instance of the same problem until it reaches a base case, which is a condition which stops recursion.

The Dragon and the Nested Caves Story
To explain recursion, let's use a story involving a dragon.
Imagine a dragon that has hidden its treasure in the deepest chamber of a series of nested caves. Each cave leads to another smaller cave, and this sequence continues until the smallest, innermost cave, which contains the treasure.
To find the treasure, you must enter the first cave, then the next cave within that, and so on, until you reach the last cave with the treasure. However, the path to the treasure is the same process repeated at each cave: enter the cave, look for the treasure or another cave, and if there's another cave, enter it and repeat the process.
Relating to Recursion
In this story:
 Each cave represents a recursive call. Just like entering a new cave is the same action repeated, a recursive function calls itself with a smaller portion of the problem.
 The treasure represents the base case. The base case in recursion is what stops the recursion from continuing indefinitely. Finding the treasure means there are no more caves to enter, just as reaching the base case means no more recursive calls are needed.
 The process of checking for treasure or another cave is the recursive condition. In a recursive function, there's usually a condition that checks whether it should make another recursive call (enter another cave) or return a value (find the treasure).
function findTreasure(cave) {
  if (cave.containsTreasure()) {
    // Base case: treasure found, stop recursion
    return "Treasure";
  } else {
    // Recursive case: enter the next cave
    return findTreasure(cave.nextCave());
  }
}
Understanding Recursion Through the Story
The dragon story helps illustrate how recursion breaks down a problem (finding the treasure) into smaller instances of the same problem (entering the next cave to look for the treasure) until a stopping condition is met (the treasure is found). Each step brings you closer to the goal, just as each recursive call brings the function closer to the base case.
Simple Count Down
function countDown(number){
  for (let i = number; i > 0; i--){
    console.log(i);
  }
  console.log('Countdown completed');
}
Recursive Count Down
function countDown(number){
  if (number <= 0){
    console.log('Countdown completed');
    return;
  }
  console.log(number);
  countDown(number - 1);
}

Factorial
Iteratively
function factorial(num){
  let total = 1;
  for (let i = 1; i <= num; i++){
    total = total * i;
  }
  return total;
}
Recursively
function factorial(num){
  if (num <= 1){
    return 1;
  }
  return num * factorial(num - 1);
}
Avoiding Common Pitfalls:
To avoid common recursion pitfalls, such as a missing base case, infinite recursion, and stack overflow, consider the following best practices when writing recursive functions:
 Clearly Define the Base Case: Ensure that your recursive function has a well-defined base case that is reached by every possible path through the function.
 Ensure Progress Toward the Base Case: Make sure each recursive call changes the state or input in a way that progresses toward the base case.
 Limit Recursion Depth: Be mindful of the potential depth of recursion and consider iterative alternatives or algorithms that minimise recursion depth, such as bottom-up dynamic programming, for deep recursion scenarios.
 Use Memoization: For problems with overlapping subproblems, use memoization or other dynamic programming techniques to store and reuse the results of subproblems, avoiding redundant computations.
 Be Cautious with Global Variables and Side Effects: Minimise the use of global variables and be aware of side effects within your recursive functions to ensure predictable and correct behavior.
 Test Thoroughly: Due to the nature of recursion and the variety of paths through the recursive function, thorough testing is crucial to uncover and fix potential issues related to base cases, state management, and edge cases.
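The memoization point above can be sketched with the classic Fibonacci example (a hypothetical helper, assuming fib(0) = 0 and fib(1) = 1):

```javascript
// Naive recursive fib recomputes the same subproblems exponentially
// often. Caching each result in `memo` makes every subproblem O(1)
// after its first computation, so the whole call becomes O(n).
function fib(n, memo = {}) {
  if (n <= 1) return n;
  if (n in memo) return memo[n];
  memo[n] = fib(n - 1, memo) + fib(n - 2, memo);
  return memo[n];
}

fib(50); // finishes instantly; the naive version would take ages
```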
Searching Algorithms
Searching algorithms are a fundamental aspect of computer science, designed to retrieve information stored within some data structure. Their primary purpose is to find an item or group of items with specific properties within a dataset. These algorithms vary widely in their mechanisms, efficiency, and the types of data structures they are best suited for.
1. Linear Search
Linear search is a straightforward method where each element of the array is checked sequentially from the beginning to the end until the target element is found or the array ends. It's simple to implement but not the most efficient, especially for large arrays.
function linearSearch(arr, target) {
for (let i = 0; i < arr.length; i++) {
if (arr[i] === target) {
return i; // Return the index of the target element
}
}
return -1; // Element not found
}
Best ⇒ O(1), Worst ⇒ O(n), Average ⇒ O(n)
2. Binary Search (sorted*)
Binary search is a much more efficient algorithm but requires the array to be sorted. It works by repeatedly dividing the search interval in half. If the value of the search key is less than the item in the middle of the interval, the search continues in the lower half, or if the value is greater, it continues in the upper half. This process repeats until the target value is found or the interval is empty.
function binarySearch(sortedArr, target) {
let left = 0;
let right = sortedArr.length - 1;
while (left <= right) {
let mid = Math.floor((left + right) / 2);
if (sortedArr[mid] === target) {
return mid; // Target found
} else if (sortedArr[mid] < target) {
left = mid + 1; // Continue search in the right half
} else {
right = mid - 1; // Continue search in the left half
}
}
return -1; // Target not found
}
Best ⇒ O(1), Worst ⇒ O(log n), Average ⇒ O(log n)
Sorting Algorithms
Sorting algorithms are methods used in computer science to rearrange a sequence of elements in a given array or list into a specific order, typically in numerical or lexicographical (alphabetical) ascending or descending order.
1. Bubble Sort
Bubble Sort is one of the simplest sorting algorithms. It repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. The pass through the list is repeated until the list is sorted. Despite its simplicity, Bubble Sort is highly inefficient for large datasets, with an average and worst-case time complexity of
$$ O(n^2) $$
where n is the number of elements. The best case is O(n), when the array is already sorted.
function bubbleSort(arr){
  let sorted = false;
  while (!sorted){
    let isSwapped = false;
    for (let i = 0; i < arr.length - 1; i++){
      if (arr[i] > arr[i + 1]){
        isSwapped = true;
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
      }
    }
    if (!isSwapped){
      sorted = true;
    }
  }
  return arr;
}
2. Selection Sort
Selection Sort divides the input list into two parts: a sorted sublist of items that is built up from left to right at the front of the list, and an unsorted sublist that occupies the rest of the list. The algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in the unsorted sublist, swapping it with the leftmost unsorted element, and moving the sublist boundary one element to the right. Like Bubble Sort, it has a time complexity of
$$ O(n^2) $$
making it inefficient for large lists.
function selectionSort(arr){
  let left = 0;
  while (left < arr.length - 1){
    let min = left;
    for (let i = left; i < arr.length; i++){
      if (arr[min] > arr[i]){
        min = i;
      }
    }
    if (min !== left){
      [arr[left], arr[min]] = [arr[min], arr[left]];
    }
    left++;
  }
  return arr;
}
3. Insertion Sort
Insertion Sort builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms like quicksort, heapsort, or merge sort. However, it offers several advantages: simple implementation, efficiency for small data sets, and better practical performance than most other simple quadratic algorithms like bubble sort or selection sort. It has an average and worst-case time complexity of
$$ O(n^2) $$
function insertionSort(arr) {
  for (let i = 1; i < arr.length; i++) {
    let current = arr[i];
    let j = i - 1;
    // Move elements of arr[0..i-1] that are greater than current
    // one position ahead of their current position
    while (j >= 0 && arr[j] > current) {
      arr[j + 1] = arr[j];
      j--;
    }
    arr[j + 1] = current;
  }
  return arr;
}
Comparing Bubble Sort, Selection Sort and Insertion Sort
| Algorithm | Time complexity (Best) | Time complexity (Average) | Time complexity (Worst) | Space complexity |
| --- | --- | --- | --- | --- |
| Bubble Sort | O(n) | O(n^2) | O(n^2) | O(1) |
| Selection Sort | O(n^2) | O(n^2) | O(n^2) | O(1) |
| Insertion Sort | O(n) | O(n^2) | O(n^2) | O(1) |