CST370 Week 8

This was our eighth week in CST 370 - Algorithms.

Dijkstra's Algorithm

The last algorithm that we covered in this class was Dijkstra's Algorithm. It is a greedy algorithm that efficiently solves the single-source shortest-path problem. To trace the algorithm we keep a table with one row and one column per vertex; each row records the costs after one step. In the first row we write down the cost to reach each vertex directly from the source, and infinity for any vertex not directly connected to it. Then, we visit the unvisited vertex with the lowest cost from the source (this is the greedy part). Next, we recalculate the cost for every vertex connected to the current vertex, adding the cost to reach the current vertex to the cost of the edge from the current vertex to that neighbor, and keeping whichever total is smaller. We repeat this process until we have visited every vertex. When writing down the cost to reach each vertex, we also record which vertex we came from. Finally, we can recover the optimal path if we start from the last node and wor...
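The table-based hand trace above maps naturally onto code. Below is a minimal Python sketch of the same idea, using a priority queue in place of scanning a row for the lowest cost; the adjacency-list graph format and the function name are my own choices for illustration:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths.

    graph: dict mapping vertex -> list of (neighbor, weight) pairs.
    Returns (dist, prev): the best-known cost to each vertex and the
    vertex we came from, so the path can be read off backwards.
    """
    dist = {v: float("inf") for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    visited = set()
    pq = [(0, source)]                    # (cost-from-source, vertex)
    while pq:
        d, u = heapq.heappop(pq)          # greedy: cheapest unvisited vertex
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph[u]:
            if d + w < dist[v]:           # found a cheaper route through u
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    return dist, prev
```

Reading `prev` backwards from the destination reproduces the path-recovery step at the end of the hand trace.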

CST370 - Week 7

This was our seventh week in CST 370 - Algorithms.

Counting Sort

Counting Sort is a sorting algorithm that doesn't directly compare elements to sort them. First, we loop through the unsorted array once. During this loop we find the range of values in the array (for example, integers 10-15) and count how many times each value appears; then we compute the distribution by adding each value's frequency to the running total of the values before it. Then, we loop through the unsorted array again, starting from the end. For each number, we place it at the index given by its distribution value, decrement that distribution value by 1, and continue to the next number.

The time efficiency of Counting Sort is O(n + k), where k is the range of values in the input. Counting Sort is also stable: numbers with the same value appear in the same relative order in both the input and the output.

Radix Sort

Radix Sort is another sorting algorithm that doesn't sort by directly comparing numbers. Instead, it runs Count...
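The count, distribution, and right-to-left placement steps can be sketched in Python roughly as follows (the function name and integer-only input are my own assumptions):

```python
def counting_sort(a):
    """Counting Sort: count frequencies, turn the counts into a running
    distribution, then place elements scanning right-to-left so the
    sort stays stable."""
    lo, hi = min(a), max(a)
    counts = [0] * (hi - lo + 1)          # one slot per value in the range
    for x in a:
        counts[x - lo] += 1               # frequency of each value
    for i in range(1, len(counts)):
        counts[i] += counts[i - 1]        # distribution: # of elements <= lo+i
    out = [0] * len(a)
    for x in reversed(a):                 # right-to-left keeps it stable
        counts[x - lo] -= 1
        out[counts[x - lo]] = x
    return out
```

Both passes over the input are O(n) and the distribution pass is O(k), giving the O(n + k) total mentioned above.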

CST370 - Week 6

This was our sixth week in CST 370 - Design and Analysis of Algorithms.

AVL Tree

An AVL Tree is a balanced binary search tree in which, for every node, the difference in height between its left and right subtrees is -1, 0, or 1. This difference is called the balance factor. What do we do if a binary search tree is unbalanced (not an AVL Tree)? We can perform an R, L, RL, or LR rotation to make it balanced. This AVL Tree Visualizer may be helpful.

2-3 Tree

A 2-3 Tree is a type of search tree where each node can hold one or two values. If an insertion leaves a node with three values, we promote the middle value to the parent node (or make it the new root node). Visualizations: 2-3 Trees, B-Trees

Heaps

A heap is a type of Complete Binary Tree. There are two types of heaps: a Max Heap and a Min Heap. Max Heap: the value of each node is greater than or equal to the values of its children. Min Heap: the value of each node is smaller than or...
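As a small illustration of the balance factor idea, here is a rough Python sketch that computes subtree heights and checks whether every node's balance factor stays in {-1, 0, 1}; the node class and function names are my own:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def height(node):
    """Height of a subtree; the empty subtree counts as -1."""
    if node is None:
        return -1
    return 1 + max(height(node.left), height(node.right))

def balance_factor(node):
    """height(left) - height(right); an AVL tree keeps this in {-1, 0, 1}."""
    return height(node.left) - height(node.right)

def is_avl(node):
    """True if every node in the tree satisfies the AVL condition."""
    if node is None:
        return True
    return (abs(balance_factor(node)) <= 1
            and is_avl(node.left) and is_avl(node.right))
```

A degenerate right-leaning chain of three nodes, for example, has a root balance factor of -2, which is exactly the situation an L rotation repairs.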

CST370 - Week 5

This was our fifth week in CST 370 - Design and Analysis of Algorithms.

Quick Sort

This week we discussed Quick Sort, an array sorting algorithm that uses the divide-and-conquer technique. In Quick Sort we pick a pivot (the first, last, or a random element of the array), put all elements that are less than the pivot on its left, and put all elements that are greater than the pivot on its right. Then we split the array into two parts around the pivot, solving each part recursively using the same steps.

The steps of Quick Sort:
1. We pick a pivot. This can be the first, last, or a random element of the array.
2. We initialize i at the first element after the pivot and j at the last element of the array.
3. We increment i until we find an element that is greater than the pivot.
4. Then we decrement j until we find an element that is less than the pivot.
5. Then we swap the elements at i and j.
6. We repeat steps 3-5 until i and j cross. Then we s...
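The i/j scan in the steps above can be sketched in Python roughly like this, assuming the first element is taken as the pivot; the final pivot swap shown here is one common Hoare-style variant, and the function name is my own:

```python
def quicksort(a, lo=0, hi=None):
    """Quick Sort with a Hoare-style partition: i scans right for an
    element >= pivot, j scans left for one <= pivot, swap, and repeat
    until the scans cross; then the pivot is swapped into place."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return a                          # 0 or 1 elements: already sorted
    pivot = a[lo]                         # first element as the pivot
    i, j = lo + 1, hi
    while True:
        while i <= hi and a[i] < pivot:   # step 3: scan right
            i += 1
        while a[j] > pivot:               # step 4: scan left
            j -= 1
        if i >= j:                        # step 6: the scans crossed
            break
        a[i], a[j] = a[j], a[i]           # step 5: swap out-of-place pair
        i += 1
        j -= 1
    a[lo], a[j] = a[j], a[lo]             # pivot to its final position
    quicksort(a, lo, j - 1)               # recurse on the left part
    quicksort(a, j + 1, hi)               # recurse on the right part
    return a
```

After the swap at the end, index j holds the pivot, so the two recursive calls can exclude it entirely.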

CST370 Week 4

This was our fourth week in CST 370 - Design and Analysis of Algorithms.

Merge Sort

This week we discussed Merge Sort. Merge Sort is a divide-and-conquer algorithm that sorts an array by dividing it into two smaller arrays, sorting those two arrays recursively, and recombining (or merging) the sorted arrays back together.

We also briefly discussed the time efficiency of the Merge Sort algorithm:
Recurrence Relation: T(n) = 2*T(n/2) + n-1 => 2*T(n/2) + Θ(n)
Using the Master Theorem: a = 2, b = 2, d = 1
Case 2: a = b^d = 2 => Θ(n log n)

Merge Sort Exercise

We also had an exercise to use Merge Sort on an array of numbers: [5, 3, 1, 9, 8, 2, 4, 7]
We first split this array into two halves: [5, 3, 1, 9] [8, 2, 4, 7]
We continue splitting the arrays:
[5, 3, 1, 9] => [5, 3] [1, 9] => [5] [3] [1] [9]
[8, 2, 4, 7] => [8, 2] [4, 7] => [8] [2] [4] [7]
Then we merge the arrays, sorting as we go:
[5] [3] [1] [9] => [3, 5] [1, 9] ...
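The split-then-merge process from the exercise can be sketched in Python roughly like this (the function name is my own):

```python
def merge_sort(a):
    """Divide the array in half, sort each half recursively, then merge
    the two sorted halves back together."""
    if len(a) <= 1:
        return a                          # base case: nothing to split
    mid = len(a) // 2
    left = merge_sort(a[:mid])            # sort each half recursively
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:           # take the smaller front element
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])               # one side may have leftovers
    merged.extend(right[j:])
    return merged
```

The merge loop does at most n-1 comparisons per level, which is where the Θ(n) term in the recurrence comes from.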

CST370 Week 3

This was our third week in CST 370 - Design and Analysis of Algorithms.

Brute Force String Matching

The first algorithm we looked at this week was Brute Force String Matching. In this algorithm we loop through each character of a string. For each character we loop again, starting at that character, until we either find the pattern we are looking for (in which case we return true) or find a character that does not match the pattern (in which case we go back to the outer loop).

Exhaustive Search

The class lecture defines Exhaustive Search (ES) as a brute-force approach to solving combinatorial problems. The basic concepts of Exhaustive Search are:
1. Generate all potential solutions (or all possible cases) to the problem.
2. Evaluate each case one by one.
3. Determine the solution among the possible cases.
Our class lecture discusses using ES to solve some different problems.

Traveling Salesman Problem (TSP)

To solve this problem using ES we compute each permutation of the gra...
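The two nested loops of the string-matching description might look roughly like this in Python (the function name and boolean return are my own choices):

```python
def brute_force_match(text, pattern):
    """Try every starting position in text; at each one, compare the
    pattern character by character until a mismatch or a full match."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):            # outer loop: each start position
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1                        # inner loop: compare the pattern
        if j == m:
            return True                   # the whole pattern matched at i
    return False                          # no start position worked
```

In the worst case the inner loop runs m times for each of the n-m+1 start positions, giving O(nm) comparisons.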

CST370 Week 2

This was our second week in CST 370 - Design and Analysis of Algorithms.

Asymptotic Notations

This week we discussed three different notations for measuring algorithm efficiency.

O notation, also called Big Oh, is defined as the set of all functions with the same or lower order of growth as f(n). It is used to measure the upper bound of an algorithm's efficiency. For example, the O notation of a sequential search would be O(n).

Θ notation, also called Big Theta, is defined as the set of all functions with the same order of growth as f(n). It is used to measure the exact efficiency of an algorithm. For example, an algorithm with an order of growth of n would be Θ(n). Θ notation cannot be used for an algorithm with a varying order of growth, such as a sequential search.

Ω notation, also called Big Omega, is defined as the set of all functions with the same or higher order of growth as f(n). It is used to measure the lower bound of an algorithm's efficiency. For example, the Ω...
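Sequential search, the running example above, makes the bounds concrete. A quick Python sketch (the function name is my own): its worst case touches all n elements, which is why it is O(n), while its best case stops at the first element, which is why a single Θ class cannot describe every input:

```python
def sequential_search(a, key):
    """Scan the array until the key is found; return its index, or -1.

    Worst case (key absent or last): n comparisons -> O(n).
    Best case (key first): 1 comparison -> the lower bound is constant.
    """
    for i, x in enumerate(a):
        if x == key:
            return i
    return -1
```

The same function is thus O(n) by its worst case and Ω(1) by its best case, but has no single Θ bound over all inputs.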