CST370 - Week 6

This was our sixth week in CST 370 - Design and Analysis of Algorithms.

AVL Tree
An AVL Tree is a balanced binary search tree where the difference in height between the left and right subtrees of every node is -1, 0, or 1. This difference is called the balance factor (or balancing factor).

What do we do if a binary search tree is unbalanced (not an AVL Tree)? We can perform an R, L, RL, or LR rotation to make it balanced.

This AVL Tree Visualizer may be helpful.

2-3 Tree
A 2-3 Tree is a type of search tree where each node can have one or two values. If a node ends up with three values, we promote the middle value to the parent node (or make it the new root node).

Visualizations: 2-3 Trees, B-Trees

Heaps
A heap is a type of Complete Binary Tree. There are two types of heaps: a Max Heap and a Min Heap.
Max Heap: the number at each node is greater than or equal to the numbers of its children.
Min Heap: the number at each node is smaller than or...
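The heap properties above can be checked directly on the usual array representation of a complete binary tree, where the children of index i live at 2i+1 and 2i+2. A minimal sketch (the function name is my own):

```python
def is_max_heap(a):
    """Check the max-heap property on an array-based complete binary tree:
    each node a[i] must be >= its children a[2*i + 1] and a[2*i + 2]."""
    n = len(a)
    for i in range(n):
        for child in (2 * i + 1, 2 * i + 2):
            if child < n and a[i] < a[child]:
                return False
    return True

print(is_max_heap([9, 5, 8, 1, 2, 7]))  # True: every parent >= its children
print(is_max_heap([9, 5, 8, 6, 2, 7]))  # False: 5 is smaller than its child 6
```

A min-heap check is the same with the comparison flipped.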

CST370 - Week 5

This was our fifth week in CST 370 - Design and Analysis of Algorithms.

Quick Sort
This week we discussed Quick Sort, an array sorting algorithm that uses the divide-and-conquer technique. In Quick Sort we pick a pivot (the first, last, or a random element of the array) and put all elements that are less than the pivot on its left, and all elements that are greater than the pivot on its right. Then we split the array into two parts, solving each part recursively using the same steps.

The steps of Quick Sort:
1. We pick a pivot. This can be the first, last, or a random element of the array.
2. We initialize i at the first element after the pivot and j at the last element of the array.
3. We increment i until we find an element that is greater than the pivot.
4. Then we decrement j until we find an element that is less than the pivot.
5. Then we swap the elements at i and j.
6. We repeat steps 3-5 until i and j are the same. Then we decrement j until we find an element that is less than the pivot. Then we s...
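The steps above can be sketched as a Hoare-style partition with the first element as the pivot (the exact pointer bookkeeping below is my own; lecture versions vary in the details):

```python
def quick_sort(a, lo=0, hi=None):
    """Quick Sort: partition around a pivot, then sort each side recursively."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[lo]                      # first element as pivot
    i, j = lo + 1, hi
    while True:
        while i <= hi and a[i] < pivot:  # advance i past elements < pivot
            i += 1
        while a[j] > pivot:              # move j back past elements > pivot
            j -= 1
        if i >= j:                       # pointers met or crossed
            break
        a[i], a[j] = a[j], a[i]          # swap out-of-place pair
        i += 1
        j -= 1
    a[lo], a[j] = a[j], a[lo]            # put the pivot in its final position
    quick_sort(a, lo, j - 1)             # sort the left part
    quick_sort(a, j + 1, hi)             # sort the right part

nums = [5, 3, 1, 9, 8, 2, 4, 7]
quick_sort(nums)
print(nums)  # [1, 2, 3, 4, 5, 7, 8, 9]
```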

CST370 Week 4

This was our fourth week in CST 370 - Design and Analysis of Algorithms.

Merge Sort
This week we discussed Merge Sort. Merge Sort is a divide-and-conquer algorithm that sorts an array by dividing it into two smaller arrays, sorting those two arrays recursively, and recombining (or merging) the sorted arrays back together.

We also briefly discussed the time efficiency of the Merge Sort algorithm:
Recurrence Relation: T(n) = 2*T(n/2) + n-1 => 2*T(n/2) + Θ(n)
Using the Master Theorem: a = 2, b = 2, d = 1
Case 2: a = b^d = 2 => Θ(n log n)

Merge Sort Exercise
We also had an exercise to use Merge Sort on an array of numbers: [5, 3, 1, 9, 8, 2, 4, 7]
We first split this array into two halves: [5, 3, 1, 9] [8, 2, 4, 7]
We continue splitting the arrays:
[5, 3, 1, 9] => [5, 3] [1, 9] => [5] [3] [1] [9]
[8, 2, 4, 7] => [8, 2] [4, 7] => [8] [2] [4] [7]
Then we merge the arrays, sorting as we go:
[5] [3] [1] [9] => [3, 5] [1, 9] ...
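The divide-and-merge process described above can be sketched as follows, using the exercise array as input:

```python
def merge_sort(a):
    """Merge Sort: split in half, sort each half recursively, then merge."""
    if len(a) <= 1:
        return a                      # a single element is already sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])        # sort the left half
    right = merge_sort(a[mid:])       # sort the right half
    # Merge the two sorted halves, always taking the smaller front element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])           # append whatever remains
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 3, 1, 9, 8, 2, 4, 7]))  # [1, 2, 3, 4, 5, 7, 8, 9]
```

The merge step does at most n-1 comparisons per level, which is where the Θ(n) term in the recurrence comes from.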

CST370 Week 3

This was our third week in CST 370 - Design and Analysis of Algorithms.

Brute Force String Matching
The first algorithm we looked at this week was Brute Force String Matching. In this algorithm we loop through each character of a string. Then for each character we loop again, starting at that character, until we find the pattern we are looking for (in which case we return true) or find a character that does not match the pattern (in which case we go back to the first loop).

Exhaustive Search
The class lecture defines Exhaustive Search (ES) as a brute-force approach to solving combinatorial problems. The basic concepts of Exhaustive Search are:
1. Generate all potential solutions (or all possible cases) to the problem.
2. Evaluate each case one by one.
3. Determine the solution among the possible cases.

Our class lecture discusses using ES to solve some different problems.

Traveling Salesman Problem (TSP)
To solve this problem using ES we compute each permutation of the gra...
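The two nested loops of brute-force matching can be sketched like this (function name and sample strings are my own):

```python
def brute_force_match(text, pattern):
    """Brute-force string matching: try the pattern at every starting position."""
    n, m = len(text), len(pattern)
    for i in range(n - m + 1):        # outer loop: each possible start in the text
        j = 0
        while j < m and text[i + j] == pattern[j]:  # inner loop: compare characters
            j += 1
        if j == m:                    # the whole pattern matched at position i
            return True
    return False                      # no starting position worked

print(brute_force_match("NOBODY_NOTICED_HIM", "NOT"))  # True
print(brute_force_match("NOBODY_NOTICED_HIM", "CAT"))  # False
```

In the worst case the inner loop runs m times for each of the roughly n outer positions, giving O(nm) comparisons.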

CST370 Week 2

This was our second week in CST 370 - Design and Analysis of Algorithms.

Asymptotic Notations
This week we discussed three different notations for measuring algorithm efficiency.

O notation, also called Big Oh, is defined as the set of all functions with the same or lower order of growth as f(n). It is used to measure the upper bound of an algorithm's efficiency. For example, the O notation of a sequential search would be O(n).

Θ notation, also called Big Theta, is defined as the set of all functions with the same order of growth as f(n). It is used to measure the exact efficiency of an algorithm. For example, an algorithm with an order of growth of n would be Θ(n). Θ notation cannot be used for an algorithm with a varying order of growth, such as a sequential search.

Ω notation, also called Big Omega, is defined as the set of all functions with the same or higher order of growth as f(n). It is used to measure the lower bound of an algorithm's efficiency. For example, the Ω...
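The sequential search mentioned above makes a good concrete example of why O and Θ differ: its comparison count varies with the input, from one comparison in the best case to n in the worst. A minimal sketch:

```python
def sequential_search(a, key):
    """Sequential search: compare the key with each element in turn.
    Best case 1 comparison, worst case n, so it is O(n) but has no single Θ."""
    for i, value in enumerate(a):
        if value == key:
            return i                  # found: index of the key
    return -1                         # not found after checking all n elements

data = [4, 8, 15, 16, 23, 42]
print(sequential_search(data, 4))     # 0  (best case: one comparison)
print(sequential_search(data, 99))    # -1 (worst case: n comparisons)
```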

CST 370 Week 1

This was our first week in CST 370 - Design and Analysis of Algorithms.

What is an algorithm?
Our textbook, Introduction to the Design and Analysis of Algorithms, defines an algorithm as a sequence of unambiguous instructions for solving a problem. It also gives a list of important points for algorithm design (p. 29):
The nonambiguity requirement for each step of an algorithm cannot be compromised.
The range of inputs for which an algorithm works has to be specified carefully.
The same algorithm can be represented in several different ways.
There may exist several algorithms for solving the same problem.
Algorithms for the same problem can be based on very different ideas and can solve the problem with dramatically different speeds.

Another important idea is that algorithms are system agnostic: they can be implemented in any programming language, or even carried out manually. The textbook presents algorithms using pseudocode.

Algorithm Examples
Euclid'...
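The truncated example above appears to be Euclid's algorithm for the greatest common divisor, a standard first example in this textbook. A minimal sketch of the recurrence gcd(m, n) = gcd(n, m mod n):

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly apply gcd(m, n) = gcd(n, m mod n)
    until n becomes 0, at which point m is the answer."""
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))  # 12
print(gcd(12, 7))   # 1  (12 and 7 are relatively prime)
```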

Week 32 - CST334 Final Week

CST 334 - Week 8
This is our eighth and final week in CST 334, also known as Operating Systems. In this class we learned about virtualization, concurrency, and persistence.

Virtualization
The OS gives each process the illusion of having the CPU to itself, when in reality it is sharing it with the OS and other processes. The OS does this in a variety of ways:
Processes are executed directly on the CPU.
The CPU uses interrupts and voluntary yields to regularly switch between different processes.
The OS saves the current 'context' of each process when it is paused, and loads it when the process is run again.
Many scheduling policies, such as FIFO, Round Robin, or Multi-Level Feedback, exist to give different processes a fair amount of CPU time.

The OS also gives each process the illusion of having all memory to itself, when it is likewise sharing memory with other processes. Each process is presented with virtual addresses, which the CPU translates to physical addresses. The CPU use...