go-ds

Data structure implementations in Golang

A library which aims to explain and implement data structures in Golang.

Types of data structures

Arrays

An array is a data structure that holds a fixed-size collection of elements of the same data type, such as integers or strings. Arrays are commonly used in computer programs to organize data so that a related set of values can be easily sorted or searched.
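
For reference, a minimal Go sketch of an array declaration (illustrative only, not code from this repository):

    // A fixed-size Go array: the length 5 is part of the type [5]int.
    var nums = [5]int{3, 1, 4, 1, 5}

    // A slice view over the same elements; the sketches below operate on slices.
    var view []int = nums[:]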

Operations on Arrays:

Searching

1. Linear Search: A linear search or sequential search is a method for finding an element within a list. It sequentially checks each element of the list until a match is found or the whole list has been searched.

Worst Case Time Complexity: O(n)
Best Case Time Complexity: O(1)
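
A minimal Go sketch of linear search (the function name is illustrative and may differ from the code in this repository):

    // linearSearch returns the index of target in a, or -1 if it is not present.
    func linearSearch(a []int, target int) int {
        for i, v := range a {
            if v == target {
                return i
            }
        }
        return -1
    }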

2. Binary Search: A binary search or half-interval search finds an element in a sorted array by comparing the target with the middle element to determine whether it lies in the first half or the second half, then repeating the process on that half until the target is found.

Worst Case Time Complexity: O(log n)
Best Case Time Complexity: O(1)
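
A minimal Go sketch of binary search over a sorted slice (illustrative, not necessarily the repository's implementation):

    // binarySearch returns the index of target in the sorted slice a, or -1.
    func binarySearch(a []int, target int) int {
        lo, hi := 0, len(a)-1
        for lo <= hi {
            mid := lo + (hi-lo)/2
            switch {
            case a[mid] == target:
                return mid
            case a[mid] < target:
                lo = mid + 1 // target lies in the second half
            default:
                hi = mid - 1 // target lies in the first half
            }
        }
        return -1
    }

Computing the midpoint as lo + (hi-lo)/2 avoids the integer overflow that lo+hi could cause on very large slices.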

Sorting

1. Bubble Sort: It is a comparison-based sorting algorithm in which each pair of adjacent elements is compared and the elements are swapped if they are out of order.

Worst Case Time Complexity: O(n^2)
Best Case Time Complexity: O(n)
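
A minimal Go sketch of bubble sort (illustrative only):

    // bubbleSort repeatedly swaps adjacent out-of-order elements; the early
    // exit when no swap happens gives the O(n) best case on sorted input.
    func bubbleSort(a []int) {
        for n := len(a); n > 1; n-- {
            swapped := false
            for i := 1; i < n; i++ {
                if a[i-1] > a[i] {
                    a[i-1], a[i] = a[i], a[i-1]
                    swapped = true
                }
            }
            if !swapped {
                return
            }
        }
    }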

2. Insertion Sort: Insertion sort is based on the idea that one element from the input is consumed in each iteration and placed at its correct position, i.e., the position it belongs to in the sorted part of the array.

Worst Case Time Complexity: O(n^2)
Best Case Time Complexity: O(n)
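
A minimal Go sketch of insertion sort (illustrative only):

    // insertionSort takes one element per iteration and shifts larger
    // elements one position right until its correct position is found.
    func insertionSort(a []int) {
        for i := 1; i < len(a); i++ {
            key := a[i]
            j := i - 1
            for j >= 0 && a[j] > key {
                a[j+1] = a[j]
                j--
            }
            a[j+1] = key
        }
    }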

3. Selection Sort: The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering ascending order) in the unsorted part and putting it at the beginning. The algorithm maintains two subarrays within the given array: a sorted subarray at the front and the remaining unsorted subarray.

Worst Case Time Complexity: O(n^2)
Best Case Time Complexity: O(n^2)
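
A minimal Go sketch of selection sort (illustrative only):

    // selectionSort grows the sorted prefix by repeatedly selecting the
    // minimum of the unsorted suffix and swapping it into place.
    func selectionSort(a []int) {
        for i := 0; i < len(a)-1; i++ {
            min := i
            for j := i + 1; j < len(a); j++ {
                if a[j] < a[min] {
                    min = j
                }
            }
            a[i], a[min] = a[min], a[i]
        }
    }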

4. Quick Sort: Quick sort is a highly efficient algorithm based on the divide-and-conquer strategy: the given array is partitioned into two smaller arrays around a chosen value called the pivot, one holding the values smaller than the pivot and the other holding the values greater than it, and each part is then sorted recursively.

Worst Case Time Complexity: O(n^2)
Best Case Time Complexity: O(n log n)
Average Time Complexity: O(n log n)
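
A minimal Go sketch of quick sort using the Lomuto partition scheme with the last element as pivot (the repository's pivot choice may differ):

    // quickSort partitions the slice around the pivot and recurses on both parts.
    func quickSort(a []int) {
        if len(a) < 2 {
            return
        }
        pivot := a[len(a)-1]
        i := 0
        for j := 0; j < len(a)-1; j++ {
            if a[j] < pivot {
                a[i], a[j] = a[j], a[i]
                i++
            }
        }
        a[i], a[len(a)-1] = a[len(a)-1], a[i] // place pivot at its final position
        quickSort(a[:i])
        quickSort(a[i+1:])
    }

Choosing the last element as pivot keeps the sketch short; a random or median-of-three pivot makes the O(n^2) worst case far less likely on already sorted input.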

5. Merge Sort: Like quick sort, merge sort is based on the divide-and-conquer methodology. It divides the input array into two halves, calls itself recursively on each half and then merges the two sorted halves.

Worst Case Time Complexity: O(n log n)
Best Case Time Complexity: O(n log n)
Average Time Complexity: O(n log n)
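
A minimal Go sketch of merge sort (illustrative only; it returns a new sorted slice rather than sorting in place):

    // mergeSort splits the slice in half, sorts each half recursively
    // and merges the two sorted halves into a new slice.
    func mergeSort(a []int) []int {
        if len(a) < 2 {
            return a
        }
        left := mergeSort(a[:len(a)/2])
        right := mergeSort(a[len(a)/2:])

        merged := make([]int, 0, len(a))
        i, j := 0, 0
        for i < len(left) && j < len(right) {
            if left[i] <= right[j] {
                merged = append(merged, left[i])
                i++
            } else {
                merged = append(merged, right[j])
                j++
            }
        }
        merged = append(merged, left[i:]...)  // append whatever remains
        merged = append(merged, right[j:]...) // in either half
        return merged
    }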

6. Heap Sort: Heap sort is a comparison-based, in-place sorting technique built on the binary heap data structure. It involves building a heap from the given array and then repeatedly extracting the root to sort the array.

Worst Case Time Complexity: O(n log n)
Best Case Time Complexity: O(n log n)
Average Time Complexity: O(n log n)
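
A minimal Go sketch of heap sort using a max-heap (illustrative only):

    // heapSort builds a max-heap in place, then repeatedly moves the root
    // (the current maximum) to the end of the unsorted region.
    func heapSort(a []int) {
        for i := len(a)/2 - 1; i >= 0; i-- {
            siftDown(a, i, len(a))
        }
        for end := len(a) - 1; end > 0; end-- {
            a[0], a[end] = a[end], a[0]
            siftDown(a, 0, end)
        }
    }

    // siftDown restores the max-heap property for the subtree rooted at i,
    // looking only at the first n elements of a.
    func siftDown(a []int, i, n int) {
        for {
            largest := i
            if l := 2*i + 1; l < n && a[l] > a[largest] {
                largest = l
            }
            if r := 2*i + 2; r < n && a[r] > a[largest] {
                largest = r
            }
            if largest == i {
                return
            }
            a[i], a[largest] = a[largest], a[i]
            i = largest
        }
    }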

7. Radix Sort: Radix sort is not a comparison-based algorithm; it is an integer sorting algorithm that sorts data with integer keys by grouping the keys by individual digits that share the same significant position and value.

Time Complexity: O(k*n)
where k is the number of digits in the largest key, since the algorithm makes one pass per digit position.
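
A minimal Go sketch of LSD radix sort for non-negative integers, using a counting sort per decimal digit (illustrative only):

    // radixSort sorts non-negative integers digit by digit, least
    // significant digit first, with a stable counting sort per pass.
    func radixSort(a []int) {
        max := 0
        for _, v := range a {
            if v > max {
                max = v
            }
        }
        buf := make([]int, len(a))
        for exp := 1; max/exp > 0; exp *= 10 {
            var count [10]int
            for _, v := range a {
                count[(v/exp)%10]++
            }
            for d := 1; d < 10; d++ {
                count[d] += count[d-1] // prefix sums give final positions
            }
            for i := len(a) - 1; i >= 0; i-- { // backwards keeps the pass stable
                d := (a[i] / exp) % 10
                count[d]--
                buf[count[d]] = a[i]
            }
            copy(a, buf)
        }
    }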

8. Shell Sort: Shell sort is a highly efficient sorting algorithm based on insertion sort. It avoids the large shifts insertion sort requires when a small value sits far to the right and has to be moved far to the left.

The algorithm first performs an insertion sort on widely spaced elements and then sorts progressively less widely spaced elements. This spacing is termed the interval, calculated here with Knuth's formula (h = h * 3 + 1).

Worst Case Time Complexity: depends on the gap sequence
Average Time Complexity: O(n log^2 n) or O(n^(3/2))
Best Case Time Complexity: O(n)
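
A minimal Go sketch of shell sort with Knuth's gap sequence (illustrative only):

    // shellSort performs a gapped insertion sort for each interval in
    // Knuth's sequence (1, 4, 13, 40, ...), finishing with gap 1.
    func shellSort(a []int) {
        h := 1
        for h < len(a)/3 {
            h = h*3 + 1 // Knuth's formula
        }
        for ; h >= 1; h /= 3 {
            for i := h; i < len(a); i++ {
                key := a[i]
                j := i
                for j >= h && a[j-h] > key {
                    a[j] = a[j-h]
                    j -= h
                }
                a[j] = key
            }
        }
    }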

9. Cycle Sort: Cycle sort is an in-place, unstable comparison sort that is theoretically optimal in terms of the total number of writes to the original array, unlike other in-place sorting algorithms. It is based on the idea that the permutation to be sorted can be factored into cycles, which can individually be rotated to give a sorted result.

Worst Case Time Complexity: Θ(n^2)
Best Case Time Complexity: Θ(n^2)
Average Time Complexity: Θ(n^2)
Worst Case Space Complexity: Θ(n) total, Θ(1) auxiliary
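
A minimal Go sketch of cycle sort (illustrative only):

    // cycleSort rotates each cycle of the permutation into place, writing
    // each element at most once to its final position.
    func cycleSort(a []int) {
        for start := 0; start < len(a)-1; start++ {
            item := a[start]

            // Count the elements smaller than item to find where it belongs.
            pos := start
            for i := start + 1; i < len(a); i++ {
                if a[i] < item {
                    pos++
                }
            }
            if pos == start {
                continue // already in the right place
            }
            for item == a[pos] { // skip over duplicates
                pos++
            }
            a[pos], item = item, a[pos]

            // Rotate the rest of the cycle until we return to start.
            for pos != start {
                pos = start
                for i := start + 1; i < len(a); i++ {
                    if a[i] < item {
                        pos++
                    }
                }
                for item == a[pos] {
                    pos++
                }
                a[pos], item = item, a[pos]
            }
        }
    }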

Linked lists

It is a data structure for storing collections of data in which each element (node) holds a value and a pointer to the next node; the nodes need not be contiguous in memory, so the list can grow and shrink during execution.

Types of linked list

  1. Singly linked list
  2. Doubly linked list
  3. Circular linked list

Operations:

Insertion:

  1. Insertion at the beginning
  2. Insertion at the end
  3. Insertion in the middle

Deletion:

  1. Deletion from the beginning
  2. Deletion from the end
  3. Deletion from the middle
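
A minimal Go sketch of a singly linked list with a few of the operations above (names are illustrative, not necessarily the repository's API):

    // node is a single element of the list.
    type node struct {
        value int
        next  *node
    }

    // linkedList keeps a pointer to the first node.
    type linkedList struct {
        head *node
    }

    // insertAtBeginning makes the new node the head of the list.
    func (l *linkedList) insertAtBeginning(v int) {
        l.head = &node{value: v, next: l.head}
    }

    // insertAtEnd walks to the last node and links the new node after it.
    func (l *linkedList) insertAtEnd(v int) {
        n := &node{value: v}
        if l.head == nil {
            l.head = n
            return
        }
        cur := l.head
        for cur.next != nil {
            cur = cur.next
        }
        cur.next = n
    }

    // deleteFromBeginning removes the head node, if there is one.
    func (l *linkedList) deleteFromBeginning() {
        if l.head != nil {
            l.head = l.head.next
        }
    }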

Stack

A stack is an ordered list in which insertion and deletion are done at one end, called the top. The last element inserted is the first one to be deleted, hence it is called a LIFO (Last In First Out) or FILO (First In Last Out) list.

Operations:

  1. Push
  2. Pop
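
A minimal Go sketch of a slice-backed stack (illustrative only):

    // stack stores its elements in a slice; the end of the slice is the top.
    type stack struct {
        items []int
    }

    // Push places v on top of the stack.
    func (s *stack) Push(v int) {
        s.items = append(s.items, v)
    }

    // Pop removes and returns the top element; ok is false if the stack is empty.
    func (s *stack) Pop() (v int, ok bool) {
        if len(s.items) == 0 {
            return 0, false
        }
        v = s.items[len(s.items)-1]
        s.items = s.items[:len(s.items)-1]
        return v, true
    }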

Tree

A tree is an example of a non-linear data structure. It is similar to a linked list, but instead of each node pointing simply to the next node in a linear fashion, each node can point to a number of nodes. A tree structure is a way of representing the hierarchical nature of a structure in graphical form.

In trees, the order of the elements is not important. If we need ordering information, linear data structures like linked lists, stacks and queues can be used.

                3              <-- Level-0
               / \
              1   4            <-- Level-1
                 / \
                2   6          <-- Level-2

Traversal in trees

  1. Pre Order Traversal
  2. In Order Traversal
  3. Post Order Traversal
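
A minimal Go sketch of a binary tree node and the three traversals (illustrative only):

    // treeNode is a binary tree node; each node points to up to two children.
    type treeNode struct {
        value       int
        left, right *treeNode
    }

    // preOrder visits the root, then the left subtree, then the right subtree.
    func preOrder(n *treeNode, visit func(int)) {
        if n == nil {
            return
        }
        visit(n.value)
        preOrder(n.left, visit)
        preOrder(n.right, visit)
    }

    // inOrder visits the left subtree, then the root, then the right subtree.
    func inOrder(n *treeNode, visit func(int)) {
        if n == nil {
            return
        }
        inOrder(n.left, visit)
        visit(n.value)
        inOrder(n.right, visit)
    }

    // postOrder visits the left subtree, then the right subtree, then the root.
    func postOrder(n *treeNode, visit func(int)) {
        if n == nil {
            return
        }
        postOrder(n.left, visit)
        postOrder(n.right, visit)
        visit(n.value)
    }

For the tree drawn above, preOrder visits 3 1 4 2 6, inOrder visits 1 3 2 4 6 and postOrder visits 1 2 6 4 3.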