We will examine the insertion sort algorithm in detail and calculate its running time. Two points are important for understanding how sorting algorithms behave:

- Sorting a hundred elements takes longer than sorting five elements.
- The running time grows with the size of the input.

Insertion sort is analogous to sorting playing cards by hand: it works the way people naturally sort cards. One card is taken from the deck at a time and inserted into its proper place in the hand. Each subsequent card is handled in the same manner.

To insert a new card, every card in the hand with a value greater than the new card is shifted one position to the right. The new card is then placed into the gap created by shifting those k larger cards.
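The shifting step described above can be sketched in Python (the helper name `insert_into_sorted` is illustrative, not from the original text):

```python
def insert_into_sorted(hand, card):
    """Insert `card` into the already-sorted list `hand` by shifting
    every larger element one position to the right."""
    hand.append(card)                 # make room at the end
    i = len(hand) - 2
    while i >= 0 and hand[i] > card:  # shift larger cards right
        hand[i + 1] = hand[i]
        i -= 1
    hand[i + 1] = card                # drop the card into the gap
    return hand

print(insert_into_sorted([3, 7, 11, 17], 9))  # [3, 7, 9, 11, 17]
```

Repeating this insertion once per incoming card is exactly one pass of insertion sort.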

Insertion sort is an in-place method and requires no additional memory: the sorting is done within the input array itself. After iteration k, the first k items are always sorted.

The number of steps required to solve the problem on the RAM model is referred to as the running time. Each instruction may take a different amount of time; let the cost of the i-th instruction be c_{i}.

Image Source: https://www.dotnetlovers.com/article/127/implementation-and-analysis-of-insertion-sort

## Example of Insertion Sort

**Example:** **Sort the given set of numbers using insertion sort and show the result of each pass: < 11, 7, 17, 3, 9, 29, 85, 9 >**

**Solution:**

Initial array of numbers: < 11, 7, 17, 3, 9, 29, 85, 9 >

- Elements are scanned from the array one by one. In iteration k, if the incoming element is greater than the previous element, it stays at location k.
- If the k^{th} element is smaller than the (k – 1)^{st} element, keep shifting elements to the right until an element smaller than the incoming element is found, and insert the new element into the freed space. A step-by-step simulation of insertion sort is shown here.

**Step 1:** Read 11. It is the only number, so no shifting or swapping is required: < 11 | 7, 17, 3, 9, 29, 85, 9 >

**Step 2:** Read 7. The numbers 7 and 11 are out of order, so move 11 to the 2^{nd} position. There are no more elements to its left, so insert 7 into the vacant 1^{st} position: < 7, 11 | 17, 3, 9, 29, 85, 9 >

**Step 3:** Read 17. It is already in the correct position, so no shifting or swapping is required: < 7, 11, 17 | 3, 9, 29, 85, 9 >

**Step 4:** Read 3. It is out of position, so shift the elements greater than 3 one position to the right until the correct location for 3 is found: < 3, 7, 11, 17 | 9, 29, 85, 9 >

**Step 5:** Read 9. It is out of position, so shift the elements greater than 9 one position to the right until the correct location for 9 is found: < 3, 7, 9, 11, 17 | 29, 85, 9 >

**Step 6:** Read 29. It is already in the correct position, so no shifting or swapping is required: < 3, 7, 9, 11, 17, 29 | 85, 9 >

**Step 7:** Read 85. It is already in the correct position, so no shifting or swapping is required: < 3, 7, 9, 11, 17, 29, 85 | 9 >

**Step 8:** Read 9. It is out of position, so shift the elements greater than 9 one position to the right until the correct location for 9 is found: < 3, 7, 9, 9, 11, 17, 29, 85 >

This is the final sorted sequence: < 3, 7, 9, 9, 11, 17, 29, 85 >
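The passes above can be reproduced with a short Python sketch that prints the array after each pass (the sorted prefix grows from the left):

```python
def insertion_sort(a):
    """Sort list `a` in place, printing the array after each pass."""
    for k in range(1, len(a)):
        key = a[k]
        i = k - 1
        while i >= 0 and a[i] > key:  # shift larger elements right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                # insert into the freed slot
        print(f"Pass {k}: {a}")
    return a

insertion_sort([11, 7, 17, 3, 9, 29, 85, 9])
# final array: [3, 7, 9, 9, 11, 17, 29, 85]
```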

**Example: Let us sort the letters of the word “EXAMPLE” in alphabetical order using insertion sort.**

Initial array of letters: < E, X, A, M, P, L, E >

The array is scanned one element at a time. If the incoming character is greater than the preceding element in iteration k, it stays at position k.

If the k^{th} element is smaller than the (k – 1)^{st} element, keep shifting elements to the right until an element smaller than the incoming one is found, then insert the new element into the freed space. The following is a step-by-step simulation of insertion sort:

**Step 1:** Read E. E is the only character, so no shifting or swapping is required: < E | X, A, M, P, L, E >

**Step 2:** Read X. E and X are in order, so no shifting or swapping is required: < E, X | A, M, P, L, E >

**Step 3:** Read A. A and X are out of order, so move X to the 3^{rd} position. Compare A with E; they are also out of order, so move E to the 2^{nd} position. There are no more elements on the left, so insert A into the vacant 1^{st} position: < A, E, X | M, P, L, E >

**Step 4:** Read M. M and X are out of order, so move X to the 4^{th} position. Compare M with E; they are in order, so insert M into the vacant 3^{rd} position: < A, E, M, X | P, L, E >

**Step 5:** Read P. P and X are out of order, so move X to the 5^{th} position. Compare P with M; they are in order, so insert P into the vacant 4^{th} position: < A, E, M, P, X | L, E >

**Step 6:** Read L. L is less than X, P and M, so move all three characters one position to the right and insert L into the vacant 3^{rd} position: < A, E, L, M, P, X | E >

**Step 7:** Read E. E is less than X, P, M and L, so move all four characters one position to the right and insert E into the vacant 3^{rd} position: < A, E, E, L, M, P, X >

**Step 8:** No more elements are left in the array. The array is sorted.

This is the final sorted sequence: < A, E, E, L, M, P, X >

## Algorithm for Insertion Sort
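The listing itself is missing here; a standard Python version, annotated with the per-instruction cost labels c_{1}…c_{7} used in the complexity analysis below, might look like:

```python
def insertion_sort(A):
    """In-place insertion sort. Comments show the cost labels
    (c1..c7) referenced in the running-time analysis."""
    n = len(A)
    for j in range(1, n):             # c1 : tested n times
        key = A[j]                    # c2 : executed n - 1 times
        i = j - 1                     # c3 : executed n - 1 times
        while i >= 0 and A[i] > key:  # c4 : tested t_j times per pass
            A[i + 1] = A[i]           # c5 : executed t_j - 1 times
            i -= 1                    # c6 : executed t_j - 1 times
        A[i + 1] = key                # c7 : executed n - 1 times
    return A
```

Here t_{j} is the number of times the while-loop condition is tested for a given j.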

## Complexity Analysis

The size of the input array is *n*. The total time taken by the algorithm is the sum, over all instructions, of each instruction's cost multiplied by the number of times it is executed.

### Best case analysis:

The best case gives a lower bound on the algorithm’s running time: for any input sequence, the running time cannot be less than the best-case running time. The best case for insertion sort occurs when the data is already sorted. In this case the condition of the while loop fails immediately, so t_{j} = 1 and the loop body is never executed. So

T(n) = c_{1} n + c_{2} (n – 1) + c_{3} (n – 1) + c_{4} (n – 1) + c_{7} (n – 1)

= (c_{1} + c_{2} + c_{3} + c_{4} + c_{7}) n – (c_{2} + c_{3} + c_{4} + c_{7})

= a n + b, which is a linear function of n

= O(n)

### Worst case analysis:

The worst-case running time gives an upper bound on the running time for any input: for an arbitrary input sequence, the algorithm's running time cannot be worse than its worst-case running time. The worst case for insertion sort occurs when the data is sorted in reverse order, so A[j] must be compared with every element of the sorted subarray A[1 … j – 1]. Thus t_{j} = j, and T(n) turns out to be a quadratic function of n: the worst-case running time of insertion sort is O(n^{2}).

### Average case analysis:

The average case is often roughly as bad as the worst case. On average, half of the elements in A[1 … j – 1] are greater than A[j] and the other half are less, so t_{j} = j / 2. T(n) again turns out to be a quadratic function of *n*, so the average-case running time of insertion sort is O(n^{2}).
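The gap between the cases can be checked empirically by counting element shifts, which is a small instrumentation experiment rather than part of the analysis above: a sorted input causes 0 shifts, while a reverse-sorted input of size n causes n(n – 1)/2.

```python
def count_shifts(a):
    """Return the number of element shifts insertion sort performs on `a`."""
    a = list(a)
    shifts = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]   # one shift of a larger element
            shifts += 1
            i -= 1
        a[i + 1] = key
    return shifts

n = 8
print(count_shifts(range(n)))         # best case (sorted): 0 shifts
print(count_shifts(range(n, 0, -1)))  # worst case (reversed): n(n-1)/2 = 28
```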

The complexity of insertion sort is summarized in the following table:

| Best case | Average case | Worst case |
| --- | --- | --- |
| O(n) | O(n^{2}) | O(n^{2}) |

## Simulation

Elements in the black squares are sorted; the element in the red box is the one selected to be placed in its correct position; the rest of the elements are yet to be processed.

Image Source: https://commons.wikimedia.org/wiki/File:Insertion-sort-example.gif

## Properties of Insertion Sort

Insertion sort is a straightforward sorting algorithm that constructs the final sorted array (or list) one item at a time. On large lists it is significantly less efficient than more sophisticated algorithms such as quicksort, heapsort, or merge sort. Insertion sort, however, has several advantages:

- Easy to implement.
- Effective for (relatively) small data sets, like other quadratic sorting methods.
- In practice, more efficient than most other simple quadratic (i.e., O(n^{2})) algorithms such as selection sort or bubble sort.
- Adaptive: runs in near-linear time on an almost-sorted array.
- Stable: the relative order of equal elements does not change after sorting.
- In-place: requires only a constant amount (O(1)) of additional space to sort the given array.
- Online: it can sort a list as the data arrives, and does not require the entire array to be in memory while the algorithm runs.
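The stability property can be demonstrated by sorting records that compare only on a key; the data below is an illustrative example, not from the original text:

```python
def insertion_sort_pairs(a):
    """Sort (key, tag) pairs by key only. The strict `>` comparison
    never moves an element past an equal one, so equal keys keep
    their original relative order (stability)."""
    for j in range(1, len(a)):
        item = a[j]
        i = j - 1
        while i >= 0 and a[i][0] > item[0]:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = item
    return a

pairs = [(9, 'first'), (3, 'x'), (9, 'second'), (1, 'y')]
print(insertion_sort_pairs(pairs))
# [(1, 'y'), (3, 'x'), (9, 'first'), (9, 'second')]
```

Note that the two records with key 9 remain in their original order.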

## Discussion and Comparison

Insertion sort and selection sort are quite similar. After k passes over the array, the first k elements are in sorted order, just like in selection sort.

The primary distinction between the two methods is that insertion sort scans backwards from the current key, whereas selection sort scans ahead. As a result, the first k items in selection sort are the k smallest elements of the unsorted input, but the first k elements in insertion sort are simply the first k elements of the input.

Insertion sort performs the same number of comparisons as selection sort in the worst-case scenario (where the input array is reverse-sorted).

However, one disadvantage of insertion sort relative to selection sort is that it requires more writes: on each iteration, inserting the (k + 1)-st element into the sorted portion of the array requires shifting many of the following elements, whereas selection sort requires only a single swap.

In general, insertion sort will write to the array O(n^{2}) times, whereas selection sort writes only O(n) times.
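The difference in write counts can be illustrated by instrumenting both algorithms on a reverse-sorted input; this is a sketch that counts array assignments only:

```python
def insertion_sort_writes(a):
    """Insertion sort on `a`, returning the number of array writes."""
    writes = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            writes += 1
            i -= 1
        a[i + 1] = key
        writes += 1
    return writes

def selection_sort_writes(a):
    """Selection sort on `a`, counting two writes per swap."""
    writes = 0
    n = len(a)
    for k in range(n - 1):
        m = min(range(k, n), key=a.__getitem__)  # index of smallest remaining
        if m != k:
            a[k], a[m] = a[m], a[k]
            writes += 2
    return writes

n = 16
print(insertion_sort_writes(list(range(n, 0, -1))))  # O(n^2) writes: 135
print(selection_sort_writes(list(range(n, 0, -1))))  # O(n) writes: at most 2(n-1)
```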

As a result, in instances when writing to memory is substantially more expensive than reading, such as with EEPROM or flash memory, selection sort may be preferred.

While some divide-and-conquer algorithms like quicksort and merge sort outperform insertion sort for bigger arrays, non-recursive sorting algorithms like insertion sort or selection sort are often quicker for extremely small arrays (the exact size varies by environment and implementation, but is typically between 7 and 50 elements).

As a result, a useful optimization in the implementation of those algorithms is a hybrid approach: switch to the simpler technique once the array has been divided down to a sufficiently small size.
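A hybrid of this kind can be sketched as quicksort that falls back to insertion sort below a cutoff; the cutoff value 16 is an illustrative choice, since the best threshold varies by environment:

```python
CUTOFF = 16  # illustrative threshold; tuned per environment in practice

def insertion_sort_range(a, lo, hi):
    """Insertion-sort the subarray a[lo..hi] in place."""
    for j in range(lo + 1, hi + 1):
        key = a[j]
        i = j - 1
        while i >= lo and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    """Quicksort that switches to insertion sort on small subarrays."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:          # small subarray: simple method wins
        insertion_sort_range(a, lo, hi)
        return a
    pivot = a[(lo + hi) // 2]
    i, j = lo, hi
    while i <= j:                      # Hoare-style partition
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    hybrid_quicksort(a, lo, j)
    hybrid_quicksort(a, i, hi)
    return a
```

Production sorts use the same idea: divide-and-conquer for large ranges, a simple quadratic sort for small ones.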

## Some Popular Sorting Algorithms

The following are a few popular sorting algorithms used in computer science: