Linear Search vs. Binary Search: A Practical Guide in Go

Hello, in this post I am going to talk about different types of search algorithms. In essence, I am going to focus on linear search and binary search, illustrating how these algorithms work and comparing them in terms of time complexity, also known as Big O Notation.

All the code examples are provided in the Go programming language. However, I am confident that even if you're not familiar with Go but have programming experience, you'll be able to understand the examples with minimal effort.

Alright, let's dive in. We'll start with linear search, which is one of the simplest (perhaps even the most straightforward) algorithms out there. I would even call it intuitive: when faced with the task of finding a specific element within a collection, our first instinct is usually to examine each element in turn. That is exactly what this algorithm does: we begin at the start of the list or array and compare each element with the target, one by one, step by step, until we find it. If the target element is not present, we simply return a sentinel value such as -1 to indicate its absence.

Here is an example implementation in Go:

package main

import "fmt"

func linearSearch(arr []int, target int) int {
    for i, element := range arr {
        if element == target {
            return i // Return the index if the target is found
        }
    }
    return -1 // Return -1 if the target is not found
}

func main() {
    numbers := []int{2, 5, 1, 8, 3, 9, 4, 7, 6}
    target := 3
    index := linearSearch(numbers, target)

    if index != -1 {
        fmt.Printf("Element %d found at index %d\n", target, index)
    } else {
        fmt.Printf("Element %d not found in the array\n", target)
    }

    target = 10
    index = linearSearch(numbers, target)

    if index != -1 {
        fmt.Printf("Element %d found at index %d\n", target, index)
    } else {
        fmt.Printf("Element %d not found in the array\n", target)
    }
}

In this Go code, the linearSearch function takes an integer slice arr and an integer target as input. It iterates through each element and its corresponding index in the slice. If the element matches the target, the function immediately returns the index. If the loop completes without finding the target, the function returns -1. The main function demonstrates how to use the linearSearch function with a sample array and target values.
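
By the way, if you're on Go 1.21 or newer, the standard library already gives you this exact behavior through slices.Index, so in everyday code you'd often reach for that instead of hand-rolling the loop. Here's a minimal sketch:

package main

import (
    "fmt"
    "slices" // part of the standard library since Go 1.21
)

func main() {
    numbers := []int{2, 5, 1, 8, 3, 9, 4, 7, 6}

    // slices.Index performs the same left-to-right scan and returns the
    // index of the first match, or -1 if the element is absent.
    fmt.Println(slices.Index(numbers, 3))  // 4
    fmt.Println(slices.Index(numbers, 10)) // -1
}

Under the hood it does the same sequential scan and follows the same convention of returning -1 when the element is missing.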

Why is it called linear search, though? This is where Big O notation comes in. Big O describes the so-called time complexity of an algorithm, in our case, the cost of finding an element in a collection. Does it tell us the exact time needed to complete the search? No, of course not. Even though the term is called time complexity, it doesn't describe the precise wall-clock time required to do the work. Instead, it tells us how the number of operations the algorithm performs grows with the size of the input.

Like any algorithm, linear search behaves differently depending on the input, so let's look at its cases. The best-case scenario is when the target element is the very first one in the collection (the element at index 0). In that case, we don't need to go through the entire collection at all: the target is found on the first iteration, meaning we need just one operation.

Now, let's consider the worst case: the target element is at the very end of the collection. And yes, as you might have guessed, we will need to compare every element in the collection with our target just to find out that the target is the last element (the element at index len(arr)-1). In that case, we need N operations to complete the task, where N is the size of our array. If there are 10 elements in the array, that's not a big deal; our code iterates 10 times to find the target. But what happens if the array holds 1 million elements, or even worse, 1 billion, and the target is still the last element? The code will iterate 1 million or 1 billion times. This is linear time complexity: the number of operations required to complete the task grows linearly with the size of the input. So the time complexity of linear search in the worst case is proportional to the size of the input, N, or simply O(n). That's how Big O notation looks :) One more thing: Big O usually describes the worst-case scenario.
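
To make that growth tangible, here is a small experiment. The countingLinearSearch helper below is purely illustrative (it's not part of the implementation above); it performs the same scan but also reports how many comparisons were made when the target sits at the very end:

package main

import "fmt"

// countingLinearSearch is an illustrative helper (not part of the example
// above) that performs the same scan but also counts the comparisons made.
func countingLinearSearch(arr []int, target int) (int, int) {
    comparisons := 0
    for i, element := range arr {
        comparisons++
        if element == target {
            return i, comparisons
        }
    }
    return -1, comparisons
}

func main() {
    for _, n := range []int{10, 1000, 1000000} {
        arr := make([]int, n)
        for i := range arr {
            arr[i] = i
        }
        // Worst case: the target is the very last element.
        _, comparisons := countingLinearSearch(arr, n-1)
        fmt.Printf("n = %d, comparisons = %d\n", n, comparisons)
    }
}

If you run it, the comparison count simply equals the input size, which is exactly what O(n) is telling us.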

If linear search has a time complexity of O(n) and can indeed be slow with very large inputs, is there a more efficient alternative? Yes, there is, and it's called binary search. Binary search is significantly faster compared to linear search, boasting a time complexity of O(log n). This means that the number of operations required to complete the task grows logarithmically with the input size.

What does O(log n) actually mean? Let's recall some basic math. What is 2 raised to the power of 2? The answer is 4. What is 2 raised to the power of 3? It's 8. Now, what is a logarithm? Simply put, it's the inverse operation of exponentiation. For example, the logarithm base 2 of 8 (written as log₂(8)) is 3, because 2 raised to the power of 3 equals 8.
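
If you'd rather see these numbers than take them on faith, here is a tiny sketch that counts how many times a number can be halved before it reaches 1; for powers of two that count is exactly log₂(n):

package main

import "fmt"

// halvings counts how many times n can be cut in half before it reaches 1.
// For powers of two this is exactly log2(n).
func halvings(n int) int {
    steps := 0
    for n > 1 {
        n /= 2
        steps++
    }
    return steps
}

func main() {
    for _, n := range []int{8, 128, 1000000, 1000000000} {
        fmt.Printf("n = %d, halvings = %d\n", n, halvings(n))
    }
}

Even for a billion elements the count stays around 30, which hints at why halving the search space is such a powerful idea.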

So, why is this relevant to search algorithms? Imagine we have a sorted collection of integers with a size of 128, and we need to find the element 78. With linear search, we would potentially have to examine every element until we find the target, which isn't very efficient. Binary search offers a much better approach.

To use binary search, the collection must be sorted in ascending order (from the smallest to the largest). So, our input would look something like this: [0, 1, 2, 3, 4, 5, ..., 127].

Looking at this sorted array, we can divide it into two roughly equal halves: the first half from index 0 to 63 and the second half from index 64 to 127 (with an even number of elements, the midpoint falls between indices 63 and 64). Where might our target, 78, be? Since the array is sorted and 78 is larger than 63, the largest value in the first half, it must be in the second half. So do we still need to consider the first half? Not at all; we can focus our search on the second half (indices 64 to 127). With just one comparison against the middle element, we've effectively halved the search space from N (128) to N/2 (64).

What do we do next? We repeat the same process, but now within our new search boundaries (indices 64 to 127). We find the middle element of this sub-array, at index (64 + 127) / 2 = 95 (using integer division). We compare our target, 78, with this middle element. Since 78 is smaller than 95, the target must lie in the left half of this sub-array (indices 64 to 94). Again, we've halved our search space.

We continue this process of dividing the search interval in half. In each step, we compare the target value with the middle element of the current interval. If the target is equal to the middle element, we've found it. If the target is smaller than the middle element, we narrow our search to the left half. If the target is larger, we focus on the right half. We keep doing this until the target is found or the search interval becomes empty (meaning the target is not present in the array).

Because we are halving the search space in each step, the number of steps required to find the target (or determine its absence) grows logarithmically with the size of the input. For an array of size N, in the worst case, binary search will take approximately log₂(N) steps. For our example of 128 elements, log₂(128) is 7, which is significantly fewer operations than the potential 128 operations in linear search!
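
Before we get to the actual implementation, here is an instrumented sketch of the walkthrough above. The tracedBinarySearch name is made up for illustration; it just prints the current search interval on every step so you can watch the 128-element example shrink:

package main

import "fmt"

// tracedBinarySearch is an illustrative variant of binary search that
// prints the current search interval [left, right] on every step.
func tracedBinarySearch(arr []int, target int) int {
    left := 0
    right := len(arr) - 1

    for left <= right {
        mid := (left + right) / 2
        fmt.Printf("searching indices %d..%d, middle element = %d\n", left, right, arr[mid])
        if arr[mid] == target {
            return mid
        } else if arr[mid] < target {
            left = mid + 1
        } else {
            right = mid - 1
        }
    }
    return -1
}

func main() {
    // The sorted collection [0, 1, 2, ..., 127] from the walkthrough.
    arr := make([]int, 128)
    for i := range arr {
        arr[i] = i
    }
    fmt.Println("found at index:", tracedBinarySearch(arr, 78))
}

For this input it prints exactly seven intervals before reporting index 78, matching the log₂(128) = 7 estimate.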

Here's an example implementation of binary search in Go:

package main

import (
    "fmt"
    "sort"
)

func binarySearch(arr []int, target int) int {
    left := 0
    right := len(arr) - 1

    for left <= right {
        mid := (right + left) / 2 // finding the middle index
        if arr[mid] == target {
            return mid // Target found at the middle index
        } else if arr[mid] < target {
            left = mid + 1 // Target is in the right half
        } else {
            right = mid - 1 // Target is in the left half
        }
    }
    return -1 // Target not found
}

func main() {
    numbers := []int{9, 2, 5, 1, 8, 3, 10, 4, 7, 6}
    sort.Ints(numbers) // Important: Binary search requires a sorted array
    fmt.Println("Sorted array:", numbers)

    target := 7
    index := binarySearch(numbers, target)

    if index != -1 {
        fmt.Printf("Element %d found at index %d\n", target, index)
    } else {
        fmt.Printf("Element %d not found in the array\n", target)
    }

    target = 11
    index = binarySearch(numbers, target)

    if index != -1 {
        fmt.Printf("Element %d found at index %d\n", target, index)
    } else {
        fmt.Printf("Element %d not found in the array\n", target)
    }
}

In this Go code, the binarySearch function takes a sorted integer slice arr and an integer target as input. It initializes left and right pointers to the beginning and end of the slice, respectively. The for loop continues as long as the left pointer is less than or equal to the right pointer. In each iteration, it calculates the middle index mid. If the element at arr[mid] is equal to the target, the function returns mid. If arr[mid] is less than the target, it means the target must be in the right half of the slice, so we update left to mid + 1. Otherwise, if arr[mid] is greater than the target, the target must be in the left half, and we update right to mid - 1. If the loop finishes without finding the target, the function returns -1. The main function demonstrates the usage of binarySearch with a sample (unsorted) array, which is then sorted using sort.Ints before the search.
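
One last practical note: in real Go code you rarely need to write binary search by hand, because the standard library already provides it. Here's a short sketch using sort.SearchInts and, on Go 1.21+, slices.BinarySearch (both assume the slice is already sorted):

package main

import (
    "fmt"
    "slices"
    "sort"
)

func main() {
    numbers := []int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10} // must already be sorted

    // sort.SearchInts returns the position where the target would be
    // inserted, so we still check that the element is actually there.
    i := sort.SearchInts(numbers, 7)
    if i < len(numbers) && numbers[i] == 7 {
        fmt.Printf("Element 7 found at index %d\n", i)
    }

    // slices.BinarySearch (Go 1.21+) returns the position and a found flag.
    idx, found := slices.BinarySearch(numbers, 11)
    fmt.Println(idx, found) // 10 false
}

sort.SearchInts reports the insertion position rather than a found/not-found answer, which is why the extra check is needed; slices.BinarySearch returns both the position and a boolean.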

In conclusion, we've explored two fundamental search algorithms: linear search and binary search. While linear search offers a straightforward and intuitive approach by examining each element sequentially, its O(n) time complexity makes it less efficient for large datasets. Binary search, on the other hand, leverages the power of a sorted input to achieve a significantly faster O(log n) time complexity by repeatedly halving the search space. Understanding the strengths and weaknesses of each algorithm, particularly their time complexities, is crucial for choosing the most appropriate search method for a given task and dataset size. As we've seen, even a seemingly simple change in approach can lead to substantial performance improvements, especially when dealing with vast amounts of data.