What exactly is an algorithm?

When giving instructions to a computer, every step must be defined. Such a defined sequence of steps is called an algorithm.

An algorithm is a series of defined instructions that perform a task.

For example, we could break the task of making a sandwich into this algorithm:

  1. Retrieve the ingredients and tools, e.g. a knife.

  2. Slice the bread.

  3. Spread ranch on the bread.

  4. Place bacon, cheese and tomatoes on the bread.

  5. Combine the pieces of bread to make your sandwich complete.
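
To see the analogy in code, the same steps could be sketched as a C# method. The class name and helpers below are hypothetical stand-ins for the real-world actions, not part of any library:

class SandwichMaker {
    // The ordered sequence of calls is the algorithm;
    // each call is one defined step.
    static void MakeSandwich()
    {
        RetrieveIngredientsAndTools();
        SliceBread();
        SpreadRanch();
        AddToppings();
        CombineBread();
    }

    // Hypothetical helpers standing in for the real-world steps.
    static void RetrieveIngredientsAndTools() { }
    static void SliceBread() { }
    static void SpreadRanch() { }
    static void AddToppings() { }
    static void CombineBread() { }
}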

Knowing this, almost every line of code you write in C# is part of some algorithm; each line is an instruction working toward some task. The task might be to find the even numbers in an array or to validate whether a string has certain properties.
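
For instance, a minimal sketch of the first task might look like this (the method name findEvenNumbers is illustrative):

using System.Collections.Generic;

class Program {
    // Collect the even numbers in an array: a number is even
    // when dividing it by 2 leaves no remainder.
    static List<int> findEvenNumbers(int[] numbers)
    {
        var evens = new List<int>();

        foreach(int number in numbers) {
            if(number % 2 == 0) {
                evens.Add(number);
            }
        }

        return evens;
    }
}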

You can also take different sets of steps to complete the same task. In the example above, what if you had to make a separate trip to the store for each ingredient? You would make multiple trips, and you could only start assembling the sandwich after all the ingredients were gathered.

So, some sets of steps are more efficient than others, though they all have defined instructions that complete the same task.

How to optimize an algorithm

In the first sandwich algorithm, we assumed that all the proper ingredients and tools were already gathered in the kitchen. This saved us time by not running to the store.

The same is true for algorithms in code. To make an algorithm more efficient, you often have to make assumptions about your data.

Maybe your data will only contain strings of alphabetical characters. With this assumption, we can skip the steps of normalizing the string (in the sandwich analogy, we skip running to the grocery store).
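
As a sketch of that idea, compare two ways of checking whether two strings match. Both method names here are hypothetical, and the second version is only safe when the assumption about the data actually holds:

class Program {
    // Without any assumption about the data, we normalize both
    // strings (trim whitespace, lowercase) before comparing.
    static bool matches(string a, string b)
    {
        return a.Trim().ToLower() == b.Trim().ToLower();
    }

    // Assuming the data only contains lowercase alphabetical
    // strings, the normalization steps can be skipped entirely.
    static bool matchesAssumingClean(string a, string b)
    {
        return a == b;
    }
}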

If you make the wrong assumption about your data, the algorithm will likely make errors while performing the task. So note that:

A slow algorithm that works all the time is preferred to a fast algorithm that works sometimes.

This is why it is important to make informed assumptions about your data, so you can leverage these assumptions to optimize your algorithms.

Now let's make a C# algorithm more efficient.

Find maximum number

Prompt: Find the maximum number among three numbers.

Note: If there are duplicate values among the three numbers, we still return the maximum.

class Program {
    static int findMaximum(int a, int b, int c)
    {
        // a is the maximum if it beats b and is at least as large as c.
        if(a > b) {
            if(a > c || a == c) {
                return a;
            }
        }

        // b is the maximum if it beats c and is at least as large as a.
        if(b > c) {
            if(b > a || b == a) {
                return b;
            }
        }

        // If neither a nor b was returned, c is the maximum.
        return c;
    }
}

The method findMaximum contains the algorithm used to find the maximum among the three integers (the parameters a, b, and c).

There is one issue: the algorithm can be better. We can use a different set of steps to get the same results.

Let's try an algorithm that tracks the maximum number.

class Program {
    static int findMaximum(int a, int b, int c)
    {
        // Assume a is the maximum to start.
        int max = a;

        // Update the running maximum if b is larger.
        if(b > max) {
            max = b;
        }

        // Update again if c is larger.
        if(c > max) {
            max = c;
        }

        return max;
    }
}

Why is this more efficient?

The first algorithm makes between 2 and 6 comparisons per method call, and the method contains 9 operations in total (6 comparisons and 3 return statements).

The second algorithm always makes exactly 2 comparisons, and the method contains only 6 operations in total (one initial assignment, two comparisons, two conditional assignments, and a return). This algorithm is also easier to read because it explicitly keeps track of the maximum number.
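
As a quick sanity check, here is a small program that exercises the second version; the sample values are chosen only for illustration:

class Program {
    static int findMaximum(int a, int b, int c)
    {
        int max = a;
        if(b > max) { max = b; }
        if(c > max) { max = c; }
        return max;
    }

    static void Main()
    {
        // Each call prints the expected maximum.
        System.Console.WriteLine(findMaximum(3, 7, 5)); // 7
        System.Console.WriteLine(findMaximum(9, 9, 2)); // 9, duplicates still return the maximum
        System.Console.WriteLine(findMaximum(1, 2, 8)); // 8
    }
}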

But why did we analyze the algorithms based on the number of operations?

Up next is Big-O notation.