# How should I design my algorithms: dynamic or greedy?


We use software to automate tedious, repetitive tasks. The first step is to define our problem and the problem space, which is usually done with ease. The next step is to devise a set of instructions that our computer can execute to solve that problem. It is this latter step that we shall spend most (if not all) of our effort understanding how to do with ease too.

Before we set off exploring some of the ways we can design our algorithms, I think it is worth conceptualising what a ‘good’ algorithm looks like.

We would say our algorithm should be efficient. That is definitely a characteristic of a good algorithm.

But how do we define efficiency?

Well, we would say our algorithm should be relatively time efficient, i.e. we should ask:

What affects the program’s runtime? (For example, overheads produced by recursion.)

How complex or time-consuming is each instruction? (How time-consuming is finding the next prime number?)

Does our program become very slow over time? (Are there many loops in the algorithm, or are we printing to the console?)

Another criterion would be how space efficient the algorithm is, which means asking:

How much memory do the data structures we use take up?

What kind of data structure can we use, and how does our choice affect the runtime?

Does our algorithm take up a lot of memory over time?

I mention the concept of “over a period of time” in both criteria, but what relevance does that have to an algorithm?

Well, let’s look at an example.

Say we are writing a function, in pseudocode, that takes a string and returns its length:

```
function str_length (string)
    char = string[0]
    length = 0
    while char != null
        length++
        char = string[length]
    endwhile
    return length
endfunction
```
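As a sanity check, here is a minimal Python sketch of the same idea (the function name is my own). Python strings are not null-terminated, so instead of the sentinel check we simply walk the characters one by one:

```python
def str_length(string):
    # Count the characters one by one, mirroring the pseudocode's loop;
    # Python strings have no null terminator, so we iterate instead.
    length = 0
    for _ in string:
        length += 1
    return length

print(str_length("hello"))  # 5
```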

“Over a period of time” refers to the growth of the input and the corresponding growth in the work our algorithm does. In this case, what happens when the word is 173 characters long?

That would mean length gets incremented 173 times. So we could say that, given a string of length n, we will loop n times.

The notation for this idea is called “Big O”. If we were to put what we just said into Big O form, it would look like this:

`O(n), for our function f(n)`

and that is it, really. What this says is that, for a function f(n), the amount of time taken to execute (the time complexity) is proportional to the size of our input.

What about the space complexity of this function?

Well, char is always going to be the size of one character, as we just change the index each time.

The size of the counter itself isn’t really a concern because it is stored as a 64-bit integer, which can hold numbers from -2^63 to 2^63 - 1. And no word is longer than that! So memory is only allocated for one integer.

Moreover, because nothing grows as the word gets larger, we would say this is

`O(1)`, or constant space complexity.

Let’s look at an algorithm where space increases as the size of our input increases.

Say I’m working in a vet centre, with all kinds of cats and dogs of different sizes and breeds. The centre needs to record important details of each pet, like breed, age, name etc.

If we think about the characteristics of an array, we could say that elements are stored contiguously (in one block of memory), which makes it very easy to access any element by its index.

Our access time would actually be

`O(1)`

However, if we wanted to delete or insert elements, we would have to shift all the elements after that position. While strictly this is n - 1 shifts in the worst case, for massive data structures the “-1” becomes negligible, and so we have

`O(n)`

As we can see, if we were to add more and more pets to our array, the size of our structure has to increase. But there is a problem: with the array implementation, memory allocation is static. If we got to the point where we needed more memory, all the elements would have to be moved into a larger block!

On the other hand, if we were to use a linked list, we wouldn’t have to worry about this kind of thing. We just point the last node of the current list at the new element. But then searching for a particular element can take a long time!

It is this kind of back and forth that can make devising a set of instructions difficult. All in all, we reach a trade-off between access and modification. We would have to take a deeper look at our design and choose what to prioritise.
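To make the trade-off concrete, here is a small Python sketch (the pet names and the `Node` class are illustrative, not from the article): a Python list gives O(1) access by index but O(n) front insertion, while a hand-rolled linked list gives O(1) appends given the tail, but O(n) search.

```python
# Array (Python list): index access is O(1), but inserting at the
# front shifts every element, which is O(n).
pets = ["cat", "dog", "parrot"]
assert pets[1] == "dog"          # O(1) access by index
pets.insert(0, "hamster")        # O(n): shifts the other three along

# Linked list: appending after the tail is O(1) once we hold the
# tail node, but finding an element means walking node by node, O(n).
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

head = tail = Node("cat")
for name in ("dog", "parrot"):
    tail.next = Node(name)       # O(1) append given the tail
    tail = tail.next

def find(head, name):
    # O(n) linear search through the chain of nodes
    node = head
    while node is not None:
        if node.value == name:
            return node
        node = node.next
    return None

print(find(head, "parrot").value)  # parrot
```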

But sometimes being ‘good enough’ is OK. Sometimes an optimal solution isn’t needed, or isn’t practical to compute at all.

An example of this would be the travelling salesperson problem.

The problem asks: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city and returns to the origin city?”

Because a brute-force solution checks every possible ordering of the cities, its complexity is O(n!). n! means n factorial, so

6! is equivalent to 6 x 5 x 4 x 3 x 2 x 1 = 720

As we can see, if we were travelling around 50 states, that would be 50!, which is

30414093201713378043612608166064768844377641568960512000000000000
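We can check these numbers with Python’s standard library:

```python
import math

# 6! as computed above
print(math.factorial(6))   # 720

# 50! -- the number of orderings of 50 cities
print(math.factorial(50))
```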

This sort of complexity is mind-numbing. Not only is working out an exact solution impractical, to say the least, but I’m sure we would save much more time working with an approximation than twiddling our thumbs for eternity waiting for the “perfect answer”.

Problems like this are where **greedy algorithms** come in!

Greedy algorithms make the best choice at each decision they face. We don’t stop to think about the globally optimal solution; we settle for the local one instead.

They are useful because they are easy to write, and by giving us an approximation, we have in effect solved the problem.

Alright, let’s get started with a problem that greedy algorithms can help us solve.

Description:

Given the closing price of a share over n days, find the maximum profit you can make by buying and selling the share over the n-day period.

So, given the prices of a stock (in £s)

`prices = [8, 10, 5, 30, 25, 35]`

the maximum amount of profit that can be made from 1 buy and 1 sell is £30,

by buying for 5 and selling for 35.

- We will need to buy as low as possible on some day, k, but we must remember we cannot sell on any day before k; selling before we buy is impossible in reality.
- We need to make sure that the price we sell at is the highest price that comes after our buy day.

Profit can be broken down into two things: costs and revenue.

So we can search through the list of share prices to find the lowest price (the smallest cost). Each time we compare two prices, we shall pick the lower of the two.

```
lowest_share_price = prices[0]
for i = 1 to prices.length() - 1
    if prices[i] < lowest_share_price then
        lowest_share_price = prices[i]
    endif
endfor
```

This finds the globally optimal (lowest) price for our set of prices.
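In Python, this linear scan is exactly what the built-in `min` performs; a quick sketch using the article’s prices:

```python
prices = [8, 10, 5, 30, 25, 35]

# The same scan as the pseudocode: keep the smallest price seen so far.
lowest_share_price = prices[0]
for price in prices[1:]:
    if price < lowest_share_price:
        lowest_share_price = price

print(lowest_share_price)  # 5
# The built-in performs the same O(n) scan:
assert lowest_share_price == min(prices)
```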

Alright, now for the next problem: finding the highest price of the share (the largest revenue). We can find it by doing

```
highest_share_price = prices[0]
for i = 1 to prices.length() - 1
    if prices[i] > highest_share_price then
        highest_share_price = prices[i]
    endif
endfor
```

This is almost identical to finding the lowest share price. What we could do instead is compute the profit of each day’s price against the lowest price so far, and update the maximum profit whenever we find a larger one.

```
function find_share_profit (prices[])
    profit = 0
    lowest_stock_price = prices[0]
    for i = 0 to prices.length() - 1
        // check whether this price is the lowest so far, otherwise keep the old minimum
        lowest_stock_price = min(lowest_stock_price, prices[i])
        // get the profit from selling today (if any)
        potential_profit = prices[i] - lowest_stock_price
        // keep whichever profit is higher
        profit = max(profit, potential_profit)
    endfor
    return profit
endfunction
```
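The pseudocode above translates almost line for line into Python; this sketch simply runs it on the article’s prices:

```python
def find_share_profit(prices):
    # Greedy single buy/sell: track the cheapest price so far and the
    # best profit from selling each day at that cheapest price.
    profit = 0
    lowest_stock_price = prices[0]
    for price in prices:
        lowest_stock_price = min(lowest_stock_price, price)
        potential_profit = price - lowest_stock_price
        profit = max(profit, potential_profit)
    return profit

print(find_share_profit([8, 10, 5, 30, 25, 35]))  # 30 (buy at 5, sell at 35)
```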

But!

We are not done yet, because there are other cases to consider.

For example, what if the stock keeps going down over the course of the n days?

`prices = [18, 15, 11, 8, 4, 3]`

It can be argued that we could just raise an exception and be done here, as the challenge was to find the largest profit, not the smallest loss.

If we were to accommodate this sort of thing, we could set a “base profit” instead of just leaving it at 0.

So replacing

`profit = 0 `

with

`profit = prices[1] - prices[0]`
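A sketch of that variant in Python (the naming is my own): seeding the profit with the first possible trade, and comparing each price against the lowest of the *previous* days before updating the minimum, lets a strictly falling market report the smallest loss rather than 0.

```python
def find_share_profit_or_loss(prices):
    # Seed with the first possible trade instead of 0, so a falling
    # market reports the smallest loss (a negative "profit").
    profit = prices[1] - prices[0]
    lowest = prices[0]
    for price in prices[1:]:
        # Compare against the lowest of the previous days *before*
        # updating the minimum, so we never buy and sell on the same day.
        profit = max(profit, price - lowest)
        lowest = min(lowest, price)
    return profit

print(find_share_profit_or_loss([18, 15, 11, 8, 4, 3]))  # -1 (buy at 4, sell at 3)
```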

All in all, we end up with an algorithm that runs in O(n) time and O(1) space.

Now, while greedy helped us create a solution that was intuitive and relatively efficient, what if we were allowed to buy and sell the stock at most twice? This would mean we would have to consider the effects of our local decisions; a decision to buy and sell on day 3, for example, could impact the potential profits we could have made on other days. For example, given the same set of prices

`prices = [8, 10, 5, 30, 25, 35]`

We could actually have made more profit (£35) by buying for 5 and selling for 30, then buying for 25 and selling again for 35.

But how could we know when the local decisions we make may not be the best overall?

What we could do is utilise the techniques of **dynamic programming**.

Unlike greedy, where we just solve problems as they come as best we can, with dynamic programming (dp) the main problem is split into simpler sub-problems, which are then solved. By breaking the main problem down into groups, it is easier to visualise the steps we need to take.

Once we have broken down the problem and solved our sub-problems, we can store the solutions.

We only need to work out the sub-problem once , as we store the answer.

This technique of storing answers instead of recomputing them is called “memoization”.

Memoization can save a lot of time if our solution revisits the same sub-problems many times: all we need to do is look up the answer we already have each time a sub-problem arises. However, this saving in time comes at the cost of increased memory usage. Memoization is handy in our case, as we can store the best profits computed for previous days and then, on subsequent days, check whether there is any further profit to be made.
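Fibonacci is the classic illustration of memoization (not from the article): the naive recursion recomputes the same sub-problems exponentially many times, while caching each answer brings the call count down to one per sub-problem.

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed only once; repeat calls are cache hits.
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30), calls)  # 832040 31 -- vs roughly 2.7 million calls uncached
```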

But how do we know when we can break our problem down into sub-problems?

There are cases where we cannot, but a good rule of thumb is to check whether multiple decisions occur over a period of time. If so, there are usually sub-problems at different points in time that we can solve.

Problems of this nature break apart recursively, which means we can use the result of a smaller problem to help solve the next, larger one. This process continues until we reach the solution to the main problem.

Let’s walk through the solution to this problem, modelled using dp:

```
// n represents the number of stock values
// stocks[] is the array of all the values
function max_profit (int n, stocks[])
    i          // counter
    max_price  // the maximum share price seen so far
    min_price  // the minimum share price seen so far
    profit
    // the two rows (0 and 1) correspond to our two rounds of buying and selling,
    // and the n columns allocate enough memory for all the stocks given to us
    profit_record = [2][n]
    profit_record[0][0] = 0
    min_price = stocks[0]
    for i = 1 to n - 1
        profit_record[0][i] = max(profit_record[0][i-1], (stocks[i] - min_price))
        /*
        As we sift through the stocks, we update the profit record for each day
        with the best profit from a single buy and sell ending on that day: the
        difference between the day's price and the lowest price seen so far.
        If a share price is smaller than our current minimum, we update the minimum.
        */
        if stocks[i] < min_price
            min_price = stocks[i]
        endif
    endfor
    // backtrack through the stocks array to see if it was worth buying and selling twice
    profit_record[1][n-1] = 0
    max_price = stocks[n-1]
    // this holds the most profit from one buy and sell, from the loop we did earlier
    profit = profit_record[0][n-1]
    i = n - 2
    while i >= 1
        // update the record here to show whether more profit could have been made
        // from buying and selling again between day i and the last day
        profit_record[1][i] = max(profit_record[1][i+1], (max_price - stocks[i]))
        /*
        We store the larger of the two: either the best second trade we already
        found further right, or buying on day i and selling at the highest later price.
        */
        // if there is profit to be made from another buy and sell, update the total;
        // otherwise there is no point buying and selling again
        profit = max(profit, (profit_record[1][i] + profit_record[0][i-1]))
        if stocks[i] > max_price
            // if there is a higher share price, make it the new maximum
            max_price = stocks[i]
        endif
        i--
    endwhile
    return profit
endfunction
```
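The same two-pass idea in Python (a sketch with my own names, not the article’s exact code): one forward pass records the best single trade ending by each day, one backward pass records the best single trade starting from each day, and a final loop combines the two.

```python
def max_profit_two_trades(prices):
    n = len(prices)
    # left[i]: best profit from one buy/sell within prices[0..i]
    left = [0] * n
    lowest = prices[0]
    for i in range(1, n):
        lowest = min(lowest, prices[i])
        left[i] = max(left[i - 1], prices[i] - lowest)
    # right[i]: best profit from one buy/sell within prices[i..n-1]
    right = [0] * n
    highest = prices[-1]
    for i in range(n - 2, -1, -1):
        highest = max(highest, prices[i])
        right[i] = max(right[i + 1], highest - prices[i])
    # Combine: first trade ends by day i-1, second trade starts on day i.
    best = left[-1]
    for i in range(1, n):
        best = max(best, left[i - 1] + right[i])
    return best

print(max_profit_two_trades([8, 10, 5, 30, 25, 35]))  # 35
```

For the article’s prices this finds the £35 result (5→30, then 25→35) rather than the single-trade £30.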

Memoization really helps here: by recording the best profits over time, we get a good “bird’s-eye view” of where and when the best buys and sells occur.

By breaking the problem down into sub-problems, we had one loop for a single buy and sell, and then another to backtrack and see if there was further profit to be made.

While the solution uses more memory than the greedy one, it runs in O(2n) time and O(2n) space, both of which would be classed as O(n), as they are linear.

Because dp aims to work out the optimal solution, there are more edge cases and pairs that we have to consider.

But could we have used greedy for this?

Because the greedy methodology only makes the best local decisions, it does not store answers. It wouldn’t consider the fact that there may be another round of buying and selling until it hits that decision.

With the challenge of buying and selling a stock at most twice, greedy would just take the first couple of lots of profit it sees and potentially miss out on larger profits in the long run.

In conclusion, for problems where we want or need an optimal solution, dynamic programming is the technique we should consider using (if our problem does break apart nicely). We would use greedy when an approximation is acceptable and we would prefer a simpler, more intuitive solution.

