# Packing up CP307

As the class wraps up, we turn our attention to our final project: finding a fast way to solve the well-known 0/1 Knapsack problem. This problem gives us a maximum weight and a list of items, each with a name, a value, and a weight. The objective is to find the combination of items with the highest total value while staying at or under the maximum weight. Take this for example:

If the right side is the list of items with their corresponding weights and values (with a max weight of 80), and the left side is all possible packing combinations, we can see multiple things. Because this is such a small problem, the process wouldn't take that long, but for the number of items (k) that there are (in this case 3), there are 2^k subsets that could possibly be calculated. Therefore, the more items there are, the longer it takes to generate every subset and then find the correct answer. It was our job to find a faster way to do this.

Looking at these subsets above, you can see that there was no need to calculate all of them. For instance, anything with a weight above 80 can automatically be eliminated from consideration (the last subset). Additionally, you can see that the combination of binoculars and Captain Crunch might meet the weight limit, yet still have a smaller value than the combination of textbooks and Captain Crunch. So there is a balance to strike: finding the highest value while staying at or under the maximum weight. Our program calculated a ratio between each item's value and weight, then sorted the items by that ratio. It then built subsets starting from the best ratio and eliminated all subsets whose weights exceeded the maximum.
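To make that 2^k blow-up concrete, here is a minimal brute-force sketch in Python. The item names echo the figure, but the values and weights are made-up stand-ins, not the actual numbers from class:

```python
from itertools import combinations

# Hypothetical stand-in items: (name, value, weight). Not the real class numbers.
items = [("textbooks", 40, 45), ("binoculars", 20, 30), ("Captain Crunch", 15, 10)]
MAX_WEIGHT = 80

best_value, best_subset = 0, ()
# k items means 2^k subsets: every combination of every size.
for r in range(len(items) + 1):
    for subset in combinations(items, r):
        weight = sum(w for _, _, w in subset)
        value = sum(v for _, v, _ in subset)
        # Skip anything over the limit; keep the best value seen so far.
        if weight <= MAX_WEIGHT and value > best_value:
            best_value, best_subset = value, subset
```

With k = 3 this loop only checks 8 subsets, but every extra item doubles the work, which is why brute force stops being an option quickly.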

Our implementation used a modified version of a greedy algorithm. Greedy algorithms are exactly what they sound like: greedy. They do whatever looks best at that moment in the hope that it leads to the best outcome. Or, as we learned it: a greedy algorithm makes a "locally" optimal choice in the hopes of it leading to a "globally" optimal solution. If I were a greedy algorithm, I would decide to watch TV and eat everything in my kitchen (locally optimal), hoping that after block break is over, despite the work I had to do, I will feel content after never having moved from the couch (globally optimal). Realistically, procrastinating will likely not pay off in the long run and will, in the end, cost me more stress and time, but in the moment I am being greedy with what seems best right now. That's the general idea of a greedy algorithm, just with fancy computer lingo. Kinda. In our project, the modified greedy algorithm first sorted the items by those ratios and chose subsets starting from the ideal ratio, in the hopes that it would pay off in the end. In certain cases, such as this one:
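Here is a minimal sketch of that ratio idea (hypothetical code, not our exact project implementation): sort by value-to-weight ratio, then grab each item that still fits.

```python
def greedy_knapsack(items, max_weight):
    """Greedy 0/1 knapsack sketch. Items are (name, value, weight) tuples."""
    # Locally optimal ordering: best value-to-weight ratio first.
    ranked = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    chosen, total_value, total_weight = [], 0, 0
    for name, value, weight in ranked:
        if total_weight + weight <= max_weight:  # only add items that still fit
            chosen.append(name)
            total_value += value
            total_weight += weight
    return chosen, total_value, total_weight
```

Note that this never looks back: once an item is packed, it stays packed, which is exactly where the greedy approach can go wrong.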

[item1, 10, 1], [item2, 10, 1], [item3, 50, 50], [item4, 50, 50] (format: [name, value, weight]), max_weight = 100

Our program will want to grab item1 and item2 first since they have a large value and a small weight. It will then add item3, bringing the total weight up to 52 and the value up to 70. It will not be able to add item4, since that would exceed the max weight. However, in this case, simply having item3 and item4 in the backpack would give a weight of 100 and a value of 100, which is the optimal solution. Here, our algorithm fails to choose the true optimal subset.
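We can check this failure directly. The sketch below (again hypothetical code, not our project's) runs a greedy pick and an exhaustive search on that exact input and compares the results:

```python
from itertools import combinations

# The failure case from above: items as (name, value, weight), max weight 100.
items = [("item1", 10, 1), ("item2", 10, 1), ("item3", 50, 50), ("item4", 50, 50)]
MAX_WEIGHT = 100

# Greedy: take items in value/weight-ratio order while they still fit.
greedy_value = greedy_weight = 0
for name, value, weight in sorted(items, key=lambda it: it[1] / it[2], reverse=True):
    if greedy_weight + weight <= MAX_WEIGHT:
        greedy_value += value
        greedy_weight += weight

# Exhaustive search over all 2^k subsets finds the true optimum.
best_value = max(
    sum(v for _, v, _ in s)
    for r in range(len(items) + 1)
    for s in combinations(items, r)
    if sum(w for _, _, w in s) <= MAX_WEIGHT
)

print(greedy_value, best_value)  # greedy settles for 70; the optimum is 100
```

The cheap items' great ratios lure the greedy algorithm into burning weight early, locking it out of the item3 + item4 combination.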

A greedy algorithm is just one small example of the material we learned in this class. This knowledge allowed us to speed up the 0/1 knapsack problem, which is a crucial skill in any CS field.

I hope you found these posts interesting and entertaining! 