Since my last post I’ve become a masters student at Chalmers, enrolled in a programme called Computer Science: Algorithms, Languages and Logic. One of the first classes that we take is about algorithms.
In the class we learn more about different algorithmic paradigms, such as dynamic programming, divide and conquer, and greedy algorithms. We also study ways of proving things about algorithms: both that they solve the problem correctly and that they do so optimally.
In comparison with courses on algorithms that I’ve taken earlier at Chalmers, this one is more about solving problems yourself instead of just learning well-known solutions such as Dijkstra’s algorithm, quicksort and so on.
A thought that struck me during the course is how often we just assume that something works. We develop an algorithm and throw a test case or two at it, which is enough for us to declare the problem solved and move on.
Of course I understand that it isn’t always feasible to develop proofs, nor necessary. But a valuable question to ask is whether you actually can prove that your algorithm works the intended way, or if you’re just assuming.
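A classic illustration of this trap is greedy coin change. The sketch below (my own toy example, not from the course material) shows an algorithm that passes a quick sanity check with US-style coins, yet is not optimal for every coin system:

```python
def greedy_change(coins, amount):
    """Greedy coin change: repeatedly take the largest coin that fits."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin
        amount %= coin
    return count

# A quick test with US-style coins "passes", so we might move on:
assert greedy_change([25, 10, 5, 1], 63) == 6  # 25+25+10+1+1+1

# But for the coin system {1, 3, 4} and amount 6, greedy picks
# 4+1+1 (3 coins), while the true optimum is 3+3 (2 coins):
print(greedy_change([1, 3, 4], 6))  # prints 3, yet the minimum is 2
```

A couple of passing test cases told us nothing about the cases where the greedy choice fails; only a proof (or a counterexample search) exposes the gap.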