About a year ago, I wrote a quick little program to help the judging process for the 59 Days Of Code contest. I need to emphasize the word 'quick' before I go any further, because that's a word that should scare any professional programmer. It's alright to write a 'quick' prototype, but trying to do anything that needs to be production quality 'quickly' is a dangerous idea. Testing is important, not just to catch the bugs you expect, but the bugs you don't expect.
In the case of the program in question, we had an issue where somehow the data stream between device and server was corrupted. We had all the original data stored in logs, but trying to sort through JSON strings stored in a log is a time-consuming, annoying process. It’s technically doable, but as I rapidly realized on the day of the event, trying to do it on the fly simply isn’t practical. I needed to have already written or found an application to help handle the process, and I hadn’t.
But restoring the data, while important, isn't half as good as preventing the corruption. Towards that goal, I hope to re-write that software this year to do two things differently. Firstly, devices won't download data from the server willy-nilly in an attempt to 'just work'; they will download on request only. Secondly, I need some way to make sure the data gets to the server intact.
The question, of course, was how? I could have dug in and done some research on the problem, and oh boy oh boy are there a lot of solutions out there for it. But when the opportunity to re-write the software was first discussed, I didn't have the internet handy. I had to rely on what I already knew, and for some reason verifying messages against accidental corruption isn't something I really learned in school.
And from that quick solution, arrived at in about 5 seconds of thought while on the phone, comes the point of this post. I never learned anything about handling accidental corruption, but something else came to mind: when studying security, I learned several techniques to handle man-in-the-middle attacks. Sure, the cryptographic portions of those techniques would be a pain to manage, but the basic underlying concept didn't necessarily need an actual 'attack' to be useful. Having the packet become malformed between device and server is, in a very real sense, a 'man in the middle' attack by random chance. And since I'm worried about random chance, not a malicious attacker, suddenly I have a solution. Just take the basic message, generate a simple MD5 hash (used here for error detection, not security), and voila! Message integrity checking made 'easy'.
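To make that concrete, here's a minimal sketch of the idea in Python. It's not the contest code, and the function names and envelope format are my own invention; it just shows the shape of the technique: hash the serialized payload on the way out, re-hash and compare on the way in.

```python
import hashlib
import json

def wrap_message(payload: dict) -> str:
    """Serialize a payload and attach an MD5 checksum of the body.

    MD5 is fine here because we're detecting accidental corruption,
    not defending against a deliberate attacker.
    """
    body = json.dumps(payload, sort_keys=True)
    digest = hashlib.md5(body.encode("utf-8")).hexdigest()
    return json.dumps({"body": body, "md5": digest})

def unwrap_message(message: str) -> dict:
    """Verify the checksum and return the payload, or raise if corrupted."""
    envelope = json.loads(message)
    body = envelope["body"]
    if hashlib.md5(body.encode("utf-8")).hexdigest() != envelope["md5"]:
        raise ValueError("checksum mismatch: message corrupted in transit")
    return json.loads(body)
```

The server just calls `unwrap_message` on everything it receives; anything that got mangled between device and server fails the comparison and can be rejected (and re-requested) instead of silently polluting the data.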
The point of this, of course, is that I took an old solution for a different problem and re-purposed it for a new one. The fact that, historically speaking, the security solution probably evolved out of earlier techniques for verifying data integrity is simply a rather amusing joke on me.
The ability to take a new problem and apply an old solution is important. In programming, old does not automatically mean bad. Sure, A* pathfinding isn't as cool as flow-field pathfinding; in the context of a game with many units I'd probably choose to implement flow fields any day of the week and twice on Sundays! But if I'm writing a GPS program, A* is probably still the better route to take, because my concerns regarding other 'units' (other cars on the road) can be better expressed by modifying the relative weights of the various connections between nodes on my map. More than that, some day I might find that the basis of the A* algorithm could be useful for something else entirely. What, I don't know.
The only knowledge that is ever wasted is knowledge you forget because it's 'useless'. Maybe it doesn't apply immediately, but keep it tucked away somewhere. Maybe you can use it somewhere down the line, for something totally unrelated to its original source.