I recently read a blog post on how to make binary branches in a function definition more readable. In and of itself that is a laudable goal, but frankly I think it’s completely backwards.
Instead of trying to find ways to use a given programming language in a more readable manner, focus instead on developing a new paradigm for programming syntax. The majority of languages are so similar that the syntactical differences are largely a matter of learning the appropriate keywords and of using a period in place of an arrow (the -> you see in C++, for example). Python is more or less identical to C++, just with a slightly different way of expressing the exact same concepts and a different way of typing out the features you’d get from the STL. The implementation is different, but the underlying logic behind the syntax remains the same.
Objective-C is my preferred language whenever possible because its designers actually took the time to consider how humans think, and then built the language to work more naturally with us. Instead of asking you to change your code to make it easier to read, they changed the language.
The only real ‘downside’ to their approach is that it isn’t just object-oriented, it actually requires that you use objects if you want to gain the benefits of the language. And, to me, this is *not* a downside. OOP has proven itself a superior paradigm for most programming efforts, so enforcing its use just makes sense, even if it does require a degree of ‘magic’ to get started with.
I don’t care how you try to pretty it up; runSomeLogic(account, person, TRUE, FALSE); is always going to be harder to read than [account runSomeLogicWithPerson:person usingCache:YES withSecurity:NO]; It’s just… human nature. The self-documenting Objective-C method name is a huge mnemonic aid, and the only downside to it is the extra typing required. And that downside is easily solved by the incredibly helpful feature commonly referred to as ‘Intellisense’: the ability of an IDE to guess what you’re trying to type and help you finish it. That feature may be a ‘crutch’, but it’s a crutch that is far more helpful than harmful. In my experience, maybe one time in ten the reason Intellisense can’t finish for me is that I made a mistake typing out a function or variable name (and almost invariably it’s because the name doesn’t follow standard conventions). The other nine times, I either had an error elsewhere in my code that was confusing Intellisense, or I had a data-type mismatch.
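Objective-C itself isn’t needed to see the contrast; Python’s keyword arguments give a similar effect to the named selector parts above. Here’s a minimal sketch (the function and argument names are hypothetical, just mirroring the example) showing the same call two ways:

```python
# Hypothetical function mirroring runSomeLogic from the example above.
def run_some_logic(account, person, use_cache, with_security):
    """Return a summary string so the two call styles can be compared."""
    return f"{account}/{person}: cache={use_cache}, secure={with_security}"

# Positional booleans: the call site gives no hint what True and False mean.
opaque = run_some_logic("savings", "Ada", True, False)

# Keyword arguments: each value is labeled at the call site,
# much like the Objective-C selector names the role of each argument.
readable = run_some_logic(account="savings", person="Ada",
                          use_cache=True, with_security=False)

# Both calls do the same work; only the readability differs.
assert opaque == readable
```

The extra typing at the call site is exactly the trade the post describes: a few more characters in exchange for a call you can understand without looking up the function’s signature.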
In the end, I’ll choose Objective-C over other languages precisely because, wordy or not, it’s easier to read and understand. The hard work of programming isn’t typing, it’s understanding what to write and then fixing the mistakes you’ve made. Anything that helps with those two tasks, even if it requires extra typing, is a huge aid.