I often hear views such as “language A is better than language B” or “I don’t like programming in this language, I prefer that other language”. There is a right way of comparing languages, and there are many wrong ways. Comparing Java to C, for example, makes no sense: it’s the proverbial apples-and-oranges problem. Let’s elaborate.

A programming language is a tool. Like any other tool, it’s built for a specific job. One can (more or less) safely compare languages built for the same job: for example, Java and C#. Comparing Prolog and Javascript, on the other hand, makes no sense: one is a language for logic programming and inference, whilst the other is a language for web development (the cynic in me really wants to make the joke “whilst the other isn’t even a language”, but I’ll abstain from such comments). If you say “I like Javascript more than I like Prolog”, you’re not making a case about the languages: what you’re really saying is that you like web development more than you like logic programming. It’s a statement about the job (the application domain, to be more precise), not about the language. Certainly, there are syntactic, semantic, and idiomatic aspects of languages that appeal to us more than others, and in that regard we can compare languages, as long as we stay within the same domain.

C is a “difficult” language because it is very easy to make mistakes, typically through pointers and dynamic memory (segmentation faults, anyone?), precisely because it is built for direct hardware manipulation: those low-level features are essential for hardware access. That’s the reason C is still widely used as a systems programming language (although Rust is starting to become the first serious contender for the throne). Its cousin, C++, combines that expressive power (and the associated dangers) with the encapsulation power of Object-Oriented (OO) programming, making it ideal for large systems close to the hardware level (for example, Operating System development). If you’re working above the hardware details, don’t use C! There are better languages for that, because the domain is different. For any type of programming above a general-purpose OS, Python or Go do a much better job.
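To make the pointer-and-dynamic-memory point concrete, here is a minimal, deliberately broken sketch (the names are invented for this example, not taken from any real codebase). It is plain standard C, it compiles without complaint under many default compiler settings, and it packs in two classic bugs: a heap buffer overflow and a use-after-free, either of which can end in a segmentation fault or, worse, in silent corruption.

```c
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *name = malloc(8);                           /* room for 7 characters plus '\0' */
    strcpy(name, "a string that is far too long");    /* bug 1: heap buffer overflow */

    free(name);
    name[0] = 'X';                                    /* bug 2: use after free */

    return 0;
}
```

Tools such as Valgrind or the compiler sanitizers will flag both problems, but the point stands: the language itself lets you write them, because the same raw pointer access is exactly what gives C its power over the hardware.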

Why should we learn multiple languages, and embrace each of them as unique (except Javascript, of course), rather than look at them as inferior to our favorite language? For two reasons. First, every language we learn becomes another tool; a tool that opens the door to a new domain. If you’re a Java programmer, learning a systems language unlocks hardware access for you. Second, every time we learn a new idiom (be it a paradigm, a design pattern, or simply a clever algorithm), it makes us better programmers in every other language we know. Everyone should be familiar with at least the basics of functional programming (whether through Haskell, OCaml, or one of the many dialects of Lisp), because learning to write code in a functional style makes us better imperative and OO programmers. In my own case, for example, I now keep functional programming principles in mind whenever I’m programming in C (where I spend most of my time), and I avoid shared global state much better than I used to: my functions are more reusable and more easily refactored, and my code is less likely to break.
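As a small illustration of what that shift looks like in C, here is a hypothetical before-and-after sketch (the functions and variable names are invented for this example): the first version leans on shared global state, while the second passes everything through parameters and return values, which is exactly what makes it easier to reuse, refactor, and test.

```c
#include <stdio.h>

/* The style I try to move away from: the function depends on, and
 * mutates, a piece of shared global state, so it cannot be reused or
 * tested without dragging that state along. */
static double running_total = 0.0;

static void add_to_total(double amount)
{
    running_total += amount;    /* hidden side effect */
}

/* The functional-style alternative: every input is a parameter, the
 * result is the return value, and nothing outside the function is
 * touched. */
static double add(double total, double amount)
{
    return total + amount;
}

int main(void)
{
    /* The global-state version forces every caller to share one accumulator. */
    add_to_total(19.99);
    add_to_total(5.50);
    printf("running_total = %.2f\n", running_total);

    /* The pure version lets each caller keep its own running total. */
    double total = 0.0;
    total = add(total, 19.99);
    total = add(total, 5.50);
    printf("total         = %.2f\n", total);

    return 0;
}
```

Nothing about the second version is exotic C; it simply borrows the functional habit of keeping data flow explicit, so the function can be called from anywhere and reasoned about without knowing the history of a global variable.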

By all means, choose a favorite application domain: most of us tend to have one. Master the languages purposely built for that domain. But don’t let those languages and your affection for them stand in the way of writing better code, for more applications. More importantly, don’t let misconceptions about apples and oranges get in the way of ideas for improving tomorrow’s languages and their users.