This post comes from an ongoing email conversation about programming languages vs. libraries. The story goes that, these days, the major productivity gains come not from new languages but from libraries that already do almost everything for you. That is unquestionable. People don’t choose programming languages as much as they choose libraries and frameworks that already do most of the work for them, and that happen to be written in some programming language or another. One can argue that these powerful libraries and frameworks stand on the shoulders of a lot of programming-language work that became invisible by design. But that work has already been done, to the point that these powerful libraries are already out there.
So where does this leave programming languages? Are we done yet?
During the discussion, Mark Miller said something really interesting: “Libraries *cannot* provide new inabilities.” What he meant was that certain concepts we include in programming languages are actually inabilities, or constraints, on what we can do in, say, assembly. Think, for example, of automatic garbage collection, which comes on the heels of disallowing programmers from managing memory manually. For inabilities, libraries are fairly useless. Yes, we can have libraries that add constraints beyond those of the language, but those constraints can’t be enforced. If we want enforcement, the constraints need to be part of the design of the language itself. (The whole concept of constraints is central to my book.)
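Mark Miller’s point can be sketched in a few lines of Python. The `ReadOnlyPoint` class below is a hypothetical example of a library-level “constraint”: it hides mutation behind read-only properties, but nothing in the language stops a caller from reaching around it.

```python
class ReadOnlyPoint:
    """A 'read-only' point: the library hides mutation behind properties."""
    def __init__(self, x, y):
        self._x = x
        self._y = y

    @property
    def x(self):
        return self._x

    @property
    def y(self):
        return self._y

p = ReadOnlyPoint(1, 2)
try:
    p.x = 10          # the intended interface rejects mutation...
except AttributeError:
    pass
p._x = 10             # ...but nothing stops callers from reaching inside
assert p.x == 10      # the "inability" was never real
```

The library can only express a convention; it cannot take the ability to mutate away. Only the language (or runtime) can do that.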
Indeed, the whole history of programming languages can be summarized as a yin-yang between abstraction affordances and constraints over what can be done in assembly. We tend to cover the affordances really well. But we don’t cover the constraints that well, because, well, once they are in place, they become invisible.
So, focusing on constraints, here are a few things that have been disallowed along the way, in one language or another:
- The ability to refer to registers (all ‘high-level’ languages)
- The ability to refer to memory addresses (all languages, except the C family)
- The ability to jump to arbitrary lines of the program (Java, Python, functional langs)
- The ability to define free-standing named functions, outside of any class (Smalltalk, Java, C#)
- The ability to search objects in the heap (Java, C#)
- The ability to define co-routines or even generators (C, Java)
- The ability to define variables whose values change over time (Haskell, ML)
- The ability to share state among concurrent threads (Erlang)
- The ability to bind the same name to different types of values (statically typed langs)
- … (name more in the comments)
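For contrast with library-level conventions, here is a minimal Python sketch of what a language-enforced inability looks like: tuples are immutable by design, and the runtime itself rejects mutation rather than merely discouraging it.

```python
t = (1, 2, 3)
try:
    t[0] = 99          # the language forbids mutating a tuple
    mutated = True
except TypeError:      # the runtime enforces the inability
    mutated = False

assert not mutated
assert t == (1, 2, 3)  # the value is intact; there is no backdoor
```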
Programming languages are usually presented in terms of what they enable rather than what they don’t let us do. But, really, language design is also an exercise in deciding what not to let people do; everything else that comes wrapped in a language can usually be done with libraries, at least in modern languages with good support for all sorts of composition. Which, if you think about it, is a funny business to be in: who wants to present themselves as the bringer of constraints?
Constraints sound bad and authoritarian, but they are really necessary to tame complexity. The real question is which ones are beneficial, in general and for specific purposes.
For fun, here are a few possibilities of future constraints that I can already imagine:
- The ability to program, at all. Let the program learn from data.
OK, this took an unexpected turn. I’ll stop here.