An interesting point, made by Bob Harper at the Existential Type blog and before him by Dana Scott: dynamic languages are really unityped, in the sense that all terms t : T, or in words: "all terms t have the same type T". That is, we define a single type which every term inhabits. Since there is only one type, integers, strings, floats, and lists of tuple pairs of references and pids are all of type T. In Erlang we call T term() or any().
So what can we actually check at compile time in a unityped language? It turns out we can do quite a few things:
- A variable that is used but is never defined is a type error.
- A variable that is defined but never used is a warning. It may be benign, but often it isn't.
- A local function call to the wrong name is an error.
- A local function call with the right name but wrong arity (number of function arguments) is an error.
- A number of lexical binding errors can be caught without the need for a type system.
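As a concrete illustration, here is a hypothetical module (all names invented) in which the Erlang compiler reports each of these problems at compile time:

```erlang
-module(checks).
-export([f/1]).

f(X) ->
    Unused = 3,   %% warning: variable 'Unused' is unused
    g(X, X).      %% error: call to undefined function g/2
                  %% (only g/1 exists below - wrong arity)

g(Y) -> Y.
```

Running erlc on this module fails: the bad call is rejected outright, and the unused binding is flagged as a warning.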
It is interesting how many programmers - even those working in unityped languages - do not even have the above list at their disposal. I've seen lots of languages that happily accept a program where a function is not defined at compile time. Erlang does not: all of the errors above are caught at compile time.
Erlang also does another thing, through the powerful dialyzer tool by Sagonas et al. Since every Erlang term has type any(), we could ask: "Can we create a subtype hierarchy for any()?" It turns out that we can. integer() is a subtype of any(); so are atom() and char() (Unicode code points). So is [A] - lists with elements of type A. string() is then [char()]. But we can do even better. pos_integer() is a refinement of integer() whose domain contains only the positive integers. The type 1 | 2 | 3 is the type of exactly one of these three possible values. For notational convenience, we define 0..255 to be 0 | 1 | ... | 255, which is also known as byte().
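In Erlang source, these subtypes can be written down directly as type and spec declarations. A small sketch (module and type names invented for illustration):

```erlang
-module(subtypes).
-export([double/1]).
-export_type([one_two_three/0, mystring/0]).

-type one_two_three() :: 1 | 2 | 3.  %% exactly these three values
-type mystring()      :: [char()].   %% string() is [char()];
                                     %% byte() is predefined as 0..255

%% pos_integer() refines integer(): only positive integers allowed.
-spec double(pos_integer()) -> pos_integer().
double(N) -> 2 * N.
```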
The subtyping hierarchy we have thus formed is in some sense stronger than what OCaml's type system can provide. There is a limit to the refinement, though: in practice we can't always capture that an integer value is, say, 3 | 7 | 37. The next step is to use a trick already present in OCaml: type inference. If we run an inference engine on Erlang, we get a notion of the most specific type we can derive for an expression. The inference is conservative: if we can't derive anything specific, we just assign the type any(), which is perfectly valid in our unityped world. It is just an indication that the piece of code is too complex to infer precisely.
An observation by Sagonas is that an Erlang program is already safe by virtue of the runtime system carrying dynamic type information. It checks for type errors at runtime and exits if such an error happens. A large part of Erlang is devoted to handling these kinds of errors gracefully, with the mentality of "let it crash". In fact, Erlang is one of the few languages with a sound, reactive approach to unforeseen errors in a system. The strong belief present in almost all other languages - that by some mixture of sheer luck and all-knowing divination you can foresee every kind of trouble your program might end up in - is ... well ... kind of irresponsible. So rather than trying to solve the problem of giving ordinary types to Erlang programs, Sagonas and Lindahl looked into another concept: success types.
The grand idea of a success type is this: suppose we have a function f. If we explore all calls to f that make it return normally - that is, not with an exception or a fatal error - we can determine, through inference, the type of the value B which f returns. And we can determine which inputs A will make the function return normally. We say that f has the success type A -> B.
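A tiny invented example: the function below raises badarith for any argument that is not a number, so numbers are the only inputs for which it returns normally. Its success type - which dialyzer would infer even without the annotation - can be written as a spec:

```erlang
%% Success type: (number()) -> number().
%% incr(foo) crashes with badarith, so atoms are outside the domain.
-spec incr(number()) -> number().
incr(X) -> X + 1.
```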
If we have success types for expressions and functions, we can begin looking for things that will be problematic. For instance, if we have a sequence f(X), g(X), and we have figured out that f will only return when given an integer() and g only when given an atom(), we know that this expression will always fail. That is a guaranteed error in our program, and we can report it to the user.
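Sketched in code (invented functions): no value of X can satisfy both guards, so the sequence below can never complete, and dialyzer reports (roughly) that the final function will never return:

```erlang
f(X) when is_integer(X) -> X + 1.
g(X) when is_atom(X)    -> atom_to_list(X).

h(X) ->
    f(X),   %% returns only for integer()
    g(X).   %% returns only for atom() - so h/1 can never succeed
```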
The consequence is that we have a static analysis tool that finds different kinds of errors than a static semantics (a type checker) will find. If we can't type an OCaml program, we are pessimistic and report it as an error in the program: reject. If we can't type a dialyzed Erlang program, we are optimistic and take it for granted that the programmer knows better than our puny tool. On the other hand, this means that when the dialyzer finds a problem via its success-type analysis, it is bound to be an error. Hence the slogan: "The dialyzer is never wrong".
In practice, the dialyzer is so powerful that it can find many errors in programs. Coincidentally, the closer your program is to the style of Ocaml or Haskell, the more errors it finds :)
The other part, which is orthogonal, is the reactive approach to program errors I mentioned above. The ideas can probably be traced back to Peter J. Denning and his work on virtual memory. The idea is to protect an operating system by compartmentalizing it. Each running program - a process - has its own memory space. Thus, by construction, it can't mess with the state of other processes in the system. The only way to make processes talk to each other is by messaging - communicating state by copying the message from one space to the other.
Erlang takes this notion of separate processes from the OS and applies it at a much finer granularity. A typical Erlang program has thousands of (userland) processes messaging each other to coordinate and orchestrate the program. Mistakes thus become less of a problem: only a small part of the volatile state space can be in error - namely the part tied to the failing process.
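A minimal sketch of two such isolated processes (function names invented): the only thing they share is the message itself, which is copied from one heap to the other:

```erlang
ping() ->
    Pong = spawn(fun pong/0),
    Pong ! {self(), hello},              %% send a copy of the message
    receive {Pong, Reply} -> Reply end.  %% -> {echo, hello}

pong() ->
    receive {From, Msg} -> From ! {self(), {echo, Msg}} end.
```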
This gives rise to a sane model of programming: you try to anticipate errors in your programs and handle them. But if you don't know how to handle a given error - or you don't think the case can occur at all - it is better to let the process crash. Another process will then be informed and handle the crash gracefully. Usually your programs are built on a best-effort basis and laced sparingly with assertions about the sanity of internal state. As soon as that state is violated, you fail fast and crash.
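A bare-bones sketch of this idea, with names invented and without OTP's real supervision machinery (in practice you would use a supervisor behaviour): a parent traps exits, links to a worker, and restarts it when it crashes.

```erlang
start() ->
    process_flag(trap_exit, true),       %% turn exits into messages
    Pid = spawn_link(fun worker/0),
    monitor_loop(Pid).

monitor_loop(Pid) ->
    receive
        {'EXIT', Pid, Reason} ->
            io:format("worker died: ~p, restarting~n", [Reason]),
            start()                      %% fail fast, restart clean
    end.

worker() ->
    receive crash -> exit(boom) end.     %% any unhandled error exits
```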
The secret is that you attack the error bar from below. You place it where you think you have caught all the errors in the system. Then you test the system and fix the few remaining errors, which places the bar just right from the perspective of implementation time versus failure rate. Errors that occur from here on out are probably in the class of benign errors: they may occur, but they won't make the system fail, and they will have little impact. In other words, it is about controlling the risk of a fatal error. In Erlang, an error usually has to penetrate several layers of protection before it takes down the whole system and leaves it unusable - because Erlang programs are built to withstand the unforeseen as well as the foreseen.
It is almost impossible to handle all errors in a program, and many errors are simply not worth handling. Not all errors are created equal; some are weaker than others. But that will be a post for another time.