Misconceptions About the Philosophy Behind OOP

Alan Kay coined the term object-oriented programming, but it has since been widely misunderstood.

I tried to explain in more detail, with numerous examples, what object-oriented programming was originally about in my story: Go is More Object-Oriented Than Java.

Yet judging by various comments I have gotten, many people really struggle to understand the full scope of the ideas Alan Kay had about object-oriented programming. Thus this is a sort of follow-up article to clarify what I see as common misconceptions.

Let us start with a simplistic definition of object-oriented programming from Alan Kay. He imagined object-oriented programming as taking a networked computer and scaling it down. Each computer has internal state. It has hard drives and the ability to perform calculations. It communicates with other computers through a network. This computer is essentially an object in Kay’s thinking. He imagined scaling computers down to smaller virtual entities which you could connect together in a similar fashion inside a computer to build larger systems. These objects would have the following key traits:

  • Message passing — Communication only through messages. No shared state.
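As a rough sketch of this trait (my own illustration, not Kay’s code), here is what an object with message passing and no shared state might look like in Go: the counter’s state lives entirely inside one goroutine, and the only way to touch it is to send a message on its channel.

```go
package main

import "fmt"

// request is the only way to talk to the counter "object":
// a message carrying a delta and a channel for the reply.
type request struct {
	delta int
	reply chan int
}

// newCounter starts an isolated object. Its state (n) lives
// entirely inside the goroutine; callers can only send messages.
func newCounter() chan request {
	inbox := make(chan request)
	go func() {
		n := 0
		for req := range inbox {
			n += req.delta
			req.reply <- n
		}
	}()
	return inbox
}

func main() {
	counter := newCounter()
	reply := make(chan int)

	counter <- request{delta: 2, reply: reply}
	fmt.Println(<-reply) // 2
	counter <- request{delta: 3, reply: reply}
	fmt.Println(<-reply) // 5
}
```

Note that no code outside the goroutine can read or write `n` directly — which is exactly the "no shared state" part of Kay's trait.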

Most object-oriented systems don’t work like this. Smalltalk and LISP probably come closest to this vision. And perhaps Erlang is the closest example of Alan Kay’s vision, since unlike Smalltalk and LISP it has process isolation. Alan Kay agrees very much with this view:

Very much in the same spirit as I thought about it back in the 60s.

At the QCon conference with Ralph Johnson, Joe Armstrong, the creator of Erlang, recounted a view he had come to agree with:

Erlang might be the only object oriented language because the 3 tenets of object oriented programming are that it’s based on message passing, that you have isolation between objects and have polymorphism.

Doesn’t This Type of OOP Complicate Things Immensely?

People reading about this idea of essentially isolated processes communicating through messages see it as immensely complex and impractical. One reader, Alexis Kyaw, wrote the following comment:

So if I start splitting my simple application into dozens of processes that interact I’m just complicating things. Systems are complex enough with a handful of threadpools for request handling, task schedulers and the like. I really don’t see why I would add complexity just because I can. I tend not to see the real world as OOP object, because there are no classes things (and servers) are derived from.

The problem here is that the reader is stuck on the idea that object-oriented programming as defined by Alan Kay is only about stuff at the lowest level. In Alan Kay’s view, the whole internet is object-oriented. Object-oriented programming can exist at almost any level of granularity. Kay has e.g. admitted that he likely made objects too small in Smalltalk. In Smalltalk pretty much everything is an object, including things such as numbers and strings. That is likely too impractical when you want to create process isolation. Erlang, which is considered OO by both Alan Kay and its creator Joe Armstrong, does not represent things like numbers and strings as isolated, communicating processes. At the lowest level, with regular data structures, Erlang is essentially a functional language.

However, at a higher level, Erlang programs are composed of processes running on the Erlang virtual machine. An Erlang program can contain millions of processes. Each Erlang process is essentially an implementation of the actor model. The actor model is probably the closest thing in programming to an Alan Kay object.

Whether modeling things as processes makes sense or not depends on what you are building, but in a lot of cases it makes a lot of sense:

  • Modeling realtime systems. E.g. in a game, each character could in principle be an actor, communicating with the rest of the environment through messages while being isolated from it.
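To make the game example concrete, here is a minimal sketch in Go of a character as an actor. The message type names (`damage`, `query`) are hypothetical, invented for this illustration; the point is only that the character’s hit points are private state, reachable solely through its mailbox.

```go
package main

import "fmt"

// Messages a character actor understands. These names are
// made up for illustration, not from any real game engine.
type damage struct{ amount int }
type query struct{ reply chan int }

// spawnCharacter starts a character as an actor: its hit points
// are private, and the world interacts with it only via messages.
func spawnCharacter(hp int) chan interface{} {
	inbox := make(chan interface{})
	go func() {
		for m := range inbox {
			switch msg := m.(type) {
			case damage:
				hp -= msg.amount
			case query:
				msg.reply <- hp
			}
		}
	}()
	return inbox
}

func main() {
	orc := spawnCharacter(100)
	orc <- damage{amount: 30}

	reply := make(chan int)
	orc <- query{reply: reply}
	fmt.Println(<-reply) // 70
}
```

An Erlang process would express the same shape with `receive` and tail recursion; the Go version merely shows that the pattern is not tied to one language.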

Of interest might be how 8½, the Plan 9 window system by Rob Pike, could run itself inside one of its own windows:

This idea of multiplexing by simulation is applicable to more than window systems, of course, and has some side effects. Since 8½ simulates its own environment for its clients, it may run in one of its own windows (see Figure 1). A useful and common application of this technique is to connect a window to a remote machine, such as a CPU server, and run the window system there so that each subwindow is automatically on the remote machine. It is also a handy way to debug a new version of the window system or to create an environment with, for example, a different default font.

Kay Did Not Invent OOP, He Discovered It

The problem people have when looking at OOP as described by Alan Kay is that they think he was only thinking about objects at small scale. To quote Alexis Kyaw again:

I really don’t see why I would add complexity just because I can. I tend not to see the real world as OOP object, because there are no classes things (and servers) are derived from.

This is getting it all in reverse. Alan Kay observed that real-world systems such as the internet were already built in an object-oriented fashion:

Very much in the same spirit as I thought about it back in the 60s. I don’t think I invented “Object-oriented” but more or less “noticed” what was really powerful about just making everything from complete computers communicating with non-command messages. This was all chronicled in the HOPL II chapter I wrote “The Early History of Smalltalk”.

He didn’t make an object-oriented language and then decide all other things at a macro level should be tailored to this philosophy. No, he observed this philosophy in how actual large systems got built. Any kind of system, not just computer systems. The human body works much the same way: cells are isolated, independent structures which pass messages to each other.

Kay wanted to bring these lessons to programming language design as well as to how you build things at smaller scale. The Alan Kay idea of OOP does not involve classes at all. Thus real-world objects don’t need to be derived from classes to be object-oriented, because the key things about object-oriented programming are message passing and isolated state, which these “real” objects very much have.

In fact, classes very much go against object-oriented programming. The creator of Java, James Gosling, has admitted that if he could design Java over again he would leave out classes and inheritance.

“If you could do Java over again, what would you change?” “I’d leave out classes,” said Gosling.

Where do classes and inheritance come from? They come from the Simula language. The creators of Simula, Ole-Johan Dahl and Kristen Nygaard, regarded classes and inheritance as a performance hack they had to make.

Thus none of the central language designers involved in creating what we associate with OOP today ever regarded classes and inheritance as a central part of OOP. Instead they were simply a kludge several of them had used.

It is thus highly ironic that Java fans today bash Go for not having inheritance and thus not being “proper” OOP like Java. Ironically, Go simply followed the advice of the designer of Java and the designers of the first OOP languages. Classes and inheritance have been elevated to a central place in OOP that they never deserved.

The Internet is an Object-Oriented System

Let me hammer this idea home further by commenting on another response:

So, seeing the internet as construct in some OOP language seems to be not a helpful idea….

The Alan Kay idea is not that the internet is written in an OOP language. The idea is that the internet is an object-oriented system as it is made of objects (computers) with isolated state communicating with each other. The point here is to take lessons from the internet and apply them to software elsewhere.

and just because I could write my software with a lot independent processes I won’t do it because it usually violates the rule of simplicity.

The problem is that you are likely thinking about systems which don’t naturally split into multiple processes. Nobody says you have to do this. But experience has shown that this often dramatically simplifies complex solutions.

The Plan 9 window system mentioned earlier was radically simplified by splitting it into independent processes. Telecom switches benefitted from Erlang allowing code to be built around lots of independent processes. The Unix operating system itself is a testament to the power of splitting your problem up into smaller objects.

Few today have heard of Multics. Why? Because it failed. It was a large monolithic system. Unix was a pared-down version of Multics that took the radical approach of splitting the system up into lots of small programs which could be combined through pipes to create more complex functionality.

Look at version control systems like Git. Git was built from dozens of smaller programs, independent processes which communicate through pipes and the Git repo. Just look at any operating system. They are not built as large monoliths. No, they are built from a huge number of independent programs running in isolation. This is how you create fault-tolerant systems. One program can crash without bringing down the whole system.

This lesson had to be learned over and over again. Many large IDEs, for instance, create complex plugin systems with DLLs. A bug in a DLL would cause the whole IDE to crash, because there was no process isolation. The plugin and the IDE did not communicate as objects in an object-oriented system. Instead they shared state and an execution thread.
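Go goroutines are not OS processes, so the following is only a sketch of the isolation idea rather than the real thing, but it shows the shape of the fix: a supervisor runs each plugin behind a boundary that turns a crash into an error the host can handle, instead of letting it take the whole program down.

```go
package main

import "fmt"

// runSafely executes a worker and converts a panic into an
// error, the way a supervisor would contain a crashing plugin.
// True isolation would require separate OS processes (as Erlang
// or the Unix model provide); this only contains panics.
func runSafely(worker func()) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("worker crashed: %v", r)
		}
	}()
	worker()
	return nil
}

func main() {
	buggyPlugin := func() { panic("nil pointer in plugin") }
	goodPlugin := func() { fmt.Println("plugin ran fine") }

	// The buggy plugin crashes, but the host keeps running.
	if err := runSafely(buggyPlugin); err != nil {
		fmt.Println("supervisor:", err)
	}
	runSafely(goodPlugin)
}
```

Erlang takes this further: every process is isolated by the VM, and supervisors restart crashed processes automatically, which is the "let it crash" philosophy the IDE-with-DLLs design lacks.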

Modifying Live Systems

The concept of working with live systems, as you can in Smalltalk and LISP, is, as I see it, profoundly misunderstood. One reader commented:

To be able to influence the system at runtime that is also done exactly like that, I can start new processes, replace existing ones with new version, in a controlled fashion (and after testing). Being able to directly change a running process internally is something to do in desperate situations (and it can be done in many languages in one way or the other).

The problem here is the perspective of what you think the system is. The whole internet can be a system. It can be a computer with a whole OS which is the system. The benefits of OO happen at multiple levels. The internet is the running system if you see each server as an object in an Alan Kay OO context. Changing one object in this system by shutting it down and upgrading it is not a desperate choice; it is how the internet works. Nobody ever flips a master switch to shut down the whole internet before modifying a piece of it. No, the internet stays live all the time while it is changed and modified. Likewise, the human body never shuts down for maintenance. It is constantly repaired and growing without a system shutdown and reboot.

The same can be said about operating systems. On e.g. a Unix system you can make major changes to the OS without shutting the whole thing down. You can launch and shut down individual programs, install new programs, and uninstall them, all without shutting down the whole OS. In this context, it is not the servers which are the objects of the system but the individual programs.

What if we zoom further in? People who develop using LISP, Smalltalk, Clojure, Python, Ruby, R, Lua or Julia regularly modify running systems. That is part of the whole development process. It is not a desperate situation; it is everyday development.

Geek dad, living in Oslo, Norway with passion for UX, Julia programming, science, teaching, reading and writing.
