So much gold to mine in this talk. Even just this kind of throwaway line buried deep in the Q&A:
> I prefer to write code in a verb-oriented way not an object-oriented way. ... It also has to do with what type of system you're making: whether people are going to be adding types to the system more frequently or whether they're going to be adding actions. I tend to find that people add actions more frequently.
Suddenly clicked for me why some people/languages prefer doThing(X, Y) vs. X.doThing(Y)
Unified function call: The notational distinction between x.f(y) and f(x,y) comes from the flawed OO notion that there always is a single most important object for an operation. I made a mistake adopting that. It was a shallow understanding at the time (but extremely fashionable). Even then, I pointed to sqrt(2) and x+y as examples of problems caused by that view.
The main benefit of x.f(y) IMO isn't emphasizing x as something special, but allowing a flat chain of operations rather than nesting them. I think the differences are more obvious if you take things a step further and compare x.f(y).g(z) and g(f(x, y), z). At the end of the day, the difference is just syntax, so the goals should be to aid the programmer in writing correct code and to aid anyone reading the code (including the original programmer at a later point in time!) in understanding the code. There are tradeoffs to using "method" syntax as well, but to me that mostly is an argument for having both options available.
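To make the chaining comparison concrete, here's a minimal sketch (illustrative Rust with made-up helpers; the same shape applies in any language offering both styles):

fn scale(v: Vec<i32>, k: i32) -> Vec<i32> { v.into_iter().map(|x| x * k).collect() }
fn shift(v: Vec<i32>, d: i32) -> Vec<i32> { v.into_iter().map(|x| x + d).collect() }

fn main() {
    let v = vec![1, 2, 3];
    // free-function style: g(f(x, y), z), read inside-out
    let nested = shift(scale(v.clone(), 2), 1);
    // method style: x.f(y).g(z), read left to right as a flat chain
    let chained: Vec<i32> = v.iter().map(|x| x * 2).map(|x| x + 1).collect();
    assert_eq!(nested, chained);
}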
That's exactly the context of where this quote comes from. He wanted to introduce Unified call syntax[1] which would have made both of those equivalent.
But he still has a preference for f(x,y). x.f(y) gives you chaining, but it also gets rid of multiple dispatch / multimethods, which are more natural with f(x,y). Bjarne has been trying to add this back into C++ for quite some time now.
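A rough sketch of what the f(x,y) style buys you (Rust here, which has no true multimethods, so a match on an enum stands in for dispatching on both arguments):

enum Shape { Circle(f64), Rect(f64, f64) }

// A free function can branch on the runtime variant of *both* arguments;
// method syntax x.collide(y) privileges x and makes this awkward.
fn collide(a: &Shape, b: &Shape) -> &'static str {
    use Shape::*;
    match (a, b) {
        (Circle(_), Circle(_)) => "circle/circle test",
        (Rect(..), Rect(..)) => "rect/rect test",
        _ => "circle/rect test",
    }
}

fn main() {
    println!("{}", collide(&Shape::Circle(1.0), &Shape::Rect(2.0, 3.0)));
}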
That makes sense! It wasn't immediately obvious to me from the context of this discussion though, since it seemed like you were responding to the parent comment's framing of it as a choice between one or the other. This might just be from me never having heard of the term "unified call syntax" though, and I admit that I didn't expect to need to read through the PDF you linked super carefully, after opening it and seeing a bunch of C++-specific details that I knew would go over my head. (On a good day, I can remember what words SFINAE is an acronym for, but I don't think I ever really felt like I got comfortable enough with C++ to fully understand how the template code I saw actually encoded behavior that fit what I'd expect those words to mean.)
If my memory isn't failing me, that was part of the reason rust went with a postfix notation for their async keyword ("thing().await") instead of the more common syntax ("await thing()")
Yep, and that itself was similar to the rationale for introducing `?` as a postfix operator where the `try!(...)` macro had previously been used. In retrospect, it's kind of funny to look back and see how controversial that was at the time, because despite there being plenty of criticism of the async ecosystem, the postfix `.await` might be the one thing that consistently gets praised by people who need to use it. People might not like using async, but when we do use it, it seems like we're pretty happy with the syntax for `.await`.
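For anyone who hasn't written Rust, a small sketch of why the postfix forms compose (hypothetical `fetch`/`parse` helpers to keep it self-contained; you'd still need an async executor to actually run it):

use std::io;

// Hypothetical async helpers, just for illustration.
async fn fetch(id: u64) -> Result<Vec<u8>, io::Error> { Ok(format!("user-{id}").into_bytes()) }
async fn parse(raw: Vec<u8>) -> Result<String, io::Error> { Ok(String::from_utf8_lossy(&raw).into_owned()) }

// Postfix `.await` and `?` read left to right as a flat chain,
// instead of something like `await parse(try!(await fetch(id)))`.
async fn load_user(id: u64) -> Result<String, io::Error> {
    parse(fetch(id).await?).await
}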
There is a most important argument regardless of whether it is a method or a regular function: the first one (or the last one, in languages supporting currying). While it's true that there are functions whose parameters are of equal importance, most of those are commutative anyway.
There are a handful of (somewhat exotic) languages that support multiple dispatch - pretty much all of those you listed. None of the mainstream ones (C++, Java, C#, etc.) do.
(also Common Lisp is hardly a poster child of OOP, at best you can say it's multi-paradigm like Scala)
> Since when do OOP languages have to be single paradigm?
What I really meant to say with that was that it's lisp at its core -i.e. if one wants to place it squarely in one single paradigm, imo that one should be "Functional".
I was just surprised to see it listed as an example of OOP language, because it's not the most representative one at that.
Yeah, the dot operator is not a particularly strong signal of whether something is OOP or not. You could change the syntax of method calls in OO languages to not use the object as a prefix without the underlying paradigm being affected.
This is an excellent talk. It digs really deep into the history of OOP, from 1963 to 1998. The point is that in 1998 the commercial game "Thief" was developed using an entity component system (ECS) architecture and not regular OOP. This is the earliest example he knows of in modern commercial programming.
During his research into the history of OOP he discovered that ECS existed as early as 1963, but was largely forgotten and not brought over as a software design concept or methodology when OOP was making its way into new languages and being taught to future programmers.
There's lots of reasons for why this happened, and his long talk is going over the history and the key people and coming up with an explanatory narrative.
You can do ECS in any programming paradigm. It’s not incompatible with OO at all. There’s no need for the object model to be congruent to a static representation of a domain, in a line-of-business app it is much better for it to be congruent to workflow and processing.
Heck I’ve even done ECS in Rails for exactly this reason.
I never accepted the Java/C++ bastardisation of OOP and still think that Erlang is the most OO language, since encapsulation and message passing is so natural.
I am not sure what the link between ECS and protocols, traits, and multiple inheritance is? ECS is mostly about memory layout not typed interfaces as far as I know.
> Nope, that is the gist that game devs then apply to ECS story.
What do you mean? ECS is simply a game programming pattern to implement an entity system.
> If you want to go down the rabbit hole, lets start with the first question, how are ECS systems implemented in any random C++ or Rust game code?
Conversely how would you implement it in a procedural language like C or Pascal? ECS is just a switch in emphasis from an array of structures (entities) to a structure of arrays (systems) paradigm. I fail to see what the Object Oriented paradigm has to do with any of it.
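To illustrate that emphasis shift in plain data terms (a minimal Rust sketch; no OO machinery involved):

// Array of structures: one record per entity.
struct EntityRec { pos: [f32; 3], vel: [f32; 3], health: f32 }
type ArrayOfStructs = Vec<EntityRec>;

// Structure of arrays: one densely packed array per component,
// all indexed by the same implicit entity id.
struct StructOfArrays {
    pos: Vec<[f32; 3]>,
    vel: Vec<[f32; 3]>,
    health: Vec<f32>,
}

// A "system" is then just a loop over the arrays it cares about.
fn integrate(world: &mut StructOfArrays, dt: f32) {
    for (p, v) in world.pos.iter_mut().zip(&world.vel) {
        for i in 0..3 { p[i] += v[i] * dt; }
    }
}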
> Entity–component–system (ECS) is a software architectural pattern mostly used in video game development for the representation of game world objects. An ECS comprises entities composed from components of data, with systems which operate on the components.
> Entity: An entity represents a general-purpose object. In a game engine context, for example, every coarse game object is represented as an entity. Usually, it only consists of a unique id. Implementations typically use a plain integer for this
> Common ECS approaches are highly compatible with, and are often combined with, data-oriented design techniques. Data for all instances of a component are contiguously stored together in physical memory, enabling efficient memory access for systems which operate over many entities.
> History
> In 1998, Thief: The Dark Project pioneered an ECS.
So, according to wikipedia:
- An entity is typically just a numeric unique id
- Components are typically physically contiguous (i.e. an array)
- Their history began with Thief pioneering them in 1998
I was rather expecting code examples, so that we could deconstruct the language primitives being used for the implementation from a CS language semantics point of view.
Thank you for this summary. I'm a hobbyist programmer and want to watch this, but having a few concepts to hang my hat on helps contextualize this for me.
I had difficulty understanding what irked me about the comment, but indeed, that's it. The mix of superficiality, congeniality, and random details sounds like an AI response. However, I don't think it is. But AI surely fucks up our trust.
I think I have autism. Anyway, this is how I tend to organize thoughts and relay them. If it looks like AI then I don't know what to do about it, and that's kinda scary really.
I guess it's more common in comments to read someone's opinion on different parts, whether they agree or disagree, a related anecdote or personal experience with the topic, or a remark focused on a specific detail that stands out.
It doesn't matter how you organize thoughts, it's about how other people read your text, and AI has fucked up our perceptions.
First you praise the talk. That sounds AI-like. LLMs are trained in the annoying American way of starting with something positive, even if it's irrelevant or isn't meant. You're probably somewhat conditioned to do the same. But in this forum, those comments are not encouraged. The upvote button should be enough to express that.
The rest reads like a summary, also an LLM feature, but nobody asked for a summary, and you're not announcing that you want to give one. It sets the reader up for some conclusion or evaluation, which never comes.
There's no personal thought (except for the praise), no anecdote, criticism, or supplementary information. If all you wanted was to recommend the talk to people, I think an effective way to do so would be something like
> I liked the talk. So much that I didn't know about the history of OOP, or how ECS (Entity Component System) could have been a competitor. Recommended, though it's a bit long.
Not that that will get you a lot of upvotes (which shouldn't be a goal anyway), but it expresses someone's reflection on the link, which others can understand as support for their decision to check the link, or not.
AI has fucked up our perception. That's not your fault, of course, but you can try to skirt around it. But not everybody has to write every opinion everywhere. It's fine if your communication doesn't always fall on fertile ground. You don't have to apologize or blame the spectrum. Some people have better ways with words than others.
Thank you for your reply. Good food for thought.
-----
"LLMs are trained in the annoying American way of starting with something positive, even if it's irrelevant, or isn't meant." Agree. LLM's make me feel like I'm in kindergarten and need positive reinforcement at every step of the way to be intellectually curious.
I complained a lot about OOP all throughout my 25 years as a developer. I wrote a ton of C++ and then Java and nobody can refute my expertise with those languages. I saw so many people make mistakes using them, particularly with forcing taxonomies into situations that weren't conducive to having them. Then, when I began complaining to my colleagues about my feelings, I was ostracized and accused of "not having a strong enough skillset." In other words, the dogma of the time overrode the Cassandras saying that the emperor had no clothes. Meanwhile, the simple nature of C and even scripting languages was considered out of date. The software dev community finally realized how bad things had gotten with Java and then the walls came a tumbling down. I far prefer writing non-OO Python (or minimal use of classes) to anything else these days. I went all around the language world--did projects involving Lua, Clojure, tons of Groovy, then moved on to Functional Java, Kotlin, and Golang.
Similar experience. I would be the one the entire team would turn to when a really hard problem to debug came up. Yet, when I would say that OOP is not great and is over-complicating the code, I would be scoffed at. I never could reconcile how I was "leaned on" to fix things, but ignored in proposing different paradigms.
I recently read a quote, paraphrasing: Orthodoxy is a poor man's substitute for moral superiority.
This is a pervasive phenomenon in programming. Many times suboptimal solutions remain for a long time because, well, it does solve the problem. Once you are taught X by authoritative figures, you tend to lean on it. It takes experience and an open mind to do anything else.
The use of GOTO is another example. Yes, you probably wouldn't want it in your codebase, but the overzealousness against it removes constructs like break statements or multiple return statements from languages.
This is less about what metaphors are under the hood, but the patterns used on top of them. You can get technical, but it's definitely possible to write primarily functional, imperative or object-oriented code in Python, irrespective of what the syntax is for dealing with primitives.
I think you have things backwards; you're the one complaining about the underlying implementation, whereas the others are talking about the interface for it. They're talking about whether they prefer to drive cars or trains, and you're basically claiming that there's no difference between a car and a train if the train also runs on gas.
That's fair, but I'd still argue that there's a difference in whether code has to be written by defining objects directly or being able to work at a different abstraction level where objects are an underlying implementation detail that get leveraged by other language constructs that are familiar from non-OO paradigms. Whether or not that difference is important or good is a subjective opinion, but I don't see an argument about the underlying implementation as a particularly strong refutation of someone stating that they like having that level of abstraction available.
You stopped too soon - those apparently OOP semantics ultimately require decidedly non-OOP machine code at the real level it is executed. Everything else is just abstraction.
Trying to argue that you are, in fact driving a steam engine, requires one to assume a level of abstraction and definition, in order to set an arena in which a discussion can occur.
The talk makes a very specific complaint. That complaint is not that you are associating data with the functions operating on that data.
What the talk is about is compile-time (and maybe execution-time, in the case of Python) hierarchies being structured as a mapping of real objects. This is how I was taught OOP and this is what people recognize as "OOP".
>So for the anti-OOP folks out there using languages like Python as an example,
Just because a language associates data with functions, does not mean that every program hierarchy has to map onto a real world relationship.
Why are you even commenting on this with your nonsense? Do you really think that if someone is complaining about OOP they are complaining that data types store functions for operating on that data? Has literally anyone ever complained about that?
"The simplistic approach is to say that object-oriented development is a process requiring no transformations, beginning with the construction of an object model and progressing seamlessly into object-oriented code. …
While superficially appealing, this approach is seriously flawed. It should be clear to anyone that models of the world are completely different from models of software. The world does not consist of objects sending each other messages, and we would have to be seriously mesmerised by object jargon to believe that it does. …"
"Designing Object Systems", Steve Cook & John Daniels, 1994, page 6
I encourage you to listen to the talk, because it gives a very specific reason why historically that way of thinking about OOP was so common. Notably, it was Stroustrup's motivation to allow exactly that kind of thinking to be implemented in C, which became C++. Simula was developed to allow this structuring.
>While superficially appealing, this approach is seriously flawed. It should be clear to anyone that models of the world are completely different from models of software.
A great line. I just wish it wasn't outshone by all the lectures, tutorials, and books which explain OOP by saying "a Labrador is a dog is an animal" and then tell you how this abstraction is exactly what you should be doing.
OOP revisionism is always very surprising, because the only people aware of it are OOP revisionists, the vast majority of developers are completely unaware of it.
I listened to the presenter tell us Alan Kay "kind of soured on it [inheritance]" 13:45
The source doesn't say that.
I listened to the presenter tell us "… look at what they were actually talking about when they were talking about Smalltalk in the times before they had chance to reflect and say that it [inheritance] didn't work". 14:10
The source doesn't say that.
I listened to the presenter tell us "… literally representing in the hierarchy what our domain model is. … They have a Path class and from that Path class they have different shapes derived from it." 14:46
Chapter 20 "Smalltalk-80: The Language and its Implementation" describes how Graphics was implemented in the Smalltalk-80 system —
"Class Path is the basic superclass of the graphic display objects that represent trajectories. Instances of Path refer to an OrderedCollection and to a Form. The elements of the collection are Points. … LinearFit and Spline are defined as subclasses of Path. … Class Curve is a subclass of Path. It represents a hyperbola that is tangent to lines … Straight lines can be defined in terms of Paths. A Line is a Path specified by two points." page 400
As they say in the Preface — "Subclasses support the ability to factor the system in order to avoid repetitions of the same concepts in many different places. … subclassing as a means to inherit and to refine existing capability."
What a silly objection. You just made up a random argument in your head about what that talk was about or what arguments are being made.
It actually does matter a whole lot what developers think. It matters far more than any "CS definition", not that anything here is about computer science.
>What matters are language implementations
No, they do not matter at all. They are totally irrelevant to the topic. In python you can construct your hierarchies to match real world hierarchies. You also can not do that. It is totally irrelevant how the language is implemented.
OOP has lots of flaws and is not a good choice in every context, but I still don't understand the universal hatred it seems to get now.
I think OOP techniques made most sense in contexts where data was in memory of long-running processes - think of early versions of MS Office or such.
We've since changed into a computing environment in which everything that is not written to disk should be assumed ephemeral: UIs are web-based and may jump not just between threads or processes but between entire machines between two user actions. Processes should be assumed to be killed and restarted at any time, etc.
This means it makes a lot less sense today to keep complicated object graphs in memory - the real object graph has to be represented in persistent storage and the logic inside a process works more like a mathematical function, translating back and forth between the front-end representation (HTML, JSON, etc) and the storage representation (flat files, databases, etc). The "business logic" is just a sub-clause in that function.
For that kind of environment, it's obvious why functional or C-style imperative programming would be a better fit. It makes no sense to instantiate a complicated object graph from your input, traverse it once, then destroy it again - and all that again and again for every single user interaction.
But that doesn't mean that the paradigm suddenly has always been bad. It's just that the environment changed.
Also, there may be other contexts in which it still makes sense, such as high-level scripting or game programming.
I'm a bit confused. What does any of this have to do with the central thesis of the talk? ("Compile time hierarchies of encapsulation that match the domain model were a mistake")
I understand that OOP is a somewhat diluted term nowadays, meaning different things to different people and in different contexts/communities, but the author spent more than enough time clarifying in excruciating detail what he was talking about.
Ok, meanwhile the admins seem to have combined all the comments here, so the link now points to an empty post; and I made this comment a few days (not four hours) ago.
And on the other end of the spectrum, you have the proponents of Domain-driven design (DDD)[0], where they use an ML-descended language such as F# and the aim is to make invalid states unrepresentable by the program [1]
How is this "the other end of the spectrum"? The Typestate pattern described at https://geeklaunch.io/blog/make-invalid-states-unrepresentab... (especially wrt. its genericized variety that's quite commonly used in Rust) is precisely a "compile-time hierarchy of encapsulation that matches the domain model", to use Casey Muratori's term for what he's talking about. It's literally inheritance-based OOP in a trenchcoat.
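For readers who haven't seen it, a minimal typestate sketch (hypothetical names, not taken from the linked post): the state lives in the type, so an invalid transition is a compile error rather than a runtime check.

struct Draft { body: String }
struct Reviewed { body: String }

impl Draft {
    fn review(self) -> Reviewed { Reviewed { body: self.body } }
}
impl Reviewed {
    fn publish(self) -> String { self.body }
}

fn main() {
    let post = Draft { body: "hello".into() };
    let live = post.review().publish(); // calling publish() on a Draft would not compile
    println!("{live}");
}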
There are very few F# specific features used in the book. I imagine you could follow along pretty easily with any other functional language. You can easily use F# for the book and then apply the lessons learned to another language when you're done too. It mainly shows how to use sum types, product types and function composition to implement DDD.
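The flavor of it, roughly translated to Rust (the book uses F#; this is the classic "email and/or postal address" example, names hypothetical):

// Product types carry validated data; sum types enumerate the legal shapes.
struct EmailAddress(String);
struct PostalAddress(String);

// "A customer must have an email, a postal address, or both - never neither"
// is now impossible to misrepresent.
enum ContactInfo {
    EmailOnly(EmailAddress),
    PostalOnly(PostalAddress),
    Both(EmailAddress, PostalAddress),
}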
I'm not sure what tendencies you're referring to though. F# has been around for 20 years and has only gotten better over time.
Entertaining. The presenter obviously doesn't like the class hierarchy to correspond to the domain model. He seems to think that this was an essential feature of OOP, supported by some quotations by Smalltalk exponents. But not even the Smalltalk world could agree on what OOP actually is (just compare the statements by Kay with the actual architecture of Smalltalk-76ff), and as quickly as Smalltalk lost its significance, there is no need to mention it further.

I would rather look at a reputable industry organization such as the IEEE, which even publishes its own standards and best practices, for what OOP is about. E.g. the OOP Milestone (see https://ethw.org/Milestones:Object-Oriented_Programming,_196...), which names Simula 67 the first OO language, specifies OO as "the combination of three main features: 1) encapsulation of data and code 2) inheritance and late binding 3) dynamic object generation." No mention that the class hierarchy should correspond to the domain model.

So maybe we should just not mix up a programming paradigm with how it is used by some folks in practice? The fact that the loudest proponents of a paradigm are not usually those who apply it in practice remains true even today. Takes far less than 2.5 hours to state.
He literally gives extensive primary source citations to show that the originators of OOP presented this class-domain correspondence as the correct way to think about and do OOP. Bjarne Stroustrup is not just some random guy.
> He literally gives extensive primary source citations to show that the originators of OOP presented this class-domain correspondence
In case of Dahl/Nygaard it seems logical since their work focus was on simulation. Simula I was mostly a language suited to build discrete-event simulations. Simula 67, which introduced the main features we subsume under "Object-Orientation" today, was conceived as a general-purpose language, but still Dahl and Nygaard mostly used it for building simulations. It would be wrong to conclude that they recommended a class-domain correspondence for the general case.
> Bjarne Stroustrup is not just some random guy
Sure, but he was a Simula user himself for distributed systems simulation during his PhD research at Cambridge University. And he learned Simula during his undergraduate education at Aarhus, where he also took lectures with Nygaard (a simulation guy as well). So also here, not surprising that he used examples with class-domain correspondence. But there was also a slide in the talk where Stroustrup explicitly stated that there are other valid uses of OO than using it for modeling domains.
The source citations are facts. We can check that Alan Kay "The Early History of Smalltalk" shows this on page 82:
"Unfortunately, inheritance — though an incredibly powerful technique — has turned out to be very difficult for novices (and even professionals) to deal with."
When the presenter tells us — 13:45 "he was already saying he kind of soured on it" — that is not a fact, it's speculation. That speculation does not seem to be supported by what follows in "The Early History of Smalltalk".
One page later — "There were a variety of strong desires for a real inheritance mechanism from Adele and me, from Larry Tesler, who was working on desktop publishing, and from the grad students." page 83
And "A word about inheritance. … By the time Smalltalk-76 came along, Dan Ingalls had come up with a scheme that was Simula-like in it's semantics but could be incrementally changed on the fly to be in accord with our goals of close interaction. I was not completely thrilled with it because it seemed that we needed a better theory about inheritance entirely (and still do). … But no comprehensive and clean multiple inheritance scheme appeared that was compelling enough to surmount Dan's original Simula-like design." page 84
yes, because java and c# (and others, python to a certain extent) basically copied it. even ruby, which at its core is about "message passing", sure does a hell of a lot to hide that and make it feel c++-ish. i would bet at least 25% of ruby practitioners aren't aware that message passing is happening.
The main issue I have with Java is that the JVM was built to be portable and then we got a superior kind of portability using containers, which makes the JVM totally redundant and yet whenever I point that out, I get funny looks from people!
I guess I have a lot of other problems with Java--jar hell, of course, but also the total inability for corporations to update their junk to newer versions of Java because so many libraries and products were never kept up with and then you get security people breathing down your neck to update the JVM which was LITERALLY IMPOSSIBLE in at least two situations I became involved with. We even tried to take over 3rd party libraries and rewrite/update them and ended up at very expensive dead ends with those efforts. Then, to top it all off, being accused of lacking the skill and experience to fix the problem! Those a-holes had no idea what THEY were talking about. But in corporate America, making intelligent and well-documented arguments is nothing. That's when I finally decided I needed to just stop working on anything related to Java entirely. So after about 15 years of that crap, I said no more. But I'm the under-skilled one.
You do realize that it's time to sunset a codebase if you can't find anyone to maintain it, right? "LITERALLY IMPOSSIBLE" means the code is dead. It's worthless garbage dragging the company down. There is nothing else to do except shut it down. Software written in the last century isn't some irreplaceable artifact from an ancient technologically superior civilization that can never be replicated.
If humanity's technological progress depended on impossibly rare events that never happen again, then humanity would miss the vast majority of them. It would be as if those events never existed in the first place.
you realize the person you’re responding to didn't make this decision on what to use in their stack at all, and this is likely what eventually happened anyway?
If only! Java gives you just enough non-objects to make pass-by-value or pass-by-reference something you need to be aware of, likewise with == and equals.
What do you mean by this? Because everything in Python is an object; even classes and functions are objects.
Do you just mean that Python lets you write functions not as part of a class? Because yeah there's the public static void main meme but static functions attached to a class is basically equivalent to Python free functions being attached to a module object.
OOP is not shoved down your throat with Python, though. With Python, I can choose what taxonomies deserve an OOP treatment and which do not. Spoiler: Almost nothing is a taxonomy of any remarkable nature.
you also can get away with completely ignoring the underlying oop semantics in tons of cases, whereas java and similar languages don't give you that option
No. Functional programming is quite a bit more involved than just writing "functions"--it is taking advantage of the fact that functions are first-class "objects" that can be passed as arguments to other functions, which allows for a far more intuitive and flexible form of programming. But FP is even more than that.
I don't hate C++ as much as I hate Java, though. That probably has more to do with the time I was working with C++ and the kinds of projects, versus the mind-numbing corporate back-end garbage I worked on with Java.
Did you actually watch it? He talks A LOT more about Simula and C++ than Smalltalk. He goes back to the original sources: Kristen Nygaard and Bjarne Stroustrup. Seems odd to focus on Smalltalk when that’s not what the talk was about.
It's funny because I read this comment and then watched the video, and it's like the first 15 minutes of the talk are dedicated to debunking this exact comment. Freaky.
I love Casey and I love this talk. Always good to see people outside of academia doing deep research, and this corroborates a lot of how I have understood the subject.
I find it funny that even after he goes into explicit detail describing OOP back to the original sources, people either didn't watch it or are just blowing past his research to move the goalposts and claim that's not actually what OOP is, because they don't want to admit the industry is obsessed with a mistake, just like waterfall, and is too Stockholm-syndromed to realize it.
Except that his talk is not anti-OOP. It's anti-a-specific-way of using OOP, namely representing the domain model as the compile-time hierarchy. He goes to great lengths to show that he himself uses OOP concepts in his code. OOP wasn't a mistake per se; the mainstream way of using it, as promulgated by a number of experts, was the mistake.
The problem is when you take out mistakes, there's not much left of OOP.
We take out 'dog-is-an-animal' inheritance.
We take out object-based delegation of responsibility (an object shall know how to draw itself). A Painter will instead draw many fat structs.
Code reuse? Per the talk, the guy who stumbled onto this was really looking for a List<> use-case (not a special kind of Bus/LinkedList hybrid). He was after parametric polymorphism, not inheritance.
> "OO is not bad when there is an actual entity - i.e. a stream, a socket, a window, etc to which an object corresponds, or in a simulation of actual entities. That's where it was born and where it shines. … For instance, don't customers buy products? Which should own the functions that involve both?"
What is the purpose of the customers / products app?
I had whisper make a transcript and skimmed some but I ended up watching the talk at ~1.5x speed in the end anyway. https://pastebin.com/EngTq9ZA If you want the timestamps kept in I can paste that too.
A few lines of Javascript in the console can copy that to the clipboard for you. Maybe someone's packaged that up already. (It's on my todo list to look around...)
Control Panel for YouTube [1] has an option to add a new "Download" item to the menu in the video Transcript box (which I've just noticed is currently broken, as YouTube must have changed the implementation again recently)
Here's a fixed version [2] - run this when the transcript is open and loaded.
I really don't understand his reasoning. If you have a pointer to the base class, different implementations are polymorphic and it's hidden from the caller. That is the whole point, and it means you can have an engine with a base class in a library, and then different people can derive from it and use that engine.
I think his definition of OO is different to what we've got used to. Perhaps his definition needs a different name.
> I think his definition of OO is different to what we've got used to. Perhaps his definition needs a different name.
I've seen "OOP" used to mean different things. For example, sometimes it's said about a language, and sometimes it's unrelated to language features and simply about the "style" or design/architecture/organization of a codebase (Some people say some C codebases are "object oriented", usually because they use either vtables or function pointers, or/and because they use opaque handles).
Even when talking about "OOP as a programming language descriptor", I've seen it used to mean different things. For example, a lot of people say rust is not object-oriented. But rust lets you define data types, and lets you define methods on data types, and has a language feature to let you create a pointer+vtable construct based on what can reasonably be called an interface (A "trait" in rust). The "only" things it's lacking are either ergonomics or inheritance, or possibly a culture of OOP. So one definition of "OOP" could be "A programming language that has inheritance as a language feature". But some people disagree with that, even when using it as a descriptor of programming languages. They might think it's actually about message passing, or encapsulation, or a combination, etc etc.
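Concretely, the Rust construct being described (a sketch; `Draw` is a made-up trait):

trait Draw {
    fn draw(&self) -> String;
}

struct Circle;
struct Square;

impl Draw for Circle { fn draw(&self) -> String { "circle".into() } }
impl Draw for Square { fn draw(&self) -> String { "square".into() } }

// `dyn Draw` is the pointer+vtable construct: runtime polymorphism
// against an interface, with no inheritance anywhere.
fn render(shapes: &[Box<dyn Draw>]) {
    for s in shapes {
        println!("{}", s.draw());
    }
}

fn main() {
    let shapes: Vec<Box<dyn Draw>> = vec![Box::new(Circle), Box::new(Square)];
    render(&shapes);
}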
And when talking about "style"/design, it can also mean different things. In the talk this post is about, the speaker mentions "compile time hierarchies of encapsulation that match the domain model". I've seen teachers in university teach OOP as a way of modelling the "real world", and say that inheritance should be a semantic "is-a" relationship. I think that's the sort of thing the talk is about. But like I mentioned above, some people disagree and think an OOP codebase does not need to be a compile time hierarchy that represents the domain model, it can be used simply as a mechanism for polymorphism or as a way of code reuse.
Anyways, what I mean to say is that I don't think arguing about the specifics of what "OOP" means in the abstract is very useful, and since in this particular piece the author took the time to explicitly call out what they mean, we should probably stick to that.
>I think his definition of OO is different to what we've got used to.
No. His definition is exactly what people are taught OOP is. It is what I was taught, it is what I have seen taught, it is what I see people mean when they say they are doing OOP.
> Perhaps his definition needs a different name.
No. Your definition needs a different name. Polymorphic functions are not OOP. If you give someone standard Julia code, a language entirely built around polymorphic functions, they would tell you that it is a lot of things, except nobody would call it OOP.
Importantly polymorphic functions work without class hierarchies. And calling anything without class hierarchies "OOP" is insane.
"The expression problem matrix … In object-oriented languages, it's easy to add new types but difficult to add new operations … Whereas in functional languages, it's easy to add new operations but difficult to add new types"
what is your reasoning? if you make your own object system, that is indeed polymorphic. do you now feel the need to model the world in your application?
Really short: ECS existed in the earliest implementations of OOP in 1963 and was being used in the software he showed.
When OOP went mainstream it pretty much was entirely about "compile time hierarchy of encapsulation that matches the domain model" and nothing else. His opinion is the standard way of doing OOP is a bad match for lots of software problems but became the one-size-fits-all solution as a result of ignorance.
Also he claims that history is being rewritten to some extent to say this wasn't the case and there was never a heavy emphasis on doing things that way.
OOPs = "object-oriented programming", BUT it's a more restrained and thoughtful complaint than just "objects suck" or "inheritance sucks". He cabins it pretty clearly at 11:00 minutes in: "compile-time hierarchy of encapsulation that matches the domain model was a mistake"
To unpack that a little, he looks to the writings of the early developers of object oriented programming and identifies the ways this assumption became established. People like Bjarne Stroustrup (developer of C++) took on and promulgated the view that the inheritance hierarchy of classes in an object oriented system can be or should be a literal instantiation of the types of objects from the domain model (e.g. different types of shapes in a drawing program).
This is a mistake is because it puts the broad-scale modularization boundaries of a system in the wrong places and makes the system brittle and inflexible. A better approach is one where large scale system boundaries fall along computational capability lines, as exemplified by modern Entity Component Systems. Class hierarchies that rigidly encode domain categorizations don't make for flexible systems.
Some of the earliest writers on object encapsulation, e.g. Tony Hoare, Doug Ross, understood this, but later language creators and promoters missed some of the subtleties of their writings and left us with a poor version of object-oriented programming as the accepted default.
Only as a brief aside (I don't have the timestamp right now) while talking about Smalltalk, which he mostly discusses to argue that Smalltalk was not different from C++ in seeking (most of the time) to model programs in terms of static hierarchies (according to the primary source documentation from the time of Smalltalk's design):
> And another thing is if you look at the other branch, the branch that I'm not really covering very much in this talk, because again, we don't program in Smalltalk these days, right? The closest thing you would get is maybe something like Objective-C. If there's some people out there using Objective-C, you know, like Apple was using that for a little while, so Objective-C kind of came from a Smalltalk background as well.
Objective-C is basically Smalltalk retrofitted onto C, even more than C++ was Simula retrofitted onto C (before C++ gained template metaprogramming and more modern paradigms), so it makes sense that Muratori doesn't go much into it, given that he doesn't discuss Smalltalk much.
Inheritance sucks if you wish to write good unit tests easily. It just totally freaking sucks due to encapsulation. When you step back, you realize that composition is a far better approach to writing testable code.
- Encapsulation / interfaces is a good idea, a continuation of the earlier ideas of structured programming.
- Mutable state strewn uncontrollably everywhere is bad idea, even in a single-threaded case.
- Inheritance-based polymorphism is painful, both in the multiple (C++) and single (Java) inheritance cases. Composable interfaces / traits / typeclasses without overriding methods are logically cleaner and much more useful (sketch below).
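A tiny sketch of that last point (illustrative Rust; trait names made up): capabilities compose as bounds, and nothing overrides anything.

trait Serialize { fn to_bytes(&self) -> Vec<u8>; }
trait Timestamped { fn unix_time(&self) -> u64; }

// Any type offering both capabilities works here; no hierarchy required,
// and no method resolution order to reason about.
fn archive<T: Serialize + Timestamped>(item: &T) -> (u64, Vec<u8>) {
    (item.unix_time(), item.to_bytes())
}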
Over multiple decades, I have come to reject all of it! Even interfaces.
I watched over and over again as people wrote code to interfaces, particularly due to Spring, and then none of those interfaces ever got a second implementation and were never, ever going to! It was a total waste of time. Even for testing it was almost a total waste of time, though I guess writing stubbed test classes that could pretend to return data from a queue or a database was somewhat useful. The thing is, there were easier ways to achieve that.
Those interfaces that never got a second implementation were still defining the contract for interacting with another part of your system and that compile time enforced contract provides value. I have plenty of complaints about Spring but interfaces is not one of them.
Basically his “35-year mistake” thesis is that we almost had ECS, the entity/component/system pattern, in 01963 with Sketchpad, but it took until 01998. He explains this near the end of the talk proper, and explains how Looking Glass introduced the pattern in Ultima Underworld, but really introduced it in 01998 with Tom Leonard’s Thief: The Dark Project. Later though he seems to be saying that he's not sure ECS is actually a good idea, but he thinks encapsulation is, if not a bad idea, at least an idea that should be applied carefully to keep it from getting in your way, and definitely not in a way that reflects a division among problem-domain objects such as cars, trucks, bridges, circular arcs, lanterns, etc.
Stroustrup took out the object hierarchy introspection feature that was available before, which turned out to be a pretty handy feature that people kept trying to reimplement.
It is finally coming in C++26, but boy, the syntax. C++ keeps competing with Perl in that regard, and I say this as someone who enjoys coding in C++ in my free time.
The 35 year mistake was the idea that in order to have a well structured program, your compile time hierarchies have to represent real world relationships.
The talk traces that mistake to Simula, where it was appropriately used, because Simula was intended to simulate real-world hierarchies. Then to C++, where it started to be used inappropriately, and then to Java, where it became universal praxis to model every real-world relationship as a compile-time hierarchy.
While other people will likely hand you useful links or videos, I will attempt another route to understanding.
When it comes to organising your code in an ECS-esque fashion, it is much closer to normalising a database except you are organising your structs instead of tables.
With databases, you create tables. You would have an Entity table that stores a unique Id, and tables that represent each Component, each of which would have an EntityId key, etc.
Also, each table is representative of a basic array. It is also about knowing a good design for memory allocation, rather than 'new' or 'delete' in typical OOP fashion. Maybe you can reason about the memory needed on startup. Of course, this depends on the type of game.. or business application.
An 'Entity' is useless on its own. It can have many different behaviours or traits. Maybe, if referring to games, you can have an entity that: has physics, is collidable, is visible, etc.
Each of these can be treated as a 'Component' holding data relevant to it.
Then you have a 'System' which can be a collection of functions to initialise the system, shutdown the system, update the system, or fetch the component record for that entity, etc.. all of which manipulates the data inside the Component.
Some Components may even require data from other Components, which you would communicate by calling the system methods.
You can create high level functions for creating each Entity. Of course, this is a very simplified take :-
var entity1 = create_player(1)
var boss1 = create_boss1()
function create_player(player_no) {
var eid = create_entity();
physics_add(eid); // add to physics system
collision_add(eid); // add to collision system
health_add(eid, 1.0); // add health/damage set to 1.0
input_add(eid, player_no); // input setup - more than 1 player?
camera_set(eid, player_no); // camera setup - support split screen?
return eid;
}
function create_boss1() {
var eid = create_entity();
physics_add(eid);
health_add(eid, 4.0); // 4x more than player
collision_add(eid);
ai_add(eid, speed: 0.6, intelligence: 0.6); // generic AI for all
return eid;
}
1. You have entities, which may just be identifiers (maybe a u64 used as an index elsewhere) or some more complex object.
2. You have components, which are the real "meat and potatoes" of things. These are the properties or traits of an entity, the specifics depend on your application. For a video game or physics simulator it might be velocity and position vectors.
3. Each entity is associated with 0 or more components.
4. These associations are dynamic.
5. You have systems which operate on some subset of entities based on some constraints. A simple constraint might be "all entities with position and velocity components". Objects lacking those would not be important to a physics system.
In effect, with ECS you create in-memory, hopefully efficient, relational databases of system state. The association with different components allows for dynamically giving entities properties. The systems determine the evolution of the state by changing components, associating entities with components, and disassociating entities from components.
The technical details on how to do this efficiently can get interesting.
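A deliberately naive sketch of that framing (Rust, with HashMaps standing in for the "tables"; real implementations use packed arrays for exactly the efficiency reasons above):

use std::collections::HashMap;

type Entity = u64; // the "primary key"

#[derive(Default)]
struct World {
    // one "table" per component, keyed by entity id
    position: HashMap<Entity, (f32, f32)>,
    velocity: HashMap<Entity, (f32, f32)>,
}

// A system: join two component tables and update the matching rows.
fn physics(world: &mut World, dt: f32) {
    for (e, v) in &world.velocity {
        if let Some(p) = world.position.get_mut(e) {
            p.0 += v.0 * dt;
            p.1 += v.1 * dt;
        }
    }
}

fn main() {
    let mut world = World::default();
    world.position.insert(1, (0.0, 0.0));
    world.velocity.insert(1, (1.0, 0.5));
    physics(&mut world, 0.016);
    println!("{:?}", world.position[&1]);
}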
Compared to more typical OO (exaggerated for effect), instead of constructing a class which has a bunch of properties (say implements some combination of interfaces) and manually mixing and matching like:
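(Something like this hypothetical sketch, with one concrete type per capability combination:)

trait Wizard {}
trait Flying {}
trait Flameproof {}

struct PlainWizard;
struct FlyingWizard;
struct FlyingFlameproofWizard;

impl Wizard for PlainWizard {}
impl Wizard for FlyingWizard {}
impl Flying for FlyingWizard {}
impl Wizard for FlyingFlameproofWizard {}
impl Flying for FlyingFlameproofWizard {}
impl Flameproof for FlyingFlameproofWizard {}
// ...and so on for every combination you might ever need.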
Or creating a bunch of traits inside a god object version of the Wizard or Player class to account for all conceivable traits (most of which are unused at any given time), you use the dynamic association of an entity with Wizard, Flying, and Flameproof components.
So your party enters the third floor of a wooden structure and your Wizard (a component associated with an entity) casts "Fly" and "Protection from Elements" on himself. These associate the entity with the Flying and Flameproof components (and potentially others). Now when fireball is cast and the wizard is in the affected area, he'll be ignored (by virtue of being Flameproof) while everything around him catches fire, and when the wooden floor burns away the physics engine will leave him floating rather than falling like his poor non-flying, currently on fire Fighter compatriot.
It's a bit of a long read, but I think the best introduction is still this [0] and the comments were here [1]. Yes, it's presented in the context of rust and gamedev, but ECS isn't actually specific to a particular programming language or problem domain.
They’re very common in video game programming and visual effects and uncommon elsewhere. I enjoyed this article, though it’s still about using ECS in a simulation / computer graphics context.
Basically it is programming against composable interfaces, like COM, Objective-C protocols, and anything else like that, but sold in a way that anti-OOP folks find acceptable, while feeling they aren't using all that mumbo-jumbo bad OOP stuff some bad Java teachers gave them in high school.
Most of them tend to even ignore books on the matter, like "Component Software: Beyond Object-Oriented Programming" [0], instead treating some game studios' approach to ECS as the genesis of it all.
I have a PhD in computer science and I'm also able to stop for two seconds and understand what ECS as used by game devs means. It is pointed out in the talk that the design pattern was used in Sketchpad of all things and reinvented in 1998. They call that pattern ECS. It is unfortunate the name is overloaded, but that doesn't mean that when they say ECS they're referring to the ECS you are, nor that it's somehow a gotcha that that other ECS is still very much OOP.
Then let's sort this out: point to a GitHub repo of your choice for a game engine using ECS and let's discuss the implementation from a CS point of view regarding the programming language features used for the implementation.
Given that both of us have the required CS background, it should be kind of entertaining.
Sure, the Bevy engine is the one I'm most familiar with:
- An entity is a 64 bit integer wrapped in a struct for typesafety (newtype pattern). This is a primary key.
- A component is a struct that implements the "component" trait. This trait is an implementation detail to support the infrastructure and is not meant to be implemented by the programmer (there is a derive macro). It turns the struct into a SoA variant, registers it into the world object (the "database") plus a bunch of other things. It is a table.
- A query is exactly what it sounds like. You do joins on the components and can filter them and such.
- A system is just code that does a query and does something with the result. It's basically a stored procedure.
It is a relational database.
EDIT: forgot to link the relevant docs: https://docs.rs/bevy/latest/bevy/ecs/component/trait.Compone....
It is really critical to note a programmer is not expected to implement the methods in this trait. Programmers are only supposed to mark their structs with the derive macro that fills in the implementation. The trait is used purely at compile time (like a c++ template).
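Putting the pieces together, roughly what that looks like in use (a sketch against a recent-ish Bevy; exact method names drift between versions):

use bevy::prelude::*;

#[derive(Component)]
struct Position(Vec2);

#[derive(Component)]
struct Velocity(Vec2);

// Spawning an entity = inserting a row with two component "columns".
fn setup(mut commands: Commands) {
    commands.spawn((Position(Vec2::ZERO), Velocity(Vec2::new(1.0, 0.0))));
}

// A system is a plain function; the Query is the select/join.
fn movement(time: Res<Time>, mut q: Query<(&mut Position, &Velocity)>) {
    for (mut pos, vel) in &mut q {
        pos.0 += vel.0 * time.delta_seconds();
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Startup, setup)
        .add_systems(Update, movement)
        .run();
}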
Either way, it doesn't matter if OOP is used in the implementation of an ECS, just as it doesn't matter if MySQL uses classes and objects to implement SQL.
It seems the community is severely overexposed to bad practices and implementations of OOP and conversely severely underexposed to the success stories.
153 comments as of time of writing, let's see.
C-F:
Java: 21
C++: 31
Python: 23
C#: 2
And yet:
Pascal: 1 (!)
Delphi: 0
VCL: 0
Winforms: 0
Ruby: 2 (in one comment)
This is not a serious conversation about merits of OOP or lack thereof, just like Casey's presentation is not a serious analysis - just a man venting his personal grudges.
I get that, it's completely justified - Java has a culture of horrible overengineering and C++ is, well, C++, the object model is not even the worst part of that mess. But still, it feels like there is a lack of voices of people for whom the concept works well.
People can and will write horrible atrocities in any language with any methodologies; there is at least one widely used "modern C++" ECS implementation built with STL for example (which itself speaks volumes), and there is a vast universe of completely unreadable FP-style TypeScript code out there written by people far too consumed by what they can do to stop for a second and think if they should.
I don't know why Casey chose this particular hill to die on, and I honestly don't care, but we as a community should at least be curious if there are better ways to do our jobs. Sadly, common sense seems to have given way to dogma these days.
As someone who has always found popular OOP stupid (programming is closer to math, not linguistics—write functional programs!) I'm glad Casey is going out there and giving talks like this. If extensive academic research and extensively documented benefits couldn't convince the industry to abandon OOP in favor of functional style maybe an everyman like Casey finally can.
A lot of so-called programmers and systems "engineers" act like religious zealots. Even challenging the ideas of OOP is blasphemy to them, even though there are many legitimate reasons to do so.
I was around when OOP became popular in the 90s. I think it was a huge step forward. The problem is that with almost every useful paradigm, at some point consultants and zealots take over and push things to an extreme that doesn't work. And when problems show up, it's because you didn't do it right. Happened with OOP, NoSQL, Agile, and probably many others. I don't see why functional style would go any differently.
Despite spending untold hours learning and using C++ and Java, I never fully believed that OOP was anything great. It always felt so forced to code everything in terms of classes rather than just modules of code that have similar responsibilities.
You must not mean C++ when you write about having to write everything in terms of classes. Java, yes, with the requirement that the nearest thing to a free standing function is a static method in a class (which becomes in effect a regular old module). But C++? You could, and many did, pretend it was fancy C and only deal with classes when it came to using things like collections and streams (because they were useful).
The whole code-everything-in-classes idea came up when the purists took over. I (and most reasonable people I know) wrote most of our code as free-standing functions and used instantiable classes only where it made sense. A class with only static methods is basically a module.
In fact, as Casey mentions in this talk, a lot of the earliest ideas in software architecture came from trying to parse languages - both human and computer.
Yes, of course there is a connection through the Chomskian mathematization of syntax.
But when we think about solving problems over domains and relations (e.g think about realizing that the problem of parsing requires traversing a tree like structure) we are dealing with mathematical-logical structures, not linguistic concepts. This is what I meant. I've seen a lot of OOP code that tried desperately to make code reflect the fuzzier relationships between linguistic concepts, rather than the precise ones of logical structure (a lot of this is a consequence of over-encapsulation and excessive information hiding)
Yes! It became extremely tiresome to defend my position as a functional programmer from less experienced people who had big egos. Time and time again I would be accused of "not having enough experience" simply because I disagreed with just so many stupid things that have gone on over the years--the zealots I ran into were people with big chips on their shoulders who just had to be "right" instead of being inquisitive and thoughtful. I never had anything to prove, I just wanted to write tight, testable, maintainable code.
> A lot of so-called programmers and systems "engineers" act like religious zealots.
Rather ironic given that in this very comment section I'm largely seeing that behavior associated with people appealing to Casey as an authority as an excuse not to engage with intelligently written counterpoints.
I certainly won't defend the historic OOP hype but a tool is not limited by how the majority happen to use it at any given time. Rallying against a tool or concept itself is the behavior of a zealot. It's rallying against a particular use that might have merit.
Just let me ask you this: How many years have you been in this game? I came in around 1990, but my first professional coding job wasn't until after I finished a BS in CS and Math in 1995. To me, having the perspective I have, OOP looks in retrospect to have been an enormous boondoggle championed by the Boomer generation. It was all people who did Waterfall, wrote endless requirements documents before coding anything, and did quarterly or even yearly code releases, if you can even imagine that.
Not quite as long as you, but I don't think it's relevant to the point at hand. I entirely agree with what you wrote, and yet I think it's entirely in keeping with what I said. It's the things that actually happened that were the boondoggle, not the paradigm itself.
Similarly I'd like to suggest that there exist situations where waterfall is the obviously correct choice. Yet even then someone could still potentially manage to screw it up.
This is completely true, of course—the only reason I'm being a bit hyperbolic is because some rebalancing is still in order.
I agree that once the field matures what we will really (hopefully) finally see are people adopting different modes of organization based on the second order systems properties they support, rather than ideology or personal experience—but we aren't there yet.
I think there are certain cases in which using an object oriented approach makes sense, but man, it has led to so many bloated, needlessly complicated systems in which the majority of the work is dealing with inanities imposed by OOP discipline and structure rather than dealing with the actual problem the system is supposed to solve.