OOP has lots of flaws and is not a good choice in every context, but I still don't understand the universal hatred it seems to get now.
I think OOP techniques made most sense in contexts where data was in memory of long-running processes - think of early versions of MS Office or such.
We've since changed into a computing environment in which everything that is not written to disk should be assumed ephemeral: UIs are web-based and may jump not just between threads or processes but between entire machines between two user actions. Processes should be assumed to be killed and restarted at any time, etc.
This means it makes a lot less sense today to keep complicated object graphs in memory - the real object graph has to be represented in persistent storage and the logic inside a process works more like a mathematical function, translating back and forth between the front-end representation (HTML, JSON, etc) and the storage representation (flat files, databases, etc). The "business logic" is just a sub-clause in that function.
For that kind of environment, it's obvious why functional or C-style imperative programming would be a better fit. It makes no sense to instantiate a complicated object graph from your input, traverse it once, then destroy it again - and all that again and again for every single user interaction.
But that doesn't mean the paradigm was therefore always bad. It's just that the environment changed.
Also, there may be other contexts in which it still makes sense, such as high-level scripting or game programming.
So much gold to mine in this talk. Even just this kind of throwaway line buried deep in the Q&A:
> I prefer to write code in a verb-oriented way not an object-oriented way. ... It also has to do with what type of system you're making: whether people are going to be adding types to the system more frequently or whether they're going to be adding actions. I tend to find that people add actions more frequently.
Suddenly clicked for me why some people/languages prefer doThing(X, Y) vs. X.doThing(Y)
Unified function call: The notational distinction between x.f(y) and f(x,y) comes from the flawed OO notion that there always is a single most important object for an operation. I made a mistake adopting that. It was a shallow understanding at the time (but extremely fashionable). Even then, I pointed to sqrt(2) and x+y as examples of problems caused by that view.
The main benefit of x.f(y) IMO isn't emphasizing x as something special, but allowing a flat chain of operations rather than nesting them. I think the differences are more obvious if you take things a step further and compare x.f(y).g(z) and g(f(x, y), z). At the end of the day, the difference is just syntax, so the goals should be to aid the programmer in writing correct code and to aid anyone reading the code (including the original programmer at a later point in time!) in understanding the code. There are tradeoffs to using "method" syntax as well, but to me that mostly is an argument for having both options available.
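To make that concrete, here's a toy Python sketch (all names invented for illustration) with the same two operations written both ways:

    class Text:
        """Toy pipeline type: contrast x.f(y).g(z) with g(f(x, y), z)."""
        def __init__(self, s: str):
            self.s = s
        def strip_prefix(self, p: str) -> "Text":
            return Text(self.s[len(p):] if self.s.startswith(p) else self.s)
        def upper(self) -> "Text":
            return Text(self.s.upper())

    def strip_prefix(t: Text, p: str) -> Text:
        return Text(t.s[len(p):] if t.s.startswith(p) else t.s)

    def upper(t: Text) -> Text:
        return Text(t.s.upper())

    t = Text(">> hello")
    a = t.strip_prefix(">> ").upper()   # method syntax: flat left-to-right chain
    b = upper(strip_prefix(t, ">> "))   # free functions: same ops, nested inside-out
    assert a.s == b.s == "HELLO"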
Yeah, the dot operator is not a particularly strong signal of whether something is OOP or not. You could change the syntax of method calls in OO languages to not use the object as a prefix without the underlying paradigm being affected.
There are a handful of (somewhat exotic) languages that support multiple dispatch - pretty much, all those listed by you. None of the mainstream ones (C++, Java, C# etc) do.
(also Common Lisp is hardly a poster child of OOP, at best you can say it's multi-paradigm like Scala)
I complained a lot about OOP all throughout my 25 years as a developer. I wrote a ton of C++ and then Java, and nobody can refute my expertise with those languages. I saw so many people make mistakes using them, particularly with forcing taxonomies into situations that weren't conducive to having them. Then, when I began complaining to my colleagues about my feelings, I was ostracized and accused of "not having a strong enough skillset." In other words, the dogma of the time overrode the Cassandras saying that the emperor had no clothes. Meanwhile, the simple nature of C and even scripting languages was considered out of date. The software dev community finally realized how bad things had gotten with Java, and then the walls came a-tumbling down. I far prefer writing non-OO Python (or minimal use of classes) to anything else these days. I went all around the language world--did projects involving Lua, Clojure, tons of Groovy, then moved on to Functional Java, Kotlin, and Golang.
Similar experience. I would be the one the entire team would turn to when a really hard problem to debug came up. Yet, when I would say that OOP is not great and is over-complicating the code, I would be scoffed at. I never could reconcile how I was "leaned on" to fix things, but ignored in proposing different paradigms.
I recently read a quote, paraphrasing: "Orthodoxy is a poor man's substitute for moral superiority."
This is a pervasive phenomenon in programming. Many times suboptimal solutions remain for a long time because, well, it does solve the problem. Once you are taught X by authoritative figures, you tend to lean on it. It takes experience and an open mind to do anything else.
The use of GOTO is another example. Yes, you probably wouldn't want it in your codebase, but the overzealousness against it removes constructs like break statements or multiple return statements from languages.
This is less about what metaphors are under the hood than about the patterns used on top of them. You can get technical, but it's definitely possible to write primarily functional, imperative, or object-oriented code in Python, irrespective of what the syntax is for dealing with primitives.
I think you have things backwards; you're the one complaining about the underlying implementation, whereas the others are talking about the interface for it. They're talking about whether they prefer to drive cars or trains, and you're basically claiming that there's no difference between a car and a train if the train also runs on gas.
You stopped too soon - those apparently OOP semantics ultimately require decidedly non-OOP machine code at the level where they are actually executed. Everything else is just abstraction.
Trying to argue that you are, in fact, driving a steam engine requires one to assume a level of abstraction and definition, in order to set an arena in which a discussion can occur.
The talk makes a very specific complaint. That complaint is not that you are associating data with the functions operating on that data.
What the talk is about is compile-time (and maybe runtime, in the case of Python) hierarchies being structured as a mapping of real-world objects. This is how I was taught OOP and this is what people recognize as "OOP".
> So for the anti-OOP folks out there using languages like Python as an example,
Just because a language associates data with functions, does not mean that every program hierarchy has to map onto a real world relationship.
Why are you even commenting on this with your nonsense? Do you really think that if someone is complaining about OOP they are complaining that data types store functions for operating on that data? Has literally anyone ever complained about that?
What a silly objection. You just made up, in your head, a random argument about what that talk was about and what arguments were being made.
It actually does matter a whole lot what developers think. It matters far more than any "CS definition", not that anything here is about computer science.
> What matters are language implementations
No, they do not matter at all. They are totally irrelevant to the topic. In Python you can construct your hierarchies to match real-world hierarchies. You can also choose not to. It is totally irrelevant how the language is implemented.
This is an excellent talk. It digs really deep into the history of OOP, from 1963 to 1998. The point is that in 1998 the commercial game "Thief" was developed using an entity component system (ECS) architecture and not regular OOP. This is the earliest example he knows of in modern commercial programming.
During his research into the history of OOP he discovered that ECS existed as early as 1963, but was largely forgotten and not brought over as a software design concept or methodology when OOP was making its way into new languages and being taught to future programmers.
There are lots of reasons why this happened, and his long talk goes over the history and the key people and comes up with an explanatory narrative.
You can do ECS in any programming paradigm. It’s not incompatible with OO at all. There’s no need for the object model to be congruent to a static representation of a domain, in a line-of-business app it is much better for it to be congruent to workflow and processing.
Heck I’ve even done ECS in Rails for exactly this reason.
I never accepted the Java/C++ bastardisation of OOP and still think that Erlang is the most OO language, since encapsulation and message passing is so natural.
I am not sure what the link between ECS and protocols, traits, and multiple inheritance is? ECS is mostly about memory layout not typed interfaces as far as I know.
Thank you for this summary. I'm a hobbyist programmer and want to watch this, but having a few concepts to hang my hat on helps contextualize this for me.
I had difficulty understanding what irked me about the comment, but indeed, that's it. The mix of superficiality, congeniality, and random details sounds like an AI response. However, I don't think it is. But AI surely fucks up our trust.
Ok, meanwhile the admins seem to have combined all the comments here, so the link now points to an empty post; and I made this comment a few days (not four hours) ago.
And on the other end of the spectrum, you have the proponents of Domain-Driven Design (DDD)[0], where they use an ML-descended language such as F# and the aim is to make invalid states unrepresentable by the program [1].
How is this "the other end of the spectrum"? The Typestate pattern described at https://geeklaunch.io/blog/make-invalid-states-unrepresentab... (especially wrt. its genericized variety that's quite commonly used in Rust) is precisely a "compile-time hierarchy of encapsulation that matches the domain model", to use Casey Muratori's term for what he's talking about. It's literally inheritance-based OOP in a trenchcoat.
There are very few F# specific features used in the book. I imagine you could follow along pretty easily with any other functional language. You can easily use F# for the book and then apply the lessons learned to another language when you're done too. It mainly shows how to use sum types, product types and function composition to implement DDD.
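For anyone who hasn't read it, here's a rough Python approximation of the sum-type idea the book builds on (the F# original is more ergonomic; all names below are invented for illustration):

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class UnverifiedEmail:
        address: str

    @dataclass(frozen=True)
    class VerifiedEmail:
        address: str
        verified_at: str  # e.g. an ISO-8601 timestamp

    # The "sum type": a value is exactly one of the two cases.
    Email = Union[UnverifiedEmail, VerifiedEmail]

    def send_password_reset(email: VerifiedEmail) -> None:
        # Only VerifiedEmail is accepted, so "reset sent to an
        # unverified address" is not a representable state; there is
        # no is_verified boolean flag to forget to check.
        print(f"reset link sent to {email.address}")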
I'm not sure what tendencies you're referring to though. F# has been around for 20 years and has only gotten better over time.
Entertaining. The presenter obviously doesn't like the class hierarchy to correspond to the domain model. He seems to think that this was an essential feature of OOP, supported by some quotations from Smalltalk exponents. But not even the Smalltalk world could agree on what OOP actually is (just compare the statements by Kay with the actual architecture of Smalltalk-76ff), and given how quickly Smalltalk lost its significance, there is no need to dwell on it further.

I would rather ask a reputable industry organization such as IEEE, which even publishes its own standards and best practices, what OOP is about. E.g. the OOP Milestone (see https://ethw.org/Milestones:Object-Oriented_Programming,_196...) names Simula 67 the first OO language and specifies OO as "the combination of three main features: 1) encapsulation of data and code 2) inheritance and late binding 3) dynamic object generation." No mention that the class hierarchy should correspond to the domain model.

So maybe we should just not mix up a programming paradigm with how it is used by some folks in practice? The fact that the loudest proponents of a paradigm are not usually those who apply it in practice remains true even today. Takes far less than 2.5 hours to state.
He literally gives extensive primary source citations to show that the originators of OOP presented this class-domain correspondence as the correct way to think about and do OOP. Bjarne Stroustrup is not just some random guy.
> He literally gives extensive primary source citations to show that the originators of OOP presented this class-domain correspondence
In the case of Dahl/Nygaard it seems logical, since their work focused on simulation. Simula I was mostly a language suited to building discrete-event simulations. Simula 67, which introduced the main features we subsume under "Object-Orientation" today, was conceived as a general-purpose language, but Dahl and Nygaard still mostly used it for building simulations. It would be wrong to conclude that they recommended a class-domain correspondence for the general case.
> Bjarne Stroustrup is not just some random guy
Sure, but he was a Simula user himself for distributed systems simulation during his PhD research at Cambridge University. And he learned Simula during his undergraduate education at Aarhus, where he also took lectures with Nygaard (a simulation guy as well). So also here, not surprising that he used examples with class-domain correspondence. But there was also a slide in the talk where Stroustrup explicitly stated that there are other valid uses of OO than using it for modeling domains.
yes, because java and c# (and others, python to a certain extent) basically copied it. even ruby, which at its core is about "message passing", sure does a hell of a lot to hide that and make it feel c++-ish. i would bet at least 25% of ruby practitioners aren't aware that message passing is happening.
The main issue I have with Java is that the JVM was built to be portable, and then we got a superior kind of portability using containers, which makes the JVM totally redundant. Yet whenever I point that out, I get funny looks from people!
I guess I have a lot of other problems with Java--jar hell, of course, but also the total inability of corporations to update their junk to newer versions of Java, because so many libraries and products were never kept up, and then you get security people breathing down your neck to update the JVM, which was LITERALLY IMPOSSIBLE in at least two situations I became involved with. We even tried to take over 3rd-party libraries and rewrite/update them, and ended up at very expensive dead ends with those efforts. Then, to top it all off, being accused of lacking the skill and experience to fix the problem! Those a-holes had no idea what THEY were talking about. But in corporate America, making intelligent and well-documented arguments counts for nothing. That's when I finally decided I needed to just stop working on anything related to Java entirely. So after about 15 years of that crap, I said no more. But I'm the under-skilled one.
What do you mean by this? Because everything in Python is an object; even classes and functions are objects.
Do you just mean that Python lets you write functions that are not part of a class? Because yeah, there's the public static void main meme, but static functions attached to a class are basically equivalent to Python free functions being attached to a module object.
OOP is not shoved down your throat with Python, though. With Python, I can choose what taxonomies deserve an OOP treatment and which do not. Spoiler: Almost nothing is a taxonomy of any remarkable nature.
you also can get away with completely ignoring the underlying oop semantics in tons of cases, whereas java and similar languages don't give you that option
No. Functional programming is quite a bit more involved than just writing "functions"--it is taking advantage of the fact that functions are first-class "objects" that can be passed as arguments to other functions, which allows for a far more intuitive and flexible form of programming. But FP is even more than that.
I don't hate C++ as much as I hate Java, though. That probably has more to do with the time I was working with C++ and the kinds of projects, versus the mind-numbing corporate back-end garbage I worked on with Java.
It's funny because I read this comment and then watched the video, and it's like the first 15 minutes of the talk are dedicated to debunking this exact comment. Freaky.
Did you actually watch it? He talks A LOT more about Simula and C++ than Smalltalk. He goes back to the original sources: Kristen Nygaard and Bjarne Stroustrup. Seems odd to focus on Smalltalk when that’s not what the talk was about.
I love Casey and I love this talk. Always good to see people outside of academia doing deep research, and this corroborates a lot of how I have understood the subject.
I find it funny that even after he goes into explicit detail describing OOP back to the original sources, people either didn't watch it or are just blowing past his research to move the goalposts and claim that's not actually what OOP is, because they don't want to admit the industry is obsessed with a mistake, just like waterfall, and are too Stockholm-syndromed to realize it.
Except that his talk is not anti-OOP. It's anti a specific way of using OOP, namely representing the domain model as the compile-time hierarchy. He goes to great lengths to point out that he himself uses OOP concepts in his code. OOP wasn't a mistake per se. The mainstream way of using it, as promulgated by a number of experts, was the mistake.
The problem is when you take out mistakes, there's not much left of OOP.
We take out 'dog-is-an-animal' inheritance.
We take out object-based delegation of responsibility (an object shall know how to draw itself). A Painter will instead draw many fat structs.
Code reuse? Per the talk, the guy who stumbled onto this was really looking for a List<> use-case (not a special kind of Bus/LinkedList hybrid). He was after parametric polymorphism, not inheritance.
I had whisper make a transcript and skimmed some but I ended up watching the talk at ~1.5x speed in the end anyway. https://pastebin.com/EngTq9ZA If you want the timestamps kept in I can paste that too.
A few lines of Javascript in the console can copy that to the clipboard for you. Maybe someone's packaged that up already. (It's on my todo list to look around...)
Control Panel for YouTube [1] has an option to add a new "Download" item to the menu in the video Transcript box (which I've just noticed is currently broken, as YouTube must have changed the implementation again recently)
Here's a fixed version [2] - run this when the transcript is open and loaded.
I really don't understand his reasoning. If you have a pointer to the base class, different implementations are polymorphic and it's hidden from the caller. That is the whole point, and it means you can have an engine with a base class in a library, and then different people can derive from it and use that engine.
I think his definition of OO is different to what we've got used to. Perhaps his definition needs a different name.
What is your reasoning? If you make your own object system, that is indeed polymorphic. Do you now feel the need to model the world in your application?
> I think his definition of OO is different to what we've got used to.
No. His definition is exactly what people are taught OOP is. It is what I was taught, it is what I have seen taught, it is what I see people mean when they say they are doing OOP.
> Perhaps his definition needs a different name.
No. Your definition needs a different name. Polymorphic functions are not OOP. If you give someone standard Julia code, a language entirely built around polymorphic functions, they would tell you that it is a lot of things, except nobody would call it OOP.
Importantly polymorphic functions work without class hierarchies. And calling anything without class hierarchies "OOP" is insane.
Really short: ECS existed in the earliest implementations of OOP in 1963 and was being used in the software he showed.
When OOP went mainstream it pretty much was entirely about "compile time hierarchy of encapsulation that matches the domain model" and nothing else. His opinion is the standard way of doing OOP is a bad match for lots of software problems but became the one-size-fits-all solution as a result of ignorance.
Also he claims that history is being rewritten to some extent to say this wasn't the case and there was never a heavy emphasis on doing things that way.
- Encapsulation / interfaces is a good idea, a continuation of the earlier ideas of structured programming.
- Mutable state strewn uncontrollably everywhere is a bad idea, even in the single-threaded case.
- Inheritance-based polymorphism is painful, both in the multiple (C++) and single (Java) inheritance cases. Composable interfaces / traits / typeclasses without overriding methods are logically cleaner and much more useful (a rough sketch below).
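A rough Python sketch of that last bullet, using structural typing via typing.Protocol (domain names invented, not from the talk):

    from typing import Protocol

    class Drawable(Protocol):
        def draw(self) -> str: ...

    class Movable(Protocol):
        def move(self, dx: float, dy: float) -> None: ...

    # Sprite satisfies both protocols structurally: no base class
    # to inherit from, no methods to override.
    class Sprite:
        def __init__(self, x: float, y: float):
            self.x, self.y = x, y
        def draw(self) -> str:
            return f"sprite at ({self.x}, {self.y})"
        def move(self, dx: float, dy: float) -> None:
            self.x += dx
            self.y += dy

    def render(item: Drawable) -> None:
        print(item.draw())

    def nudge(item: Movable) -> None:
        item.move(1.0, 0.0)

    s = Sprite(0.0, 0.0)
    nudge(s)
    render(s)  # works without Sprite ever naming Drawable or Movable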
Over multiple decades, I have come to reject all of it! Even interfaces.
I watched over and over again as people wrote code to interfaces, particularly due to Spring, and then none of those interfaces ever got a second implementation and were never, ever going to! It was a total waste of time. Even for testing it was almost a total waste of time, though I guess writing stubbed test classes that could pretend to return data from a queue or a database was somewhat useful. The thing is, there were easier ways to achieve that.
Those interfaces that never got a second implementation were still defining the contract for interacting with another part of your system and that compile time enforced contract provides value. I have plenty of complaints about Spring but interfaces is not one of them.
OOPs = "object-oriented programming", BUT it's a more restrained and thoughtful complaint than just "objects suck" or "inheritance sucks". He cabins it pretty clearly at 11:00 minutes in: "compile-time hierarchy of encapsulation that matches the domain model was a mistake"
To unpack that a little, he looks to the writings of the early developers of object oriented programming and identifies the ways this assumption became established. People like Bjarne Stroustrup (developer of C++) took on and promulgated the view that the inheritance hierarchy of classes in an object oriented system can be or should be a literal instantiation of the types of objects from the domain model (e.g. different types of shapes in a drawing program).
This is a mistake because it puts the broad-scale modularization boundaries of a system in the wrong places and makes the system brittle and inflexible. A better approach is one where large-scale system boundaries fall along computational capability lines, as exemplified by modern Entity Component Systems. Class hierarchies that rigidly encode domain categorizations don't make for flexible systems.
Some of the earliest writers on object encapsulation, e.g. Tony Hoare, Doug Ross, understood this, but later language creators and promoters missed some of the subtleties of their writings and left us with a poor version of object-oriented programming as the accepted default.
Only as a brief aside (don't have the timestamp right now) to talking about Smalltalk, which he mostly discusses to argue that Smalltalk was not different from C++ in seeking (most of the time) to model programs in terms of static hierarchies (according to the primary source documentation from the time of Smalltalk's design):
> And another thing is if you look at the other branch, the branch that I'm not really covering very much in this talk, because again, we don't program in Smalltalk these days, right? The closest thing you would get is maybe something like Objective-C. If there's some people out there using Objective-C, you know, like Apple was using that for a little while, so Objective-C kind of came from a Smalltalk background as well.
Objective-C is basically Smalltalk retrofitted onto C, even more than C++ was Simula retrofitted onto C (before C++ gained template metaprogramming and more modern paradigms), so it makes sense that Muratori doesn't go much into it, given that he doesn't discuss Smalltalk much.
Inheritance sucks if you wish to write good unit tests easily. It just totally freaking sucks due to encapsulation. When you step back, you realize that composition is a far better approach to writing testable code.
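A contrived Python sketch of that testing point (names invented): a composed dependency can be swapped for a stub, whereas an inherited one is baked into the class:

    import time

    class RealClock:
        def now(self) -> float:
            return time.time()

    class FixedClock:
        """Test stub: always reports the same instant."""
        def __init__(self, t: float):
            self.t = t
        def now(self) -> float:
            return self.t

    class SessionChecker:
        # The clock is composed in, not inherited, so a test can
        # inject a stub instead of fighting a base class.
        def __init__(self, clock):
            self.clock = clock
        def is_expired(self, started_at: float, ttl: float) -> bool:
            return self.clock.now() - started_at > ttl

    checker = SessionChecker(FixedClock(1000.0))
    assert checker.is_expired(started_at=0.0, ttl=500.0)
    assert not checker.is_expired(started_at=900.0, ttl=500.0)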
Basically his “35-year mistake” thesis is that we almost had ECS, the entity/component/system pattern, in 01963 with Sketchpad, but it took until 01998. He explains this near the end of the talk proper, and explains how Looking Glass introduced the pattern in Ultima Underworld II, but really introduced it in 01998 with Tom Leonard’s Thief: The Dark Project. Later, though, he seems to be saying that he's not sure ECS is actually a good idea, but he thinks encapsulation is, if not a bad idea, at least an idea that should be applied carefully to keep it from getting in your way, and definitely not in a way that reflects a division among problem-domain objects such as cars, trucks, bridges, circular arcs, lanterns, etc.
The 35 year mistake was the idea that in order to have a well structured program, your compile time hierarchies have to represent real world relationships.
The talk traces that mistake to Simula, where it was appropriately used, because Simula was intended to simulate real-world hierarchies. Then to C++, where it started to be used inappropriately, then to Java, where it became universal practice to model all real-world relationships as compile-time hierarchies.
Stroustrup took out the object hierarchy introspection feature that was available before, which turned out to be a pretty handy feature that people kept trying to reimplement.
It's finally coming in C++26, but boy, the syntax. C++ keeps competing with Perl in that regard, and I say this as someone who enjoys coding in C++ in my free time.
While people are likely to hand you useful links or videos, I will attempt another route to understanding.
When it comes to organising your code in an ECS-esque fashion, it is much closer to normalising a database, except you are organising your structs instead of tables.
With databases, you create tables. You would have an Entity table that stores a unique Id, and tables that represent each Component, each of which would have an EntityId key, etc.
Also, each table is representative of a basic array. It is also about knowing a good design for memory allocation, rather than 'new' or 'delete' in typical OOP fashion. Maybe you can reason out the memory needed on startup. Of course, this depends on the type of game... or business application.
An 'Entity' is useless on its own. It can have many different behaviours or traits. Maybe, if referring to games, you can have an entity that has physics, is collidable, is visible, etc.
Each of these can be treated as a 'Component' holding data relevant to it.
Then you have a 'System', which can be a collection of functions to initialise the system, shut down the system, update the system, or fetch the component record for an entity, etc., all of which manipulate the data inside the Component.
Some Components may even require data from other Components, which you would communicate by calling the system methods.
You can create high-level functions for creating each Entity. Of course, this is a very simplified take:
    var entity1 = create_player(1);
    var boss1 = create_boss1();

    function create_player(player_no) {
        var eid = create_entity();
        physics_add(eid);            // add to physics system
        collision_add(eid);          // add to collision system
        health_add(eid, 1.0);        // add health/damage, set to 1.0
        input_add(eid, player_no);   // input setup - more than 1 player?
        camera_set(eid, player_no);  // camera setup - support split screen?
        return eid;
    }

    function create_boss1() {
        var eid = create_entity();
        physics_add(eid);
        health_add(eid, 4.0);        // 4x more than player
        collision_add(eid);
        ai_add(eid, 0.6, 0.6);       // generic AI for all: speed, intelligence
        return eid;
    }
1. You have entities, which may just be identifiers (maybe a u64 used as an index elsewhere) or some more complex object.
2. You have components, which are the real "meat and potatoes" of things. These are the properties or traits of an entity, the specifics depend on your application. For a video game or physics simulator it might be velocity and position vectors.
3. Each entity is associated with 0 or more components.
4. These associations are dynamic.
5. You have systems which operate on some subset of entities based on some constraints. A simple constraint might be "all entities with position and velocity components". Objects lacking those would not be important to a physics system.
In effect, with ECS you create in-memory, hopefully efficient, relational databases of system state. The association with different components allows for dynamically giving entities properties. The systems determine the evolution of the state by changing components, associating entities with components, and disassociating entities from components.
The technical details on how to do this efficiently can get interesting.
Compared to more typical OO (exaggerated for effect), instead of constructing a class which has a bunch of properties (say implements some combination of interfaces) and manually mixing and matching like:
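    # (A guess at the kind of example meant here -- names invented:
    # one subclass per combination of capabilities.)
    class Wizard: ...
    class FlyingWizard(Wizard): ...
    class FlameproofWizard(Wizard): ...
    class FlyingFlameproofWizard(FlyingWizard, FlameproofWizard): ...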
Or creating a bunch of traits inside a god object version of the Wizard or Player class to account for all conceivable traits (most of which are unused at any given time), you use the dynamic association of an entity with Wizard, Flying, and Flameproof components.
So your party enters the third floor of a wooden structure and your Wizard (a component associated with an entity) casts "Fly" and "Protection from Elements" on himself. These associate the entity with the Flying and Flameproof components (and potentially others). Now when fireball is cast and the wizard is in the affected area, he'll be ignored (by virtue of being Flameproof) while everything around him catches fire, and when the wooden floor burns away the physics engine will leave him floating rather than falling like his poor non-flying, currently on fire Fighter compatriot.
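A minimal Python sketch of that dynamic association, with plain dicts standing in for the component tables (all names invented; a real ECS would use denser storage):

    # Component "tables" keyed by entity id; an entity is just an int.
    flying: dict[int, bool] = {}
    flameproof: dict[int, bool] = {}
    on_fire: dict[int, bool] = {}

    def cast_fly(eid: int) -> None:
        flying[eid] = True            # associate the Flying component

    def cast_protection(eid: int) -> None:
        flameproof[eid] = True        # associate the Flameproof component

    def fireball_system(targets: list[int]) -> None:
        # A system consults the component tables instead of asking
        # each object to handle the spell itself.
        for eid in targets:
            if eid not in flameproof:
                on_fire[eid] = True

    wizard, fighter = 1, 2
    cast_fly(wizard)
    cast_protection(wizard)
    fireball_system([wizard, fighter])
    assert fighter in on_fire and wizard not in on_fire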
Basically it is programming against composable interfaces like COM, Objective-C protocols, and anything else like that, but sold in a way that anti-OOP folks kind of find acceptable, while feeling they aren't using all that mumbo-jumbo bad OOP stuff some bad Java teachers gave them in high school.
Most of them tend to even ignore books on the matter, like "Component Software: Beyond Object-Oriented Programming" [0], instead treating some game studio's approach to ECS as the genesis of it all.
It's a bit of a long read, but I think the best introduction is still this [0] and the comments were here [1]. Yes, it's presented in the context of rust and gamedev, but ECS isn't actually specific to a particular programming language or problem domain.
They’re very common in video game programming and visual effects and uncommon elsewhere. I enjoyed this article, though it’s still about using ECS in a simulation / computer graphics context.
As someone who has always found popular OOP stupid (programming is closer to math, not linguistics—write functional programs!) I'm glad Casey is going out there and giving talks like this. If extensive academic research and extensively documented benefits couldn't convince the industry to abandon OOP in favor of functional style maybe an everyman like Casey finally can.
A lot of so-called programmers and systems "engineers" act like religious zealots. Even challenging the ideas of OOP is blasphemy to them, even though there are many legitimate reasons to do so.
I was around when OOP became popular in the 90s. I think it was a huge step forward. The problem is that with almost every useful paradigm, at some point consultants and zealots take over and push things to an extreme that doesn't work. And when problems show up, it's because you didn't do it right. It happened with OOP, NoSQL, Agile, and probably many others. I don't see how it will go any differently with functional style.
Despite spending untold hours learning and using C++ and Java, I never fully believed that OOP was anything great. It always felt so forced to code everything in terms of classes rather than just modules of code that have similar responsibilities.
You must not mean C++ when you write about having to write everything in terms of classes. Java, yes, with the requirement that the nearest thing to a free standing function is a static method in a class (which becomes in effect a regular old module). But C++? You could, and many did, pretend it was fancy C and only deal with classes when it came to using things like collections and streams (because they were useful).
In fact, as Casey mentions in this talk, a lot of the earliest ideas in software architecture came from trying to parse languages - both human and computer.
Yes! It became extremely tiresome to defend my position as a functional programmer from less experienced people who had big egos. Time and time again I would be accused of "not having enough experience" simply because I disagreed with just so many stupid things that have gone on over the years--the zealots I ran into were people with big chips on their shoulders who just had to be "right" instead of being inquisitive and thoughtful. I never had anything to prove, I just wanted to write tight, testable, maintainable code.
> A lot of so-called programmers and systems "engineers" act like religious zealots.
Rather ironic given that in this very comment section I'm largely seeing that behavior associated with people appealing to Casey as an authority as an excuse not to engage with intelligently written counterpoints.
I certainly won't defend the historic OOP hype, but a tool is not limited by how the majority happen to use it at any given time. Railing against a tool or concept itself is the behavior of a zealot. It's railing against a particular use that might have merit.
Just let me ask you this: how many years have you been in this game? I came in around 1990, but my first professional coding job wasn't until after I finished a BS in CS and Math in 1995. To me, having the perspective I have, OOP looks in retrospect to have been an enormous boondoggle championed by the Boomer generation. It was all people who did Waterfall, wrote endless requirements documents before coding anything, and did quarterly or even yearly code releases, if you can even imagine that.
Not quite as long as you, but I don't think it's relevant to the point at hand. I entirely agree with what you wrote, and yet I think it's entirely in keeping with what I said. It's the things that actually happened that were the boondoggle, not the paradigm itself.
Similarly I'd like to suggest that there exist situations where waterfall is the obviously correct choice. Yet even then someone could still potentially manage to screw it up.