Witness Wednesday on How to Program

Sometimes people ask me for advice on how to program, and I have to say that Casey's advice here is right on:

http://mollyrocket.com/casey/stream_0019.html

Meanwhile, we've had a lot of Witness Wednesday posts in a row on this blog. I'll have to stop slacking and write something about the game.

12 Comments:

  1. “OOP is the new way to write spaghetti code.”

  2. Also, thanks to Casey for writing that article.

  3. The URL isn’t showing up as a link, it seems.

  4. Great read! I’ve mostly coded in C#, but I realized I tend to work the same way. It just makes sense.

  5. The article has no comments, so I figure I’ll post this here. I’m having a hard time seeing this technique as the “one true way” to do things. Here’s an example. I’m currently building a simple synthesizer app. It has a cluster of model objects (instruments, tracks, notes), a MIDI synthesizer, a UI button layer, and a canvas for receiving non-UI touches as well as actually rendering the model. Treating each of these entities as an object or collection of objects with APIs just makes sense to me.

    Here are some of the things I can do with my current design:

    1. Observe properties on my model object from my canvas controller (Cocoa KVO), which allows the canvas view to automatically update whenever anything changes the model: another view, another app, network sync, etc. The canvas doesn’t have to care about how or why the model changed — only that it did. (It also has the option of receiving the previous value.) A rough sketch of what I mean by this hookup follows the list.

    2. Hook up the MIDI synth to the canvas controller, allowing me to either start MIDI playback by tapping the play button in the UI layer, or alternatively start the MIDI synth directly and have the canvas automatically start animating.

    3. Replace the way my model gets serialized to disk without changing anything else about the system.

    4. Easily swap out the canvas for an alternate canvas that renders my model: maybe a sheet music view, or a visualization.

    5. Trivially add UI “juice” such as easing, bounce, etc. by holding the state and timers in the corresponding UI objects, or just by using Cocos2d’s excellent CCAction animation/transform objects.

    6. Fairly easily add undo/redo functionality by storing some extra state in the model object cluster.

    7. Use my canvas object in other places, with other models: maybe as a way to render preview thumbnails, or for use with side-by-side editing of two tracks. All I have to do is create multiple instances of the canvas and hook them up to their corresponding models.

    8. Change the way my UI behaves. Do I want my buttons to trigger when the user taps on them or lets go of the tap? How far can the user move their finger before the hold cancels? What if I want my button to perform an animation and add some extra UI when held down for 1 second? All this can be done easily and locally without disturbing the rest of the code.
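
    To make item 1 concrete, here is roughly the observation hookup I have in mind, hand-rolled in C++ instead of the real Cocoa KVO machinery (all of the names below are made up for illustration):

        #include <cstdio>
        #include <functional>
        #include <vector>

        // A model property that notifies its observers with the old and new
        // value whenever it changes (roughly the service KVO provides).
        template <typename T>
        struct Observable {
            using Observer = std::function<void(const T &oldValue, const T &newValue)>;

            void observe(Observer fn) { observers.push_back(std::move(fn)); }

            void set(T newValue) {
                T oldValue = value;
                value = std::move(newValue);
                for (auto &fn : observers) fn(oldValue, value);
            }

        private:
            T value{};
            std::vector<Observer> observers;
        };

        struct Note {
            Observable<int> pitch;   // part of the model; knows nothing about views
        };

        int main() {
            Note note;
            // The canvas registers interest; it doesn't care how or why the
            // model changed, only that it did (and what the old value was).
            note.pitch.observe([](int oldPitch, int newPitch) {
                std::printf("redraw: pitch %d -> %d\n", oldPitch, newPitch);
            });
            note.pitch.set(60);   // could be the UI, another view, network sync...
            return 0;
        }

    The synth, the serialization code, and the UI layer all just call set(); the canvas updates the same way regardless of who made the change.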

    It seems to me that many of these features are much harder to implement with the imperative approach described by Casey, and may lead to ugly spaghetti code. And this is all aside from the fact that many systems (Cocoa, for instance) use object-oriented frameworks and even implement their own run loops, meaning that it’s easiest (or perhaps “only possible”) to call draw_ui_widget() in a certain place and not in-line with everything else.

    I’ve heard from some smart people that OO is overused, so I’m very willing to look into the argument, but flame wars in programmer circles around the internet have shown that there’s no real consensus on the issue. One person on HN framed it as “thinking about code as nouns vs. verbs”, which makes sense to me. Yes, computers are just a series of sequential commands under the hood, but that’s just an implementation detail. Our users think about our software as a series of objects: a button here, a bit of text there, a thing that plays sound and a screen that shows some pictures. Why shouldn’t we code with the same mindset? Isn’t it best to work on the highest conceptual level possible?

    YES, of course, deep inheritance is bad. There are many other object-oriented approaches that have been successful. Unity, for example, uses an intuitive component-based model that many people are fond of. I don’t think it’s necessary to go back to step-by-step code in order to fix this problem.
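
    For what I mean by “component-based”, here is the rough shape of it in toy C++ (invented names, nothing to do with Unity’s actual API):

        #include <cstdio>
        #include <memory>
        #include <vector>

        // An entity is just a bag of components; its behavior comes from what
        // you attach to it, not from where its class sits in an inheritance tree.
        struct Component {
            virtual ~Component() = default;
            virtual void update() = 0;
        };

        struct SpriteRenderer : Component {
            void update() override { std::printf("draw sprite\n"); }
        };

        struct AudioSource : Component {
            void update() override { std::printf("play note\n"); }
        };

        struct Entity {
            std::vector<std::unique_ptr<Component>> components;
            void update() {
                for (auto &c : components) c->update();
            }
        };

        int main() {
            Entity note;
            note.components.push_back(std::make_unique<SpriteRenderer>());
            note.components.push_back(std::make_unique<AudioSource>());
            note.update();   // does whatever its components do
            return 0;
        }

    You still get objects, but you extend behavior by attaching components rather than by subclassing.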

    Thoughts?

    • I don’t necessarily mean this as an argument in the programming debate, but the highest conceptual level from the user’s perspective is actually “I want to do XYZ in the software”; buttons and screens are interaction details.

      • I guess it depends on the user? For example, me and my (non-technical) friends would definitely look at a browser window and see it as a collection of “the page”, “tabs”, “bookmarks”, “the menu”, etc. Plus, there’s plenty of software designed for exploration rather than performing a specific task. (Like… games, for instance.) I find it particularly relevant in the mobile space, where apps are given praise for being “fun” and “discoverable”. It might be a stretch, but I think that one of the reasons there’s so much UI innovation on mobile platforms is that all the high-level frameworks are strictly object-oriented. I certainly find it very intuitive to experiment with UI dynamicity when I can treat my UI as a collection of objects.

        • Sure, but playing a game, exploring, discovering, etc. are all things I want to do in/with the software too. I agree that on some level users do think about various objects in the software, but it’s definitely not the highest conceptual level possible.

          I don’t actually think it’s a strong argument for choosing a programming paradigm anyway. It doesn’t say why the programmer should take the same approach as the user, especially if we’re talking about software like video games, which have a lot of complex code whose relation to the individual objects the player thinks about is much more complicated and fuzzy than in the examples you’ve given (various game-world-wide simulation systems, code dealing with efficient hardware utilization, and so on).

        • The fallacy you are falling into is that it matters how people think about things.

          To go to the example in the article, you think about “manager”, “contractor”, “employee” as concrete things too, but that doesn’t mean that some kind of an object hierarchy is the best way to implement them.

          The reason object-oriented programming is a seductive idea, and the reason so many people believe in it, is that “obviously” you can draw these correspondences to categorical distinctions you recognize, so clearly that is how the program “should” be built.

          But what we have learned is that no, really, that’s not true. There is a wholly different set of purely-implementation-level concerns that determine how the program should be built. They don’t have a lot to do with what you perceive as different objects. They only have to do with what the code actually needs to do in order to function.

          That is what Casey is talking about here: Find out what the code really needs to do in order to function, *then* recognize patterns in that system, *then* build abstractions to simplify those patterns.

          This is in contrast to the usual object-oriented way (and many other schools of thought about how to program) in which you a priori decide on abstractions and then fit things into those abstractions. That almost always makes a mess.

          What we are saying is that when you first sit down and start typing, you just keep things as simple as you can. Over time you will find patterns. Those patterns may be very different for different programs or systems.
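
          As a toy illustration (made-up UI code, not anything from Casey’s article), the progression tends to look like this:

            #include <cstdio>

            // First pass: just type the straightforward thing and let the
            // repetition sit there until the pattern is obvious.
            static void first_pass() {
                std::printf("button at (%d, %d): %s\n", 10, 10, "Play");
                std::printf("button at (%d, %d): %s\n", 10, 30, "Stop");
                std::printf("button at (%d, %d): %s\n", 10, 50, "Record");
            }

            // Second pass: the pattern turned out to be "same x, evenly spaced
            // y, different label", so that is the only abstraction we pull out.
            static void draw_button(int x, int y, const char *label) {
                std::printf("button at (%d, %d): %s\n", x, y, label);
            }

            static void second_pass() {
                const char *labels[] = { "Play", "Stop", "Record" };
                for (int i = 0; i < 3; ++i)
                    draw_button(10, 10 + 20 * i, labels[i]);
            }

            int main() {
                first_pass();
                second_pass();
                return 0;
            }

          The point is the order: the helper exists because the repetition showed up in working code, not because someone decided up front that there shall be a Button abstraction.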

  6. A very enlightening article. I am very grateful that Casey shares his expertise with us.
    I myself am a beginner in the programming landscape. In the almost two years that I’ve been learning Java programming at school, I’ve been indoctrinated with exactly the kind of thinking Casey talks about at the beginning of the article.
    Thank you very much for offering a different perspective on OOP.

  7. It’s important to differentiate object-oriented programming from object-oriented design methodology, and also to note that the iterative steps from “what I want to get done” to “I’ve ended up with some objects” need not be as pedantic as described. For example, when first presented with the UI problem used in the example, I instinctively arrived, in my mind, at the end result the author came up with. That’s not to say I didn’t follow a similar thought progression, only that it need not always be performed as a set of iterative code changes.

    So I agree with Casey’s general recipe here and have tended to use it throughout my career: start with getting something working and “iterate up.” The key is that this takes discipline, and it won’t necessarily scale to larger teams without proper infrastructure (people and technology).
