No, I’m not talking about mine, but about *other people’s code* that I encounter on a day-to-day basis. Some choice examples:

When I start AQtime (a profiling app, ironically), it hangs for about ten seconds. Then when I load a project (the file is under 100k) it hangs for another ten seconds.

There is no discernible network activity during this time, and the CPU is not being thrashed either. How is this even POSSIBLE? A quick check shows that my i7 3600 can do about 106 billion instructions per second (106,000 MIPS). That’s insane. It also means that in those ten seconds, it can execute over a trillion instructions. To do, seemingly… nothing.

Also… for my sins I own a Samsung smart TV. When I start up that pile of crap, it often will not respond to remote buttons for about eight seconds, and even then it queues them up, and can lag processing an instruction by two or three seconds. This TV has all eco options disabled (much though that pains me), and has all options regarding uploading viewing data disabled. Let’s assume its CPU runs at a mere 1% of the speed of my i7; that means it has to get by on roughly a billion operations per second. My god, no wonder it’s slow, as I’m quite sure it takes a billion instructions just to change channels, right? (Even if it did, it could still respond in about a second, not eight.)
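The back-of-envelope arithmetic above can be sketched in a few lines. A rough sketch only: 106,000 MIPS works out to roughly 106 billion instructions per second, and the TV running at 1% of desktop speed is the post’s own assumption, not a measured figure:

```python
# Back-of-envelope instruction budgets for the delays described above.
# Assumes the post's figures: a desktop CPU at 106,000 MIPS, and a TV
# chip at 1% of that speed. Real chips vary widely.

MIPS = 106_000                     # million instructions per second
desktop_ips = MIPS * 1_000_000     # ~1.06e11 instructions per second

# What a ten-second hang "costs" on the desktop:
hang_budget = desktop_ips * 10     # over a trillion instructions

# The TV, assumed to run at 1% of the desktop's speed:
tv_ips = desktop_ips // 100        # ~1.06e9 instructions per second

# Even a (very generous) billion-instruction channel change:
channel_change_secs = 1_000_000_000 / tv_ips   # well under a second

print(f"desktop: {desktop_ips:.2e} instr/s, 10s budget: {hang_budget:.2e}")
print(f"TV: {tv_ips:.2e} instr/s, 1e9-instruction task: {channel_change_secs:.2f}s")
```

Even on those pessimistic assumptions, an eight-second delay to register a button press cannot be explained by the hardware.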

These smiling fools would be less chirpy if they knew how badly coded the interface was

I just launched HTMLTools, some software I use to edit web pages, and it took 15 seconds to load up and present an empty document. Fifteen seconds, on an i7, doing absolutely nothing of any consequence.

Why the hell do we tolerate this mess? Why have we allowed coders to get away with producing such awful, horrible, bloated work that doesn’t even come close to running at 1% of its potential speed and efficiency?

In any other realm this would be a source of huge anger, embarrassment and public shaming. My car can theoretically do about 150 mph. Imagine buying such a car and then realizing that, due to bloated software, it can only actually manage 0.1 miles per hour. Imagine turning on a 2,000-watt oven and only getting 2 of those watts actually used to heat something. We would go absolutely bonkers.

An Intel i7. So much capability, so little proper usage.

There is a VAST discrepancy between the amount of time it takes optimized computer code to do a thing and the amount of time the average consumer/player/citizen thinks it will take. We need to educate people that when they launch an app and it is not 100% super-responsive, this is because it is badly, shoddily, terribly made.
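To make that discrepancy concrete, here is a minimal sketch that measures how many plain additions a single thread of interpreted Python (itself far slower than optimized native code) completes inside the ~100 ms window that usability guidelines commonly treat as “feels instant”:

```python
import time

# Count how many simple additions a single Python thread completes in a
# given time budget. Python is orders of magnitude slower than tuned
# native code, so this is a *lower* bound on what the hardware can do.

def work_done_in(budget_secs: float) -> int:
    ops = 0
    total = 0
    deadline = time.perf_counter() + budget_secs
    while time.perf_counter() < deadline:
        for _ in range(10_000):      # check the clock only every 10k adds
            total += 1
        ops += 10_000
    return ops

if __name__ == "__main__":
    ops = work_done_in(0.1)          # ~100 ms: the "feels instant" window
    print(f"~{ops:,} additions in 100 ms of interpreted Python")
```

Even interpreted code gets through millions of additions in that window; an app that visibly stalls for whole seconds is doing vastly more work than its visible job requires.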

Who can we blame? Well, maybe the people who make operating systems, for one, as they are often bloated beyond belief. I use my smartphone a fair bit, but not THAT much, and when it gets an update and tells me it’s ‘optimizing’ (yeah, right) 237 apps… I ask myself what the hell all of this crap is, because I’m sure I didn’t install it. When the OS is already a bloated piece of tortoise-ware, how can we really expect app developers to do any better?

I think a better target for blame is the people who write ‘learn C++ in 7 days’-style books, who peddle the bullshit notion that you can master something as complex as a programming language in less time than it takes to binge-watch a TV series. Even worse is the whole raft of middleware that is pushed onto people who think nothing of plugging in some code that does god-knows-what, simply to avoid writing a few dozen lines of code themselves.

We need to push back against this stuff. We need to take a bottom-up approach where we start with what our apps/operating systems/appliances really NEED to do, and then try to create the fastest possible environment for that to happen. The only recent example I’ve seen of this is the design of a dedicated self-driving computer chip for Tesla cars. Very long explanation below (chip stats at about 7 min 30 in):

11 Responses to “The curse of staggeringly slow code”

  1. jean says:

    Maybe it’s the antivirus doing a first-launch scan of the app.

  2. Harvey says:

    On PCs it seems to be mainly down to security and operating system checks. I have small programs that launch more quickly under Windows 7, using a mechanical hard drive on an old Core 2 Duo machine, than under Windows 10 using an SSD and an i7-7700. Really large programs do launch more quickly on the Win 10 / i7 machine, so there seems to be some fixed delay, plus the time to load and execute the program’s start-up routines.
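    That fixed-delay hypothesis is easy to probe. A minimal sketch, using a do-nothing Python invocation as a stand-in for whatever small program you suspect (swap in the real executable to test it): time several launches of a near-empty process and look for a constant floor:

    ```python
    import subprocess
    import sys
    import time

    # Measure wall-clock launch time of a near-empty program several times.
    # If a roughly constant floor shows up even for trivial processes, that
    # supports the "fixed per-launch overhead" theory (AV scans, OS checks).

    def launch_time(cmd: list[str]) -> float:
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        return time.perf_counter() - start

    if __name__ == "__main__":
        # A do-nothing interpreter run stands in for a tiny executable here.
        cmd = [sys.executable, "-c", "pass"]
        times = [launch_time(cmd) for _ in range(5)]
        print("launch times:", ", ".join(f"{t * 1000:.1f} ms" for t in times))
    ```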

    • cliffski says:

      I know that this can slow down the first launch of exes, and I do have Malwarebytes on my PC doing exactly that. But the smart TV has zero excuse!

  3. Allan says:

    This is very similar thinking to the 30 million lines problem https://www.youtube.com/watch?v=kZRE7HIO3vk

    Or Jonathan Blow’s talk on what hardware can do and how many layers of abstracted bloatware we can put in the way of it performing well https://www.youtube.com/watch?v=pW-SOdj4Kkk

    Then again would it put software engineers out of a job if we stopped adding features to our programs/games?

    On the flip side of this type of thinking: if you compare the number of transistors in your CPU with early CPUs, and work out how many of those your current CPU could replace, then maybe the bloat is in the hardware as well?

    e.g. an i7 has about 1.7 billion and an early Athlon 64-bit CPU had 37 million, so in theory your i7 should be equal to running about 46 Athlon 64s (7 GFLOPS), and I’m betting it does not have that processing power?

    So maybe there are diminishing returns on layers of complexity?

    • cliffski says:

      Yup, Casey and Jon talk an enormous amount of sense on this topic. Hats off to Jon for writing his own compiler and language.

      I’m sure there is loads of cool stuff we could be doing if we didn’t have bloated code. Sure, we wouldn’t have people re-using crappy middleware, but those same people could be trained to actually do *good things* with all the new CPU horsepower freed up.

  4. Joe says:

    It’s an economics issue. Businesses would have to pay for more engineering time to make performant code, and to prove that the performant code gives a business benefit.

    They will also have to deliver things more slowly.

    The metaphor I like to use is festival tent software vs concrete high rise software.

    We are also pushing a lot more pixels now, with 4K and friends.

  5. GorillaOne says:

    I’m of two minds on this. Slow software can suck and be annoying, and developers should be aware of, and incentivized to write, performant code. But rarely is that a binary choice, and chasing performance on its own is chasing a metric, not the value that metric is useful in measuring.

    If I have the option of developing and publishing a game for $100k, or making one that is 1.2x–1.5x slower for $50k, I’m going to do the latter, because otherwise I won’t be able to make a living and the consumer is not going to be able to play the game. If you need to add a feature such as analytics or notifications to an application, those can be non-trivial systems. You could implement them yourself, but in doing so you will likely need to 1) learn a new API/tech/skillset, 2) write the code yourself while not being a domain expert, and 3) maintain that code and make sure it stays compatible with the platforms it is expected to run on. There is an opportunity cost in spending time on these systems rather than on your core gameplay functionality, and when you generalize rather than specialize, it is questionable how performant your code will be.

    Alternatively, I could take a week and install Google Firebase and make API calls, add Unity Ads for ad serving and PlayFab for leaderboards and achievements, and spend the rest of the time making a fun game.

    We have computers which are leaps and bounds ahead of what was available ten, twenty years ago. While it can be grating when we run into a particularly egregious offender, it is worth noting that we have access to an abundance of software with a plethora of features at a relatively low cost.

    • dz says:

      Yeah, code nothing, just assemble your game from third-party blocks. Your opinion is heavily affected by vendors.

      • CdrJameson says:

        Still, it’s no excuse for the third-party components to be slow.
        Somebody still wrote them, and they’re still not doing it well.

        For me, it’s the horrendous slowness of tools, generally web-based ones, that gets me.
        Jira/bitbucket/Google apps are all click-and-wait systems that sap your will to live one button at a time.
        If I’ve got time to notice a delay, you’re too slow.
        If I’ve got time to go and make a cup of tea then I’ve completely broken flow.

  6. One cause is abstraction in code. Oftentimes, code written for reuse and clarity is not optimally efficient.

    Another cause is the proliferation of virtual machines and scripting languages.
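    The overhead of such abstraction shows up even in a toy example. A minimal, purely illustrative sketch: the same sum computed through a per-element method call on a small “clean” abstraction, versus the direct builtin:

    ```python
    import time

    # The same result computed two ways: through a small abstraction (an
    # object with a per-element method) and directly with the builtin.
    # The abstraction pays a function-call and attribute-lookup toll on
    # every element -- the kind of cost that compounds across layers.

    class Accumulator:
        def __init__(self) -> None:
            self.total = 0

        def add(self, value: int) -> None:
            self.total += value

    def via_abstraction(data: list[int]) -> int:
        acc = Accumulator()
        for x in data:
            acc.add(x)
        return acc.total

    def direct(data: list[int]) -> int:
        return sum(data)

    def timed(fn, data):
        start = time.perf_counter()
        result = fn(data)
        return result, time.perf_counter() - start

    if __name__ == "__main__":
        data = list(range(1_000_000))
        r1, t1 = timed(via_abstraction, data)
        r2, t2 = timed(direct, data)
        assert r1 == r2
        print(f"abstracted: {t1 * 1000:.0f} ms, direct: {t2 * 1000:.0f} ms")
    ```

    The ratio varies by runtime, but the direction is consistent: each extra layer between intent and hardware costs something.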

  7. Mike Garcia says:

    Good post.
    I say work on your knowledge stack, not just use a software stack.
    Computer science is ever higher-level… I can’t see it ever going back, unfortunately.