[Dev Diaries] CRToastSwift

TL;DR: Grab the code here

I was looking into using the Swift Package Manager for iOS to try out the new features From The Future, and I happened to have a UI/UX concept involving alerts (you Android people would call them toasts) that I wanted to formalize a bit. It reminded me of a piece of code I used on past projects and that resonated with me: CRToast. It's not that this code does anything uber exciting or fantastical, just that there was so much care and deliberation in the code style.

I decided it was the perfect candidate for a somewhat long haul Swift project that would involve many tricks, some head scratching, and a decent amount of care to do it justice.

First things first, all of the 🤬 options that are available (plus one I will detail later). The original is 1619 lines of Objective-C code (including comments). After a naive one-to-one Swift port, the result is a thousand or so lines of Swift code (including the same comments). Take that, people who say that Objective-C isn't more verbose. I'm fairly sure I can whittle it down some more, because half of the helper functions aren't actually that useful once you have Swift's built-in correctness checking. But that's only step one, and a decidedly naive one at that. Similarly, the toast controller and window got cut in half length-wise, despite the fact that I had to add a new function to make up for a deprecation.

I love Objective-C, especially the fact that it will let me do whatever I want with memory management, swizzling, and sending whatever message I want to whatever object (although that last one has been severely curtailed since 2.0), but there's no denying that it's way less safe and explicit than Swift, making it a much worse language for beginners. In fact, I remember remarking the same thing about Java vs C back in prehistoric times.

One of the two pièces de résistance in the code is the layoutSubviews part for the toast view itself. Given the huge number of options, taking all the cases into account is a marathon (all things being relative, it's only 100 lines of code). The other one is the entirety of CRToastManager, whose job it is to expose only a few dedicated functions for the whole thing.

The one feature I wanted to add is something I worked on, but that wasn't ultimately used, in a past project: alerts tend to be ignored and dismissed a few times before users realize which on-screen items they relate to. One idea to call attention to them is to use UIKitDynamics to let the alerts fall unmolested when there's nothing related in the view, but have them bounce off the important stuff otherwise:

So, here it is: CRToastSwift with callouts!

[Dev Diaries] Tasks in parallel

Context

Back in the days of the old blog, I had posted a way to manage my asynchronous tasks by grouping them and basically having the kernel of something akin to promises in other languages/frameworks.

It was mostly based on operation queues and locks, and basically handled only the equivalent of "guarantees" in promise parlance.

A few months ago, John Sundell published a task-based system that tickled me greatly. I immediately proceeded to forget about it until I got the urge again to optimize my K-Means implementation. I tweaked his code to use my terminology, so as to avoid rewriting everything that already relied on my concurrency stuff, and added a bunch of things I needed. Without further ado, here is some code and some comments.

Core feature: perform on queue

First, an alias that is inherited from the past and facilitates some operations further down the line:

public typealias constantFunction = (() throws -> ())

Then the main meat of the system: the Task class and ancillary utilities. My code was already fairly close to Sundell's, so I mostly adopted his style.

public class Task {
    // MARK: Ancillary stuff
    public enum TaskResult {
        case success
        case failure(Error)
    }
    
    public struct TaskManager {
        fileprivate let queue : DispatchQueue
        fileprivate let handler : (TaskResult) -> Void
        
        func finish() {
            handler(.success)
        }
        
        func fail(with error: Error) {
            handler(.failure(error))
        }
    }
    public typealias TaskFunction = (TaskManager) -> Void
    
    //MARK: Init
    private let closure: TaskFunction
    
    public init( _ closure: @escaping TaskFunction) {
        self.closure = closure
    }
    
    public convenience init(_ f: @escaping constantFunction) {
        self.init { manager in
            do {
                try f()
                manager.finish()
            } catch {
                manager.fail(with: error)
            }
        }
    }
    
    //MARK: Core
    public func perform(on queue: DispatchQueue = .global(),
                 handler: @escaping (TaskResult) -> Void) {
        queue.async {
            let manager = TaskManager(
                queue: queue,
                handler: handler
            )
            
            self.closure(manager)
        }
    }
}

In order to understand the gist of it, I really recommend reading the article, but in essence, it's "just" something that executes a function, then signals the manager that the task is complete.

I added an initializer that allows me to write my code like this, for backwards compatibility and stylistic reasons:

Task {
    print("Hello")
}.perform { result in
    switch result {
        case .success: break // do something
        case .failure(let err): print(err) // here too, probably
    }
}

It's important to note that the block passed to Task must take no arguments and return nothing. But that doesn't prevent it from doing block stuff:

var nt = 0
Task {
    nt += 42
}.perform { result in
    switch result {
    case .success: print(nt)
    case .failure: break // not probable
    }
}

Outputs: 42

Of course, the block can throw, in which case we'll end up in the .failure case.
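
As a minimal sketch of that failure path (DemoError is a hypothetical error type defined just for this snippet), the convenience initializer catches the throw and forwards it to the manager's fail(with:), which is what lands us in .failure:

struct DemoError: Error {} // hypothetical error type, only for this example

Task {
    throw DemoError()
}.perform { result in
    switch result {
    case .success: break // won't happen here
    case .failure(let err): print("failed with \(err)")
    }
}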

Sequence stuff

The task sequencing mechanism wasn't of any particular interest to my project, but I decided to treat it the same way I did the parallel one. Sundell's code is perfectly fine; I just wanted some syntactic sugar in the form of operators:

//MARK: Sequential
// FROM: https://www.swiftbysundell.com/posts/task-based-concurrency-in-swift
// replaces "then"
infix operator •: MultiplicationPrecedence
extension Task {
    static func sequence(_ tasks: [Task]) -> Task {
        var index = 0
        
        func performNext(using controller: TaskManager) {
            guard index < tasks.count else {
                // We’ve reached the end of our array of tasks,
                // time to finish the sequence.
                controller.finish()
                return
            }
            
            let task = tasks[index]
            index += 1
            
            task.perform(on: controller.queue) { outcome in
                switch outcome {
                case .success:
                    performNext(using: controller)
                case .failure(let error):
                    // As soon as an error occurs, we'll
                    // fail the entire sequence.
                    controller.fail(with: error)
                }
            }
        }
        
        return Task(performNext)
    }
    
    // Task • Task
    static func •(_ t1:  Task, _ t2 : Task ) -> Task {
        return Task.sequence([t1,t2])
    }
}

The comments say it all: we take the tasks one by one, essentially building a Task(Task(Task(...))) system that handles failure gracefully. I wanted to have an operator because I like writing code like this:

(Task {
    print("Hello")
    } • Task {
        print("You")
    } • Task {
        print("Beautiful")
    } • Task {
        print("Syntax!")
    }
).perform { (_) in
        print("done")
}

Outputs:

Hello
You
Beautiful
Syntax!
done

Because of the structure of the project I'm using parallelism in, I tend to manipulate [Task] arrays a lot, so I added an operator on arrays as well:

// [Task...]••
postfix operator ••
extension Array where Element:Task {
    var sequenceTask : Task { return Task.sequence(self) }
    static postfix func ••(_ f: Array<Element>) -> Task {
        return f.sequenceTask
    }
}

This allows me to write code like this:

var tasks = [Task]()
for i in 1..<10 {
    tasks.append(Task {
        print(i)
    })
}
tasks••.perform { _ in
    // this space for rent
}

Outputs the numbers from 1 to 9 sequentially. It is, admittedly, a fairly useless feature to be able to create tasks in a loop that will execute one after the other, instead of "just" looping in a more regular fashion, but I tend to like symmetry, which leads me to the main meat of the code.

Parallelism

Similarly to the sequence way of doing things, Sundell's approach is pitch perfect, and much more efficient than my own, especially in regards to error handling, so I modified my code to follow his recommendations.

Before reading the code, there are two things you should be aware of:

  • DispatchGroup allows for aggregate synchronization of work. It's a relatively little-known tool that you should read about (there's a short sketch of it right after this list)
  • Sundell's code did not include a mechanism for waiting for the group's completion. I included a DispatchSemaphore that optionally lets me wait for the group to be done (nil by default, meaning I do not wait for completion with the syntactic sugar)
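
In case you haven't played with it before, here is a minimal, standalone sketch of how DispatchGroup works (independent of the Task code itself): every unit of work enters the group, leaves it when done, and notify fires once everything has left.

import Dispatch

let group = DispatchGroup()
let queue = DispatchQueue.global()

for i in 1...3 {
    group.enter()           // one enter per unit of work
    queue.async {
        print("work item \(i)")
        group.leave()       // balanced by exactly one leave
    }
}

// runs once every enter has been matched by a leave
group.notify(queue: queue) {
    print("all done")
}

With that in mind, here is the grouping code:
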
// MARK: Parallel
infix operator |: AdditionPrecedence
extension Task {
    // Replaces "enqueue"
    static func group(_ tasks: [Task], semaphore: DispatchSemaphore? = nil) -> Task {
        return Task { controller in
            let group = DispatchGroup()
            
            // From: https://www.swiftbysundell.com/posts/task-based-concurrency-in-swift
            // To avoid race conditions with errors, we set up a private
            // queue to sync all assignments to our error variable
            let errorSyncQueue = DispatchQueue(label: "Task.ErrorSync")
            var anyError: Error?
            
            for task in tasks {
                group.enter()
                
                // It’s important to make the sub-tasks execute
                // on the same DispatchQueue as the group, since
                // we might cause unexpected threading issues otherwise.
                task.perform(on: controller.queue) { outcome in
                    switch outcome {
                    case .success:
                        break
                    case .failure(let error):
                        errorSyncQueue.sync {
                            anyError = anyError ?? error
                        }
                    }
                    
                    group.leave()
                }
            }
            
            group.notify(queue: controller.queue) {
                if let error = anyError {
                    controller.fail(with: error)
                } else {
                    controller.finish()
                }
                if let semaphore = semaphore {
                    semaphore.signal()
                }
            }
        }
    }
    
    // Task | Task
    static func |(_ t1:  Task, _ t2 : Task ) -> Task {
        return Task.group([t1,t2])
    }
}

Just like with the sequential code, it allows me to write:

(Task {
    print("Hello")
    } | Task {
        print("You")
    } | Task {
        print("Beautiful")
    } | Task {
        print("Syntax!")
    }
).perform { (_) in
        print("done")
}

Note that even though the tasks are marked as being parallel, because of the way operators work, you end up grouping the tasks for parallel execution two by two, which is fairly useless in general. The above code outputs (sometimes):

Syntax!
Beautiful
Hello
You
done

This highlights the point I was making: the first "pair" to be executed is actually the group of the first three tasks, running in parallel with the last one. Since the latter finishes early compared to a whole group of tasks, its output comes first. But I included this operator for symmetry (and because I can).
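
To make that explicit, here is a sketch of what the operator chain above boils down to, since | is a plain left-associative binary operator (t1 through t4 are just placeholder names for this illustration):

let t1 = Task { print("Hello") }
let t2 = Task { print("You") }
let t3 = Task { print("Beautiful") }
let t4 = Task { print("Syntax!") }

// t1 | t2 | t3 | t4 is parsed as ((t1 | t2) | t3) | t4,
// i.e. nested two-element groups rather than one flat group of four
let nested = Task.group([Task.group([Task.group([t1, t2]), t3]), t4])
nested.perform { _ in print("done") }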

Much more interestingly, grouping tasks in an array performs them all in parallel, and is the only way to have them behave the way you probably instinctively expect parallel tasks to behave:

// [Task...]||
postfix operator ||
extension Array where Element:Task {
    var groupTask : Task { return Task.group(self) }
    static postfix func ||(_ f: Array<Element>) -> Task {
        return f.groupTask
    }
}

This allows me to write:

var tasks = [Task]()
for i in 1..<10 {
    tasks.append(Task {
        // for simulation purposes
        usleep(UInt32.random(in: 100...500))
        print(i)
    })
}
tasks||.perform { _ in
    // this space for rent
}

This outputs the following text after the longest task is done:

2
5
1
8
3
4
6
7
9

I'd like to include a ternary operator to wait for the group to be finished, but that's not currently possible in Swift (in the same way an n-ary operator is currently impossible). This means a fairly sad syntax:

infix operator ~~
extension Array where Element:Task {
    static func ~~(_ f: Array<Element>, _ s: DispatchSemaphore) -> Task {
        let g = Task.group(f,semaphore: s)
        return g
    }
}

The following test code works:

var tasks = [Task]()
let s = DispatchSemaphore(value: 0)
for i in 1..<10 {
    tasks.append(Task {
        usleep(UInt32.random(in: 100...5500))
        print(i)
    })
}
(tasks~~s).perform { _ in
    // this space for rent
}
s.wait()

Sadly, we now need to parenthesize tasks~~s, which is why I'm bothered. But at least my code can be synchronous or asynchronous, as needed.

One last thing

Because I played a lot with syntactic stuff and my algorithms, I decided to make a sort of meta function that handles a lot of things in one go:

  • it allows me to collect the output of the functions in an array
  • it works like a group
  • it is optionally synchronous

//MARK: Syntactic sugar
extension Task {
    static func parallel(handler: @escaping (([Any], Error?)->()), wait: Bool = false, functions: (() throws -> Any)...) {
        var group = [Task]()
        var result = [Any]()
        let lock = NSLock()
        for f in functions {
            let t = Task {
                let r = try f()
                lock.lock()
                result.append(r)
                lock.unlock()
            }
            group.append(t)
        }
        if !wait {
            group||.perform { (local) in
                switch local {
                case .success:
                    handler(result, nil)
                case .failure(let e):
                    handler(result,e)
                }
            }
        } else {
            let sem = DispatchSemaphore(value: 0)
            Task.group(group, semaphore: sem).perform { (local) in
                switch local {
                case .success:
                    handler(result, nil)
                case .failure(let e):
                    handler(result,e)
                }
            }
            sem.wait()
        }
    }
}

And it can be used like this (TE here is just some Error type used for the example):

var n = 0
Task.parallel(handler: { (result, error) in
    print(result)
    print(error?.localizedDescription ?? "no error")
}, functions: {
    throw TE()
}, {
    n += 1
    return n
}, {
    n += 1
    return n
}, {
    n += 1
    return n
}, {
    n += 1
    return n
},...
)

And it will output something like:

[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 45, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 34, 46]
The operation couldn’t be completed.

Of course, you can wait for all the tasks to be complete by using the wait parameter:

Task.parallel(handler: { (result, error) in
    print(result)
    print(error)
}, wait: true, functions: {
    throw TE()
}, {
    n += 1
    return n
},...
)

Conclusion

Thanks to John Sundell's excellent write-up, I refactored my code and made it more efficient and a fair bit less convoluted than it was before.

I also abstained from using OperationQueue, which has some quirks on Linux, whereas this implementation works just fine.

CryptoQuotes

This is my new favorite place to fish for quotes:

From https://mrxor.github.io/cryptoquotes.html

There are two kinds of cryptography in this world: cryptography that will stop your kid sister from reading your files, and cryptography that will stop major governments from reading your files.
— Bruce Schneier

The Little Drive That Could

In the couple of decades of my professional life, nothing has been more painful, stressful, drawn-out, or boring than hard drive recovery.

I've lost data that wasn't backed up, I've wasted days watching a very slow-moving progress report from ddrescue, and I've physically opened drives to move platters into a working enclosure in a freaking clean room.

I've had to run to places where I could get a big enough drive, because my spider-sense was tingling on a Sunday night, to launch into an emergency clone of a machine running in production.

Platter hard drives are a thing of the past. They are slow, and ridiculed on a daily basis. Who the hell waits for 2 goddamn minutes for an OS to boot? Everything is backed up in the cloud, it doesn't matter if your shiny flash storage dies, because a copy will be available by the time you finish installing the new one.

News flash: most cloud-based backup systems use platter drives. Why? Because they are reliable in the long run. Maybe SSDs will be too, in time. It's a fairly new technology that hasn't been tested by the heavy-duty storage industry just yet, and it may yet incorporate new techniques to mitigate its inherent flaws.

There was a paper in 2016 that summarized this:

An obvious question is how flash reliability compares to that of hard disk drives (HDDs), their main competitor.
We find that when it comes to replacement rates, flash drives win. The annual replacement rates of hard disk drives have previously been reported to be 2-9%, which is high compared to the 4-10% of flash drives we see being replaced in a 4 year period.
However, flash drives are less attractive when it comes to their error rates. More than 20% of flash drives develop uncorrectable errors in a four year period, 30-80% develop badblocks and 2-7% of them develop bad chips.
In comparison, previous work on HDDs reports that only 3.5% of disks in a large population developed bad sectors in a 32 months period – a low number when taking into account that the number of sectors on a hard disk is orders of magnitudes larger than the number of either blocks or chips on a solid state drive, and that sectors are smaller than blocks, so a failure is less severe.
In summary, we find that the flash drives in our study experience significantly lower replacement rates (within their rated lifetime) than hard disk drives. On the downside, they experience significantly higher rates of uncorrectable errors than hard disk drives.
(from this paper)

I honestly don't have an opinion yet on SSD vs HDD, and I was quite happy when I switched my older computers to SSDs to extend their lifespan by speeding them up. It's just too early to tell.

Anyways, the main point was to offer a probable eulogy to my last Apple-branded HDD.

WCAR00008188 was manufactured in Thailand on April 7, 2007, as part of the WD2500AAJS (Caviar) family of 250GB drives that shipped with my first Mac Pro. It came pre-installed with Mac OS X 10.4 "Tiger", which would be replaced immediately afterwards by the next version of the OS, and served me at the time as my main development machine for tools for the DVD/Cinema industry. It was running Xcode 2 and Final Cut Pro 7 daily, and even had a Bootcamp partition for Windows-specific software.

When my new (and last) Mac Pro came in to replace its ageing predecessor (which couldn't run 10.8 because "64 bits", even though it could run 64-bit Windows and Linux no problem), the Mac OS part was moved to a smaller disk I had lying around, and the drive got NetBSD on the full disk for a while, as a Xen host for the various Linux and Windows guests I needed at the time, serving both as my primary backup and my primary test machine for server stuff. It ran 24/7, rebooting only once in 3 years because of a major kernel update. Don't change a thing, NetBSD, I love you so much.

For five straight years, it clicked and clacked happily through life, until I developed a thing for GPU shenanigans, which meant retooling that older Mac Pro into a multi-CUDA/OpenCL machine, and NetBSD doesn't excel (yet) at those things, being less about the bleeding edge than Linux. For the past two years, the whole disk was a Linux installation, with multiple workers for gitlab instances, dockerized server testing stuff, and the main ML/3D workhorse of the house, which naturally led to more clicking and clacking of its little heads.

On Thursday, April 4, 2019, a routine Python upgrade took 2 more hours than it should have (that is, it took 2 freaking hours), and the clickety-clack became erratic, with long stretches of silence. Naturally distressed, the sysop (me) finished the upgrade, powered down the computer for the first time in probably 3 years, extracted the disk from its enclosure, and ran a diagnosis. The disk had 20 or so bad blocks, but what was concerning was that the test ran for 12 hours. That probably meant the rotors were in bad shape, but not the platters or the heads.

On Friday, April 5, 2019, a decision was made to retire the drive, and clone its entire contents to a newer, faster one that wouldn't - unfortunately - click and clack: a 500GB SSD I had lying around for test purposes. partclone has been at it for 26h at the time of this writing, and has failed to copy half a dozen blocks, probably some data from the latest batch of commands, but it's 90% done, and WCAR00008188 is giving its best to perform its duty by handing back 99.9999% of the data it was entrusted with to its successor, before retiring from a long, hard, and too often thankless life.

WCAR00008188, you have been working for almost exactly 12 years straight, by my side through all manner of data shenanigans, and I can genuinely say I wouldn't have been able to go that far without you and your brothers. I will keep you around for sentimentality's sake. You did not die an ignominious death, failing at the wrong time and causing the sysop (me) grief and anguish, but succeeded in performing your last act of duty with honor and pride, and for that, I thank you.

Also, take that, programmed obsolescence.

It's "Automagical"

I have been meaning to write a long form digression (as usual) about the not-so-recent rise of tools that hide the complexity of our job in a less than optimal way, in the name of "democratization", but I struggle with an entry point into the subject.

The "problem"

I am "classically" trained. I wrote compilers on paper for my exams, and have a default attitude towards "NEW AND EASIER WAYS"™️©® to do things that leans heavily towards suspicion. The thing is, once you know how things work under the hood, you learn to respect the complexity of, and the craft put into, having something like COUNT.IF(B$24:B$110;">0") actually working properly.

Recently, I gave a talk about geometry applied to iOS UI development, and it seems a lot of people were learning about these tricks for the first time, even though they are millennia-old on paper, and at least half a century old in the memory of any computer. That's not to say I think people are stupid, it just makes me wonder why professionals in my field don't have that knowledge by default.

New shiny things have always had an appeal for the magpie part of our genetic makeup. Curiosity and "pioneering spirit" are deeply rooted in us, and we like to at least try the new fads. In other fields, this clashes heavily with the very conservative way we look at our daily lives ("if it ain't broke, don't fix it"), but somehow, not in ours. It's probably related to the fact that we aren't really an industry, but I don't want to beat that dead horse again, and would rather focus on the marketing of "automagical" solutions.

Who is it aimed at?

As mentioned, I probably am too much of an old fart to be the core target audience. It kind of sucks given my (aspiring) youthful nature, but what can you do?

When I come across the new-new-new-new-new way of doing something that a/ wasn't that old, and b/ isn't much easier than the previous way, I try to see past the shininess and look for a reason to revolutionize yet again something that gets "upgraded" once a week already. Bugs exist, there is almost always a way to coax more performance out of something, and adding features is a good thing, so there could be a good reason to make that thing. I usually fail to see the point, but that's because I'm not the target. So, who is?

First timers

There is a huge push to include non-programmers in our world. And it's a good thing. We want more ideas, more diversity and more brains.

Does it help to hide too much of the complexity from them, though? If they are to become programmers, having them believe falsehoods about the "simplicity" of our job hurts everyone in the long run:

  • we devalue our own expertise (in the case where expertise actually does apply, long debate that doesn't have its place here and now)
  • it puts a heavy burden (and reliance) on a handful of programmers to manage the edge cases and the bugs and the performance, while also taking away their visibility
  • it further confuses the people who need our tech, who now can't reconcile everyone telling them it's easy with everyone also telling them it's super expensive to build

Does a website cost half a day of thinking to build it yourself and $1 a month to run, or does it cost 3 months of work by $1000-a-day specialists and upwards of $200 per month in AWS costs?

Professionals will respond "it depends", and then what? How does that help first timers? Especially if the outsiders fail to see the difference, or if they saw a YouTube video showing how it's "click-click-boom", and that conflicts with their attempts at replicating the process?

This looks too much like a scam. Invest in this surefire way to make "TONS OF MONEY"™️©®, while needing ZERO expertise or time! When has that ever worked?

As soon as these newcomers hit their first complexity barrier, it's game over (as my students can attest).

Young professionals

So, next up the totem pole is the newly minted programmer. They are facing a very difficult challenge: putting a number in front of their work. Do they ask for $500 a day? Do they work on a flat fee per project basis? Do they go for a 9-to-5? What salary can they ask for?

Regardless of their objective level of competency (and I don't believe there is such a thing, despite what HR people and headhunters want you to think), it's too early in their career to have a definite idea of where the boundary between laziness and efficiency lies. Is it better to work super hard for 30 hours and then cruise along for a week, or to have bursts of productivity interspersed with slower consolidation phases? ("it depends", yeah yeah, I know). The fact is, young professionals tend to be overworked because they don't know better.

The appeal of a Shiny New Thing that could cut their coding time in half is huge: they would effectively be making twice the money, or could at least sleep more. Again, we face the problem that most of the tasks they will be working on are, by definition, new, and bring along with them new bugs, new requirements, new edge cases, and new problems in general, for which they have no solution beyond relying on the same couple of unavoidable maintainers.

This has the potential to bring progress to a grinding halt until someone else figures out a way to move forward - usually the maintainers of the thing they built their work on top of - in which case it devalues them drastically. Are they programmers?

This, to me, feels like the number one reason why young developers move towards "manager" positions as fast as they can. Their expertise as actual coders has been devalued over and over and over again, and it's not their fault. But fairly quickly, they come to think of themselves as incapable of coding anything.

Nerds and other enthusiasts

I guess this is me. "Oh cool a new toy!", which may or may not turn out to be a favorite.

Nothing much to say here, I've played with, contributed to, and used in production, cool new ideas that will become mainstream down the line (maybe?). But this is not an easy demographic to target, because we're fickle.

Old farts and other experienced programmers

Hmmmmm, this is me as well... Something that is well known about old farts is that we are reluctant to embrace "COOLER NEWER IDEAS"™️©®. Call it conservatism, call it prudence, call it whatever you want, I plead guilty. I've seen many novel ideas that were totally the future fall back to the dustbin of technology. Hell, I even spearheaded a conference about everything that was revolutionary about the Newton.

Where is my handwriting recognition now? Voice recognition was all the rage at the beginning of the noughties, and while we see decent adoption for very specific purposes, we're still very, very far from the Star Trek computers.

"Deciders"

So, all that leaves is the person nominally in charge of making decisions, but who is often ill-equipped to call the ball. Again, this is not about any measure of intelligence. I would be hard pressed to decide which material to build a bridge with, and I hope that doesn't make me a stupid person.

The inner monologue here seems to be very close to the one for the young professional (it would have to, since a lot of them "moved up" to this position from a few paragraphs ago): I am being sold something that will potentially halve my costs for the same results, what's not to like about it?

Again, the danger is over-reliance on a few key developers who actually know the inner working of that new framework or tool, which could go belly-up by the next iteration. Short, medium, and long term risk evaluation isn't natural or instinctive at all. I won't reiterate the points, you will just have to scroll back up if they slipped your mind.

What does it cost and who benefits?

(Yes I like complicated questions that I can answer in a broad generalizing sweep of my metaphorical arm... or die trying to)

Let's say I come up with a new way of "making a complicated website", by just "providing a simple text format and a couple of buttons to click". Are you drooling yet?

Because I want to provide a first version that will attract users, I need to deal with all the complexities of HTML, CSS, JS, browser idiosyncrasies, ease of adoption, etc. It's a huge amount of work. Let's simplify by assuming I'm a genius and I'm super cheap, and it will probably be an OK solution after a few months of work (which is absolutely not the case, I'm just trying to prove a point, ok?). We'll just round it up to 10 grand of "value" (i.e. what someone would cheaply pay a junior developer to code it).

If I now say "hey, it cost me $10000 to make it, now pony up", the chances of adoption fall to zero (or so close to it that the difference is negligible), because every single other "web framework" out there is "free".

But someone had to pay (either in hard coin or in time) to make that thing, so what gives? More often than not, these days, those "automagical" solutions come about in one of two ways:

  • a labor of love by a handful of committed devs (long process, careful crafting, uncertain future)
  • an internal development paid by a "web company" that gets released (shorter turnaround, practical crafting, highly idiosyncratic)

In the second case, the release is almost incidental. Any contribution, bug report, valid criticism, etc..., is a freebie. The product is in use and actively being worked on anyways, whether adoption takes off or not. It's 100% benefits from here on in, and in addition, the company remains in control of the future evolution of the Thing.

In the first case, it's more complicated. I see two extremes, and a whole spectrum in between, but I could be wrong, having never released a widely popular web framework.

⚠️ CYNICAL AND SLIGHTLY SARCASTIC CONTENT AHEAD ⚠️

On one end we have the altruistic view, and the way it benefits the originators is in kudos and profile upgrade. Should these people ever voice their opinions or look for new opportunities, they probably won't lack friends. It's a proof that these developers know their stuff and are experts.

On the other end we have the cynical view: by creating something new and having a wide adoption, you essentially create a need for your expertise. Who else could better train (for money) people, or build (for money) your website? Who better to hire (for money) if you need an expert? You were warned, this is the cynical view.

Of course, the real answer is somewhere in between, and highly depends on the project. I am sure that you are already thinking about frameworks you used and placing them on the spectrum.

Why that bothers me

No, it's not just because I am an old fart with fossilized views about progress. While it's not my passion, I actually enjoy teaching people how to code. I kind of hope that everyone I taught doesn't think of me as a conservative know-it-all incapable of embracing new things.

But teaching gives me a window into how people are trained to replace me (eventually). And while I like the students fine as human beings, it's hard not to cringe at the general lack of interest in fundamental knowledge (maths - as applied to computing -, algorithmics, etc.). To be fair, it's not their fault. They have been told repeatedly that they don't need to remember the hard parts of our racket. Someone, somewhere, has already done it. All you need to do is find it, maybe adapt it, and use it.

This is true, to some extent, in the context of a school. After all, how can we grade their work if we don't already know the answers? But once they start working... they end up in a position where their employers or customers expect them to grapple with problems for which there is evidently no off-the-shelf solution that works well enough. Since they weren't trained (much) in the low-level stuff, they are more or less condemned to an assembly job, where they glue together efficient pieces written by other people in the most efficient way they know of. I don't doubt it's fun for a while, but it gets very little recognition from their peers, it's impossible to explain to untrained people why certain things are impossible or don't perform well enough, and there is a lot of competition.

Speaking of competition, it also levels the field for the worse: if your only job is perceived to be assembly work, and the only difference between a junior and an experienced developer is the time it takes to glue things together, where is the value in keeping the more experienced (and more expensive) one? This creates a huge imbalance between the person who manages a product (and the purse) and the one making the thing.

Trend lines

This was long, and jumped all over the place, and it reflects the state of my thoughts on the matter.

  • yes, everyone should be computer literate
  • yes, we need more people and more diversity in the specialist field of writing code
  • yes, we need an expertise scale because no one can be good at everything
  • and also, yes we need to stress that coding is hard and has value

Despite the exponential growth in the need for computer "people", it seems to me that the number of students going through more rigorous training isn't going up. This means that the reliance on competent low-level programmers is going to increase faster and faster, and, as the gap widens, the possibility of transitioning from low(ish)-value/high-level programming to a better paid and recognized "expert" status will dwindle to almost zero.

"When everyone is a super, no one will be"
- Syndrome, The Incredibles

(Except the handful of super powerful people who actually know how things work under the hood and can name their price)