[Rant?] Managing Frustrations Is More Important Than Managing Expectations

Every year, I see a plethora of books and articles claiming they can teach you how to manage your expectations - most of the time by lowering them - because unmet expectations are a source of unhappiness and frustration.

In the current "mindfulness" boom, coupled with a pandemic that stresses everyone, it's not really a surprise that telling people to aim for attainable goals is popular. Why should a person add stress on top of stress by picking objectives they can't actually reach?

It irks me.

Now, of course no one should pursue truly impossible goals. But being told to "manage your expectations" puts all the blame for being stressed squarely on your shoulders, while failing to address the problems with your place in society. It reeks of "know your place" or "don't dream too big".

Most of the "expectations" aren't yours 🌍

We have customers, bosses, employees, colleagues, friends, and family who have expectations of us.

My current job is to make sure the students who come into my care leave it better prepared to work in the tech industry.

My expectations:

  • the students will trust me to know what's best for them, and (reluctantly, maybe) do what I suggest they do
  • the teachers working with me will trust me that I know what I'm doing
  • everyone involved will deal with the various inconveniences with good graces (the students will do their homework and be curious, and the teachers will be fair and uplifting)
  • I will have enough time in a day's work to deal with most emergencies

Everyone else's expectations:

  • the government, school staff, students (and their families) want me to design the classes in such a way that getting a job at the end is guaranteed
  • the school wants me to make sure the curriculum is attractive enough for prospects to sign up
  • the teachers want to have a decent environment, pay, and freedom to be flexible, so that they can impart their knowledge and experience to the next generation
  • the teachers want students prepared in advance to learn what they have to teach
  • the students want actionable knowledge - none of that philosophy and what-ifs - so that they can earn a living
  • the students want to have as much free time as possible, while being paid as much as possible
  • "the market" wants to be able to put ex-students to work as fast as possible, for a pay that's as low as possible
  • governmental rules want me to enforce regulations, such as attendance and other disciplinary actions
  • the students want me to bend the rules in their favor
  • "the market", and the students, and the school, want me to make sure that the curriculum in unmarred, and that only the "deserving" get their diploma
  • and
  • so
  • many
  • more

These are all high-level expectations; the nitty-gritty is way more involved. And it's obvious that satisfying all of them is akin to squaring the circle.

So, "managing my expectations", in the way people have been explaining it to me over the last decades, would be to forego my expectations, or at least lower them enough, in order to make sure I can actually turn them into a reality. It says nothing about bucking against everyone else's expectations. I'm supposed to try and accomplish the goals that have been set for me by my superiors, and force people I have power over to do what they are supposed to, revising their expectations lower.

That definitely looks like a self-sustaining power structure to me, reinforced by a philosophy of "don't expect too much or you'll be sad". The higher up the chain, the more you can dream, because your dreams will become orders for others. "Manage your expectations" if you are not in a position of power.

Manage your frustrations 😤

You can't absolutely control others. My students will sometimes be "unable" to hand in homework. My teachers will sometimes be overwhelmed and forget to grade some papers. My hierarchy will sometimes forget that life outside of school exists for me too. My family will sometimes take umbrage at the time I devote to the pupils or the teachers.

It is frustrating, because a very large majority of people feel like they are already giving 110%, and my priorities aren't theirs.

An easy way out of that frustration is, indeed, to lower my expectations, and respect the letter of whatever law, rule, or guideline I have to enforce. Passing the buck in terms of personal responsibility (I'm just "following orders"). I can also lower my expectations by leaving aside anything that would cost me extra energy and isn't strictly mandated by someone in power.

And therein lies, for me, the crux of the semantics problem. By saying people should take it upon themselves to "manage their expectations", even if your intention is pure and you are trying to protect their mental health, you are in essence telling them not to dream too big. It's a race to the bottom.

Collaboration doesn't work like that. Sure, personal goals rarely align perfectly, and everyone has goals outside of whatever we're working together on. But whenever humans are involved, compromise is the norm, and that means that everyone has to deal with a little bit of frustration.

Let's take a practical example: teaching a class about program design. It's one of these things that can be seen as a waste of time by a lot of people. Except, of course, by people who've learned the hard way that if a program isn't designed first, the future holds a metric ton of headaches.

  • the students don't have that experience, so they have no practical reason to spend hours learning something that goes contrary to what they know: a program either works or doesn't. Who cares if it's elegant?
  • "the market" wants them to be operational, which to them usually means fast, not spending hours seemingly doing nothing about writing code.
  • the school system isn't properly equipped for evaluating "grey area" skills. Since there is no "best" way to do something, how can anyone grade something like this?

The easy way out of this pickle is to remove that course entirely. The students are happy, "the market" is satisfied enough, the school system churns along.  Lower my expectations, right? Sure, the next generation will make the same mistakes over and over again, but they will eventually learn, graduating to senior positions in the process.

And my frustration about seeing and sometimes using badly designed software, about seeing good developers burnt out by constantly running after technical debt, about people who should have no business making technical decisions forcing bad ones on people for expediency's sake, and about users having to work around the products they use, is ultimately irrelevant. Because everyone will friggin manage their expectations.

Oooooooor... I take my frustration and use it constructively to educate people about all that. I teach them that they should definitely not lower their expectations but demand better, even if they might not get it. I try to teach them that a little bit of frustration can be the impetus towards a better world. You're unsatisfied? Do better.

The Karen Paradox 🤬

People will point out that this is exactly what the Karens of the world do: they demand better than what they get.

That is true enough. They strive for better, which is a good thing, if you ignore everything that has to do with context. But I would argue it's because they cannot manage their frustrations. They put their desire to avoid any frustration whatsoever before every other consideration, especially the frustrations of others.

This is different from having unwarranted frustrations. Sometimes, a Karen forces us to see a problem we weren't aware of, even if the problem in question is dealing with Karens.

By the way, I find it hilarious that a first name is used as a label. I apologize wholeheartedly to good-natured Karens out there... It's not your fault, I know. But it's also not the first time a first name has gathered a reputation for malice. It'll sort itself out in the end, we love the "Good Karens" still.

Manage your frustrations 😓

And so we come full circle: yes, managing your frustration does involve managing your expectations. Being frustrated at not being an astronaut, or rich, or famous, or whatever we feel we deserve shouldn't be that big of a deal, and maybe in some instances, managing expectations is a good way to manage our frustration.

But it lacks ambition. Exploration, research, innovation, charity, support, everything that's good about the human species stems from looking at a situation, being frustrated by it, and trying something else.

Mind you, a lot of things that are bad about the human species, such as greed, oppression, and the like, also stem from frustration (see the Karen Paradox above), but that's humans for you. They are complicated, and as much as they say they'd like everything to fit in neat boxes with clear delineations, they'll do their damned best to blur the lines the first opportunity they get.

As a long-time player of various role-playing games (pen and paper too, yes, you irreverent youngins), I like to think of my frustrations as belonging to two very distinct categories: those that stack, and those that don't.

Frustrations that stack tend to amplify each other, being connected to one another, and are usually harder to "fix". Frustrations that don't stack contribute to the overall level of stress and dissatisfaction, but are independent, and the "fix" can be anything from super easy to super hard to impossible. Factor in the "what can I do about it" part, and there's something of a path towards a better tomorrow that doesn't involve quitting or blindly following directives. It's never as well delineated as that.

Let's take an example: I'm late for a class, because the subway is stuck. I know that it means my class will be shorter, and since a good portion of the students haven't spent the necessary time to understand fully the last one (or indeed hand their homework in), it will be even shorter because of all the time we'll spend rehashing the same things. That, in turn, will mean they won't be as well prepared as I hoped for the rest of their studies, or indeed their professional lives.

First frustration: I'm late. People around me behave in a way that makes me cringe because they are frustrated. The whole situation is really, really, annoying. I'm not going to yell at the driver or the subway staff (even if they might be able to do a better job at handling the situation, I'm sure they have their own set of things to deal with that I'm unaware of, not being, you know, educated in the art of making trains run on time). I could tell the guy next to me that him yelling into his phone that he'll be late prevents me from doing the same. Should I? Maybe, if the call lasts forever, or if in a stroke of bad luck he actually starts running the meeting by yelling into his phone. Will it solve the problem, or create a new one?

I want to make sure the tardiness won't impact the class too much. Should I send a message to the class asking them to start working without me? Without guidance, it could be a waste of time, either because they won't do it seriously, or they won't know how to do it properly. If I flex my power and tell them it's going to be graded to ensure compliance, is it getting me closer to my goal? Will they be better professionals, and/or will they think that sometimes, people in power dish out punishment because they are frustrated? What if they all do it diligently but fail? If I keep the grade in their record, it can be really unfair to them, and they won't trust me much after that. And if I don't, don't I show them that it was just a power flex, and they can safely ignore the next one?

This is not one frustration, but a conflation of two. One that stacks (the impact on my students) and one that doesn't (the fact that the subway is late). Recognizing that some things are beyond my control, and therefore that the frustration I feel towards those things has to be endured, is a good thing, in my mind. Maybe next time, I'll have a bigger margin for transport, or a good book to pass the time, or noise-cancelling ear pieces. But that frustration will come and go, and I refuse to let it affect my decisions towards the rest.

Second frustration: I need to get my students more curious, more involved, more industrious about the topic so that if I am unavailable for 15 minutes, it doesn't matter that much. This has nothing to do with the subway being late.

I want them to want to research the topic. I am frustrated that they don't see the point. Is it their fault? Probably to some extent, so I need to coerce them a bit, because I know they won't have a glorious revelation with a ticker-tape parade, lights and fanfare, out of the blue. But it's also my responsibility. Why don't they trust me that this is important? Have I not given enough proofs of that? Were they unconvincing ones? Maybe it's because this class is among many many others, and everyone is vying for their attention? Maybe it's because I keep telling them it's important, but I'm not showing why often enough?

This is a frustration that stacks: it has a direct impact on many other tasks that I have to do, will have an effect on the extra time I set aside for preparing classes, preparing exams and grading them, and will impact how my colleagues see my work and my relationship with other teachers and school staff.

This is a frustration that should drive me to do better. I should treasure it as an engine of change. And, sure, if I lower my expectations, it'll go away, just like the memory that I was late once because the subway was not moving. And maybe I'll be happier. Or at least less stressed.

We progress by being unhappy about our current situation and doing something about it. Not by fully shifting the responsibility onto someone else (the Karen approach), and definitely not by going all fatalistic and aiming only for what we're guaranteed to get.

Frustration is a good thing (in moderation) 🤔

Look, we all know the rush of being validated, of having things go our way, of being right.

Some of us know that the pleasure of gratification is even better if we had to overcome something in order to get it. We're not talking sexual kinks here, although they are definitely a good indicator that this is a real thing.

There is a reason why so many people like brain teasers, word games, sports competitions, etc. There is a reason why the three-act structure is such a staple of stories in every culture. Without obstacles, where is the reward?

I tell the students that there are very few pleasures more satisfying than working on a bug for a few hours or days and suddenly finding a solution. You feel smart, tall, handsome and powerful, afterwards.

That satisfaction doesn't happen if the process is:
- realizing there is a problem
- looking the solution up
- applying the solution blindly
- end of the problem

I mean, a lot of life is like that, and that's ok, but I don't think it gives pleasure to people. It is frustration without a satisfactory ending. Sure, the job gets done, but machines will do that better than us, eventually.


[UPDATE] A Lil Bit Of Color

It's been a while. Two years ago feels like an eternity. I have been messing around with a lot of experiments in and around data, and Swift, but nothing I felt particularly inclined to publish.

Pretty much my whole time was sucked up trying to take care of the students and helping the school staff not burn out too much.

I have lots of opinions, anecdotes, and the like, but I just don't feel like communicating about them right now.

Instead, I decided to put some color back in my public life, including this blog.

And of course, as soon as I start hacking around interesting (and publishable) stuff, or have opinions I'll feel like sharing, I will start writing again.

In the meantime, COLORS! 🎉🥳🎉


[Dev Diaries] ELIZA

Back in the olden days...

Before the (oh so annoying) chatbots, before conversational machine-learning, before all of that, there was... ELIZA.

It is a weird little part of computer history that nerds like me enjoy immensely, but that is fairly unknown to the public.

If I ask random people when they think chatting with a bot became a Thing, they tend to respond "the 90s" or later (usually roughly ten years after they were born, for weird psychological reasons).

But back in the 60s, the Turing Test was a big thing indeed. Of course, nowadays, we know that this test, as it was envisioned, isn't that difficult to pass, but back then it was total fiction.

Enter Joseph Weizenbaum, working at MIT in the mid-60s, who decided to simplify the problem of random conversation by using a Jedi mind trick: the program would be a stern doctor, not trying to ingratiate itself with the user. We talk to that kind of terse, no-nonsense person often enough that it could reasonably be assumed that it wouldn't faze a normal person.

It's not exactly amicable, but it was convincing enough at the time for people to project some personality onto it. It became a real Frankenstein story: Weizenbaum was trying to show how stupid it was, along with the whole concept of man-machine conversations, but users kept talking to it, sometimes even confiding in it as they would to a doctor. And the more Weizenbaum tried to show that it was a useless piece of junk with the same amount of intelligence as your toaster, the more people became convinced this was going to revolutionize the psychiatry world.

Weizenbaum even felt compelled to write a book about the limitations of computing, and the capacity of the human brain to anthropomorphise the things it interacts with, as if to say that to most people, everything is partly human-like or has human-analogue intentions.

He is considered to be one of the fathers of artificial intelligence, despite his attempts at explaining to everyone who would listen that it was somewhat of a contradiction in terms.

Design

ELIZA was written in SLIP, a language that worked as a subset or an extension of Fortran and later ALGOL, and was designed to facilitate the use of compounded lists (for instance (x1,x2,(y1,y2,y3),x3,x4)), which was something of a hard-ish thing to do back in the day.

By modern standards, the program itself is fairly simplistic:

  • the user types an input
  • the input is parsed for "keywords" that ELIZA knows about (eg I am, computer, I believe I, etc), which are ranked more or less arbitrarily
  • depending on that "keyphrase", a variety of options are available like I don't understand that or Do computers frighten you?

Where ELIZA goes further than a standard decision tree is that it has access to references. It tries to take parts of the input and mix them into its answer, for example: I am X -> Why are you X?

It does that through something that would later become regular expression groups, and then by transforming certain words or expressions into their respective counterparts.

For instance, something like I am like my father would be matched to ("I am ", "like my father"), then the response would be ("Why are you X?", "like my father"), then transformed to ("Why are you X?", "like your father"), then finally assembled into Why are you like your father?

Individually, both these steps are simple decompositions and substitutions. Using sed and regular expressions, we would use something like

$ sed -n "s/I am \(.*\)/Why are you \1?/p"
I am like my father
Why are you like my father?
$ echo "I am like my father" | sed -n "s/I am \(.*\)/Why are you \1?/p" | sed -n "s/my/your/p"
Why are you like your father?

Of course, ELIZA has a long list of my/your, me/you, ..., transformations, and multiple possibilities for each keyword, which, with a dash of randomness, allows the program to respond differently if you say the same thing twice.

But all in all, that's it. ELIZA is a very very simple program, from which emerges a complex behavior that a lot of people back then found spookily humanoid.
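Just to show how simple, here is a minimal sketch of that loop in JavaScript (which is where we're heading next anyway). The rule table, the % placeholder, and the function names are mine, purely for illustration; the real script has a much longer keyword list, plus the ranking system on top.

const rules = [
  { match: /I am (.*)/i,      answers: ["Why are you %?", "How long have you been %?"] },
  { match: /I believe (.*)/i, answers: ["Do you really think %?"] },
  { match: /computer/i,       answers: ["Do computers frighten you?"] },
];

// the my/your, me/you, ... transformations
const reflections = { my: "your", me: "you", i: "you", am: "are" };

function reflect(fragment) {
  // "like my father" -> "like your father"
  return fragment.split(/\s+/)
    .map(word => reflections[word.toLowerCase()] || word)
    .join(" ");
}

function respond(input) {
  for (const rule of rules) {
    const found = input.match(rule.match);
    if (found) {
      // pick one of the canned answers at random, and splice the reflected group in
      const answer = rule.answers[Math.floor(Math.random() * rule.answers.length)];
      return answer.replace("%", found[1] ? reflect(found[1]) : "");
    }
  }
  return "Please go on."; // no keyword matched
}

console.log(respond("I am like my father")); // e.g. "Why are you like your father?"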

Taking a detour through (gasp) JS

One of the available "modern" implementations of ELIZA is in Javascript, as are most things. Now, those who know me figure out fairly quickly that I have very little love for that language. But having a distaste for it doesn't mean I don't need to write code in it every now and again, and I had heard so much about the bafflement people feel when using regular expressions in JS that I had to try it myself. After all, two birds, one stone, etc... Learn a feature of JS I do not know, and resurrect an old friend.

As I said before, regular expressions (or regexs, or regexps) are relatively easy to understand, but a lot of people find them difficult to write. I'll just give you a couple of simple examples to get in the mood:

[A-Za-z]+;[A-Za-z]+

This will match any text that has 2 words (whatever the case of the letters) separated by a semicolon. Note the differentiation between uppercase and lowercase.
Basically, it says that I want to find a series of letters of length at least 1 (+), followed by ;, followed by another series of letters of length at least 1.

.*ish

The dot (.) is a special character that means "any character", and * means "0 or more", so here I want to find anything ending in "ish".

Now, when you do search and replace (as is the case with ELIZA), or at least search and extract, you might want to know what is in this .* or [A-Za-z]+. To do that, you use groups:

(.*)ish

This will match the same strings of letters, but by putting it in parenthesiseseseseseseseseses (parenthesiiiiiiiiiiiii? damn. anyway), you instruct the program to remember it. It is then stored in variables with the very imaginative names of \1, \2, etc...

So in the above case, if I apply that regexp to "easyish", \1 will contain "easy"

Now, because you have all these special characters like the dot and parentheses and whatnot, you need to differentiate between when you mean the actual "." and when you mean "any character". We escape those special characters with \.

([A-Za-z]+)\.([A-Za-z]+)

This will match any two words with upper and lower case letters joined by a dot (and not any character, as would be the case if I didn't use \), and remember them in \1 and \2
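If you want to see those two ideas (a group, plus an escaped dot) in action, here is a tiny example using the JavaScript flavour we'll be dissecting in a minute; the variable names and test strings are mine:

const twoWords = /([A-Za-z]+)\.([A-Za-z]+)/; // two words joined by a literal dot
const found = "config.json".match(twoWords);
console.log(found[1]); // "config"  (what \1 refers to)
console.log(found[2]); // "json"    (what \2 refers to)
console.log("easyish".match(/(.*)ish/)[1]); // "easy"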

Of course, we have a lot of crazy special cases and special characters, so, yes, regexps can be really hard to build. For reference, the Internet found me a regexp that looks for email addresses:

(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])

Yea... Moving on.

Now, let's talk about Javascript's implementation of regular expressions. Spoiler alert: it's weird if you have used regexps in any language other than Perl. That's right, JS uses the Perl semantics.

In most languages, regular expressions are represented by strings. It is a tradeoff that means you can manipulate them like any string (get their length, replace portions of them, build them out of string variables, etc.), but it makes escaping nightmarish:

"^\\s*\\*\\s*(\\S)"

Because \ escapes the character that follows, you need to escape the escaper to keep it around: if you want \. as part of your regexp, more often than not, you need to type "\\." in your code. It's quite a drag, but the upside is that they work like any other string.

Now, in JS (and perl), regexps are a totally different type. They are not between quotes, but between slashes (eg /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/). On one hand, you don't have to escape the slashes anymore and they more closely resemble the actual regexp, but on the other hand, they are harder to compose or build programmatically.

As I said, it's a different tradeoff, and to each their own.
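To make that tradeoff concrete, here is a small sketch of the two styles side by side, reusing the "I am" decomposition from earlier (the variable names are mine):

// Literal syntax: reads like the actual regexp, but is fixed at write time.
const literal = /I am (.*)/;

// String syntax: every backslash has to be doubled, but you can build it from pieces.
const keyword = "I am";
const built = new RegExp(keyword + " (.*)");
const dotted = new RegExp("([A-Za-z]+)\\.([A-Za-z]+)"); // the \\. stands for a literal dot

console.log(literal.test("I am like my father"));   // true
console.log("I am like my father".match(built)[1]); // "like my father"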

Where it gets bonkers is how you use them. Because the class system is... what it is, and because there is no operator overloading, you can't really get the syntactic elegance of Perl, so it's kind of a bastard system where you might type something like

var myRe = /d(b+)d/;
var isOK = "cdbbdbsbz".match(myRe); // not null because "dbbd" is in the string

match and matchAll aren't too bad, in the sense that they return the matching substrings (here, only one match), or null / an empty iterator when there is nothing to find, so the result does have kind of a meaning.

The problem arises when you need to use the dreaded exec function in order to use the regexp groups, or when you use the g flag in your regexp.

The returned thing (I refuse to call it an object) is both an array and a hashmap/object at the same time.

In result[0] you have the matched substring (here it would be "dbbd"), and in result[X] you have the \X equivalents (here \1 would be "bb", so that's what you find in result[1]). So far so not too bad.

But this array also behaves like an object: result.index gives you the index of "the match", meaning the one returned by that particular call.

Not to mention that you use string.match(regex) but regex.exec(string):

const text = 'cdbbdbsbz';
const regex = /d(b+)d/g;
const found = regex.exec(text);

console.log(found);          // Array ["dbbd", "bb"]
console.log(found.index);    // 1
console.log(found["index"]); // 1

So, the result is a nullable array that sometimes works as an object. I'll let that sink in for a bit.
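And the g flag adds statefulness on top: each call to exec picks up where the previous one left off, via the lastIndex property on the regexp itself. A short sketch (the longer test string is mine, to get two matches):

const stateful = /d(b+)d/g;
const input = "cdbbdbsbzdbd";

let match;
while ((match = stateful.exec(input)) !== null) {
  // same array-that-is-also-an-object as above, one match per call
  console.log(match[0], match[1], match.index, stateful.lastIndex);
}
// dbbd bb 1 5
// dbd b 9 12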

This is the end

Once I got the equivalence down pat, it was just a matter of copying the data and rewriting a few functions, and ELIZA was back, as a library, so that I could use it in CLI tools, iOS apps, or macOS apps.

When I'm done fixing the edge cases and tinkering with the ranking system, I might even publish it.

In the meantime, ELIZA and I are rekindling an old friendship on my phone!


[Dev Diaries] CredentialsToken 0.2.0

I needed to write a login mechanism for a Kitura project, and decided to dust off the old project and bring it into 2020.

Changes include:

  • Update to Swift 5.2
  • Update dependencies
  • Added a cross-check possibility between app and client tokens

Grab it on GitHub


[Rant] Faith And Programming

Faith (the general brain thing that makes us think something is true even if there is no proof of it) is trickling into programming at an alarming rate. I see it in blog posts, I see it in YouTube videos and tutorials, I see it in my students' hand-ins, and it tickles my old-age-induced arthritis enough to make me want to write about it.

What I mean by faith

Let's start with definitions (from various dictionaries):

Faith: something that is believed especially with strong conviction

Not super helpful, as faith is more or less defined as "belief" + "strong conviction"

Conviction: a strong persuasion or belief

Again with the recursive definition...

Belief: something that is accepted, considered to be true, or held as an opinion

So, faith is a belief that is strongly believed. And a belief is something that's either thought to be true, or an opinion. Yea, ok. Thanks, dictionaries.

And that's why I need to define the terms for the benefit of this essay: because everything related to religious faith or political belief is so loaded these days, those vague definitions get weaponized in the name of usually bad rhetoric. I'll drop any hint of religious or political wording if I can, because I think words should have meaning when you're trying to communicate honestly.

So here are the definitions I will subscribe to for the rest of this post:

  • Belief: fact or opinion thought to be true, with supporting sources to quote (eg: I believe the Earth orbits around the Sun, and I have plenty of sources to back me up, but I haven't seen it myself). It is stronger than an opinion (eg: I am of the opinion that the cat I live with is stupid), but weaker than a fact I know (eg: it is currently sunny where I am). Essentially, I use the word belief to mean something I'm convinced of, and think can be proven to be true (or false, as the case may be).
  • Faith: fact or opinion thought to be true, that is either unprovable (eg: Schrödinger's cat is alive), not yet proven (eg: we can land people on Mars), or dependent on other people's opinion (eg: my current relationship will last forever)

The subtle difference between these two words, for me, hinges on the famous "leap of faith". Sometimes, it doesn't matter whether something is provable or not for us to think of it as "true". That's when we leave belief and enter faith. Most aspirational endeavors come from faith rather than belief... Faith that the human species isn't suicidal, faith that researchers do their thing with the best intentions in mind, faith that my students will end up liking what I teach them...

So what does faith have to do with programming?

After all, when you do some programming, facts will come and hit you in the face pretty fast: if your program is wrong, then the thing doesn't compile, or crashes, or produces aberrant results.

Yeeeeeeeeees... and no.

Lower levels of programming are like that. Type the wrong keyword, forget a semi-colon, have a unit test fail, and yes, the computer will let you know that you were wrong.

It is a logical fallacy known as the fallacy of composition to take the properties of a subset and assume they are true for the whole.

Here, thinking that the absence of compiler or test errors means the program is valid doesn't take into account so many things it's not funny: there could be bugs in the compiler or the tests - they are programs too, after all -, the ins and outs of the program could be badly defined, the algorithm used could have race conditions or otherwise dangerous edge cases that are not tested, etc.

But when you talk about immensely more complex systems than a simple if x then y, undecidability knocks and says hello.

And here comes the faith. Because we cannot test everything and prove everything, we must have faith that what we did is right (or right enough). Otherwise, we can't produce anything, we can't move forward with our projects, and we can't collaborate.

There are multiple acts of faith we take for granted when we write a program:

  • The most important one is that what we are trying to do is doable. Otherwise, what's the point?
  • What we are trying to do not only is doable, but doable in a finite (and often set) amount of time.
  • The approach that we chose to do it with will work
  • and its cousin: the approach that we chose is the best one
  • The tech/framework/library/language we use will allow us to succeed
  • and its cousin: the tech/framework/library/language we use is the best one
  • If push comes to shove, anything that stumps us along the way will have a solution somewhere (search engine, person, ...)

This is not a complete list by any means, but these are the ones I find the most difficult to talk about with people these days.

Because they are in the realm of faith, it is incredibly difficult to construct an argument that will change someone's opinion and doesn't boil down to "because I think so".

For instance, I like the Swift language. I think it provides safety without being too strict, and I think it is flexible enough to construct pretty much anything I need. But what can I say to someone who doesn't like it for (good) reasons of their own to convince them that they should like it, without forcing them (for instance, by having super useful libraries that only exist in Swift)?

And that's the second most dangerous fallacy of our domain: the fact that some things are easier to do with this choice doesn't mean that it is the best choice.

The inevitable digression

I have conversations about front-end development a lot. A loooooooooooooooooooooooooooooooooooooot.

Web front-end, mobile front-end, desktop front-end, commandline front-end, hybrid front-end, ye-olde-write-once-run-anywhere front-end, you name it.

Because browsers are everywhere, and because browsers can all do more or less the same thing, it should be easier to write a program that runs in the browser, right?

For certain things, that's a definite yes. For other things, for now at least, that's a definite no. And everything in between.

First of all, the browser is an application on the host. It has to be installed, maintained, and configured by the end-user. Will your front-end still work when they have certain add-ons? Does the web browser have access to some OS features you need for your project? Is it sufficiently up to date for your needs?

Second, because the web browser is pretty much on every device, it tends to support only the things that are common across all these devices. If your project targets only browsers that have a specific extension, one that only works on Linux, then... are you really being cross-platform?

The same reasoning applies to most, if not all, WORA endeavors. As long as we don't all have a single type of device with the same set of software functionalities, there won't be a way to have a single codebase.

And you may think "oh, that will happen eventually". That's another item of faith I encounter fairly often. But... I'm not convinced. The hardware manufacturers want differences in their lineup. Otherwise, how can they convince you to buy the latest version? Isn't it because there are differences with the old one? And even if you assume that the OS on top of it has top-notch dedicated engineers who will do their damnedest to make everything backwards compatible, isn't that ultimately impossible?

Ah HA! Some of you are saying. It doesn't matter because we have WebAssembly! We can run every OS within the browser, and therefore eliminate the need to think about these things!

Do we, though? OK, it alleviates some headaches like some libraries only being available on some platforms, but it doesn't change the fact that WebAssembly, or ASM.js or whatever else, cannot conjure up a new hardware component or change the way the OS works under the browser. It still is constrained to the maximum common feature set.

And I'm sure, at this point, the most sensitive among you think that I'm anti-web. Not at all! In the same way I think web front-end isn't a panacea, I think native mobile or desktop front-end isn't an all-encompassing solution either.

If your project doesn't make any kind of sense in an offline scenario, then you better have strong hardware incentives to write it using native code.

Native programming is more idiosyncratic, for starters. I know of at least twelve different ways on the Mac alone to put a simple button on screen. Which is the best? It depends. Which is the easiest? It depends. Which will be the most familiar to the user? It depends. Which is the most common? Depends on the parameters of your search.

To newcomers, it is frustrating, and it is useless, and I understand why they think that. It is perceived as a simple task that requires a hellish amount of work to accomplish. And to an extent, this is the truth.

But there is a nugget of reason behind this madness. History plays a part, sensibilities play a part, different metrics for what "best" is play a part.

To me, arguing that this piece of tech is better than this one is like arguing that this musical instrument is better than this one.

Can you play notes on all of them? Sure. Can you tweak the music so that it's still recognizable, even when played on a completely different instrument? Yep, that's a sizeable portion of the music industry.

Can you take an orchestra and rank all the instruments from best to worst in a way that will convince everyone you talk to? I doubt it. You can of course rank them along your preferences, but you know it's not universal.

Would anyone argue that the music should make the instruments indistinguishable from one another? I doubt it even more.

For me, a complex software product is like a song. You can replace an electric guitar by an acoustic one fairly easily and keep the song more or less intact, but replacing the drum kit with a flute will change things drastically, to the point where I would argue it's not the same song anymore.

So why insist (on every side of the debate) that all songs, everywhere, everywhen, should be played using a single kind of instrument?

Faith is a spectrum, and we need it

Back to the list of items of faith I gave earlier, I do genuinely believe that some are essential.

We need to have faith in our ability to complete the task, and we need to have faith in the fact that what we do will ultimately matter. Otherwise, nothing would ever ship. These faiths should be fairly absolute and unshakeable, because otherwise, as I said, what's the point of doing anything?

The other points I want to push back on. A little bit. Or at least challenge the absolutism I see in too many places.

The tools we use will get us there, and/or they are the best for getting us there

If you haven't skipped over the earlier digression, you'll know I feel very strongly about the tribal wars going on around languages/stacks/frameworks. I am on the record saying things like "X has an important role to play, I just don't like using it".

I also am a dinosaur who has waded through most families of programming languages and paradigms, from assembly on microcontrollers (machine language) to AppleScript (almost human language), and have worked on projects with tight hardware constraints (embedded programming, or IoT as the kids call it now), to projects with no constraint whatsoever (purely front-end web projects), and most things in between.

There is comfort in familiarity. It's easy to morph the belief that you can do everything that is asked of you with your current tools into the faith that you will always be able to do so.

I have two personal objections that hold me back in that regard. First, I have been working professionally in this field long enough to have personally witnessed at least 3 major shifts in ways projects are designed, led, and implemented. The toolkit and knowledge I had even 5 years ago would probably be insufficient to do something pushing the envelope today. If I want to be part of the pioneers on the edge of what is doable, I need to constantly question the usefulness of what I currently know.

Now the good news is, knowledge in science (and Computer Science has that in its name) is incremental, more often than not. What I know now should be a good enough basis to learn the new things. This is faith too, but more along the lines of "yea, I think I'm not too stupid to learn" than "it will surely get me glory, fame and money".

So my first objection revolves mostly around the "always" part, because I think very few things are eternal, and I know that programming techniques definitely aren't.

The second one is more subtle: the premise that you can do everything with one set of tools is, to my mind, ludicrous. Technically, yes, you can write any program iso-functional to any other program, using whatever stack. If you can write it in Java, you can write it in C. If you can write it in assembly, you can write it in JavaScript. If you can write it in Objective-C, you can write it in Swift. The how will be different, but it's all implementation details. If you pull back enough, you'll have the same outputs for the same inputs.

But it doesn't mean there aren't any good arguments for or against a particular stack in a particular context, and pretending that "it's all bits in the end" combined with "it's what I'm more comfortable with" is the best argument is nonsensical.

To come back to that well, in the musical instrument analogy, it would be akin to saying that because you know how to play the recorder, and not the guitar, any version of "Stairway to Heaven" played on the recorder is intrinsically better. And that's quite a claim to make.

You can say it's going to be done faster because it takes a while to learn the guitar. You can say it's the only way it can be done with you because you can't play the guitar. You can even say that you prefer it that way because you hate the guitar as an instrument. But, seriously, the fact that you can't do chords on a recorder is enough to conclude that it's a different piece of music.

In that particular example, maybe you can be convinced to learn the piano, which makes chords at least doable with a normal set of mouths and fingers, since you hate the idea of learning the guitar. Maybe learn the bagpipes, which I believe are a series of recorders plugged into a balloon that does have multiple mouths.

I'll let that image sink in for a little while longer...

Next time you see a bagpipe player, don't mention my name, thanks.

Anyhoo

The faith in one's abilities should never be an argument against learning something new, in my opinion. If only to confirm that, yes, the other way is indeed inferior by some metric that makes sense in the current context.

Which allows me to try and address the elephant in the room:

Yes you should have some faith that your technical choices are good enough, and maybe even the best. But it should be the kind of faith that welcomes the challenge from other faiths.

The answer is waiting for me on the internet

That one irks me greatly.

When you learn to program, the tutorials, the videos, the classes even, have fixed starting and ending points. The person writing or directing them leads you somewhere; they know what shape the end result should be.

Their goal is not to give you the best way, or prove that it's the only way, to do a thing. They are just showing you a way, their way of doing it. Sometimes it is indeed the best or the only way, but it's very very very rare. Or it's highly contextual: this is the best way to do a button given those sets of constraints.

But, because these articles/videos/classes are short, in the grand scheme of things, they can't address everything comparable and everything in opposition to what's being shown. People who do those things (myself included when I give a class) can't spend 90% of the time showing every single alternative.

The other variable in this discussion is that, when you learn programming, the problems are set up in such a way that it is possible to glue pieces together. Again, it's the time factor, but put another way: would you rather have the people ingesting your content focus on the core of what you are saying, or have them slowed down or even stopped because the other piece they are supposed to rely on doesn't work?

Expediency. A scenario for a tutorial, video, class, and even your favorite Stack Overflow answer, will make a bunch of simplifications and assumptions in order to get at the core of the message. These simplifications and assumptions are usually implicit, and yet, they help shape the content.

So, when you're new, you are being shown or told a lot of things at once (programming relies on a lot of decisions made by someone else), simple things that fit neatly into someone else's narrative for expediency's sake, guiding you to a higher knowledge that has a lot of assumptions attached. And you will never be told about them.

It's not surprising, therefore, that a lot of newcomers to programming think that writing a program is just about finding the mostly-right-shaped bricks on the internet and assembling them.

Weeeeeeell... Programming definitely has that. We rely on a lot of code we haven't written ourselves.

But it's not just that. Very often, the context, the constraints, or the goal itself, of your program is unique to that particular case. And because it's unique, no one on the internet has the answer for you.

You can be told that a redis+mongo+postgres+angular+ionic technological stack is the best by people you trust. And that's fine, they do believe that (probably). But there are so many assumptions and so much history behind that conclusion that it should be considered suspect. Maybe, for your project, postgres+react+native works better, and takes less time to program. How would <insert name of random person on the web> actually know the details of your set of constraints? It's not that they don't want to, necessarily, but they didn't think about your problem before giving out their solution, right?

So, maybe you think their problem is close enough to yours, and that's fair enough. But how do you know? Did you critically look at all the premises and objectives they set for themselves? Or did you think that if 4 words in the title of their content match 4 words in your problem, that's good enough? If you're honest, it's going to be the second one.

The internet is a wonderful resource. I use it all the time to look up how different people deal with problems similar to mine. But unless the objective is small enough and I'm fairly certain it's close enough, I will not copy their code into mine. They can definitely inspire my solution, though. But inspiring my solution and being my solution are about as close as "Stairway to Heaven" played on the recorder and on the guitar are.

Faith must be tempered by science to become experience

You've waded through 3000+ words from a random guy on the Internet, and you're left with the question: "what do you want from me? I have deadlines, I have little time to research the perfect solution or learn everything from scratch every two years, why do you bother me with your idealistic (and inapplicable) philosophy of software development?"

Look. I know. I've been through the early stages of learning how to code, and I'm not old enough not to remember. I also have deadlines, very little free time to devote to learning a new language or framework just for the sake of comparison, etc etc.

My point is not that everyone should do 4 months of research every time they write 5 lines of code. The (sometimes really small) time you can spend on actually productive code writing is part of the constraints. Even if Scala would be a better fit for my project, I don't have time to learn it for Friday. I get that. But I also am very keenly aware that there could be a much better way to do it than the one I'm stuck with.

The thing is, if you double down on your technological choices or methodology despite the recurrent proof that it is the wrong one, your faith is misplaced. It's as simple as that.

I used to think I was great at managing my time. Then I had one major issue. Then another. Then another. Then a critical one. Only a fool would keep believing that I'm great at managing my time. So I changed my tools and my methodology. Same thing for languages and frameworks and starting points.

The problem doesn't lie with the preferences you have or the successes you had doing something that was a bad idea. It lies with not looking back, not measuring what you could have done better and how, and not revising your faith. Science. Numbers. Objective truths. You look back even at a successful (for a given metric, usually "we managed to ship") project, and you can always find pain points and mistakes.

The idea is to learn from these mistakes and not be stuck with an opinion that has been shown to be wrong. Even if you have a soft spot for a piece of tech, it doesn't mean you should stop looking for alternatives.

Faith is necessary, faith in your abilities, faith in the tech you're using. But faith needs to be reevaluated, and needs to be confronted to be either strengthened or discarded. That's what experience is.