I needed to write a login mechanism for a Kitura project, and decided to dust off the old project and bring it into 2020.

Changes include:

• Updated to Swift 5.2
• Updated dependencies
• Added a cross-check between app and client tokens

Grab it on GitHub.

Faith (the general brain thing that makes us think something is true even if there is no proof of it) is trickling into programming at an alarming rate. I see it in blog posts, I see it in YouTube videos and tutorials, I see it in my students' hand-ins, and it tickles my old-age-induced arthritis enough to make me want to write about it.

#### What I mean by faith

Faith: something that is believed especially with strong conviction

Not super helpful, as faith is more or less defined as "belief" + "strong conviction".

Conviction: a strong persuasion or belief

Again with the recursive definition...

Belief: something that is accepted, considered to be true, or held as an opinion

So, faith is a belief that is strongly believed. And a belief is something that's either thought to be true, or an opinion. Yea, ok. Thanks, dictionaries.

And that's why I need to define the terms for the benefit of this essay: because everything related to religious faith or political belief is so loaded these days, those vague definitions get weaponized in the service of usually bad rhetoric. I'll drop any hint of religious or political wording if I can, because I think words should have meaning when you're trying to communicate honestly.

So here are the definitions I will subscribe to for the rest of this post:

• Belief: fact or opinion thought to be true, with supporting sources to quote (eg: I believe the Earth orbits around the Sun, and I have plenty of sources to back me up, but I haven't seen it myself). It is stronger than an opinion (eg: I am of the opinion that the cat I live with is stupid), but weaker than a fact I know (eg: it is currently sunny where I am). Essentially, I use the word belief to mean something I'm convinced of, and think can be proven to be true (or false, as the case may be).
• Faith: fact or opinion thought to be true, that is either unprovable (eg: Schrödinger's cat is alive), not yet proven (eg: we can land people on Mars), or dependent on other people's opinion (eg: my current relationship will last forever)

The subtle difference between these two words, for me, hinges on the famous "leap of faith". Sometimes, it doesn't matter if something is provable or not for us to think of it as "true". That's when we leave belief and enter faith. Most aspirational endeavors come from faith rather than belief... Faith that the human species isn't suicidal, faith that researchers do their thing with the best intentions in mind, faith that my students will end up liking what I teach them...

#### So what does faith have to do with programming?

After all, when you do some programming, facts will come and hit you in the face pretty fast: if your program is wrong, then the thing doesn't compile, or crashes, or produces aberrant results.

Yeeeeeeeeees... and no.

Lower levels of programming are like that. Type the wrong keyword, forget a semi-colon, have a unit test fail, and yes, the computer will let you know that you were wrong.

It is a logical fallacy known as the fallacy of composition to take the properties of a subset and assume they are true for the whole.

Here, thinking that the absence of compiler or test errors means the program is valid doesn't take into account so many things it's not funny: there could be bugs in the compiler or the tests (they are programs too, after all), the ins-and-outs of the program could be badly defined, the algorithm used could have race conditions or otherwise dangerous edge cases that are not tested, etc.

But when you talk about systems immensely more complex than a simple if x then y, undecidability knocks and says hello.

And here comes the faith. Because we cannot test everything and prove everything, we must have faith that what we did is right (or right enough). Otherwise, we can't produce anything, we can't move forward with our projects, and we can't collaborate.

There are multiple acts of faith we take for granted when we write a program:

• The most important one is that what we are trying to do is doable. Otherwise, what's the point?
• What we are trying to do not only is doable, but doable in a finite (and often set) amount of time.
• The approach that we chose to do it with will work
• and its cousin: the approach that we chose is the best one
• The tech/framework/library/language we use will allow us to succeed
• and its cousin: the tech/framework/library/language we use is the best one
• If push comes to shove, anything that stumps us along the way will have a solution somewhere (search engine, person, ...)

This is not a complete list by any means, but these are the ones I find the most difficult to talk about with people these days.

Because they are in the realm of faith, it is incredibly difficult to construct an argument that will change someone's opinion and doesn't boil down to "because I think so".

For instance, I like the Swift language. I think it provides safety without being too strict, and I think it is flexible enough to construct pretty much anything I need. But what can I say to someone who doesn't like it for (good) reasons of their own to convince them that they should like it, without forcing them (for instance, by having super useful libraries that only exist in Swift)?

And that's the second most dangerous fallacy of our domain: the fact that some things are easier to do with a given choice doesn't mean that it is the best choice.

#### The inevitable digression

I have conversations about front-end development a lot. A loooooooooooooooooooooooooooooooooooooot.

Web front-end, mobile front-end, desktop front-end, commandline front-end, hybrid front-end, ye-olde-write-once-run-anywhere front-end, you name it.

Because browsers are everywhere, and because browsers can all do more or less the same thing, it should be easier to write a program that runs in the browser, right?

For certain things, that's a definite yes. For other things, for now at least, that's a definite no. And everything in between.

First of all, the browser is an application on the host. It has to be installed, maintained, and configured by the end-user. Will your front-end still work when they have certain add-ons? Does the web browser have access to some OS features you need for your project? Is it sufficiently up to date for your needs?

Second, because the web browser is pretty much on every device, it tends to support things that are common across all these devices. If your project targets only browsers that have that specific extension, that only works on Linux, then... are you really being cross-platform?

The same reasoning applies to most, if not all, WORA endeavors. As long as we don't all have a single type of device with the same set of software functionalities, there won't be a way to have a single codebase.

And you may think "oh, that will happen eventually". That's another item of faith I encounter fairly often. But... I'm not convinced. The hardware manufacturers want differences in their lineup. Otherwise, how can they convince you to buy the latest version? Isn't it because there are differences with the old one? And even if you assume that the OS on top of it has top-notch dedicated engineers who will do their damnedest to make everything backwards compatible, isn't that ultimately impossible?

Ah HA! Some of you are saying. It doesn't matter because we have WebAssembly! We can run every OS within the browser, and therefore eliminate the need to think about these things!

Do we, though? OK, it alleviates some headaches, like some libraries only being available on some platforms, but it doesn't change the fact that WebAssembly, or asm.js, or whatever else, cannot conjure up a new hardware component or change the way the OS works under the browser. It is still constrained to the maximum common feature set.

And I'm sure, at this point, the most sensitive among you think that I'm anti-web. Not at all! In the same way I think web front-end isn't a panacea, I think native mobile or desktop front-end isn't an all-encompassing solution either.

If your project doesn't make any kind of sense in an offline scenario, then you better have strong hardware incentives to write it using native code.

Native programming is more idiosyncratic, for starters. I know of at least twelve different ways on the Mac alone to put a simple button on screen. Which is the best? It depends. Which is the easiest? It depends. Which will be the most familiar to the user? It depends. Which is the most common? Depends on the parameters of your search.

To newcomers, it is frustrating, and it is useless, and I understand why they think that. It is perceived as a simple task that requires a hellish amount of work to accomplish. And to an extent, this is the truth.

But there is a nugget of reason behind this madness. History plays a part, sensibilities play a part, different metrics for what "best" is play a part.

To me, arguing that this piece of tech is better than that one is like arguing that this musical instrument is better than that one.

Can you play notes on all of them? Sure. Can you tweak the music so that it's still recognizable, even when played on a completely different instrument? Yep, that's a sizeable portion of the music industry.

Can you take an orchestra and rank all the instruments from best to worst in a way that will convince everyone you talk to? I doubt it. You can of course rank them along your preferences, but you know it's not universal.

Would anyone argue that the music should make the instruments indistinguishable from one another? I doubt it even more.

For me, a complex software product is like a song. You can replace an electric guitar with an acoustic one fairly easily and keep the song more or less intact, but replacing the drum kit with a flute will change things drastically, to the point where I would argue it's not the same song anymore.

So why insist (on every side of the debate) that all songs, everywhere, everywhen, should be played using a single kind of instrument?

#### Faith is a spectrum, and we need it

Back to the list of items of faith I gave earlier, I do genuinely believe that some are essential.

We need to have faith in our ability to complete the task, and we need to have faith in the fact that what we do will ultimately matter. Otherwise, nothing would ever ship. These faiths should be fairly absolute and unshakeable, because otherwise, as I said, what's the point of doing anything?

The other points I want to push back on. A little bit. Or at least challenge the absolutism I see in too many places.

##### The tools we use will get us there, and/or they are the best for getting us there

If you haven't skipped over the earlier digression, you'll know I feel very strongly about the tribal wars going on around languages/stacks/frameworks. I am on the record saying things like "X has an important role to play, I just don't like using it".

I also am a dinosaur who has waded through most families of programming languages and paradigms, from assembly on microcontrollers (machine language) to AppleScript (almost human language), and have worked on projects with tight hardware constraints (embedded programming, or IoT as the kids call it now), to no constraint whatsoever (purely front-end web projects), and most things in between.

There is comfort in familiarity. It's easy to morph the belief that you can do everything that is asked of you with your current tools into the faith that you will always be able to do so.

I have two personal objections that hold me back in that regard. First, I have been working professionally in this field long enough to have personally witnessed at least 3 major shifts in ways projects are designed, led, and implemented. The toolkit and knowledge I had even 5 years ago would probably be insufficient to do something pushing the envelope today. If I want to be part of the pioneers on the edge of what is doable, I need to constantly question the usefulness of what I currently know.

Now the good news is, knowledge in science (and Computer Science has that in its name) is incremental, more often than not. What I know now should be a good enough basis to learn the new things. This is faith too, but more along the lines of "yea, I think I'm not too stupid to learn" than "it will surely get me glory, fame and money".

So my first objection revolves mostly around the "always" part, because I think very few things are eternal, and I know that programming techniques definitely aren't.

The second one is more subtle: the premise that you can do everything with one set of tools is, to my mind, ludicrous. Technically, yes, you can write any program iso-functional to any other program, using whatever stack. If you can write it in Java, you can write it in C. If you can write it in assembly, you can write it in JavaScript. If you can write it in Objective-C, you can write it in Swift. The how will be different, but it's all implementation details. If you pull back enough, you'll have the same outputs for the same inputs.

But it doesn't mean there aren't any good arguments for or against a particular stack in a particular context, and pretending that "it's all bits in the end" combined with "it's what I'm more comfortable with" is the best argument is nonsensical.

To come back to that well, in the musical instrument analogy, it would be akin to saying that because you know how to play the recorder, and not the guitar, any version of "Stairway to Heaven" played on the recorder is intrinsically better. And that's quite a claim to make.

You can say it's going to be done faster because it takes a while to learn the guitar. You can say it's the only way it can be done with you because you can't play the guitar. You can even say that you prefer it that way because you hate the guitar as an instrument. But, seriously, the fact that you can't do chords on a recorder is enough to conclude that it's a different piece of music.

In that particular example, maybe you can be convinced to learn the piano, which makes chords at least doable with a normal set of mouths and fingers, since you hate the idea of learning the guitar. Or maybe learn the bagpipes, which I believe are a series of recorders plugged into a balloon that does have multiple mouths.

I'll let that image sink in for a little while longer...

Next time you see a bagpipe player, don't mention my name, thanks.

Anyhoo

The faith in one's abilities should never be an argument against learning something new, in my opinion. If only to confirm that, yes, the other way is indeed inferior by some metric that makes sense in the current context.

Which allows me to try and address the elephant in the room:

Yes you should have some faith that your technical choices are good enough, and maybe even the best. But it should be the kind of faith that welcomes the challenge from other faiths.

##### The answer is waiting for me on the internet

That one irks me greatly.

When you learn to program, the tutorials, the videos, even the classes, have fixed starting and ending points. The person writing or directing them leads you somewhere; they know what shape the end result should be.

Their goal is not to give you the best way, or prove that it's the only way, to do a thing. They are just showing you a way, their way of doing it. Sometimes it is indeed the best or the only way, but it's very very very rare. Or it's highly contextual: this is the best way to do a button given those sets of constraints.

But, because these articles/videos/classes are short, in the grand scheme of things, they can't address everything comparable and everything in opposition to what's being shown. People who do those things (myself included when I give a class) can't spend 90% of the time showing every single alternative.

The other variable in this discussion is that, when you learn programming, the problems are set up in such a way that it is possible to glue pieces together. Again, it's the time factor, but put another way: would you rather have the people ingesting your content focus on the core of what you are saying, or have them slowed down, or even stopped, because the other piece they are supposed to rely on doesn't work?

Expediency. A scenario for a tutorial, video, class, and even your favorite Stack Overflow answer, will make a bunch of simplifications and assumptions in order to get at the core of the message. These simplifications and assumptions are usually implicit, and yet, they help shape the content.

So, when you're new, you are shown or told a lot of things at once (programming relies on a lot of decisions made by someone else), simple things that fit neatly in someone else's narrative for expediency's sake, things that guide you to a higher knowledge with a lot of assumptions attached. And you will never be told about them.

It's not surprising, therefore, that a lot of newcomers to programming think that writing a program is just about finding the mostly-right-shaped bricks on the internet and assembling them.

Weeeeeeell... Programming definitely has that. We rely on a lot of code we haven't written ourselves.

But it's not just that. Very often, the context, the constraints, or the goal itself, of your program is unique to that particular case. And because it's unique, no one on the internet has the answer for you.

You can be told that a redis+mongo+postgres+angular+ionic technological stack is the best by people you trust. And that's fine, they do believe that (probably). But there are so many assumptions and so much history behind that conclusion that it should be considered suspect. Maybe, for your project, postgres+react+native works better, and takes less time to program. How would <insert name of random person on the web> actually know the details of your set of constraints? It's not that they don't want to, necessarily, but they didn't think about your problem before giving out their solution, right?

So, maybe you think their problem is close enough to yours, and that's fair enough. But how do you know? Did you critically look at all the premises and objectives they set for themselves? Or did you think that if 4 words in the title of their content match 4 words in your problem, that's good enough? If you're honest, it's going to be the second one.

The internet is a wonderful resource. I use it all the time to look up how different people deal with problems similar to mine. But unless the objective is small enough and I'm fairly certain it's close enough, I will not copy their code into mine. They can definitely inspire my solution, though. But inspiring my solution and being my solution are about as far apart as "Stairway to Heaven" played on the recorder vs the guitar.

#### Faith must be tempered by science to become experience

You've waded through 3000+ words from a random guy on the Internet, and you're left with the question: "what do you want from me? I have deadlines, I have little time to research the perfect solution or learn everything from scratch every two years, why do you bother me with your idealistic (and inapplicable) philosophy of software development?"

Look. I know. I've been through the early stages of learning how to code, and I'm not so old that I don't remember them. I also have deadlines, very little free time to devote to learning a new language or framework just for the sake of comparison, etc, etc.

My point is not that everyone should do 4 months of research every time they write 5 lines of code. The (sometimes really small) time you can spend on actually productive code writing is part of the constraints. Even if Scala would be a better fit for my project, I don't have time to learn it by Friday. I get that. But I am also keenly aware that there could be a much better way to do it than the one I'm stuck with.

The thing is, if you double down on your technological choices or methodology despite recurrent proof that they are wrong, your faith is misplaced. It's as simple as that.

I used to think I was great at managing my time. Then I had one major issue. Then another. Then another. Then a critical one. Only a fool would keep believing that I'm great at managing my time. So I changed my tools and my methodology. Same thing for languages and frameworks and starting points.

The problem doesn't lie with the preferences you have, or the successes you had doing something that was a bad idea. It lies with not looking back, not measuring what you could have done better and how, and not adjusting your faith. Science. Numbers. Objective truths. Look back even at a successful project (for a given metric, usually "we managed to ship"), and you can always find pain points and mistakes.

The idea is to learn from these mistakes and not stay stuck with an opinion that has been shown to be wrong. Even if you have a soft spot for a piece of tech, it doesn't mean you should stop looking for alternatives.

Faith is necessary: faith in your abilities, faith in the tech you're using. But faith needs to be reevaluated, and needs to be confronted, to be either strengthened or discarded. That's what experience is.

As you probably know by now, I'm fairly obsessed with tools that give me metrics about quality: linting, docs, tests...

Unfortunately, code coverage is fairly hard to get with SPM in a way that is usable.

##### What?

Code coverage is the amount of code in the package that is covered with your tests. If you run your tests, are these lines run? Are those? Your tests pass, and that's fine, but have you forgotten to test anything?

You can enable it in Xcode using the "gather code coverage" option in the scheme you are running tests with, which gives you a decent visualization if you know where to look for it (a new gutter appears in your code editor).

In Swift Package Manager though, it's fairly obscure:

• first you have to use the --enable-code-coverage option of the testing phase
• then you have to grab the output json path by using --show-codecov-path
• then you get a collection of things that is unique to SPM, and therefore unusable elsewhere

Now, if you look closely at the output, you can see it's fairly close to the lcov format, which is more or less a standard.

##### Let's make a script!

Because I'm very attached to my packages running on both Linux and MacOS, I need to grab the correct values from the environment (makes them dockerizable too).

I need:

• the output of the swift test phase
• llvm-cov which is used by the swift toolchain and can extract usable information
• A few frills here and there
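Wired together, those pieces can look something like the sketch below (the binary lookup and package layout are assumptions on my part; the real codecov.sh handles more cases and frills):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run the tests with coverage instrumentation enabled
swift test --enable-code-coverage

# Ask SPM where it wrote its coverage JSON; the raw profile data sits next to it
CODECOV_JSON=$(swift test --show-codecov-path)
PROFDATA="$(dirname "$CODECOV_JSON")/default.profdata"

# Find the test bundle in the build folder
BIN_PATH=$(swift build --show-bin-path)
TEST_BIN=$(find "$BIN_PATH" -name '*.xctest' | head -n 1)

# On MacOS, llvm-cov hides behind xcrun, and the binary sits inside the bundle
if [[ "$(uname)" == "Darwin" ]]; then
  COV="xcrun llvm-cov"
  TEST_BIN="$TEST_BIN/Contents/MacOS/$(basename "$TEST_BIN" .xctest)"
else
  COV="llvm-cov"
fi

if [[ "${1:-}" == "lcov" ]]; then
  # lcov-compatible output, usable by other tooling
  $COV export -format=lcov -instr-profile "$PROFDATA" "$TEST_BIN"
else
  # human-readable summary table
  $COV report -instr-profile "$PROFDATA" "$TEST_BIN"
fi
```

The `lcov` argument switches between the two output modes, which is all the interface this script needs.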

Looking here and there if anything existed already, I stumbled upon a good writeup setting some of the bricks up. I would suggest reading this first if you want to get the nitty gritty details.

Reusing parts of it, I made my own script that can spit out either human-readable or lcov-compatible output, works on both Linux and MacOS, and is dockerizable. Here's what I end up with:

codecov.sh spits something like:

Filename                      Regions    Missed Regions     Cover   Functions  Missed Functions  Executed       Lines      Missed Lines     Cover
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SEKRET.swift                       67                20    70.15%          23                 4    82.61%         157                27    82.80%
Extensions.swift                   15                 3    80.00%           2                 0   100.00%          20                 0   100.00%
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
TOTAL                              82                23    71.95%          25                 4    84.00%         177                27    84.75%


codecov.sh lcov spits the corresponding lcov output.

Hurray for automation!

The changes I made to make NSLogger SPM compatible are now in the master branch of the official repo. Update your dependencies ☺️

I know Florent Pillet, and have fun with him as often as I can; he's another member of the tribe of "dinosaurs" still kicking around.

I really like one of the projects that contributed to his renown: NSLogger. Logging has always been a pain in the neck, and this tool provided us all with a way to get it done efficiently and properly. The first commit on the GitHub repo is from 2010, and I have a strong suspicion it's been in production since before that in one form or another.

Anyhoo, I like Florent, I like NSLogger, but I hate what CocoaPods (and to a lesser extent Carthage) does to my projects. It's too brittle, and I strongly dislike things that mess around with the extremely complicated mess that is a pbxproj. They do however serve an admirable purpose: managing dependencies in a way that doesn't require me to use git submodules in every one of my projects.

So, I rarely use NSLogger. SHAME! SHAME! <insert your own meme here>

With the advent of (and subsequent needed updates to) Swift Package Manager, we now have an official way of managing and supporting dependencies, but it has its own quirks that apparently make it hard to "SPM" older projects.

Let's see what we can do about NSLogger.

##### Step 1 : The Project Structure

SPM can't mix Obj-C code and Swift code in a single target. Mixing has always been pretty hacky anyway, with the bridging headers and the weird steps hidden by the toolchain, so we need to make the split explicit:

• One target for the Objective-C code (imaginatively named NSLoggerLibObjC)
• One target for the Swift code (NSLogger) that depends on NSLoggerLibObjC
• One product that builds the Swift target

One of the problems is that all that code is mixed together in the folders, because Xcode doesn't care about file placement. SPM, on the other hand, does.

So, let's use and abuse the path and sources parameters of the target. The first one provides the root directory where SPM looks for files to compile, and the second one lists the files to be compiled.

• LoggerClient.m for NSLoggerLibObjC
• NSLogger.swift for NSLogger
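In Package.swift terms, that structure comes out to something like this sketch (the paths here are illustrative, not the actual layout of the repo):

```swift
// swift-tools-version:5.2
import PackageDescription

let package = Package(
    name: "NSLogger",
    products: [
        // Only the Swift target is exposed as a product
        .library(name: "NSLogger", targets: ["NSLogger"]),
    ],
    targets: [
        // The Objective-C code, isolated in its own target
        .target(
            name: "NSLoggerLibObjC",
            path: "Client/ObjC",
            sources: ["LoggerClient.m"]
        ),
        // The Swift wrapper, depending on the Obj-C target
        .target(
            name: "NSLogger",
            dependencies: ["NSLoggerLibObjC"],
            path: "Client/Swift",
            sources: ["NSLogger.swift"]
        ),
    ]
)
```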

Done. Right?

Not quite.

##### Step 2 : Compilation Quirks

The Obj-C lib requires ARC to be disabled. Easy to do in Xcode, a bit harder in SPM.

We need to pass the -fno-objc-arc flag to the compiler. SPM doesn't make it easy or obvious to do that, for a variety of reasons, but I guess mostly because you shouldn't pass compiler flags at all in an ideal world.

But (especially in 2020), looking at the world, ideal it ain't.

We have to use the (not so aptly named) cSettings option of the target, and use the very scary CSetting.unsafeFlags parameter for that option. Why is it unsafe, you might ask? Weeeeeeeeell. It's companies' usual way of telling you "you're on your own with this one". I'm fine with that.
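On the illustrative target from before, the flag goes in like this (a fragment of the manifest, not the whole thing):

```swift
// The Obj-C target, with ARC disabled via an "unsafe" compiler flag
.target(
    name: "NSLoggerLibObjC",
    path: "Client/ObjC",
    sources: ["LoggerClient.m"],
    cSettings: [
        // LoggerClient.m manages retain/release manually, so no ARC here
        .unsafeFlags(["-fno-objc-arc"])
    ]
)
```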

Another compilation quirk is that Obj-C code relies (like its ancestor, C) on the use of header files to make your code usable as a dependency.

Again, because Xcode and SPM treat the file structure very differently, just saying that every header should be included in the resulting library is a bad idea: the search is recursive and in this particular case, would result in having specific iOS or MacOS (yes, capitalized, because sod that change) test headers exposed as well.

In the end, I had to make the difficult choice of doing something super ugly:

• move the public headers in their own directory
• use symlinks at their old place so as not to break the other parts of the project
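In shell terms, the move-plus-symlink dance looks roughly like this (the paths are hypothetical; the real ones depend on the repo layout):

```shell
# Recreate the idea in a scratch folder (hypothetical paths, the real
# project layout differs)
mkdir -p demo/Client/ObjC/include
echo "// public header" > demo/Client/ObjC/include/LoggerClient.h

# Symlink from the old location to the new one, so anything referencing
# the old path (like the Xcode project) keeps working
ln -s include/LoggerClient.h demo/Client/ObjC/LoggerClient.h
```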

If anyone has a better option that's not heavily more disruptive to the organization of the project, I'm all ears.

##### Step 3 : Final Assembly

So we have the Swift target that depends on the Obj-C one. Fine. But how do we use that dependency?

"Easy" some will exclaim (a bit too rapidly) "you just import the lib in the swift file!"

Yes, but then it breaks the other projects, which, again, we don't want to do. Minimal impact changes. Legacy. Friend.

So we need a preprocessing macro, like, say, SPMBuild, which would indicate we're building with SPM rather than Xcode. Sadly, this doesn't exist out of the box, and given the rate of change of the toolchain, I don't want to rely too heavily on the badly documented Xcode preprocessor macros that would allow me to detect a build through the IDE.

Thankfully, in the same vein as cSettings, we have a swiftSettings parameter to our target, which supports SwiftSetting.define options. Great, so I'll define a macro, and test its existence in the Swift file before importing the Obj-C part of the project.

One last thing I stumbled upon and used despite its shady nature: there is an undocumented decorator for import named @_exported which seems extraneous here, but has some interesting properties: it kinda sorta exposes what you import as part of the current module, flattening the dependency graph.
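Putting the define and the decorator together gives something like this sketch (two fragments: one from the manifest, one from the Swift file; names match the earlier illustrative layout):

```swift
// Package.swift, on the Swift target:
.target(
    name: "NSLogger",
    dependencies: ["NSLoggerLibObjC"],
    swiftSettings: [
        // Only defined when building through SPM, not through Xcode
        .define("SPMBuild")
    ]
)

// NSLogger.swift:
#if SPMBuild
// Re-export the Obj-C module so client code only has to import NSLogger
@_exported import NSLoggerLibObjC
#endif
```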

To be honest, I didn't know about it, it amused me, so I included it.

##### Wrap Up

In order to make it work directly from the repo, rather than locally, I also had to provide a version number. I chose to go with the next patch number instead of aggrandizing myself with a minor or even a major version.

Hopefully, these changes don't impact the current project at all, and they allow me to use it in a way I like better (and one that is officially supported). I hope Florent will not murder me for all of that. He might even decide to accept my pull request. We'll see.

In the meantime, you can find all the changes above and a usable SPM package in my fork.