It’s been a recurrent topic in my life lately that I am kind of dumb to not take any web development gigs, because that’s obviously where the money is, you dummy.
I also spend a lot of time with students who have trouble seeing the actual point of native development, and that is something I wonder about. While I am fairly sure all of them think I hate the web, possibly because I am an old fossil, I actually have a couple of reasons for not doing Web Whatever.0 things and focusing instead on backend stuff and native apps. This is a short version, which I might revisit at a later point, but please, if you comment or send me messages, bear in mind that this is an opinion born out of many, many projects, and that I did do some web stuff with Angular and Vue.js and the rest of the current fads.
The Weird Competition
I come from a print and video background. I used to write plugins for XPress and InDesign, and build tools for DVD and movie productions. The key common issue between those two fields is that the constraints we work under are about the placement of pixels. If you print outside of the page, or if your effects show up outside of the screen, they may be awesomely coded, but they are useless. In both cases, the fewer layers there were between the pixel buffer and the code you were writing, the less chance there was of mishaps. It came with a lot of weird stuff to worry about (especially in movies, but you’d be surprised at the technical difficulty of print), which in turn gave me a totally biased approach to coding: as many levels of abstraction as you want in the unseen stuff, but performance and control are paramount whenever something is visible. My short stint in videogames confirmed that, but videogame programming is and has always been a special case.
My issue with the current state of web development is that it goes against that creed pretty heavily. Short sidenote: I had to compile WebKit over the last few days and it’s… well, it’s hefty. I’ll come back to that point later.
So, in order to put pixels on the page – which, you know, is a print issue, ultimately – on any given website there are three technologies competing for supremacy.
First and foremost is HTML. It’s old, kind of clunky by modern standards, but it’s a very intuitive way of doing things: you mark up your text to give the piece of code that will render it hints as to how you want it to appear. You bold or italicize it, you give it paragraphs and headers and a general structure, and the browser does what typesetters have always done: it tries to respect typography rules and fit the text you provide as best as it can in its assigned rectangle. Hard problems, but we’ve been at them for centuries, so we have a good handle on them.
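As a minimal sketch of what that looks like (the element names are standard HTML; the text is made up):

```html
<!-- A heading, a paragraph, and some inline emphasis: the browser
     fits all of this into its rectangle following typographic rules. -->
<article>
  <h1>On Typesetting</h1>
  <p>Some words are <strong>bold</strong>, some are <em>italicized</em>,
     and the rest flow into paragraphs on their own.</p>
</article>
```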
But, obviously, that’s not enough for people who like beautiful stuff. So, here comes CSS. It’s a lot less intuitive, but it kind of follows the same pattern, as a sort of annotation layer. You mark a piece of your text with an identifier that points to a slew of style attributes, rather than putting it all inline and making the whole thing illegible. At this point, it’s still fairly straightforward. You say this thing should use the “main content, left column” style, and in the CSS part, you define that style to be whatever font, size, color, alignment, and the rest. It abstracts the concepts a bit more, but that’s nothing we’re unaccustomed to. We factor out the styles so that we can apply them everywhere we need to, and we make the HTML part easier to edit if we need to. Where it gets weird is that, as opposed to HTML, CSS has never really behaved like a standard in practice. A lot of it has, but in the same way regular street-level fashion is standard: if everyone wears ski suits, which look admittedly ridiculous, you’ll be the odd one out if you ski in your jeans. And then there’s the CSS stuff that looks different in different browsers, or isn’t even supported in some of them.
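A tiny sketch of that factoring, with a made-up class name standing in for the “main content, left column” style:

```html
<!-- The markup only names a style; the stylesheet defines it. -->
<p class="main-left">Column text goes here.</p>

<style>
  /* Font, size, color, and alignment, factored out of the
     markup so the same style can be reused everywhere. */
  .main-left {
    font-family: Georgia, serif;
    font-size: 16px;
    color: #333;
    text-align: left;
  }
</style>
```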
Add to that the fact that it actually competes with HTML, and you already have something that weirds me out. “What do you mean, competes?” Weeeeeeell. If the CSS says that a style has a bold font in it, and you have only some words in bold in your text, yeah, those two don’t say the same thing. Which one should be applied? Turns out, it’s CSS, most of the time, but not always. To me, this already feels like a bug, although I do understand it’s not one.
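A contrived example of that competition: the markup asks for bold, the author stylesheet says the opposite, and the stylesheet wins, so the marked-up words render at normal weight:

```html
<!-- The <b> tags say "bold"; the rule below overrides the
     browser's default styling for <b>, and CSS wins. -->
<p>This is <b>very important</b>, honest.</p>

<style>
  b { font-weight: normal; }
</style>
```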
Anyways, the whole point of JS is to go back and change the structure of the HTML document, and by extension the CSS. Yep, that’s right: its entire job is to assume that the two things that were loaded before it are not to be held as anything but a guideline. A page can have a hugely complex HTML/CSS structure, and a one-line JS script that trashes all of it.
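A contrived sketch of that asymmetry: a carefully structured document on one side, and the single script line that throws it all away on the other:

```html
<!-- A carefully structured, carefully styled document... -->
<article id="content">
  <h1>A Long, Carefully Styled Article</h1>
  <p>Thousands of words of marked-up, styled text.</p>
</article>

<script>
  // ...and the one line that discards all of it.
  document.body.innerHTML = "<marquee>lol</marquee>";
</script>
```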
The usual way of “doing web” is to have the structure and most things in HTML/CSS, then the interactive stuff in JS, but nothing actually enforces any of it. It’s a gentleman’s-agreement sort of thing, and there is absolutely nothing you cannot do in terms of algorithmic carnage. Of course, that’s pretty much where all the security, privacy, and performance issues stem from. Given that JS can silently and without any restriction change what the page looks like and how it behaves, it can do stuff like capturing input and sending it to a different server, injecting unwanted stuff into your reading, etc. Again, silently and with total impunity.
From a consumer point of view, this is bad enough, but from my coder’s point of view it leads me to question why I would actually even try to structure my code properly and efficiently, given that at almost any point after the carefully crafted words I tried to put on a page, they can be warped and reinterpreted in uncountable ways. All it takes is a few innocuous words somewhere downstream from my beautiful HTML to change everything. No wonder we get sites that take forever to load and that do quite unethical stuff to anyone’s content.
The Whole Dependency Debacle
Of course, when you realize that it’s probably impossible for a single brain to understand fully how and where pixels are actually displayed, you go where it’s safe and somewhat organized: frameworks and libraries that were vouched for by, preferably, large organizations you admire the websites of.
But before we get there, let’s get back for a second to WebKit. It’s a beautiful piece of software engineering, as are its competitors and various forks. The whole point of WebKit is, once again, to provide a stable base that is somewhat standard and renders pixels in a predictable way. Its job is to gobble up all that HTML/CSS/JS and turn it into something you can print (to screen or to paper or to PDF or whatever; it’s a shortcut to call it “printing”, but I’ve been “printing” to screen for two decades, so spare me the argument). So you have three language interpreters rolled into one, trying to solve the problems mentioned previously, so that people can actually read stuff like they would in the beautifully illuminated bibles of the Middle Ages. Same problem, different century: take text and images and fit them into a (kind of) white rectangle.
The whole source code of WebKit-GTK takes 4 hours to compile on a single processor on my server. No, it’s not a Raspberry Pi; it’s a somewhat old 2.4 GHz PowerEdge. CLOC has this to say:
Next time you marvel at how simple it is to write a post that looks kind of nice, just remember it takes a million and a half lines of code to make it work. People who know how to code in something other than HTML/CSS/JS had to write that code. You are utterly dependent on a monstrous hundred megabytes of library to do your “simple” thing. And that’s without whatever is powering your backend or your CMS.
So, even a bare webpage without much code in it already comes at a significant software engineering cost, which in turn comes at a non-zero cost in terms of electrical consumption (battery) and potential bugs. I don’t remember where the figure comes from, but I was once told that production code averages a bug every 10k lines. I’ll let that sink in.
On top of that, since – again – most people can’t actually fully understand what the output of the HTML/CSS/JS code will be, a nice site has to inject a bunch of things to somewhat standardize the methods of getting pixels on the screen. Bootstrap, Angular, VueJS, and React seem to be the most prominent ones (or at least the ones I hear most about), and they come with their hundreds of thousands of lines of code. When you take a step back and look at why those things emerged and why they are used, you come to a weird conclusion: because the HTML/CSS/JS stack is too complicated and chaotic, you simplify it by adding more code on top of it. Of course, as a web developer, you don’t see those millions of lines of code needed for your stuff to be displayed in a single app on any given computer, which is the point. You hide complexity that stemmed from wanting to expand the capabilities of the foundations by adding another extremely complex layer.
If there is a bug in my app, I have ways to fix it, or at least work around it. In a website? What recourse do I have? Which one of the 20-odd layers is the bug in? Can I even fix it? Most probably, I won’t even try; I will just bake the workaround into some kind of library, which in turn will add another layer for my apps, and potentially for others that use my library as a dependency.
Does it sound alarmist? Probably. For us dinosaurs who had to scrounge for bytes of memory and hard disk space, it’s hard to accept that we should build a nice house on top of a volcano, itself on a fault line, situated on a planet which can be struck randomly by asteroids or “lolbro” aliens, orbiting a star that might go supernova at any given time. But since some randos told us none of these things would happen, we’re fine, right?
OK, that was alarmist. It’s not that the web is evil, or that the tech underpinning it isn’t really clever and truly admirable; it’s that very few web developers seem to realize any of it, and that is what tickles me wrong. And they keep building workaround upon workaround, because they never doubted the foundation was solid.
The Future Is Already Happening
There have been recent pushes (most notably by Google) to reduce the insanity of those humongous pages that need 150 dependencies and the stars to align just so, and ad blockers and other browser extensions now identify and weed out the cruft and the security concerns. There are various pushes to re-simplify the World Wide Web. With the rise of mobile, webpages cannot afford to be 20 megabytes of scripts and layout hints for a 5000-word article. Bandwidth and battery are again restricted to what we had in the 90s. Facebook and Twitter have gone back and forth between a native app and an embedded web app every other Sunday for however long.
Ultimately the problem is that we don’t actually know what we want the web to be. In many ways, it’s a natural extension of the printed medium: a rectangular canvas that we fill with text and images. But more and more, as we try to blend it with multimedia content (video mostly), there is also a push to become something else, something more. Soon it will be 3D, and in AR/VR. What good does HTML (which was designed for text) do in this context? Why would we continue to try to square the circle by using web browsers (again, meant for displaying text) to do what operating systems have done since personal computing became a Thing?
I mean… When you think about it, that’s also how the OS evolved: at first it was used to display (and edit) text, then we had programs on top of it that could do animations and images and whatnot. We programmed those applications in languages and with libraries that allowed us not to know intimately how the hardware under the hood worked. We are currently trying to do the same thing inside a browser, which is all fine and logical. But the way we did it with operating systems was to provide as little abstraction as possible between the user and the silicon. Because we’re “close to the metal”, we can’t dispense with thinking about all the things that are rooted in meatspace. Is the user clicking or tapping? Are they using a crappy $200 laptop or a $10,000 one? How much RAM do they have? Are they running on battery? Do they have a full HD screen or a 640×480 one? The answers to these questions factor into how we write the code. Some say the web frees us of these considerations, because we’re finally truly cross-platform. In my view, it’s just hidden under the (admittedly classy) rug. Someone will still have to invent clever and efficient ways for your web pages to load in a timely fashion and perform correctly. That’s quite a risk.
If you start thinking about the fact that page rendering (as in web page) will be less and less significant as we move towards a less bookish and more immersive kind of media consumption (does your site load on my starfish-shaped virtual screen, I wonder…), you’ll have a better handle on why I object when people call me a web hater. I just think the HTML/CSS/JS stack is a transitional technology that is growing obsolete by the minute, built – as many other transitional technologies were – on shaky, over-hyped workarounds. I think what we can do with it is awesome. And I also think it ultimately hasn’t got anything to do with the browser technology.
Feel free to disagree and present counter-points to me. But I hope I made it clear that these aren’t the ramblings of a man who doesn’t want to learn something new and is sticking to old tech for fear of the future. I just don’t want to invest that much time and effort in a piece of tech that isn’t that forward-looking after all, and is built on stuff that feels a little too brittle for my tastes. I’d rather skip it and start coding for AR/VR directly.