A Return To The Island Of Myst

Every time a new version of Blender came out, I kicked its tires and made a Thing™. This time around, with the biggest update in a long, long time, I found myself swamped, unable to find time to spend with one of my favorite pieces of software. So I dredged up my own white whale and started working again, a few minutes here and there, on my hi-res Myst Island.

Why the... why?

I write software for a living. I have done so for two decades straight. I'm quite decent at it. But, see, the fun part is, and has always been, learning new things and challenging myself. And there is one thing I know I will always suck at: drawing. I just don't have the knack for it. Something's not wired properly in my brain, and the image I have in my head always ends up looking like something a very ill person scribbled before passing out.

Soooooooo. I know computers, right? And I have images in my head, yea? Why not use the computer to do some drawing?

And it works, more or less. I'm definitely no genius with the tool, but I end up doing things that at least look like something.

Case in point:

The library, from the top of the circuit breaker

Pieces are still missing (spot the tree placeholder), and some stuff definitely needs some tweaking (hello, supernova in the atrium), but hey, a few hours here or there in a year...

Myst? That old thing?

I wasn't in the biz' back in 1993 when Myst came out. I had to wait quite a few years before I stumbled upon this weird gem. You see, Myst was never a technical marvel, or a genre-defining game. It's its own thing.

But it's a game that resonates on a personal level for a lot of people at wits' end about the world. You're trapped on an island full of wonderful things, and you have nothing you must do. The plot can be as simple as this: you find yourself on an island full of books and memories and mechanisms. You enjoy the simple life there. The end. Or you can just play with the stuff forever, or escape, or try to solve mysteries, or bring justice for people who have been wronged. There's no time limit. No order to do things. Nothing dangerous.

For some people, it's a boring game. It lacks "action" and "tension". For me, it's comforting.

The gory details

At this point in time, most of the island has been reconstructed. It lacks a bit of vegetation, and a lot of texturing. I'm not happy with the lighting, nor with the lack of atmosphere.

It sits at 4.5 billion polygons. It's a 400MB Blender file (which is not small). And it renders a frame in roughly 1h at 1920x1080. But I can go to 4K (about 3h) or 8K (about 5h) on a whim. Sorta.

It's got PBR textures and subsurface scattering. 95% of the textures are procedural rather than photographic, for enhance-and-zoom glory. It's got details so fine you can zoom in on them almost infinitely. It's got 20 or so varieties of plants. And millions of grass blades. And it's a good way to pass an hour away from the worries and the stress. It's kind of home.

The pool and the library

Back To School

With 10.15, my old and dependable workhorse of a machine - a souped-up Mac Pro "Cheesegrater" from 2010, upgraded in every possible way - will have to retire. Catalina, SwiftUI, and I suspect most of the ML stuff, now require AVX instructions in the processor, and I couldn't find any replacement CPU that would slot into the socket.

I don't consider it to be "planned obsolescence" or anything of that ilk, given that this computer has been my home office's principal workstation - and game station, mostly for Kerbal Space Program - for almost a decade. It will live on as my test Linux server, and I will slot a bunch of cheap video cards into it to run my ML farm, so it will probably see another decade of service.

However, the question of a replacement arose. You see, I'm an avid Blender enthusiast, and I often run ML stuff, which nowadays means I need a decent video card that I can upgrade. The new Mac Pro would be perfect for that, but it's on the expensive side, given that I mostly use the high-end capacity of the cards for personal projects or for self-education.

I settled on a 2018 Mac mini with tons of RAM and a small 512GB internal drive. The 16TB of disks I had now live in a USB 3.1 external bay, and the video card will reside in an eGPU box. That way, if and when I need to change the Mac again, all I have to do is swap the Mac mini... hopefully.

Since the sound setup is of some importance to me (my bird/JBL setup has been with me forever), and I sometimes need to plug in old USB/FireWire stuff, I dusted off my Belkin Express Dock and plugged everything into it.

The thing is, every migration is an opportunity for change. I've been very satisfied with my OmniFocus/Tyme combo for task management, but the thing I've always wanted to do, and never could for lack of time, was manage my GitLab issues outside of a web browser. I've been working on a couple of projects this summer, with lots of issues on the board, and I have old issues in old projects that I keep finding by accident.

As far as I can tell, there is no reliable way to sync that kind of stuff in an offline fashion. This trend has been going on for a long time, and color me a fossil, but I don't live on the web. I like having a Twitter client that still works offline; I like managing my tasks and timers and whatnot offline if I need to, with "the web" coming in only as a sync service.

This migration (with its cavalcade of old software refusing to work, or working poorly under new management) will force me to write some software to bridge that gap (again). The web is cool and all, but I need unobtrusive, integrated, and performant tools to do my work. 78 open tabs with IFTTT/Zapier/... integrations to copy data from one service to another won't cut it.
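For the curious, the bridge itself doesn't have to be rocket science. Here's a minimal sketch of pulling a project's issues down for offline caching through GitLab's REST API; the host, token, and persistence step are placeholders, and error handling is left out:

import Foundation

// A few fields from GitLab's issue JSON; the real payload has many more.
struct Issue: Codable {
    let iid: Int
    let title: String
    let state: String
}

// Fetch the issues of one project so they can be cached locally.
func fetchIssues(projectID: Int, completion: @escaping ([Issue]) -> Void) {
    // gitlab.example.com and <my-token> are placeholders, not real values.
    var request = URLRequest(url: URL(string:
        "https://gitlab.example.com/api/v4/projects/\(projectID)/issues")!)
    request.addValue("<my-token>", forHTTPHeaderField: "PRIVATE-TOKEN")
    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let issues = try? JSONDecoder().decode([Issue].self, from: data)
        else { return }
        completion(issues) // persist to disk here for offline use
    }.resume()
}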

The Engineer's Triangle

Fast, cheap, or good, pick two
(an unknown genius)

It's a well-known mantra in many fields, including - believe it or not - the project manager's handbook. Except they don't like such trivial terms, so they use schedule, cost, and scope instead.

So, why do a lot of developers feel like this doesn't apply to their work? Is it because, with the wonders of CI/CD, fast and cheap are a given, and good will eventually happen on its own? But enough ranting; let's look at the innards of computers to see why you can't write a program that ignores the triangle either.

Fast

The performance of our CPUs has more or less plateaued. We can expand the number of cores, but by and large, a single process will no longer be done in half the time two years from now if the developer doesn't spend some time honing its performance. GPUs have a little more headroom, but in very specific areas, which are intrinsically linked to the number of cores. And the user won't (or maybe even can't) wait a few minutes for a process anymore. Gotta shave those milliseconds, friend.
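To make that concrete, here's a small sketch of the one avenue left: spreading the work across cores. The workload is made up for illustration:

import Foundation

let n = 1_000_000
var results = [Double](repeating: 0, count: n)

// Serial: bound by single-core speed, which has mostly stopped improving.
for i in 0..<n { results[i] = sqrt(Double(i)) }

// Parallel: GCD spreads the same iterations across the available cores.
results.withUnsafeMutableBufferPointer { buffer in
    DispatchQueue.concurrentPerform(iterations: n) { i in
        buffer[i] = sqrt(Double(i))
    }
}

The catch, of course, is that the speedup only comes if the work actually splits cleanly across cores, which is precisely the honing the developer has to do.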

Cheap

In CS terms, the cost of a program is about the resources it uses. Does running your program prevent any other process from doing anything at the same time? Does it use 4 GB of RAM just to sort the keys of a JSON file? Does it occupy 1TB on the drive? Does it max out the number of threads, open files, sockets, and ports that are available? Performance ain't just measured in units of time.
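As an illustration (the file name is made up), the difference between a program that's cheap on memory and one that isn't can be as simple as how it reads a file:

import Foundation

// Hypothetical big file; Data(contentsOf:) would load it into RAM whole.
guard let handle = FileHandle(forReadingAtPath: "/tmp/huge.log") else { exit(1) }

// Cheap alternative: read in 64 KB chunks, so memory use stays flat.
var total: UInt64 = 0
while true {
    let chunk = handle.readData(ofLength: 64 * 1024)
    if chunk.isEmpty { break }
    total += UInt64(chunk.count)
}
handle.closeFile()
print("read \(total) bytes without ever holding more than 64 KB")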

Good

This is about correctness and completeness. Does your software gracefully handle all the edge cases? Does it crash under load? Does it destroy valuable user data? Does it succumb to a poor rounding error, or a size overflow? Is it safe?
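Swift makes those last two failure modes easy to demonstrate:

// Rounding: binary floating point can't represent 0.1 exactly.
print(0.1 + 0.2 == 0.3)       // false
print(0.1 + 0.2)              // 0.30000000000000004

// Overflow: a plain + traps at runtime, &+ silently wraps around.
let x: Int8 = 127
print(x &+ 1)                 // -128, probably not what you wanted

// The "good" version checks explicitly instead of crashing or wrapping.
let (value, overflow) = x.addingReportingOverflow(1)
print(value, overflow)        // -128 true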

Pick the right tool for the right job

And so, it's a very, very hard thing to get all three in a finite amount of time, especially on the kind of timescales we work under. Sometimes we're lucky to get even one of them.

It's important to identify as soon as possible the cases you want to pursue:

  • Cheap and fast: almost nothing except maybe tools for perfectly mastered workflows (where the edge cases and the rounding errors are left to the user to worry about)
  • Fast and good: games, machine learning, scientific stuff
  • Good and cheap: pro tools (dev tools, design tools, 3d modelers, etc) where the user is informed enough to wait for a good result

[BETA] Fun With Combine

I'm an old fart, that's not in any way debatable. But being an old fart, I have done old things, like implementing a databus-like system in a program before. So when I saw Combine, I thought I'd have fun with re-implementing a databus with it.

First things first

Why would I need a databus?

If you've done some complex mobile programming, you will probably have passed notifications around to signal stuff from one leaf of your logic tree to another: something like a network task that went into the background and signalled it was done, even though the view controller that spawned it went away long ago.

Databuses solve that problem, in a way. You have a stream of "stuff", with multiple listeners that want to react to, say, the network going down or up, the user changing a crucial global setting, etc. And you also have multiple publishers that generate those events.

That's why we used to use notifications. Once fired, every observer would receive it, independently of their place in the logic (or visual) tree.
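For reference, the old pattern looked something like this, stringly-typed name and all:

import Foundation

let name = Notification.Name("NetworkWentDown")

// Any observer, anywhere in the tree, gets the broadcast.
let token = NotificationCenter.default.addObserver(
    forName: name, object: nil, queue: .main) { _ in
    print("network went down, better tell the user")
}

// Somewhere else entirely, the publisher fires it.
NotificationCenter.default.post(name: name, object: nil)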

The goal

I wanted to have a databus that could do two things:

  • allow someone to subscribe to certain events or all of them
  • allow the events to be replayed (for debug, log, or recovery purposes)

I also decided I wanted to have operators that reminded me of C++ for some reason.

The base

Of course, for replay purposes you need a buffer, and for a buffer you need a type (this is Swift, after all):

import Combine

// Anything that travels on the bus conforms to this empty marker protocol.
public protocol Event {
}

public final class EventBus {
    // The replay buffer: holds the most recent events.
    fileprivate var eventBuffer : [Event] = []
    // The actual Combine pipe: passes Events around, never fails.
    fileprivate var eventStream = PassthroughSubject<Event,Never>()

PassthroughSubject allows me to avoid implementing my own Publisher, and it does what it says on the tin: it passes Event objects around, and fails Never.

Now, because I want to replay but not remember everything (old fart, remember), I decided to impose a maximum length to the replay buffer.

    // Maximum number of events kept around for replay; shrinking it
    // drops the oldest entries.
    public var bufferLength = 5 {
        didSet {
            truncate()
        }
    }

    // Drop events from the front (the oldest) until the buffer fits.
    fileprivate func truncate() {
        while eventBuffer.count > bufferLength {
            eventBuffer.remove(at: 0)
        }
    }

It's a standard FIFO: oldest in front, newest at the back. I just pile them in, and truncate when necessary.

Replaying is fairly easy: you just pick the last x elements of a certain type of Event and process them again. The only wrinkle is reversing twice: once for the cutoff, and once for the replay itself. But since it's going to be seldom used, I figured it was not a big deal.

    public func replay<T>(count: UInt = UInt.max, handler: @escaping (T) -> Void) {
        var b = [T]()
        // Walk backwards from the newest event, collecting up to `count`
        // events of the requested type.
        for e in eventBuffer.reversed() {
            if b.count >= count { break }
            if let e = e as? T {
                b.append(e)
            }
        }
        // Reverse again so the handler sees them oldest first.
        for e in b.reversed() {
            handler(e)
        }
    }

Yes, I could have done it more efficiently, but part of the audience is new to Swift (or to programming in general), and, again, it's for demonstration purposes.
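Using it looks like this, EventSubType being whatever concrete Event type you care about (the same one as in the example further down):

// Re-process the three most recent EventSubType events, oldest first.
bus.replay(count: 3) { (e: EventSubType) in
    print("replayed \(e)")
}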

Sending an event

That's the easy part: you just make a new event and send it. It has to conform to the Event protocol, so there's that. Oh, and I added the << operator.

    public func send(_ event: Event) {
        // Remember the event for replay, then push it down the stream.
        eventBuffer.append(event)
        truncate()
        eventStream.send(event)
    }
    // Syntactic sugar, so that `bus << event` just works.
    static public func << (_ bus: EventBus, _ event: Event) {
        bus.send(event)
    }

From now on, I can do bus << myEvent and the event is propagated.

Receiving an event

I wanted to be able to filter at subscription time, so I used the compactMap operator, which works exactly like its Array counterpart: if the transformation result is nil, it's not included in the output. Oh, and I added the >> operator.

    // Retain the cancellables: if sink()'s return value is discarded,
    // the subscription is torn down immediately.
    fileprivate var subscriptions = Set<AnyCancellable>()

    public func subscribe<T:Event>(_ handler: @escaping (T) -> Void) {
        eventStream.compactMap { $0 as? T }
            .sink(receiveValue: handler)
            .store(in: &subscriptions)
    }
    static public func >><T:Event> (_ bus: EventBus, handler: @escaping (T) -> Void) {
        bus.subscribe(handler)
    }

The idea is that you define what kind of event you want from the block's input, and Swift should (hopefully) infer what to do.

I can now write something like

bus >> { (e : EventSubType) in
    print("We haz receifed \(e)")
}

EventSubType implements the Event protocol, and the generic type is correctly inferred.

The End (?)

It was actually super simple to write and test (with very high volumes, too), but I'm guessing there would be memory retention issues, as I can't figure out a way to properly unsubscribe from the bus, especially if you have self references in the block.

Then again, it's a beta and this is a toy sample. I will need to dig deeper into the memory management side, but at first glance, it looks like the lifetime of the blocks is exactly the lifetime of the bus, which makes it impractical in real cases. Fun stuff, though.
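One possible escape hatch, assuming the release keeps AnyCancellable as it is in the beta, would be a variant of subscribe that hands the cancellable back to the caller instead of the bus hoarding it:

    // A sketch, not battle-tested: the caller owns the token, and can
    // cancel() it (or just let it deinit) to unsubscribe from the bus.
    public func subscription<T: Event>(_ handler: @escaping (T) -> Void) -> AnyCancellable {
        return eventStream.compactMap { $0 as? T }.sink(receiveValue: handler)
    }

Whether that plays nicely with self references in the block is exactly the kind of thing I'll need to test.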

[Security] Tracking via Image Metadata

From Edin Jusupovic

Facebook is embedding tracking data inside photos you download

Of course they do.