Procedure All The Things! πŸ₯³

Blender 3.1 is out, and I made a thing, and that makes me so happy. (πŸ‘β‰–β€Ώβ€Ώβ‰–)πŸ‘

The laundry list of enhancements and features is very long, but two are especially important to me right now:

  • support for Metal acceleration on M1 (I can finally make the fans on my new laptop go)
  • procedural geometry nodes

Now, if you read what I write every once in a while, you'll know that I'm a big fan of anything that automates or otherwise eases manual tasks. I'm a developer, after all.

Back on the old blog, I used to rave about the superior procedural texture system in Blender, which mostly freed me from images and UV maps. Granted, I'm no artist, and I tend towards mechanical/naturalistic scenes where procedural textures can be used. It's just so great to set a few parameters and see a decent result without having to spend hours in a pixel editor. Plus, it works at any resolution. So, there's that.

Back when Blender got its revamp and started getting some attention again, the white whale on the forums and in various conversations was "when will we be able to node all the things?". It is a very hard problem to solve, and the Blender community has been at it like maniacs.

And... geometry nodes are finally here. I can finally replace most of my particle systems with a nice, clean, geometry node.

On a lark, I decided to revisit my huge island project, a re-imagining of the Island of Myst that used to take 5h/frame to render because of the millions of leaves, blades of grass, and rocks that the particle system generated. It was very hard to work with because the RAM usage would explode, and renders would sometimes fail. Without culling and boolean operations dealing with the field of view, the scene used in the vicinity of 24G, making it all but impossible to render on a GPU that I can afford.

After a little experimenting to get my feet wet, I managed to visually get roughly the same result on less than 2.2G of RAM. That freed me to add... more geometry and more complexity πŸ€“

And because of the M1 architecture, the ceiling of VRAM usage is very high anyways, so I let it rip and got even more geometry in.

There are somewhere between two and two and a half billion polygons that I know of. Probably more.

Here is the original I based my scene on:

From the Wikipedia page

The end result is this (warning -- 4K image):

Myst Library

It uses 3.7G of VRAM (😳), takes about an hour and a half to render on a laptop (πŸ€ͺ), it's not even as complex as I could make it (πŸ€“), it's all geometry, no bump map tricks (🀩), and I could finally check that there are indeed fans in my laptop (πŸ™ƒ).

I could do better on the render side: the complexity of the lighting means a lot of artifacts, and, of course, as I said before, I'm no artist, so some of the geometry is a bit iffy. Plus, it's only 1024 samples, because I wanted to be fast-ish.

But Blender continues to impress, and who knows? Maybe they'll add a Do-What-I-Want-Not-What-I-Type button in a future release that will magically enhance my skills.


[Dev Diaries] ELIZA

Back in the olden days...

Before the (oh so annoying) chatbots, before conversational machine-learning, before all of that, there was... ELIZA.

It is a weird little part of computer history that nerds like me enjoy immensely, but that is fairly unknown to the public.

If I ask random people when they think chatting with a bot became a Thing, they tend to respond "the 90s" or later (usually roughly ten years after they were born, for weird psychological reasons).

But back in the 60s, the Turing Test was a big thing indeed. Of course, nowadays, we know that this test, as it was envisioned, isn't that difficult, but back then it was total fiction.

Enter Joseph Weizenbaum, working at MIT in the mid-60s, who decided to simplify the problem of random conversation by using a Jedi mind trick: the program would be a stern doctor, not trying to ingratiate itself with the user. We talk to that kind of terse, no-nonsense person often enough that it could reasonably be assumed it wouldn't faze a normal person.

It's not exactly amicable, but it was convincing enough at the time for people to project some personality onto it. It became a real Frankenstein story: Weizenbaum was trying to show how stupid the program, and the whole concept of man-machine conversation, was, but users kept talking to it, sometimes even confiding in it as they would in a doctor. And the more Weizenbaum tried to show that it was a useless piece of junk with the same amount of intelligence as your toaster, the more people became convinced this was going to revolutionize the psychiatry world.

Weizenbaum even felt compelled to write a book about the limitations of computing, and the capacity of the human brain to anthropomorphise the things it interacts with, as if to say that to most people, everything is partly human-like or has human-analogue intentions.

He is considered to be one of the fathers of artificial intelligence, despite his attempts at explaining to everyone who would listen that it was somewhat of a contradiction in terms.

Design

ELIZA was written in SLIP, a language that worked as an extension of Fortran and later ALGOL, and was designed to facilitate the use of nested lists (for instance (x1,x2,(y1,y2,y3),x3,x4)), which was something of a hard-ish thing to do back in the day.

By modern standards, the program itself is fairly simplistic:

  • the user types an input
  • the input is parsed for "keywords" that ELIZA knows about (eg I am, computer, I believe I, etc), which are ranked more or less arbitrarily
  • depending on that "keyphrase", a variety of options are available like I don't understand that or Do computers frighten you?

Where ELIZA goes further than a standard decision tree is that it has access to references. It tries to take parts of the input and mix them with its answer, for example: I am X -> Why are you X?

It does that through something that would later become regular expression groups, then transforms certain words or expressions into their respective counterparts.

For instance, something like I am like my father would be matched to ("I am ", "like my father"), then the response would be ("Why are you X?", "like my father"), then transformed to ("Why are you X?", "like your father"), then finally assembled into Why are you like your father?

Individually, both these steps are simple decompositions and substitutions. Using sed and regular expressions, we would use something like

$ sed -n "s/I am \(.*\)/Why are you \1?/p"
I am like my father
Why are you like my father?
$ echo "I am like my father" | sed -n "s/I am \(.*\)/Why are you \1?/p" | sed -n "s/my/your/p"
Why are you like your father?

Of course, ELIZA has a long list of my/your, me/you, ..., transformations, and multiple possibilities for each keyword, which, with a dash of randomness, allows the program to respond differently if you say the same thing twice.
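To make this concrete, here is a toy sketch of the whole mechanism in Swift (the language my port eventually landed in). The two rules and the reflection table are hypothetical stand-ins, not Weizenbaum's actual script:

import Foundation

// keyword -> possible response templates; %@ is where the reflected fragment goes
let rules: [(keyword: String, responses: [String])] = [
    ("i am ", ["Why are you %@?", "How long have you been %@?"]),
    ("computer", ["Do computers frighten you?", "Why do you mention computers?"])
]
// pronoun swaps, so "my father" comes back as "your father"
let reflections = ["my": "your", "your": "my", "i": "you", "me": "you", "am": "are"]

func reflect(_ fragment: String) -> String {
    return fragment.split(separator: " ")
        .map { reflections[$0.lowercased()] ?? String($0) }
        .joined(separator: " ")
}

func respond(to input: String) -> String {
    let lowered = input.lowercased()
    for rule in rules {
        guard let range = lowered.range(of: rule.keyword) else { continue }
        // pick one of the templates at random, so saying the same thing
        // twice doesn't get you the same answer twice
        let template = rule.responses.randomElement()!
        guard template.contains("%@") else { return template }
        // everything after the keyword is the "group" we reflect and reinject
        return String(format: template, reflect(String(lowered[range.upperBound...])))
    }
    return "Please go on."
}

print(respond(to: "I am like my father")) // e.g. "Why are you like your father?"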

But all in all, that's it. ELIZA is a very, very simple program, from which emerges complex behavior that a lot of people back then found spookily humanoid.

Taking a detour through (gasp) JS

One of the available "modern" implementations of ELIZA is in Javascript, as are most things. Now, those who know me figure out fairly quickly that I have very little love for that language. But having a distaste for it doesn't mean I don't need to write code in it every now and again, and I had heard so much about the bafflement people feel when using regular expressions in JS that I had to try it myself. After all, two birds, one stone, etc... Learn a feature of JS I do not know, and resurrect an old friend.

As I said before, regular expressions (or regexs, or regexps) are relatively easy to understand, but a lot of people find them difficult to write. I'll just give you a couple of simple examples to get in the mood:

[A-Za-z]+;[A-Za-z]+

This will match any text that has 2 words (whatever the case of the letters) separated by a semicolon. Note the distinction between uppercase and lowercase.
Basically, it says that I want to find a series of letters of length at least 1 (+), followed by ;, followed by another series of letters of length at least 1.

.*ish

Point (.) is a special character that means "any character", and * means "0 or more", so here I want to find anything ending in "ish"

Now, when you do search and replace (as is the case with ELIZA), or at least search and extract, you might want to know what is in this .* or [A-Za-z]+. To do that, you use groups:

(.*)ish

This will match the same strings of letters, but by putting it in parenthesiseseseseseseseseses (parenthesiiiiiiiiiiiii? damn. anyway), you instruct the program to remember it. It is then stored in variables with the very imaginative names of \1, \2, etc...

So in the above case, if I apply that regexp to "easyish", \1 will contain "easy".

Now, because you have all these special characters like the point and parentheses and whatnot, you need to differentiate between when you mean the actual "." and when you mean "any character". We escape those special characters with a backslash: \.

([A-Za-z]+)\.([A-Za-z]+)

This will match any two words with upper and lower case letters joined by a dot (and not any character, as would be the case if I didn't use \), and remember them in \1 and \2
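In Swift, for instance (the language the rest of this blog lives in), retrieving those groups with Foundation's NSRegularExpression looks something like this quick sketch:

import Foundation

// Two words joined by a literal dot; each word lands in its own group.
// (The #"..."# raw string keeps the backslash readable; more on escaping below.)
let regex = try! NSRegularExpression(pattern: #"([A-Za-z]+)\.([A-Za-z]+)"#)
let text = "easy.peasy"
let range = NSRange(text.startIndex..., in: text)
if let match = regex.firstMatch(in: text, range: range),
   let first = Range(match.range(at: 1), in: text),
   let second = Range(match.range(at: 2), in: text) {
    print(text[first], text[second]) // easy peasy
}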

Of course, there are a lot of crazy special cases and special characters, so, yes, regexps can be really hard to build. For reference, the Internet found me a regexp that looks for email addresses:

(?:[a-z0-9!#$%&'*+/=?^_`{|}~-]+(?:\.[a-z0-9!#$%&'*+/=?^_`{|}~-]+)*|"(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21\x23-\x5b\x5d-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])*")@(?:(?:[a-z0-9](?:[a-z0-9-]*[a-z0-9])?\.)+[a-z0-9](?:[a-z0-9-]*[a-z0-9])?|\[(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?|[a-z0-9-]*[a-z0-9]:(?:[\x01-\x08\x0b\x0c\x0e-\x1f\x21-\x5a\x53-\x7f]|\\[\x01-\x09\x0b\x0c\x0e-\x7f])+)\])

Yea... Moving on.

Now, let's talk about Javascript's implementation of regular expressions. Spoiler alert: it's weird if you have used regexps in any language other than Perl. That's right, JS uses the Perl semantics.

In most languages, regular expressions are represented by strings. It is a tradeoff that means you can manipulate one like a string (get its length, replace portions of it, build it out of string variables, etc), but it makes escaping nightmarish:

"^\\s*\\*\\s*(\\S)"

Because \ escapes the character that follows, you need to escape the escaper to keep it around: if you want \. as part of your regexp, more often than not, you need to type "\\." in your code. It's quite a drag, but the upside is that they work like any other string.
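In Swift, for example, that is exactly what happens with a regular string literal; here is a small sketch (the #"..."# raw string syntax is the modern escape hatch):

import Foundation

// One level of escaping is eaten by the string literal, so the regex
// ^\s*\*\s*(\S) has to be typed with doubled backslashes...
let pattern = "^\\s*\\*\\s*(\\S)"
// ...unless you use a raw string, which reads exactly like the regex:
assert(pattern == #"^\s*\*\s*(\S)"#)

let regex = try! NSRegularExpression(pattern: pattern)
let line = "  * bullet"
if let match = regex.firstMatch(in: line, range: NSRange(line.startIndex..., in: line)),
   let group = Range(match.range(at: 1), in: line) {
    print(line[group]) // "b", the first non-space character after the *
}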

Now, in JS (and perl), regexps are a totally different type. They are not between quotes, but between slashes (eg /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/). On one hand, you don't have to escape the slashes anymore and they more closely resemble the actual regexp, but on the other hand, they are harder to compose or build programmatically.

As I said, it's a different tradeoff, and to each their own.

Where it gets bonkers is how you use them. Because the class system is... what it is, and because there is no operator overloading, you can't really get the syntactic elegance of Perl, so it's kind of a bastard system where you might type something like

var myRe = /d(b+)d/;
var isOK = "cdbbdbsbz".match(myRe); // not null because "dbbd" is in the string

match and matchAll aren't too bad: match returns the list of matching substrings (here, only one) or null, and matchAll returns an iterator over the matches, so the result does have some kind of meaning.

The problem arises when you need to use the dreaded exec function in order to use the regexp groups, or when you use the g flag in your regexp.

The returned thing (I refuse to call it an object) is both an array and a hashmap/object at the same time.

In result[0] you have the matched substring (here it would be "dbbd"), and in result[X] you have the \X equivalents (here \1 would be "bb", so that's what you find in result[1]). So far so not too bad.

But this array also behaves like an object: result.index gives you the index of "the match" which is probably the first one.

Not to mention that you use string.match(regex) but regex.exec(string).

const text = 'cdbbdbsbz';
const regex = /d(b+)d/g;
const found = regex.exec(text);

console.log(found);
console.log(found.index);
console.log(found["index"]);
Array ["dbbd", "bb"]
1
1

So, the result is a nullable array that sometimes works as an object. I'll let that sink in for a bit.

This is the end

Once I got the equivalence down pat, it was just a matter of copying the data and rewriting a few functions, and ELIZA was back, as a library, so that I could use it in CLI tools, iOS apps, or macOS apps.

When I'm done fixing the edge cases and tinkering with the ranking system, I might even publish it.

In the meantime, ELIZA and I are rekindling an old friendship on my phone!


Introducing FuzzyTests

TL;DR: Grab it here: GitHub repo

Unit testing is painful amirite?

Writing good tests for your code very often means spending twice as much time writing them as you spent on the code under test.

It is good practice though to verify as much as possible that the code you write is valid, especially if that code is going to be public or included in someone else's work.

In my workflow, I insist on the notion of ownership:

The bottom line for me is this: if there are several people on a project, I want clearly defined ownership. It's not that I won't fix a bug in someone else's code, just that they own it and therefore have to have a reliable way of testing that my fix works.
Tests solve part of that problem. My code, my tests. If you fix my code, run my tests, I'm fairly confident that you didn't wreck the whole thing. And that I won't have to spend a couple of hours figuring out what it is that you did.

This is a very, very light constraint when you compare it to methodologies like TDD, but it's a required minimum for me.

Plus, it's not that painful, except...

Testing every case

In my personal opinion, the tests that are hardest to do right are the ones that have a very large input range, with a few failure/continuity points.

If, for instance, and completely randomly, of course, you had an application where the tilt of the phone changes the state of the app (locked/unlocked, depending on whether the phone is lying flat-ish on the table or not):

  • from -20Β° to 20Β° the app is locked
  • from 160Β° to 200Β° the app is locked
  • the rest of the time it's not locked
  • All of that modulo 360, of course

So you have a function that takes the current pitch angle and returns whether we should lock or not:

func pitchLock(_ angle: Double) -> Bool {
  // ...
}
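Hypothetically, the body might look like this (a sketch assuming the angle arrives in degrees and may be negative):

func pitchLock(_ angle: Double) -> Bool {
    // normalize any angle, including negatives, into [0, 360)
    let normalized = (angle.truncatingRemainder(dividingBy: 360) + 360)
        .truncatingRemainder(dividingBy: 360)
    return normalized >= 340 || normalized <= 20      // flat on the table
        || (normalized >= 160 && normalized <= 200)   // flat, upside down
}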

Does it work? Does it work modulo 360? What would a unit test for that function even look like? A for loop?

I have been looking for a way to do that kind of test for a while, which is why I published HoledRange (now Domains πŸ˜‡) a while back, as part of my hacks.

What I wanted is to write my tests kind of like this (invalid code on so many levels):

for x in [-1000.0...1000.0].randomSelection {
  let unitCircleAngle = x%360.0
  if unitCircleAngle >= 340 || unitCircleAngle <= 20 {
    XCTAssert(pitchLock(x))
  } else if unitCircleAngle >= 160 && unitCircleAngle <= 200 {
    XCTAssert(pitchLock(x))
  } else {
    XCTAssertFalse(pitchLock(x))
  }
}

This way of testing, while vaguely valid, leaves so many things flaky:

  • how many elements should be in the random selection?
  • how can we make certain values untestable (because we address them somewhere else, for instance)?
  • what a lot of boilerplate if I have multiple functions to test on the same range of values!
  • I can't reuse the same value for multiple tests to check function chains

Function builders

I have been fascinated with @_functionBuilder ever since it was announced. While I don't feel enthusiastic about SwiftUI (in French), that way of building elements out of blocks is something I have wanted for years.

Making them is a harrowing experience the first time, but in the end it works!
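To give you an idea of the shape of the thing, here is a stripped-down builder, not the actual FuzzyTests source, that only accepts Test blocks:

// A test block that receives a value from the domain.
struct Test {
    let run: (Int) -> Void
    init(_ run: @escaping (Int) -> Void) { self.run = run }
}

// The builder collects every Test written in the closure into an array.
@_functionBuilder
struct TestsBuilder {
    static func buildBlock(_ tests: Test...) -> [Test] { Array(tests) }
}

struct SimpleTests {
    let tests: [Test]
    init(@TestsBuilder _ content: () -> [Test]) { tests = content() }
    func run(over values: [Int]) {
        for value in values { tests.forEach { $0.run(value) } }
    }
}

SimpleTests {
    Test { a in assert(a + 1 > a) }
    Test { a in assert(a - 1 < a) }
}.run(over: Array(-100...100))

The real DomainTests block also has to accept a Domain and an optional iteration count alongside the Tests, and mixing element types in one builder is where the harrowing part comes in.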

What I wanted to use as syntax is something like this:

func myPlus(_ a: Int, _ b: Int) -> Int

DomainTests<Int> {
    Domain(-10000...10000)
    1000000
    Test { (a: Int) in
        XCTAssert(myPlus(a, 1) == a+1, "Problem with value \(a)")
        XCTAssert(myPlus(1, a) == a+1, "Problem with value \(a)")
    }
    Test { (a: Int) in
        let random = Int.random(in: -10000...10000)
        XCTAssert(myPlus(a, random) == a+random, "Problem with value \(a)")
        XCTAssert(myPlus(random, a) == a+random, "Problem with value \(a)")
    }
}.random()

This particular DomainTests runs 1000000 times over $$D=[-10000;10000]$$ in a random fashion.

Note the Test builder that takes a function with a parameter that will be in the domain, and the definition that allows specifying both the test domain (mandatory) and the number of random iterations (optional).

If you want to test every single value in a domain, the bounds need to be Strideable, i.e. usable in a for-loop.

DomainTests<Int> {
    Domain(-10000...10000)
    Test { (a: Int) in
        XCTAssert(myPlus(a, 1) == a+1, "Problem with value \(a)")
        XCTAssert(myPlus(1, a) == a+1, "Problem with value \(a)")
    }
    Test { (a: Int) in
        let random = Int.random(in: -10000...10000)
        XCTAssert(myPlus(a, random) == a+random, "Problem with value \(a)")
        XCTAssert(myPlus(random, a) == a+random, "Problem with value \(a)")
    }
}.full()

Conclusion

A couple of days of hard work, plus a healthy dose of using the framework personally, means this should be ready-ish for production.

If you are a maths-oriented dev and shiver at the idea of untested domains, this is for you 😬


[Dev Diary] Vanilla Is The Best Flavor

I have a weird thing with the proliferation of command-line tools and gizmos: I forget them.

Do I want to run supercool GitLab commands? Hell yea! Do I need to install 12 utilities (or code a new one) to archive every project older than a year? I hope not...

The setup

I am a sucker for well-documented, fully linted code. But the thing is, all the gizmos that help me do that have to be installed in the system or in my ~/bin, and I have to remember to update them, and I have to install them on my CD machine, and on every new environment I set up, and make sure they are still compatible with the toolchain, and it freaks me out, ok?

Plus, watching the students try to do it is painful.

So, given a 100% vanilla Swift-capable environment, can I manage to run documentation and linting?

The idea

We have Swift Package Manager, which is now a first-class citizen in Xcode, but it can't run shell script phases without some nasty hacks.

What if some targets were (wait for it) built to do the documentation and the linting?

Linting

One of the most popular linters out there is SwiftLint, and it supports SPM. It can also be built as a library instead of an executable, which means one of my targets could just run the linting and output the result in the terminal.

In the Package.swift file, all I needed to do was add the right dependency and the right product, and voilΓ !

let package = Package(
    name: "WonderfulPackage",
    products: [
        // ...
        .executable(name: "Lint", targets: ["Lint"])
    ],
    dependencies: [
        // Dependencies declare other packages that this package depends on.
        // .package(url: /* package url */, from: "1.0.0"),
        // ... normal dependencies
        .package(url: "https://github.com/realm/SwiftLint", from: "0.39.0")
    ],
    targets: [
        // ... normal targets
        .target(
            name: "Lint",
            dependencies: ["SwiftLintFramework"])
    ]
)
Package.swift

Now, SPM is very strict with paths, so I had to put a file named main.swift in the Sources/<target>/ directory, in this case Sources/Lint.
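The layout ends up looking like this:

WonderfulPackage/
β”œβ”€β”€ Package.swift
└── Sources/
    └── Lint/
        └── main.swift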

Running the linter is fairly straightforward, and goes in the main.swift file:

// Lint command main
// runs SwiftLintFramework
import Foundation
import SwiftLintFramework

let config = Configuration(path: FileManager.default.currentDirectoryPath+"/.swiftlint.yml",
                           rootPath: FileManager.default.currentDirectoryPath,
                           optional: true,
                           quiet: true,
                           enableAllRules: false,
                           cachePath: nil,
                           customRulesIdentifiers: [])

for lintable in config.lintableFiles(inPath: FileManager.default.currentDirectoryPath, forceExclude: false) {
    let linter = Linter(file: lintable, configuration: config)
    let storage = RuleStorage()
    let collected = linter.collect(into: storage)
    let violations = collected.styleViolations(using: storage)
    if !violations.isEmpty {
        print(EmojiReporter.generateReport(violations))
    }
}

print("πŸŽ‰ All done!")
Sources/Lint/main.swift

Set up the .swiftlint.yml file as usual, and run the command via swift run Lint

Sources/WonderfulPackage/main.swift
⛔️ Line 15: Variable name should be between 3 and 40 characters long: 'f'
⚠️ Line 13: Arguments can be omitted when matching enums with associated types if they are not used.
⚠️ Line 12: Line should be 120 characters or less: currently 143 characters

Documentation

Documentation is actually trickier, because most documentation tools out there aren't built in Swift or compatible with SPM. Doxygen and jazzy are great, but they don't fit my needs.

I found an extremely promising project called SourceDocs, by Eneko Alonso, but it isn't a library, so I had to fork it and turn it into one (while providing a second target to generate the executable if needed). One weird issue is that SPM doesn't like subtargets to bear the same name, so I had to rename a couple of them to avoid conflicts with Swift Argument Parser (long story).

I finally found myself in the same spot as with the linter. All I needed to do was create another target, and Bob's your uncle. Well, actually, he was mine. I digress.

let package = Package(
    name: "WonderfulPackage",
    products: [
        // ...
        .executable(name: "Docs", targets: ["Docs"])
    ],
    dependencies: [
        // Dependencies declare other packages that this package depends on.
        // .package(url: /* package url */, from: "1.0.0"),
        // ... normal dependencies
        .package(url: "https://github.com/krugazor/SourceDocs", from: "0.7.0")
    ],
    targets: [
        // ... normal targets
        .target(
            name: "Docs",
            dependencies: ["sourcedocslib"])
    ]
)
Package.swift

Another well-placed main file:

// Docs command main
// runs SourceDocs
import Foundation
import SourceDocs

do {
    switch try SourceDocs().runOnSPM(moduleName: "WonderfulPackage",
                                     outputDirectory: FileManager.default.currentDirectoryPath+"/Documentation") {
    case .success:
        print("Successful run of the documentation phase")
    case .failure(let failure):
        print(failure.localizedDescription)
    }
} catch {
    print(error.localizedDescription)
}
Sources/Docs/main.swift

Now, the command swift run Docs generates the markdown documentation in the Documentation directory.

Parsing main.swift (1/1)
Removing reference documentation at 'WonderfulPackage/Documentation/WonderfulPackage'... βœ”
Generating Markdown documentation...
  Writing documentation file: WonderfulPackage/Documentation/WonderfulPackage/structs/WonderfulPackage.md βœ”
  Writing documentation file: WonderfulPackage/Documentation/WonderfulPackage/README.md βœ”
Done πŸŽ‰
Successful run of the documentation phase

Conclusion

βœ… Vanilla swift environment
βœ… No install needed
βœ… Works on Linux and macOS
βœ… Integrated into SPM
⚠️ When running in Xcode, the current directory is always wonky for packages


[Utilities] Time Tracking Structure

Every now and again (especially when training a model), I need a guesstimate as to how long a "step" takes, and how long the whole process will take, so I wrote myself a little piece of code that does just that. Because I've been asked the question multiple times (and because I think everyone ends up coding their own after a while), here's mine. Feel free to use it.

/// Structure that keeps track of the time it takes to complete steps, to average or estimate the remaining time
public struct TimeRecord {
    /// The number of steps to keep for averaging. 5 is a decent default, increase or decrease as needed
    /// Minimum for average is 2, obviously
    public var smoothing: Int = 5 {
        didSet {
            smoothing = max(smoothing, 2) // minimum 2 values
        }
    }
    /// dates for the steps
    private var dates : [Date] = []
    /// formatter for debug print and/or display
    private var formatter = DateComponentsFormatter()
    public var formatterStyle : DateComponentsFormatter.UnitsStyle {
        didSet {
            formatter.allowedUnits = [.hour, .minute, .second] // .nanosecond is not available everywhere
            formatter.unitsStyle = formatterStyle
        }
    }
    
    public init(smoothing s: Int = 5, style fs: DateComponentsFormatter.UnitsStyle = .positional) {
        smoothing = max(s, 2)
        formatterStyle = fs
        formatter = DateComponentsFormatter()
        // not available everywhere
        // formatter.allowedUnits = [.hour, .minute, .second, .nanosecond]
        formatter.allowedUnits = [.hour, .minute, .second]
        formatter.zeroFormattingBehavior = .pad
        formatter.unitsStyle = fs
    }
    
    /// adds the record for a step
    /// - parameter d: the date of the step. If unspecified, the current date is used
    mutating func addRecord(_ d: Date? = nil) {
        if let d = d { dates.append(d) }
        else { dates.append(Date()) }
        while(dates.count > smoothing) { dates.remove(at: 0) }
    }
    
    /// gives the average delta between two steps (in seconds)
    var averageDelta : Double {
        if dates.count <= 1 { return 0.0 }
        var totalTime = 0.0
        for i in 1..<dates.count {
            totalTime += dates[i].timeIntervalSince(dates[i-1])
        }
        
        return totalTime/Double(dates.count - 1) // n dates yield n-1 intervals
    }
    
    /// gives the average delta between two steps in human readable form
    /// - see formatterStyle for options, default is "02:46:40"
    var averageDeltaHumanReadable : String {
        let delta = averageDelta
        return formatter.string(from: delta) ?? ""
    }
    
    /// given a number of remaining steps, gives an estimate of the time left on the process (in s)
    func estimatedTimeRemaining(_ steps: Int) -> Double {
        return Double(steps) * averageDelta
    }
    
    /// given a number of remaining steps, gives an estimate of the time left on the process in human readable form
    /// - see formatterStyle for options, default is "02:46:40"
    func estimatedTimeRemainingHumanReadable(_ steps: Int) -> String {
        let delta = estimatedTimeRemaining(steps)
        return formatter.string(from: delta) ?? ""
    }
}

When I train a model, I tend to use it that way:

// prepare model
var tt = TimeRecord()
tt.addRecord()

while currentEpoch < maxEpochs {
  // train the model for one epoch
  currentEpoch += 1
  tt.addRecord()
  if currentEpoch > 0 && currentEpoch % 5 == 0 {
    print(tt.averageDeltaHumanReadable + " per epoch, "
      + tt.estimatedTimeRemainingHumanReadable(maxEpochs - currentEpoch) + " remaining")
  }
}