Couldn’t be happier
Moore’s Law be damned, my upgrade/compile/download times remain more or less constant.
I was musing about that while upgrading my two main computers to 10.11 and my two main iOS devices to version 9 (9.0.1 soon followed): my Retina MacBook Pro may be faster than all my old computers rolled into one, but it still takes me roughly a day to upgrade to a new major release. Between the system itself, the apps to update, the various libraries to check, etc., it is a huge time sink.
And the same goes for compilation times. It used to take me 2-5 minutes to compile my biggest project on my old clamshell iBook (time enough to fix myself a cup of coffee), and it’s still the same in 2015.
We always tend to use our devices to capacity. Drives are full (who’s ever going to need more than a gigabyte?), networks are “too slow”, projects are complex enough to take forever to build, and so on.
I taught Android development for a week recently, and the need for instantaneous results is omnipresent, even though mobile development is kind of a reset in that way: small capacities, shoddy connectivity, lack of space in general. We are so used to manipulating 40MB GIFs and 1GB video files that we forget these things were science fiction only 10 years ago. And don’t get me started on Swift compile times…
There is an undeniable gamer part of me. I like a challenge, and I like the escapism it allows. If I’m stuck on a tricky line of code, or if there is some data I need to digest before formulating a plan, I find that sending spaceships into space helps me let go of my block. Writing sometimes does that for me as well, which is why I’m going to try and write some more.
Casual gaming wasn’t really a thing back when I wrote about the original Apple TV and the opportunities it might have provided. Sadly, hacking it to unlock the OS features underneath was (and still is, till the new one ships) the only way to get that sort of games available on the big screen.
“Casual gaming” is something of a strange beast. It usually refers to games that can be picked up and let go of on a whim, or maybe games that appeal to people who don’t want too much difficulty in their gaming experience, or maybe games that require minimal input, or maybe games that cost as much as a cup of coffee to buy, or maybe cost less than 100k to make. No one has actually explained to me what “casual gaming” is, or why it is inferior to (or indeed any different from) regular gaming. I know people who have sunk way more time into Candy Crush (a popular “casual game”) than I ever did into any Zelda game, for instance.
To me, gaming, on a computer, on a console, on a table, or in the recess yard, is just a way to do something that makes you feel good and doesn’t directly translate to your “obligations” (school, work, housekeeping, whatever). There is a part of us that wants to “slack off”, and games are a way to express that side. That being said, games also reward you with useful things for those “obligations”, such as a better understanding of teamwork, strategy, communication, coordination, and a lot of less obvious perks. Gaming is good, in general, since it provides you with a risk-free environment to test things. Whether or not we are aware of it when we’re playing, it changes the way we look at some of the non-game activities in our lives. I am fully aware that games also provide a risk-free environment for the most abhorrent behaviors, but that’s a topic for another day.
So from now on, I will just ignore the “casual” part of “casual gaming”, because ultimately it makes very little sense. Now why would anyone want to play games on the Apple TV?
For the same reasons we play them on our phones.
I know it’s a radical notion, but phones weren’t invented for us to throw birds across the screen. They were built originally so that we could talk to other human beings. Then we tacked a few other things onto them, mostly because why the hell not, but also because it made good use of a device that we always have in our pocket anyway. Since it’s used as a phone for very little time overall, why not make it more useful when its primary function isn’t active? So… other forms of communication? Text messages, emails, social networks, etc.? Yup, sure. But when it can do that, it can do a lot more. And people wanted to play games when they didn’t have anything else to do on their phone and couldn’t access any other device (yes, that was the reasoning I heard at the time). Turns out, sometimes, some people almost exclusively want to play games on their phones, with a phone call or message here and there for good measure. People just like to play games, it seems.
Back to the TV: the current TV model is a comfortable 70+ year old thing. I arbitrarily decided that the current TV model started when the first ads were aired, during WWII, but feel free to disagree. You have a big screen somewhere in your home, like 80% of Earth’s population, you turn it on, select a channel, and watch. You may switch channels too, but that’s TV 2.0. Someone out there decides what airs at what time, and you decide if you want and can watch it. In recent decades, we also added the ability to decide when you want to watch it, via recording. And even more recently, we added the optional bonus of not even having to record to decide when to watch it. But the model stays roughly the same: someone creates something you might want to watch, sets a price (that you can pay with “sitting through ads”, or in decent money), and you decide if, and sometimes when, you want to watch it.
Just like the phone, your TV set (and added boxes) has a primary function, and supposedly does it well enough for the vast majority of the human race to actually own one.
Is there enough down time to justify putting a game on the screen?
That was the argument for the phone, and it makes sense to use it for the TV as well, at least as a first step. And the answer is yes. Console games sell really well. You could argue it’s because people haven’t transitioned yet to on-demand content, and therefore play between the things they want to watch, if you think people would rather enjoy something passively than play games. It’s a valid notion, when you look at the kind of restricted and small offering we get on-demand in most countries. You can also argue that playing games appeals more to hyperactive people, who usually play games on their phone while watching TV anyway. Plus, switching channels or browsing through on-demand isn’t exactly “watching TV”, so some people might want to repurpose that browsing time into a more active endeavor.
At any rate, people like games and there is enough “not tv” time on a tv set to justify their existence, even to the most hardcore “one device per function” oriented minds out there.
All in all, this is why I have always wanted to be able to make/play games on the AppleTV, and why I think games will be a decent success on that platform. People are lazy. If they choose an Apple TV for their passive content, they will get games as well, because switching to a different device may cross the “too much work” threshold. Just like with the phone. And for the same reasons you guys made games for the phone, you should for the TV.
Probably something everyone but me knew already, but at least next time I have to do something similar, I’ll have a trace.
My original Bootcamp partition was a smallish thing at the end of the first disk I had in that machine. I have bigger needs and a new disk and all that jazz, so I want to clone the partition to the new one. Forget about Disk Utility for that: NTFS is not its forte, and I’ve had weird partition size issues with it. So I use Clonezilla, which works better, performs checks and repairs, and is overall smarter.
I could restart on the CD and have my computer do nothing but that for a couple of hours, but I would rather use VMWare Fusion.
I have prepared my new partition as ExFAT on the Mac (the closest it knows to creating a new NTFS one). By default, Bootcamp virtual machines grab the whole disk for the purposes of booting (even though they only use and unmount the NTFS partition), so I decided to do the same for the new one.
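For reference, this is roughly how that ExFAT prep can be done from the Mac side. A sketch only: the disk2 identifier, the WIN7 volume name, and the MBR scheme (what Bootcamp-era Windows 7 expects to boot from) are assumptions from my setup, so check `diskutil list` before running anything this destructive.

```shell
# List attached disks and make sure disk2 really is the new, empty drive.
diskutil list

# Wipe disk2 and create a single ExFAT volume named WIN7 on it,
# with an MBR partition scheme. This erases the WHOLE disk --
# triple-check the identifier first!
diskutil eraseDisk ExFAT WIN7 MBR disk2
```

The volume then mounts under /Volumes/WIN7, which is how it shows up in the mount output below.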
VMWare won’t let you add an existing disk directly through the UI, but it does work from the command line. Using mount, I check which disk holds the partition I want to use as the destination:
$ mount
/dev/disk2s1 on /Volumes/WIN7 (msdos, asynchronous, local, noowners)
So disk2 is my target.
With VMWare, you can create “proxy” disks for existing physical hard drives with the vmware-rawdiskCreator command. So I create the vmdk file:
$ /Applications/VMware\ Fusion.app/Contents/Library/vmware-rawdiskCreator create /dev/disk2 fullDevice ~/Desktop/Win7.vmdk ide
For some weird reason, the VMWare Fusion app won’t let you add that disk as is, so you need to right-click on the Bootcamp virtual machine, reveal it in the Finder, open the package contents, and edit the .vmx file. At the end of the file, I added:
ide1:1.present = "TRUE"
ide1:1.fileName = "/Users/zino/Desktop/Win7.vmdk"
ide1:1.redo = ""
I then added the Clonezilla iso to the virtual CDROM, set the machine to boot off of it, and clicked start.
Clonezilla has a plethora of modes (machine-to-machine over the network being an awesome one, for instance), but what I want to do first is check that the partition is the right one and formatted correctly.
So I enter the command prompt and run fdisk on the drives. There should be two, sda and sdb, one being the source, the other the target. Use the “p” command within fdisk to check the partition scheme and the availability of the disk. For me, the destination partition was sdb2. So I formatted it as NTFS:
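If fdisk’s interactive prompt feels clunky, the non-interactive listing gives you the same partition table in one shot. Again a sketch: sda/sdb and the sdb2 target come from my setup, and the output is whatever your disks actually contain.

```shell
# Still as root (after sudo -i). Print both partition tables without
# entering fdisk's interactive mode; this is equivalent to "p" inside fdisk.
fdisk -l /dev/sda
fdisk -l /dev/sdb

# Cross-check before formatting anything: the destination should be the
# freshly created, still-ExFAT partition, not your source NTFS one.
blkid /dev/sdb2
```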
$ sudo -i
# mkfs.ntfs -Q /dev/sdb2
Double check everything if you are unsure of which partition you will completely erase! Do not come back to me afterwards complaining your disk was erased… My partition was the 2nd one of the 2nd disk (sd b 2), but yours might be different.
Then go back to the clonezilla menu, select local, then local partition to local partition, choose the right ones as source and destination, and enjoy the show.
Well, because that way I could write this post while it was doing the transfer. Because I trust Clonezilla with copies, with all its smart things and its checks, even if it’s kind of daunting if you fear the command line. And because I generally hate the idea of having a computer stuck for hours on something that uses less than a percent of a percent of what it could do.
Oh, and because that way I can tell people who didn’t know about this way of mounting real disks into VMWare, and give a shout-out to Clonezilla.
Let me know if it’s useful to you!
If you know me a little bit, you know I’m a sucker for space stuff. And research in general. Doing something that has never been done before, or furthering an agenda that goes into that direction has always been something that gives me goose bumps in an awesome way.
2014 has been a wonderful year for space buffs, but two very recent missions have hopefully recaptured interest in everything interplanetary: Rosetta/Philae and Orion.
“It’s like hitting a bullet with a smaller bullet, while wearing a blindfold, riding a horse”
In March 2004, some people thought it would be a cool thing to achieve. Rosetta was supposed to come close enough to a comet to take detailed pictures and perform analysis, so why not try to land on it too, with Philae?
Think about it: a route spanning 6.4 billion kilometres over 10 years, to hit a rock 4 kilometres in diameter (1/1600th of Earth). Mind-boggling. And yet, it was done, in the name of science. There are a lot of reasons to do such a thing, and the ESA explains it nicely.
“To Infinity and Beyond!”
Earth isn’t doomed just yet (even though it’s getting there), but we all know in a corner of our minds that we will have to leave it for another planet at some point in the future. Almost 50 years after our first baby steps in interplanetary travel and the Apollo Program, NASA tested a new craft designed to take us back to the Moon, and even to Mars. Even if it flew empty this time, it signals a commitment to a spacefaring culture once more. Sure, we are nowhere near having a solution for interstellar travel, but when we start colonizing the Solar System in earnest, we’ll be closer to the stars.
THIS is why funding research is important
Does it make any difference today to know what that comet is made of and what it has seen during its travels? Does landing on Mars allow me to have a summer house there? Of course not. But our grandchildren will be thankful we didn’t spend too much time navel-gazing as if the universe were restricted to Earth.
Remote Workers Are A Pain To Manage (sic)
This is not exactly news anymore, but a fraud-related scandal was uncovered a few days ago at the US patent office.
This hits me on two different levels, completely unrelated to one another: work-at-home mechanics, and the actual concept of patenting stuff.
The work-at-home side of this story is distressing, to say the least. In the last 15 years, I have worked for maybe a couple of months in an actual office with actual people. It’s no secret I don’t enjoy it, and it’s not due to any of the fine folks I was sitting next to. It’s just that my habit of cursing loudly at my screen, and my need for a total lack of distraction when I’m focusing on a particularly thorny problem, make having people sitting right next to me a difficult fit.
But because of stories like this, and because it is so easy to cheat bosses/customers out of actual working time when they don’t have their eye resting directly on you, working from home is sometimes a very real deal-breaker in my interactions with customers. Trust issues aside, on an hourly basis, I get paid more than regular employees, and I can do it in my bathtub! Holy granola! From the outside it looks like I have some totally unfair advantages over everyone.
As Seen From The Other Side
Truth be told, working from home is hard.
Let’s start at the beginning of the day: it’s so easy to snooze the alarm and go back to bed. Really. Especially if you have been working late the day before. Then, whatever your routine in the morning might be, taking your time to read the news, catch up on social stuff, etc. is tempting. Then you realize it’s really late and you might have to cram everything in before lunch, which could last longer because you’re enjoying it in front of the TV, etc, etc…
Basically, if you have any procrastinating tendencies, they are all very easy to succumb to. Structure helps, like having “office hours” to simulate the real thing, or planning your customer phone calls early in the morning, or at any other time you might be tempted to do anything but work. Life hacks such as these are easy to implement and adhere to, and everyone should know themselves well enough to know which ones are important and how their personal procrastinating tendencies surface. Because the key thing about working from home isn’t replicating a workplace at home.
To be able to work from home, you need to know exactly how your brain works.
To take the only example I know well enough, I tend to be very code-efficient right after I wake up. So I have two known times where I cram my most urgent/important stuff: early morning, and after my nap. Yes, I do take naps, partly because of this, and in a regular office that’s not generally the norm. After roughly 2 hours straight of coding, my mind tends to wander. I start checking news, chatting with people. So I use that time to do my support / client stuff. But even that is tiring, so I generally cap it at 1 hour. Then I do the code that’s less neuron-consuming, which might (or might not) get me in the zone again for more important stuff.
The important part of all this is that I spread my work hours wider than strictly necessary. I usually have an 8-to-8 work day, and I sometimes work for a few hours on weekends as well. Because I can, and because it doesn’t impinge on other things I consider vital. And during the day, I have free time to run errands, have a cuppa with people, etc. The very fact it’s spread out a bit means that I can contract it if necessary to stay on a deadline that is whistling dangerously close, or expand it a bit if I have the time and am feeling under the weather or uninspired.
The Root Of The Problem
Applying “office rules” at home seems completely stupid and backwards to me. Either you give people the option to work from home until they can no longer achieve what they said they would do, whatever way they want to organize themselves, or you force them to be under scrutiny in an office. Giving them restrictions in their own homes will lead to resentment and “cheating”, and there should be no shame in saying after a while “look, it doesn’t seem to work when you do it remotely, come back to the office”, to potentially be tried again at a later date. The remote workforce problem embodies, to me, a fundamental flaw in how people’s work is valued: results vs time.
It’s perfectly OK for people whose job it is to be available (to interact with customers who may or may not call, for instance) to be paid / valued in good part relative to the time they spend on the job. But for developers, to take an example I know only too well, it’s all about what we deliver. Time comes second.
Let me take an example. Company A contacts me for a contract on an app that displays news for their product and allows for support contact and social sharing. The very first question they ask is how long it’s going to take. Which is fine and normal. But based on that, they derive the amount of money they will assign to the project. While my time is as valuable as anyone’s, we can all agree that there are some things I will do faster than others, with my level of experience (to take seniority out of the equation). If it takes a colleague of mine 1 month to do that app, and I take only 2 weeks, should I be paid less? No. But the second question they ask is “what is your daily rate?”. So in essence, if I have a fixed rate that’s close to the market rate, I will be paid less for the same job, and if I double it, I probably won’t get the contract. How is that fair?
I can hear sniggering in the back : “why don’t you just SAY you will take a month?”. The ethical value of that comment is left as open for debate.
But once again, we circle back to the problem of assigning a value to someone’s job, and the perversity of contemplating cheating to “fix an intrinsic wrong”. I refuse to think every single human on the planet is prone to cheating in every circumstance. Most of the time, mostly honest people who try to game the system to do less while earning the same financial compensation feel cheated themselves. It is indeed an HR problem, but not in a “let’s put more restrictive measures in place to increase productivity” way, more in a “let’s see why these experts in their fields feel like they aren’t paid enough” way. And remove the actual bad apples based on results.