
Please don’t learn to use a screwdriver

Let’s face it, things have gotten out of control. There really should be no need for everyone to learn how to use a screwdriver. Most of our PCs come pre-assembled, and the ones that don’t should only be assembled by people who know what they are doing. In my experience, people using screwdrivers can only contribute more overtightened screws to the world, or poor usage of screws. If my drawer breaks, I’m not going to try to fix it myself; I have no idea how drawers work, and I should leave the job to someone who knows what they are doing.

As an example, let’s do some reductio ad ridiculum on this:
“If we don’t learn to screw we risk being screwed ourself. Screw or be screwed.” – Douglas Rushkoff

Parody aside, do I believe programming is a life skill everyone should be exposed to? Yes, definitely. Should we all be programmers? Probably not. I think Jeff makes some good points, but I feel his analogy is off. I also think his conclusion is elitist, and personally I support meritocracy over elitism. Let’s employ the programmers who prove themselves. Let’s use the good code, but let’s also give everyone the tools to identify good code. Let’s give everyone the chance to prove they are good coders. But most of all, let’s help people learn how to solve problems–just like Jeff says in his article. The key point where I differ is that I support people learning to code in order to strengthen and supplement their problem-solving skills.

How do we solve problems if we don’t learn how to use the tools? What’s wrong with coding your own solutions to your own problems? Even if it’s ugly code, it works for you, and that’s all it needs to do. How many of you can honestly say your custom shell scripts are programming gems? (Mine sure aren’t.) Would you argue that they solved problems other people are more qualified to solve, and that we should have left the job up to them?

I also find Jeff’s article aggravating because I see it as further justification of an attitude I despise–especially in my co-workers. They only use software that does everything for them, and if it doesn’t do what they want, they complain: “why didn’t they just make it do XYZ?” “why didn’t they do it this way instead of that way?” “Oh! I’m going to pay hundreds of dollars for this software because it will normalize my graphs for me!” If only they realized that with the proper tools, they could fix it them-damned-selves. They don’t need to be master programmers to normalize a set of graphs; they just need to write a stupid formula (and being chemistry grad students, they’d damned well better know how to mathematically normalize something). Perhaps with some knowledge of variables and assignment, they could make a macro to apply to data sets, or something! Instead of whining and waiting for the next black box to descend from the higher-ups, they could do it themselves.
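To be concrete about “a stupid formula”: normalizing a data set to its maximum is basically a one-liner in awk. Here’s a sketch with made-up numbers (the file name and values are mine, not from any real data set):

```shell
# A toy data set: one y-value per line.
printf '2\n4\n8\n' > data.txt

# Read the file twice: pass 1 finds the maximum, pass 2 divides every value by it.
norm=$(awk 'NR==FNR { if ($1 + 0 > max) max = $1; next }
            { printf "%.3f\n", $1 / max }' data.txt data.txt)
echo "$norm"
```

This prints 0.250, 0.500 and 1.000; the same trick applies per-column to real instrument output.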


Conkeror!

After posting my last entry, I saw an old post of mine about moving to Midori from Chromium. And well… that didn’t last ;]

I still like Midori, and I still like WebKit, but I found myself making Midori more and more… Emacs-like. That’s when I finally came across Conkeror. My partner is disgusted that I’m back on a Firefox-based browser (he’s scarred from the memory leaks on Windows), but frankly I don’t mind it so much. I don’t hit as many headaches with incompatible sites, and the work-flow and configuration are just what I was looking for. It also works out for moving from machine to machine (my current machines take me between Linux and three different *BSDs), as most have a [reasonably] recent version of Firefox/XULRunner, and Conkeror is written as a JavaScript frontend; I can install it in my home directory and carry it with me to whatever machine I go.

I haven’t done anything crazy that isn’t well documented on the wiki, except for the launcher script (nothing special really; it just calls ‘exec firefox -app (location of conkeror)’).

And at the risk of parroting myself from before: I am aware of Uzbl, and in theory it seems like an ideal browser, but I’m still not there ;] Maybe someday…

Launching Programs: dmenu_run and process groups?

Update 06-May-2012: I’ve been reading the mailing list to see if this has come up before, and there is a lengthy thread on improving dmenu_run. From what I can tell, the proposed improvements all had some sort of “exec” call in them (though that did not seem to be the focus of the discussion). Looking through the revisions, it looks like dmenu_run did use ‘exec’, but called it on a shell, and this was changed for the purpose of “disowning the child shell.” The only explanation I can think of is a difference in how dmenu_run is invoked…

I just moved back to i3 from wmii, and after using it for a bit, I noticed that programs launched with dmenu_run were all running under zsh (my SHELL) in pstree.

I remember wmii would launch each program as its own process. I like zsh for interactive use, but it is a bit heavyweight for just spawning processes. Luckily, dmenu_run uses ${SHELL} for launching, so I just made my keybinding launch “SHELL=dash dmenu_run …” instead of plain “dmenu_run …”. This worked nicely: it used dash to launch the program and took up much less memory.
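In i3 config terms, the binding ends up looking something like this (the modifier key and any dmenu options are placeholders, not from my actual config):

```
# ~/.config/i3/config -- launch programs through dash instead of zsh
bindsym $mod+d exec SHELL=/bin/dash dmenu_run
```

i3 hands the command string to a shell, so the SHELL=/bin/dash prefix only affects which shell dmenu_run pipes its selection to.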

But then I thought: why would dmenu_run launch programs this way? Wouldn’t it be a better idea to launch an application as its own process, say with the ‘exec’ builtin? The only reason I could think of not to was the ability to use dmenu_run as a kind of “one-off” shell. I’m not an expert on ‘exec’, but I tested it out, and it still gave the same functionality. So, I made a modified dmenu_run, called dmenu_exec, that just prepends “exec” to whatever it pipes to the shell. So far it seems to be working as expected, and it still allows me to use dmenu_run as a “one-off” shell. And because I’m still piping this to a shell to run, I left the SHELL=dash prefix in my keybinding.

From what I understand about process groups, this seems like a better way to launch programs; but as it isn’t what the dmenu developers did (and they seem like a pretty smart bunch), I wonder if there is something I’m missing. I’m using it for now, but any comments would be appreciated.

#!/bin/sh
cachedir=${XDG_CACHE_HOME:-"$HOME/.cache"}
if [ -d "$cachedir" ]; then
    cache=$cachedir/dmenu_run
else
    cache=$HOME/.dmenu_cache # if no xdg dir, fall back to dotfile in ~
fi
(
    IFS=:
    # prepend "exec" in both branches, so the child shell replaces itself
    if stest -dqr -n "$cache" $PATH; then
        echo "exec $(stest -flx $PATH | sort -u | tee "$cache" | dmenu "$@")"
    else
        echo "exec $(dmenu "$@" < "$cache")"
    fi
) | ${SHELL:-"/bin/sh"} &
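To convince myself that the prepended exec actually removes the intermediate shell, a quick check like the following is handy (plain sh and ps, nothing dmenu-specific; all names here are my own):

```shell
#!/bin/sh
# Without exec: the child shell forks sleep and sticks around as its parent.
# (The trailing ':' stops the shell from implicitly exec-ing the last command.)
sh -c 'sleep 2; :' &
noexec_pid=$!

# With exec: the child shell replaces itself with sleep,
# so $! is the sleep process directly, with no wrapper shell left behind.
sh -c 'exec sleep 2' &
exec_pid=$!

sleep 1  # give both a moment to start
noexec_comm=$(ps -o comm= -p "$noexec_pid")
exec_comm=$(ps -o comm= -p "$exec_pid")
echo "without exec: $noexec_comm"
echo "with exec:    $exec_comm"
```

The first line reports a shell; the second reports sleep itself, which is exactly the difference that shows up in pstree.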

Immersion

Since getting laid off from his IT job, my partner finally had the motivation to “learn Linux.” I was extremely happy, of course, and figured Linux Mint would be a great starting point for him.

It hasn’t turned out the way I thought it would. After some initial frustrations, he became familiar with the general environment (he has worked in IT for 12 years; he knows how to figure things out). However, when it came to anything command-line oriented, or anything needing my help (as 99% of my solutions probably go through command-line/text-editing methods), he always seemed to have trouble. He was learning the commands, the syntax, and the problem-solving skills, but it didn’t seem to be sinking in as quickly or as deeply as I hoped.

I tried to think about how I had gotten to the point where I do most things on the command line, and the first thing I thought of was reading and referencing a sparse guide to common utilities and commands. I found and sent him a [better] guide. This didn’t seem to help much.

Finally, this weekend I realized why I was using that guide when I started. My first [real] Linux distro was Slackware, and I had to set up my graphical environment manually. I had to learn by doing, and there was no easy way around it. I realized that while Linux Mint got him going, it made the learning curve gradual, but also long.

So, I finally sent him a message from work. “Forget Linux Mint. Try Slackware, Debian, Arch or FreeBSD (I can help out with the last two, been too long for slack or debian).”

Because he isn’t as excited about the BSDs as I am, but wanted my help, he went with Arch. That’s when I remembered the move to pacman 4, and how he’d be setting up package signing manually before he could install anything… hooray for immersion.

Tail-recursive Quicksort?

I’ve been using Scheme and Racket quite a bit lately, and because I have been playing with functional languages, I have been doing little exercises in solving problems with a more functional approach, which also includes some nice recursion. More specifically, I’ve been trying to think in a more tail-recursive style to take advantage of the Scheme systems that are tail-call optimized.

I found this little article, Quicksort: Like You’re 5, which gave a quick implementation-independent demonstration of the quicksort algorithm, and being reintroduced to its definition, I recognized it as perfect for recursion! And of course, a good problem for tail recursion. However, I couldn’t think of how to approach it in a tail-recursive manner, so I started by coding up a quick non-tail-recursive version:

(define (quicksort lst)
  (if (or (empty? lst) (empty? (rest lst))) lst
      (let ([pivot (first lst)]
            [lst (rest lst)])
        ;; everything below the pivot, then the pivot, then everything at or above it
        (append (quicksort (filter ((curry >) pivot) lst))
                `(,pivot)
                (quicksort (filter ((curry <=) pivot) lst))))))

I got stumped: I couldn’t think of how to accumulate the results for a tail call. The block I was stuck on was, “Can you tail-call something that branches?” I couldn’t think of a way, and I was unsure whether I was being naive or it was just not possible. Quick searches on rosettacode and duckduckgo turned up approaches similar to what I did above.

After a good night’s sleep, I finally conceded. So you can’t tail-call two separate branches, but you can compute one branch and tail-call the other, right? After a little bit of fumbling, I came up with this:

(define (quicksort-tail thelist)
  (let quick-acc ([tosort thelist] [acc '()])
    (if (or (empty? tosort) (empty? (rest tosort))) (append tosort acc)
        (let* ([pivot (first tosort)]
               [tosort (rest tosort)]
               ;; fully sort the "at or above the pivot" branch, then push it
               ;; (with the pivot in front) onto the accumulator
               [acc (append `(,pivot) (quicksort-tail (filter ((curry <=) pivot) tosort)) acc)])
          ;; tail-call into the "below the pivot" branch
          (quick-acc (filter ((curry >) pivot) tosort) acc)))))

I haven’t done anything to determine if it’s more time/space efficient, but it works :]

Update: 10-16-2012 I feel pretty dumb about this post. I have been skimming through the newest [unofficial] version of SICP (as I’ve been contemplating writing a toy Scheme implementation), and while reading the section on recursion, I realized why tail recursion is so important–it’s how you implement iteration. It was a wonderful moment when lots of dangling ends came together to form a strong knot… and then turned around and called me dumb. The final solution is the obvious one, as that’s how we iterate through a tree. I essentially arrived at nested iterations in a very convoluted way.

Novelty, Invention and Innovation

Personally, I think TermKit is awesome. I was skeptical at first, but the idea grew on me more and more. I’ve become very interested in novel power-user interfaces, and I keep finding more posts along the same lines–describing hypothetical tools and shells. I rarely read comment threads, and I wish I had never read the reddit comment thread for this post–people sure are vehement about changing the Unix Way. I don’t mean to disregard their viewpoint; the Unix philosophy is amazing and versatile and has stood the test of time. Defending something that works well against modernizing with every popular fad is respectable, and discussion from both sides is necessary. However, when the conversation turns to comments such as “Don’t you dare change and ‘improve’ my terminal,” the credibility fades dramatically.

Is Unix the pinnacle of computing platforms? To think that we have hit our maximum potential seems pessimistic to me. I like to think there is an even better way out there: a new paradigm and a new philosophy that improves on the Unix Way. This may be my chemist side showing, but I deeply believe we need to encourage experimentation and exploration. Plan 9 and Inferno did not gain wide acceptance, but they tried new things, some of which have already been adopted, and more of which I think are worth adopting. The adoption of procfs/sysfs and the prevalence of plan9port are evidence of that.

I find it comical that despite TermKit being built, and emphasized, as a layer over the userland rather than a replacement, people still felt threatened by it, as if its presence alone would forcibly tear their shell away from them. I call that unjustified paranoia, and also stifling to invention and innovation.

As for the direction I’d like to see things head? Functional programming languages with good parallelization, structured pipes (I like JSON in this regard, or just plain s-exprs), hypertext terminals, and transparent network access via filesystem namespaces (or single-system-image approaches, like what DragonFly BSD hopes to accomplish).

I’m hoping to experiment in my spare time with the easier parts of that list, and I’m sure it will be fun. If I’m lucky, maybe I’ll get a long comment thread going too.

This opinionated, fact-void post brought to you by a chemist, who uses computers to get shit done.

Update 12-14-2011: I should mention I have been meaning to learn acme as inspiration for some of the above interface ideas, and I realized that what I want has already been done (sort of). The introduction to Rob Pike’s “Acme: A User Interface for Programmers” states some of the changes I’d like to see. Also, among the suckless.org project ideas, the one for a “Yet another less sucking editor” describes the desire to fill a gap between acme and vim.

A Small Rant

I had fun this morning trying to submit some more calculations, only to have them all fail, with the log file containing one of Gaussian’s “succinct” error messages that something went wrong in FileIO.

I’ve seen this error before, usually when I’m an idiot and fill up the hard drive with a CIS calculation, or use up all the system’s file descriptors (I should probably have the advisor increase that limit…), etc. So these were the problems I was looking for, and I was tearing my hair out that everything seemed fine: permissions worked out, there was space, there were enough descriptors.

Finally, I realized all the calculations were restarts/guess=read routes (which read in a checkpoint file), and I had forgotten to copy the checkpoints to the new filenames (as I like to keep the old failed/partial checkpoint in case something goes wrong). The FileIO error was simply because Gaussian couldn’t find the checkpoints to read.
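For the curious, the missed step was nothing more exotic than a copy before submitting; a trivial sketch, with made-up filenames:

```shell
# The failed run left its checkpoint behind (simulated here with touch).
touch water_opt.chk
# Keep the old [partial] checkpoint, and give the restart its own copy
# under the new job name so guess=read can find it.
cp water_opt.chk water_opt_restart.chk
ls water_opt.chk water_opt_restart.chk
```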

For those who don’t use Gaussian: while it is a very, very fine program, there are some things that just make you go “WTF?” For example, whenever g09 exits unsuccessfully, it segfaults. Granted, it sets the error code to 1, and to most people it’s no different from a message that says “Something went wrong! Check the logfile,” but to me this is fundamentally wrong. Segfaults are what happens when a program tries to access memory it can’t/shouldn’t. Think buffer overflows and such. To me, segfaulting whenever a program error occurs and the program tries to exit reminds me of a story EHS told us:

A student was heating diethyl ether in a beaker out on a bench top, and with its low flash point, it inevitably caught fire from the internal circuitry in the heating plate. Luckily, no one was hurt. The student’s solution to this, however? Do the same thing, but keep a watch glass handy to cover the beaker when it caught fire. So he sat there, covering the thing up whenever it caught fire.