
My new setup: One reason why I love DragonFlyBSD

I recently switched to DragonFlyBSD as my main OS, and one of my favorite parts of the new setup has been how I mirror my home directory.

I currently use mercurial to manage my home directory. It works out pretty well: I have an alias that automatically stages changes, commits with an automatic message, and then pushes the changes to an SD card I keep inserted. This also lets me back up my home to our home server and/or update my home on my other PCs using an ssh:// URL in hg push. The added benefit of merging changes when I switch machines was the main motivation behind this setup. (I was thinking of posting about this setup, but it's not too different from what pretty much everyone else posts on this subject.)
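
The alias boils down to something like this (a rough sketch; the SD-card path and message format are made up, and "staging" here is just hg addremove, since mercurial has no separate staging area):

hgsync() {
    hg -R "$HOME" addremove &&
    hg -R "$HOME" commit -m "auto-sync: $(date '+%Y-%m-%d %H:%M')" &&
    hg -R "$HOME" push /mnt/sdcard/home-hg   # or ssh://server//backups/home
}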

Some downsides? Large binary files take a lot of memory to manage. Before we upgraded the memory in our backup server, mercurial would use it all up and bail. This caused me to make an untracked directory, which is a mess, and I'd have to rsync it along with pushing. I'd also have to prune the tree history when it got too big. For moving between machines this is still the best option I've found, but for backups it can be a little cumbersome.

With HAMMER on my laptop and backup server, I had another option: mirror-copy! I simply made a separate pfs for my home directories on the server and laptop, and called mirror-copy with the remote URL. This gives me a filesystem-level mirror to our server, including the automatic snapshots I already have on the laptop's pfs (which get automatically pruned).
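
The whole setup is roughly the following (a from-memory sketch; the paths, host, and uuid are placeholders):

# on the laptop: home lives on its own PFS master
hammer pfs-master /pfs/home

# on the server: a slave PFS created with the master's shared uuid
hammer pfs-slave /pfs/home-backup shared-uuid=<uuid-of-master>

# then, from the laptop: a filesystem-level mirror over ssh
hammer mirror-copy /pfs/home user@server:/pfs/home-backup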

Some downsides to this approach: the pfs-slaves are read-only, so this is really only good for backup purposes (although I might try some mount_union tricks for when I log into the server), and it of course only works between systems that use HAMMER (currently only DragonFlyBSD, as far as I know).

Wait, why not ZFS or BTRFS? Good question. It never occurred to me to use BTRFS this way when I was running Linux, and I never bothered with ZFS on my FreeBSD machine. Technical/concrete comparisons aside, HAMMER feels really well put together and very cohesive.

Status Update: libva-epiphany-driver

On the off chance someone actually looked at my github page and followed my info here…
I had been crunching to get a libva skeleton driver up and running; my initial hope was to generate excitement for the Parallella kickstarter. After I failed to get it done in time (but it got funded! yay!), I kept crunching to get a working demo up (to keep people excited). However, I hit a brick wall trying to debug my Huffman decoding routine, and quickly lost focus as my research drew my attention away. I had more in-progress work, including functional DRI output, that I hadn't committed because I was trying to debug that routine.

I keep meaning to go back to it, but I still haven’t had any mental breakthroughs. Therefore, I decided to just go ahead and commit what I had. The skeleton driver works, it just doesn’t do anything :P

For some reason, I had insisted on coding the codecs from scratch (part pride, part licensing, etc…), but now I’m feeling more pragmatic. Therefore, I’ve decided to do a few things:

  1. Use libjpeg-turbo source as a reference and quickly finish up the JPEG decoding routines for the demo (concede pride).
  2. Approach the problem differently! (possibly concede licensing).

The "problem" was one I made for myself: libva acts as a mediator between applications and accelerated hardware, and it's on the hardware that the codecs are implemented. The libva driver gets requests and handles setting up and communicating with the hardware, shuffling data around and such. I knew I'd have to implement the codecs somehow, but foolishly decided to implement them in the driver.

I’ve recently decided to focus my efforts on porting existing codecs/libraries to utilize Epiphany, then just have the libva-epiphany-driver as a host program that loads the separate programs onto Epiphany. This should have the benefit of reducing my workload, simplifying libva-epiphany-driver, and making it possible to receive the benefits of my porting in non-vaapi applications. And of course, porting existing projects and contributing upstream will be better overall (upstream benefits from wider adoption, I benefit from upstream contributions, etc.).
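
To give an idea of how thin the host side could get, here's a hypothetical sketch of the loading step using the eSDK's e-hal interface as I understand it from the docs (the function names, return codes, and program path are assumptions on my part until I've actually built against the SDK):

#include <stdio.h>
#include <e-hal.h>   /* host-side library from the Epiphany SDK */

/* Hypothetical: rather than implementing a codec itself, the driver just
 * loads a prebuilt codec program onto an Epiphany core and starts it. */
int load_codec(char *program)
{
    e_epiphany_t dev;

    e_init(NULL);                /* default platform description */
    e_reset_system();
    e_open(&dev, 0, 0, 1, 1);    /* claim the single core at (0,0) */
    if (e_load(program, &dev, 0, 0, E_TRUE) != E_OK) {  /* load and start */
        fprintf(stderr, "failed to load %s\n", program);
        e_close(&dev);
        e_finalize();
        return -1;
    }
    /* ...shuffle coded data in and decoded data out via shared memory... */
    e_close(&dev);
    e_finalize();
    return 0;
}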

I decided to start with libjpeg-turbo, as it'd be the simplest to work with (and I wouldn't have to worry about the BSD license). Hopefully this approach will go much better.

Parallella!

I just backed the kickstarter for Parallella today, and I have to say I'm very excited! To me, this is the right thing to do, done the right way. I really hope it succeeds! Not just so I can get a dev board, but because I want to see these chips proliferate and make it easier for people to do heterogeneous parallel programming.

My mind is racing with everything I want to do with it, and everything that is possible :D

Update: 10-16-2012 In their newest project update, the Parallella team decided on a "soft relaunch" in order to better sell the platform to a wider audience. I think it's a great idea. They asked backers to post what they plan to do with their dev boards so that the more exciting/credible ones could be featured on the front page to (as they said) "WOW" the non-programmers. I had already posted my plans, but thought I'd reproduce my ideas here and elaborate a bit more than I would in a comment thread.

I definitely plan on playing around with computational chemistry packages on my board. I currently use Quantum ESPRESSO and Gaussian on our small cluster in lab, but have been interested in running the open source packages on more accelerated hardware. This is one of the reasons this project caught my attention.
ESPRESSO and Octopus (another comp chem package with similar goals) have preliminary/development branches with CUDA- and OpenCL-accelerated backends, respectively, so getting them to use the Epiphany cores shouldn't be too difficult. Getting them to run well will be fun to experiment with ;]
Both packages mostly rely on their matrix math backends to offload the work to the GPU, so assuming I get these packages utilizing the Epiphany cores, it should be simple enough to get other computational packages using them too. I would likely start hacking on NWChem (another popular computational chemistry package).
All these packages support some form of cluster programming, through MPI in particular, so a Parallella board should be able to be dropped into an existing cluster (or form a small cluster of Parallella boards ;]).

I really don’t think this will be too difficult, and this gives me the chance to have a reasonable computational setup at home that won’t bankrupt me electricity-wise. Exciting!

…LibVA (vaapi) seems a good target as well. Writing a backend for it would allow the use of the cores for video decoding/encoding, without having to mess with the base package. This seems like it might be more effort (it looks like it involves reimplementing any encoding/decoding profiles to expose to libVA), but it would be an easy "drop in" solution that would immediately benefit mplayer/VLC/XBMC. This is a project I'd definitely be willing to work on….

I had this idea when people were discussing "out-of-box" and "media box" potential. I poked around the vaapi repo and realized the Epiphany could easily be a "backend" for it. This seemed like a bit more work, but I was willing to do it in order to generate some more excitement for the board. I actually resolved to start writing what I could with the documents they've already released. I'm hoping to get at least a skeleton project up before the relaunch this Friday.

The other, crazier idea I had was to make an LLVM backend for the Epiphany chips and use LLVM's JIT/runtime compilation capabilities to do interesting things, like dynamically enabling use of the Epiphany chip when available (much like Apple did with their OpenGL pipeline to enable software fallbacks for missing hardware features on the Intel GMA), or making it easier to write an optimizer that translates SIMD calls to the relevant Epiphany kernels (which should help accelerate quite a few things). For the latter, I'm sure there is a way to do it in GCC, but my impression is that the internals are not as modular, and I'd also lose the dynamic compilation possible with LLVM.

I have other even crazier ideas about what I’d do with dynamic [re]compilation, but I’ll save those for a separate post.

Bash one-liners: deleting unopened files in current directory

Gaussian '09 can leave behind piles of temp files when things fail. These build up and take up a good chunk of disk space. Usually I just do the house-cleaning when disk space gets low, but since some of the files may be in use, I cannot just clean the whole directory. I usually tinker until I get something that will "delete all unopened files", but I keep forgetting how I did it the time before (and I've probably done it differently each time…).
The current incarnation follows. This deletes any unopened files in the current directory:

for file in * ; do lsof "$file" > /dev/null || rm "$file" ; done

It's not perfect: it complains whenever rm hits a directory. Testing for directories feels unnecessary and annoying, and while I could silence rm with a 2> /dev/null, the paranoid me doesn't want to quiet all errors.
Any suggestions welcomed. My glob was originally `ls *.*`, but I thought that excessive, and it could miss files without a suffix.
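
I know I just called testing for directories annoying, but for my own future reference, restricting the loop to regular files would quiet rm without muting real errors (untested sketch):

for file in * ; do [ -f "$file" ] || continue ; lsof "$file" > /dev/null || rm "$file" ; done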

Misconceptions

I've been playing with and looking for libraries for a project I'm thinking about, and got to the problem of font rendering. All signs seemed to point to freetype2, but I was hesitant. I just wanted something lightweight (it only needs to render fonts), preferably BSD or similarly licensed, that would still give me good-looking results.

My impression was that the font system under the *nixes+Xorg was libxft working with freetype2, and that all those gross XML files were from freetype2. Thus I was worried I'd have to pull in all that cruft just to render some fonts. However, when I actually did some digging, I found out libxft is built on fontconfig (which is responsible for all the XML crap), and that freetype2 is actually a separate, minimalistic library that seems very well put together.
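
As a taste of how minimal it is, getting a rasterized glyph out of freetype2 takes only a handful of calls (a quick sketch; the font path is a placeholder):

#include <stdio.h>
#include <ft2build.h>
#include FT_FREETYPE_H

int main(void)
{
    FT_Library lib;
    FT_Face face;

    if (FT_Init_FreeType(&lib))
        return 1;
    if (FT_New_Face(lib, "/path/to/some/font.ttf", 0, &face))  /* placeholder */
        return 1;
    FT_Set_Pixel_Sizes(face, 0, 16);           /* request a 16px-tall size */
    FT_Load_Char(face, 'A', FT_LOAD_RENDER);   /* load and rasterize the glyph */
    /* face->glyph->bitmap now holds an 8-bit coverage map, ready to blit */
    printf("%dx%d bitmap\n", (int)face->glyph->bitmap.width,
           (int)face->glyph->bitmap.rows);
    FT_Done_Face(face);
    FT_Done_FreeType(lib);
    return 0;
}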

It just seems the right way to do it.

Please don’t learn to use a screwdriver

Let's face it, things have gotten out of control. There really should be no need for everyone to learn how to use a screwdriver. Most of our PCs come pre-assembled, and the ones that don't should only be assembled by people who know what they are doing. From my experience, people using screwdrivers can only contribute more overtightened screws to the world, or poor usage of screws. If my drawer breaks, I'm not going to try to fix it myself; I have no idea how drawers work, and I should leave the job up to someone who knows what they are doing.

An example: let's do some reductio ad ridiculum on this.
"If we don't learn to screw, we risk being screwed ourselves. Screw or be screwed." – Douglas Rushkoff

Parody aside, do I believe programming is a life skill everyone should be exposed to? Yes, definitely. Should we all be programmers? Probably not. I think Jeff makes some good points, but I feel his analogy is off. I also think his conclusion is elitist, and personally I support meritocracy over elitism. Let's employ the programmers who prove themselves. Let's use the good code, but also let's give everyone the tools to identify good code. Let's give everyone the chance to prove they are good coders. But most of all, let's help people learn how to solve problems, just like Jeff says in his article. The key point I differ on is that I support people learning to code in order to strengthen and supplement their problem-solving skills.

How do we solve problems if we don't learn how to use the tools? What's wrong with coding your own solutions to your own problems? Even if it's ugly code, it works for you, and that's all it needs to do. How many of you can honestly say your custom shell scripts are programming gems? (Mine sure aren't.) Do you want to argue that they solved problems other people are more qualified to solve, and that we should leave the job up to them?

I also find Jeff's article aggravating because I see it as further justification of an attitude I despise, especially in my co-workers. They only use software that does everything for them, and if it doesn't do what they want, they complain: "why didn't they just make it do XYZ?" "why didn't they do it this way instead of that way?" "Oh! I'm going to pay hundreds of dollars for this software because it will normalize my graphs for me!" If only they realized that with the proper tools, they could fix it them-damned-selves. They don't need to be master programmers to normalize a set of graphs; they just need to write a stupid formula (and being chemistry grad students, they'd damned better know how to mathematically normalize something). Perhaps with some knowledge of variables and assignment, they could make a macro to apply to data sets, or something! Instead of whining and waiting for the next black box to descend from the higher-ups, they could do it themselves.

Conkeror!

After posting my last entry, I saw an old post of mine about moving to Midori from Chromium. And well…. that didn’t last ;]

I still like Midori, and I still like webkit, but I found myself making Midori more and more… Emacs-like. That's when I finally came across Conkeror. My partner is disgusted that I'm back on a Firefox-based browser (he's scarred from the memory leaks on Windows…), but frankly I don't mind it so much. I don't have to deal with as many headaches from incompatible sites, and the work-flow and configuration are just what I was looking for. It also works out for moving from machine to machine (my current machines bounce me between Linux and 3 different *BSDs): most have a [reasonably] recent version of Firefox/Xulrunner, and since Conkeror is written as a javascript frontend, I can install it in my home directory and carry it with me to whatever machine I'm on.

I haven't done anything crazy that isn't already well documented on the wiki, except for the launcher script (nothing special really; it just calls 'exec firefox -app (location of conkeror)').
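
For completeness, the whole script is about this (the application.ini path is wherever your conkeror checkout lives):

#!/bin/sh
exec firefox -app "$HOME/conkeror/application.ini" "$@"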

And at the risk of parroting myself from before: I am aware of Uzbl, and theoretically it seems like an ideal browser, but I'm still not there ;] Maybe someday…
