
From now on, I’m only buying Nexus phones…

tl;dr: I will not be buying another LG phone. Their new firmware image locks down the system and prevents us from installing the software we want on our phones. The ability to easily/legitimately do this was a major reason I bought the phone in the first place.

Warning, this is not a well-researched article. It’s a rant. No links (yet), and lots of suppositions regarding motivations. However, the feelings are bubbling forth and need an outlet. Hopefully publicly making my point will change… something?

I absolutely hate the current trends in “smart” phones. Phone manufacturers seem to implement flawed software “features” and load phones up with proprietary bloatware in an attempt to edge out the competition. I would call these perceived or false features: gimmicks meant to entice consumers, typically neither difficult to implement nor particularly useful. LG’s “quicknotes” is simple, and other apps already implement that functionality. Samsung’s eye-tracking scrolling is more difficult to implement, but I consider it a gimmick too. It’s also based on image-recognition research and software techniques developed by many other people (but I bet Samsung has a patent related to using this “feature” in phones/mobile devices). So now, if someone wants to improve on this feature, even with a clean-room implementation, the patent would prevent them from releasing their improved version. Holding back progress in the name of supposed profit.

I suppose I understand why: strip the software away and the hardware is pretty much equivalent across the board. Most phones use the same base components! There’s not much room for innovation there, so they attempt to do it in software, or at least fake it. It’s much easier to practice anti-competitive vendor lock-in than to actually innovate: lock the bootloader and withhold root access to prevent the use of third-party firmware or modifications to the system. We’ve seen this before with Microsoft, and after a long fight it seemed we had won out against this type of anti-competitive behavior.

The tin-foil hat portion of me also suspects even more malicious motivations. Much like the Superfish incident with Lenovo (this is the best summary I can find so far, but all of these write-ups argue that “hackers” could exploit this, when in fact the Superfish program was actively exploiting it, and could have been monitoring your encrypted traffic and doing nefarious things): if users could remove this pre-installed software, how would they monitor your encrypted traffic in order to insert ads? Certainly a lucrative endeavor, and not much to stop them from spying on that encrypted traffic…

I love Android for its open source base, and I try to run as much of an open source stack as I [reasonably] can. I love what this has done for the mobile ecosystem, and I recall the early days of Android containing much more drastic clashes between developers and manufacturers attempting to hold onto an outdated development model. I’ve been a consistent Cyanogenmod user since my Motorola Droid. It’s quite amusing how software written by a group of volunteers can be much more useful and stable than the closed-source products an isolated team rushes out (likely trying to make a deadline). Bugs and errors in those products cannot be traced easily by end-users or outside developers, leaving that team as the sole source of fixes. Their solutions can’t be verified by others, and this lack of transparency can result in hackish and ineffectual fixes.

Anyways! This leads into my story: I loved my Galaxy Nexus. I held onto it for a long time, waiting for a non-Qualcomm based phone to hit US markets (that’s a rant for another day!). The Nexus line of phones is wonderful: developer-friendly phones with an unlocked, standard bootloader, capable of flashing [almost] anything you want (note: this doesn’t cover the modem firmware and early-stage bootloaders… but one battle at a time).

Sadly, it was beginning to show its age. Cyanogenmod dropped support for it, right at the critical point when APN support broke. My phone had become unusable on Verizon’s network, and no tweak I tried was working. I could have backported patches as others had done, but that was more work than I was willing to put in for an unsupported phone. Downgrading wasn’t an option either, since various security exploits were being patched as well. It was unavoidable… I needed a new phone.

I used this chance to move to T-mobile. I had wanted to do this for a long time and for many reasons (such as their position on net neutrality, customer-friendly practices, standard global GSM frequency use, and so forth), but also in the hopes of being able to use those wonderful international phones. However, they can be quite expensive when not offset by plans/contracts…

With the worst possible timing, my decision to settle for a Qualcomm based phone came right as all the carriers dropped the Nexus 5 I was eyeing and moved to exclusively carrying the unwieldy Nexus 6. I could still buy the Nexus 5… for $450 I didn’t have.

Investigating the options available to me was depressing. I was lucky to be moving to T-mobile, as they don’t require a locked bootloader (a position Verizon and AT&T have taken), but that didn’t mean it was going to be easy to root and flash my phone. My first priority was to support any manufacturer that allowed one to root their phone/unlock the bootloader. At the time that was only HTC (wow! remember when HTC was the worst Android phone manufacturer? I recall they were very reluctant to comply with the GPL for the Linux kernel in particular). However, unlocking meant creating an account, sending them the device ID, and getting a custom unlock key. That set my tin-foil hat alarm off. The Samsung phones could be unlocked using a reverse engineered utility that accesses their [custom] flashing method. The utility was open-sourced by the developers who reverse engineered it (yay!), but I didn’t grok Samsung’s ecosystem and stance here. I’d rather see a standard protocol used.

Lastly, LG. I couldn’t discern their stance here. The Nexus 5 was an LG phone and wonderful. The bootloaders on T-mobile are unlocked, and the G3 could be rooted by accessing a standard recovery mode. However, this wasn’t documented anywhere by LG, and others suspected it to be an oversight on their part.

I had started a new position, was tight on money, and the LG G3 was cheaper than the HTC M8. So, I let the guy talk me into it. The LG firmware was… okay, I guess? I quickly replaced it with Cyanogenmod, and it was awesome! I love this phone (with Cyanogenmod, of course).

My partner also picked up a G3 at the same time. He holds similar views as I do, but is more… pragmatic. There was no stable release of Cyanogenmod for this phone yet, and I was experiencing the occasional crash (which was being reported as originating from one of the few binary blobs the phone required to function, just saying). He decided to wait until things stabilized. However, he found the LG firmware extremely annoying. Sporadic behavior, and annoying “features” getting in his way. He decided to update to the new Lollipop image hoping it fixed these problems.

It did not. It made things worse, and he’s reset his phone multiple times trying to remedy the situation. With my endorsement that Cyanogenmod was much more stable, he finally decided to root his phone and install it.

Surprise! The bootmode used to root the phone is absent in the new image. Searching revealed the only option was to downgrade to the earlier stock image through a method I find absolutely revolting. The phone must be put into a download mode (this seems to be an LG-only protocol, as opposed to the more standardized fastboot protocol), and can only be flashed using a Windows flashing program (whose origins I cannot determine), a Windows DLL for the driver (origins also indeterminate), and a firmware image that we can’t download from reputable sources.

No. No. No, and No. Huge buyer’s remorse. It certainly seems the bootmode was an oversight, and they were quick to close it and prevent people from re-enabling it.

If this is how they do things, I will not be buying another LG phone. Which is a shame, they make great hardware! That’s one of the reasons I bought it in the first place. But their priorities seem to be protecting their right to disappoint users with a horrible software experience, and no option to opt-out.

I was contemplating using my Jump! program to pay out and change to an HTC M8 next year (yay T-mobile!), but this experience has left me bitter, and determined to use only Nexus phones from now on, where I know something like this won’t happen. Now I just need to wait for the replacement for that awful phablet Nexus 6…

All of this is possible because it is not transparent to most end-users. So, I implore the two people who will read this to purchase their next phone consciously. In this case HTC and any Nexus phone are the clear victors.


ARM Boards and Me

After a long time waiting and hoping, I finally received my Parallella board! …to be reminded I didn’t get the accessory pack and that it runs as hot as a stove (damnable Zynq chip…). I was able to boot it up on USB power, and keep it cool with a box fan I had nearby, but of course this is not workable. I ordered power supplies and small 5V fans from Jameco, but they will not be here until next week >:/ In the meantime, I thought I should write down what I’ve been doing so far in my quest to use ARM boards for my personal and professional needs.

For ARM boards in general, ideally I would like to run Octopus and NWChem (two of my favorite computational packages), and also to have them use an ARM optimized BLAS/LAPACK implementation. With regard to graphics, the mobile GPUs in ARM SoCs all (except for Nvidia’s newer K1 I think) implement OpenGL ES as opposed to full “desktop” class OpenGL, necessitating some porting to run most of the software I use on my laptop.

Working on these was why I purchased the Odroid U3. In hindsight, I probably should have gotten the XU… (I decided to hold out for the XU2, which I’ve since learned _may_ come out Q3…). With the U3 I misjudged the progress of Lima and the state of the Mali blobs, underestimated how much I’d want OpenCL support, didn’t realize the XU had better upstream/FDT support, and didn’t realize the Cortex A15s have PAE (which would have allowed me to natively compile Firefox… except ‘ld’ is still 32-bit, so never mind :P). That being said, for the price and size, I love my U3. I’ve been casually splitting my time between making it “personal use”-able and having it run as a computational node (so I don’t have to keep running my enormous 200W rack mount server). I’m not quite there in either regard. For personal use the two main detractors are some quirkiness in the xf86-video-fbturbo driver (it occasionally doesn’t properly dirty a region, which can lead to some display corruption), and the lack of GLES support in applications. The GLES work is mostly just taking the time to recompile packages with GLES support and without glx, and then making sure they properly find the Mali blobs. However, the show stopper is Avogadro.

The newest release of Avogadro needed a small patch to compile on ARM, and it runs with software glx support. However, if I wish to retain my sanity I’ll need GLES acceleration. I haven’t made much progress in this regard, other than learning that Qt seems to have good GLES support and that good OpenGL 2.0 code using only VBOs is “GLES compatible,” but I have not gone digging in Avogadro yet to check either case. The hurdle (aside from time) is the development of Avogadro 2 (a rewrite with a much cleaner design), and my inability to decide which one to devote time to ripping apart.

Overall, the “personal use” aspect of the ARM boards isn’t that hard to manage, thanks to Arch Linux ARM :D With FDT usage becoming more widespread, and emphasis on upstreaming device drivers, it seems as if it’s becoming easier to get good old Linux running on these boards as well.

The computational chemistry packages, on the other hand, have been giving me some trouble. In anticipation of my U3, I went ahead and finished my efforts to package Octopus and NWChem for Arch. Octopus was quite happy to compile and pass its test suite with the same package, using the default netlib-derived BLAS/LAPACK. NWChem also compiled successfully; however, my attempts to actually _run_ NWChem have been unsuccessful and result in something similar to:

MA_verify_allocator_stuff: starting scan ...
stack block 'gai_diag_std:z', handle 1, address (.......):
current right signature (#########) != proper right signature (*******)

(zsh) Segmentation Fault

The segfaulting seems to be a problem with the Global Arrays memory allocation, but nothing I have tried seems to work (OpenMPI vs MPICH, “internal” ARMCI vs ARMCI-MPI, reducing the -O level, internal vs external BLAS/LAPACK, and all permutations therein). The only other mention of a similar error I’ve seen came from a forum post about compiling on BlueGene/Q, and the solution there was to link against the optimized BLAS/LAPACK libraries. However, this is another problem, as I have not been able to get a successful compilation of OpenBLAS on my U3. Granted, ARM support in OpenBLAS is very new, but every permutation of options I attempt gives a library that causes Octopus to fail its test suite (and attempting to use these with NWChem does not help either). My other option is ATLAS, but I have no experience with it and have heard compilation/installation is a bear. I’ve also recompiled NWChem so much I just had to take a break from it.
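For reference, the sort of OpenBLAS build I’ve been attempting looks roughly like this (a hedged sketch, not a known-good recipe; the TARGET is my assumption for the U3’s Cortex-A9 Exynos, and I build single-threaded first to rule out threading problems):

# hypothetical OpenBLAS build for the U3; TARGET and flags are assumptions
make TARGET=CORTEXA9 USE_THREAD=0 FC=gfortran
make PREFIX=/usr install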

This is when I remembered I want these packages to be able to use the Epiphany chip as well! Octopus has OpenCL support, so I just need to have it use Coprthr for some easy Epiphany acceleration. However, not wanting to melt my board, I’ve been focusing my efforts on getting Coprthr to compile on Arch and package well (and on testing whether Octopus can use libocl). Compilation fixes for Arch Linux were not too difficult, but Coprthr is not quite a good team player when it comes to packaging: it hardcodes a lot of paths, supports a “prefix” but not other options (datadir, etc.), and has no DESTDIR support. Back when Epiphany support in Coprthr was announced I had actually added DESTDIR support and made Coprthr package very neatly. I never submitted a pull request. I can’t remember why, and I am kicking myself, as attempting to merge those changes now has been more effort than just starting from scratch… Coprthr also has a “test” target, but it tests the software after it’s installed, which is not ideal for a “check” function in a package. I’ve been patching the testsuite to use the local files, but it’s like peeling an onion and finding more and more stuff to fix/patch/tweak. Hopefully I can finish this before those fans arrive ;]
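The payoff, once DESTDIR works, is that the Arch package() function collapses to the usual one-liner (a sketch only; the directory names are assumptions, not Coprthr’s actual layout):

# sketch of a coprthr PKGBUILD package() once DESTDIR is supported
package() {
    cd "$srcdir/coprthr-$pkgver"
    make DESTDIR="$pkgdir" install
}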

There’s certainly plenty to do ;] And once my fans arrive I plan to have a lot of fun playing with the eSDK (because OpenCL is kind of a boring way to program the Epiphany ;])

My new setup: One reason why I love DragonFlyBSD

I recently switched to DragonFlyBSD as my main OS, and one of my favorite new things I’ve done has been my mirroring setup.

I currently use mercurial to manage my home directory. It works out pretty well: I have an alias that automatically stages changes, commits with an automatic message, and then pushes the changes to an SD card I keep inserted. This also lets me back up my home to our home server and/or update my home on my other PCs using an ssh:// url in hg push. The added benefit of merging changes when I switch machines was the main motivation behind this setup. (I was thinking of posting about this setup, but it’s not too different from what pretty much everyone else posts on this subject.)
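For the curious, the alias boils down to something like the following (written as a shell function; the SD card and server paths here are placeholders, not my actual layout):

# rough sketch of my home-sync alias; remotes/paths are made up
homesync() {
    hg addremove
    hg commit -m "auto-sync: $(date '+%Y-%m-%d %H:%M')"
    hg push /mnt/sdcard/home-hg
    hg push ssh://backupserver//srv/backup/home-hg
}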

Some downsides? Large binary files take a lot of memory to manage. Before we upgraded the memory in our backup server, I was using it all up and making mercurial bail. That pushed me to keep an untracked directory, which is a mess, and I have to rsync it alongside the pushes. I also have to prune the tree history when it gets too big. For moving between machines this is still the best option I’ve found, but for backups it can be a little cumbersome.

With HAMMER on my laptop and backup server, I had another option: mirror-copy! I simply made a separate pfs for my home directories on the server and laptop, and called mirror-copy with the remote url. This gives a filesystem-level mirror on our server, including the automatic snapshots I already have on the laptop’s pfs (which get automatically pruned).
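Roughly, the setup looks like this (the paths, hostname, and uuid are placeholders for illustration):

# on the laptop: home lives on its own PFS master
hammer pfs-master /pfs/home
# on the server: a slave PFS sharing the master's uuid receives the mirror
hammer pfs-slave /backup/pfs/home shared-uuid=<uuid-of-the-master>
# from the laptop: filesystem-level mirror over ssh
hammer mirror-copy /pfs/home user@server:/backup/pfs/home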

Some downsides to this approach: the pfs-slaves are read-only, so this is really only good for backup purposes (although I might try some mount_union tricks for when I log into the server), and it of course only works with systems that use HAMMER (currently only DragonFlyBSD, as far as I know).

Wait, why not ZFS or BTRFS? Good question. It never occurred to me to use BTRFS this way when I was running Linux, and I never bothered with ZFS on my FreeBSD machine. Technical/concrete comparisons aside, HAMMER feels really well put together and very cohesive.

Status Update: libva-epiphany-driver

On the off chance someone actually looked at my github page and followed my info here…
I had been crunching to get a libva skeleton driver up and running; my initial hope was to generate excitement for the Parallella Kickstarter. After I failed to get it done in time (but it got funded! yay!), I kept crunching to get a working demo up (to keep people excited). However, I hit a brick wall trying to debug my Huffman decoding routine, and quickly lost focus as my research drew my attention away. I had more in-progress work, including functional DRI output, that I hadn’t committed because I was trying to debug that routine.

I keep meaning to go back to it, but I still haven’t had any mental breakthroughs. Therefore, I decided to just go ahead and commit what I had. The skeleton driver works, it just doesn’t do anything :P

For some reason, I had insisted on coding the codecs from scratch (part pride, part licensing, etc…), but now I’m feeling more pragmatic. Therefore, I’ve decided to do a few things:

  1. Use the libjpeg-turbo source as a reference and quickly finish up the JPEG decoding routines for the demo (concede pride).
  2. Approach the problem differently! (possibly concede licensing).

The “problem” was one I made for myself: libva acts as a mediator between applications and accelerated hardware, and it’s on the hardware that the codecs are implemented. The libva driver gets requests and handles setting up and communicating with the hardware, shuffling data around and such. I knew I’d have to implement the codecs somehow, but foolishly decided to implement them in the driver.

I’ve recently decided to focus my efforts on porting existing codecs/libraries to utilize Epiphany, then just have the libva-epiphany-driver as a host program that loads the separate programs onto Epiphany. This should have the benefit of reducing my workload, simplifying libva-epiphany-driver, and making it possible to receive the benefits of my porting in non-vaapi applications. And of course, porting existing projects and contributing upstream will be better overall (upstream benefits from wider adoption, I benefit from upstream contributions, etc.).

I decided to start with libjpeg-turbo, as it’d be the simplest to work with (and wouldn’t have to worry about the BSD license).  Hopefully this approach will go much better.


I just backed this kickstarter today for Parallella, and have to say I’m very excited! To me, this is the right thing to do, done the right way. I really hope it succeeds! Not just to get a dev board, but I want to see these chips proliferate and make it easier for people to do heterogeneous parallel programming.

My mind is racing with everything I want to do with it, and everything that is possible :D

Update: 10-16-2012 In their newest project update, the Parallella kickstarter has decided on a “soft relaunch” in order to better sell the platform to a wider audience. I think it’s a great idea. They asked people to post about what they plan to do with their dev boards, so the more exciting/credible ideas can be featured on the front page to (as they said) “WOW” the non-programmers. I had already posted what I plan to do, but thought I’d reproduce my ideas here and elaborate a bit more than I would in a comment thread.

I definitely plan on playing around with computational chemistry packages on my board. I currently use Quantum ESPRESSO and Gaussian on our small cluster in lab, but have been interested in running the open source packages on more accelerated hardware. This is one of the reasons this project caught my attention.
ESPRESSO and Octopus (another comp chem package with similar goals) have preliminary/development branches with CUDA/OpenCL accelerated backends respectively, so getting them to use the Epiphany cores shouldn’t be too difficult. Getting them to run well will be fun to experiment with ;]
Both packages mostly rely on the matrix math backends to offload the work to the GPU, so assuming I get these packages utilizing the Epiphany cores, it should be simple enough to get other computational packages to use them as well. I would likely start hacking on NWChem (another popular computational chemistry package).
All these packages support some form of cluster programming through MPI in particular, so a Parallella board should be able to be dropped into an existing cluster (or make a small cluster of Parallella boards ;]).
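Concretely, “dropping in” mostly means listing the boards in an MPI hostfile (the hostnames, slot counts, and input file below are made up for illustration):

# hypothetical hostfile for a small cluster of Parallella boards
cat > hosts <<EOF
parallella01 slots=2
parallella02 slots=2
EOF
mpirun --hostfile hosts -np 4 nwchem input.nw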

I really don’t think this will be too difficult, and this gives me the chance to have a reasonable computational setup at home that won’t bankrupt me electricity-wise. Exciting!

…LibVA (vaapi) seems a good target as well. Writing a backend for it would allow the use of the cores for video decoding/encoding, and do so without having to mess with the base package. This seems like it might be more effort (it looks like it involves reimplementing any encoding/decoding profiles exposed through libVA), but it would be an easy “drop in” solution that would immediately benefit mplayer/VLC/XBMC. This is a project I’d definitely be willing to work on….

I had this idea when people were discussing “out-of-box” and “media box” potential. I poked around the vaapi repo and realized the Epiphany could easily be a “backend” for it. This seemed like a bit more work, but I was willing to do it in order to generate some more excitement for the board. I resolved to start writing what I could with the documents they’ve already released. I’m hoping to have at least a skeleton project before the relaunch this Friday.

The other, crazier idea I had was to make an LLVM backend for the Epiphany chips, and use LLVM’s JIT/runtime compilation capabilities to do interesting things like: dynamically enable use of the Epiphany chip if available (much like Apple did with their OpenGL pipeline to enable software fallbacks for missing hardware features on the Intel GMA), or make it easier to write an optimizer that translates SIMD calls into the relevant Epiphany kernels (which should help accelerate quite a few things). For the latter, I’m sure there is a way to do it in GCC, but my impression is that the internals are not as modular, and I’d also lose the dynamic compilation possible with LLVM.

I have other even crazier ideas about what I’d do with dynamic [re]compilation, but I’ll save those for a separate post.

Bash one-liners: deleting unopened files in current directory

Gaussian ’09 can leave behind piles of temp files when things fail. These build up and take a good chunk of disk space. Usually I just do the house-cleaning when disk space gets low, but as some files may be in use, I cannot just clean the whole directory. I usually tinker until I get something that will “delete all unopened files”, but I keep forgetting how I did it the time before (and I’ve probably done it differently each time…).
The current incarnation follows. This deletes any unopened files in the current directory:

for file in * ; do lsof "$file" > /dev/null || rm "$file" ; done

It’s not perfect, as it complains when rm tries to delete directories. Testing for directories is unnecessary and annoying. I could silence rm with a 2> /dev/null, but the paranoid me doesn’t want to quiet all errors.
Any suggestions welcomed. My glob was originally `ls *.*`, but I thought this excessive and potentially able to miss files without a suffix.
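For reference, the directory-test variant I keep talking myself out of would look something like this:

for file in * ; do [ -f "$file" ] || continue ; lsof "$file" > /dev/null || rm "$file" ; done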


I’ve been playing with and looking for libraries for a project I’m thinking about, and got to the problem of font rendering. All signs seemed to point to freetype2, but I was hesitant. I just wanted something lightweight (that only renders fonts), preferably BSD or similarly licensed, but that would still give me good-looking results.

My impression was that the font system under the *nixes+Xorg was libxft working with freetype2, and that all those gross XML files were from freetype2. Thus I was worried that I’d have to pull in all that cruft just to render some fonts. However, once I actually did some digging I found out libxft is built on fontconfig (which is responsible for all the XML crap), and that freetype2 is actually a separate, minimalistic library that seems very well put together.

It just seems the right way to do it.

Please don’t learn to use a screwdriver

Let’s face it, things have gotten out of control. There really should be no need for everyone to learn how to use a screwdriver. Most of our PCs come pre-assembled, and the ones that don’t should only be assembled by people who know what they are doing. From my experience, people using screwdrivers can only contribute more overtightened screws to the world, or poor usage of screws. If my drawer breaks I’m not going to try and fix it myself; I have no idea how drawers work, and I should leave the job up to someone who does.

As an example, let’s do some reductio ad ridiculum on this:
“If we don’t learn to screw we risk being screwed ourselves. Screw or be screwed.” – Douglas Rushkoff

Parody aside, do I believe programming is a life skill everyone should be exposed to? Yes, definitely. Should we all be programmers? Probably not. I think Jeff makes some good points, but I feel his analogy is off. I also think his conclusion is elitist, and personally I support meritocracy over elitism. Let’s employ the programmers who prove themselves. Let’s use the good code, but also let’s give everyone the tools to identify good code. Let’s give everyone the chance to prove they are good coders. But most of all, let’s help people learn how to solve problems, just like Jeff says in his article. The key point I differ on is that I support people learning to code in order to strengthen and supplement their problem-solving skills.

How do we solve problems if we don’t learn how to use the tools? What’s wrong with coding your own solutions for your own problems? Even if it’s ugly code, it works for you and that’s all it needs to do. How many of you can honestly say your custom shell scripts are programming gems? (Mine sure aren’t) Do you want to argue that they solved problems that other people are more qualified to solve and we should leave the job up to them?

I also find Jeff’s article aggravating because I see it as further justification of an attitude I despise–especially in my co-workers. They only use software that does everything for them, and if it doesn’t do what they want, they complain: “why didn’t they just make it do XYZ?” “why didn’t they do it this way instead of that way?” “Oh! I’m going to pay hundreds of dollars for this software because it will normalize my graphs for me!” If only they realized that with the proper tools, they could fix it them-damned-selves. They don’t need to be master programmers to normalize a set of graphs; they just need to make a stupid formula (and being chemistry grad students, they damned better know how to mathematically normalize something). Perhaps with some knowledge of variables and assignment, they could make a macro they can apply to data sets, or something! Instead of whining and waiting for the next black box to descend from the higher-ups, they could do it themselves.
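To be concrete, normalizing a two-column data file to its maximum y-value is a throwaway one-liner (a sketch; I’m assuming “x y” rows in a file called data.dat):

awk 'NR==FNR { if ($2 > max) max = $2; next } { print $1, $2 / max }' data.dat data.dat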


After posting my last entry, I saw an old post of mine about moving to Midori from Chromium. And well…. that didn’t last ;]

I still like Midori, and I still like webkit, but I found myself making Midori more and more… Emacs-like. That’s when I finally came across Conkeror. My partner is disgusted that I’m back on a Firefox-based browser (he’s scarred from the memory leaks on Windows…), but frankly I don’t mind it so much. I don’t have to deal with as many headaches from incompatible sites, and the work-flow and configuration are just what I was looking for. It also works out well for moving from machine to machine (my current machines have me bouncing between Linux and 3 different *BSDs): most have a [reasonably] recent version of Firefox/Xulrunner, and Conkeror is written as a javascript frontend, so I can install it in my home directory and carry it with me to whatever machine I go.

I haven’t done anything crazy that isn’t documented well on the wiki, except for the launcher script (nothing special really, it just calls ‘exec firefox -app (location of conkeror)’).
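In full it amounts to this (the path to conkeror’s application.ini is a placeholder for wherever your checkout lives):

#!/bin/sh
# trivial conkeror launcher; the application.ini path is a placeholder
exec firefox -app "$HOME/conkeror/application.ini" "$@"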

And what may be me parroting myself from before: I am aware of Uzbl, and theoretically it seems like an ideal browser, but I’m still not there ;] Maybe someday…

Launching Programs: dmenu_run and process groups?

Update 06-May-2012: I’ve been reading the mailing list to see if this has come up before, and there is a lengthy thread on improving dmenu_run. From what I can tell, the proposed improvements all had some sort of “exec” call in them (though that did not seem to be the focus of the discussion). Looking through the revisions, it looks like dmenu_run did use ‘exec’ on a shell at one point, but this was changed for the purpose of “disowning the child shell.” The only thing I can think is that perhaps this is due to a difference in how dmenu_run is invoked…

I just moved back to i3 from wmii, and after using it for a bit, I noticed that programs launched with dmenu_run all ended up running under zsh (my SHELL) in pstree.

I remember wmii would launch programs as their own processes. I like zsh for interactive use, but it is a bit heavyweight for just spawning processes. Luckily, dmenu_run uses ${SHELL} for launching, so I just made my keybinding launch “SHELL=dash dmenu_run …” instead of plain “dmenu_run …”. This worked nicely: it used dash to launch the program and took up much less memory.
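In the i3 config that amounts to something like this (the keysym is just whatever you already bind dmenu to):

# launch dmenu_run with a lightweight shell doing the spawning
bindsym $mod+d exec SHELL=/bin/dash dmenu_run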

But then I wondered: why would dmenu_run launch programs this way? Wouldn’t it be a better idea to launch an application in its own group, say with the ‘exec’ builtin? The only reason I could think of not to is losing the ability to use dmenu_run as a kind of “one-off” shell. I’m not an expert on ‘exec’, but I tested it out and it still gave the same functionality. So I made a modified dmenu_run, called dmenu_exec, that just prepends “exec” to whatever it pipes to the shell. So far it seems to be working as expected, and it still allows me to use dmenu_run as a “one-off” shell. And because I’m still piping this to a shell to run, I left the SHELL=dash bit in my keybinding.

From what I understand about process groups, this seems to be a better way to launch programs, but as it isn’t what the dmenu developers did (and they seem like a pretty smart bunch), I wonder if there is something I’m missing. I’m using this for now, but any comments would be appreciated.

For reference, dmenu_exec in full (essentially stock dmenu_run with the exec change):

cachedir=${XDG_CACHE_HOME:-"$HOME/.cache"}
if [ -d "$cachedir" ]; then
    cache=$cachedir/dmenu_run
else
    cache=$HOME/.dmenu_cache # if no xdg dir, fall back to dotfile in ~
fi
(
    IFS=:
    if stest -dqr -n "$cache" $PATH; then
        echo "exec $(stest -flx $PATH | sort -u | tee "$cache" | dmenu "$@")"
    else
        echo "exec $(dmenu "$@" < "$cache")"
    fi
) | ${SHELL:-"/bin/sh"} &