Katadyn Pocket Water Filter

During the summer of 2012, I bought a Katadyn Pocket water filter. It took a bit of research, but in short order the decision to buy this model over just about any other was clear: most water filters seem to have a capacity of a few hundred gallons, or maybe up to 1,500 gallons; the Katadyn Pocket has a capacity of up to 13,000 gallons (50,000 litres). Given the price difference (anywhere from $75 to $250 for most of the rest, versus $300 to $350 for the Katadyn Pocket), there was little to decide.

The unit has a 0.2 micron ceramic filter impregnated with silver, which acts as a bacteriostatic agent, although you have to be careful about that (see below).

The only thing that bugs me a little bit is that it’s a filter only (albeit a very good one), not a purifier. Unfortunately, the purifiers don’t have the capacity that this filter has. In practice, this means that the unit can effectively remove all bacteria and cysts from water — and of course cloudiness — but in principle it can’t remove viruses, which are far smaller than the pore size (unless they electrostatically attach themselves to a particle large enough to be filtered out). It also means that it doesn’t remove any other contaminants smaller than 0.2 microns: the usual nasties such as dissolved heavy metals and pesticides, as well as the more benign but nonetheless undesirable tastes, odours and colours that aren’t due to cloudiness.

There are two ways of dealing with these issues:

1) Choose a clear water source, one that you might be tempted to drink without treating at all (your natural “yuck” factor will help you here); this reduces the likelihood that these are problems to begin with. Granted, most people, myself included (and I’m a trained water techie), can’t just look at clear water and tell whether it’s contaminated by the poop of 30 deer 100 feet upstream, or the dumpings of some illegal leather tanning shop 200 feet upstream. But, generally, you can tell the difference between clear, odourless, running water in the middle of the woods far away from just about anything, and stagnant, cloudy, smelly water in the ditch surrounding a garage.

2) Carry a small bottle filled with bleach and an eye dropper (*). I find that, depending on the water source and the strength of the bleach (typically 4% to 6% sodium hypochlorite), 1 to 3 drops per imperial gallon (4.5 litres) of filtered water has worked well. Melted snow from my cottage could do with 1/2 drop per imperial gallon, given that the bleach taste still often comes through quite distinctly on such (presumably) relatively pure water. As a reference, the USEPA (here’s my archive) recommends using two drops per quart when disinfecting untreated water with bleach, or about 8 drops per US gallon (3.78 litres), or about 9 drops per imperial gallon (4.5 litres).
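For those who’d rather check the arithmetic than trust my drop counts, the USEPA rate scales linearly with volume. A quick sketch (the only inputs are the conversion factors; this is arithmetic, not dosing advice):

```shell
# Scale the USEPA rate (2 drops per quart, 1 quart ≈ 0.946 L) to other volumes.
rate=$(awk 'BEGIN { printf "%.3f", 2 / 0.946 }')   # drops per litre ≈ 2.114
for vol in 3.78 4.50; do                           # US gallon, imperial gallon
  awk -v r="$rate" -v v="$vol" 'BEGIN { printf "%s litres: %.1f drops\n", v, r * v }'
done
```

This prints 8.0 drops for the US gallon and 9.5 for the imperial gallon, which lines up with the figures above (the imperial-gallon number rounds to 9 or 10 depending on your taste for bleach).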

(*) This won’t deal with a bunch of dissolved metals, and can’t completely deal with tough contaminants, so choose your water source carefully!

Now, putting aside that I’m a water techie, why would I, who stopped being involved in Scouting and most forms of camping and hiking in 1999, need such a device?

The family cottage doesn’t have running water in the winter, and I usually spend a week there over Christmas, plus typically a weekend a month, while the water is off. I’ve been getting tired of carrying up big jugs filled with water, tired of running out of water or at least having to be careful about how I use it, and, particularly, tired of depending on a few neighbours for their goodwill. The operative notion here is “depending”; a lack of goodwill is not the issue, but whether two of the immediate neighbours will be around at any given time is a real question, alongside the inconvenience of having to go out for clean drinking water in the middle of washing dishes.

One of the first things I had to figure out the hard way is the importance of keeping the unit clean (go figure, a water techie needing to be reminded of the importance of keeping drinking water treatment equipment clean): over almost two weeks in the summer, I used it three times, and ended up with a good case of diarrhea that took a couple of weeks to clear up. So, a note to myself, and to anyone considering buying any camping water filter: keep the unit, and the outlet hose in particular, clean — it can get contaminated easily — and when you’re going to leave it sitting around for more than a day or two, or pack it away for a while, run a bleach solution through it first and dry it out.

So, does the filter work? And do I get the runs any more?

Of course, and of course not.

This year over Christmas, I found it quite useful, although I did bring up a good supply of water to begin with anyway: I was coming up for a week, the long-awaited testing grounds had finally arrived, and I needed some kind of starting point in case “making” water turned out to be a lot more work than I’d bargained for, especially given all the freezer cooking (and therefore dish washing) I do over that period.

I also confirmed what I had begun observing over years of melting snow for things that didn’t require drinking-quality water: you’d be surprised how much dirt and debris comes through when melting “pristine” snow in the middle of cottage country, far away from the city. It’s only a little more appetizing for drinking, cooking or rinsing the dishes than dishwater — so of course I don’t bother filtering the melted snow for the “put the dirty dishes in hot soapy dishwater” part, but I do use filtered water for rinsing the dishes.

Regarding the amount of bleach to use, I have found that the filtered water from melted snow needs about a drop per imperial gallon, while the filtered lake water can handle about two drops per imperial gallon, before a distinct bleach taste comes through. This is based on a little testing with the filtered water, after having first consulted some tables on how many drops of bleach per litre to use for treating water (here’s my archive), instead of just calculating it myself.

Regarding the “one litre per minute” claim: it mostly works out to that, sort of, I guess — which means, not really. I suspect the claim is based on filling the one litre or one quart water bottles commonly used in camping and hiking circles; for larger amounts of water, it takes longer. The best test I’ve had — filtering water while being distracted by the TV at the cottage isn’t much of a test — was when I recently filtered about 23 litres of melted snow, undistracted, for a future batch of beer, and it took me about 40 to 45 minutes. That admittedly, but importantly, included a stop about halfway through to open up the unit and clean the ceramic filter, which had become dirty enough from the “clean” snow I’d melted that filtering had become difficult. A comparison between the 100 metre and the 3,000 metre races in the Olympics would be apt: you sprint the former, but you pace yourself at a somewhat slower speed in the latter so as not to tire right away, so that you can make it to the end of the race.
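For what it’s worth, the effective rate for that batch is easy to compute (numbers from the paragraph above; 42.5 minutes is just the midpoint of my 40 to 45 minute estimate, cleaning stop included):

```shell
# 23 litres of melted snow in roughly 42.5 minutes, cleaning stop included.
awk 'BEGIN { printf "%.2f litres per minute\n", 23 / 42.5 }'
```

So, roughly half the advertised litre per minute over a long run.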

Anyway, I like the filter, and it should get several years’ worth of use before I have to start thinking about buying a replacement filter cartridge.

Update 08 June 2016: Katadyn water filter capacity — update

Installing, then removing, the Mate desktop

Back in mid-January 2013, I switched to the Mate desktop. I can’t figure out whether I changed because I was getting tired of Gnome 3 and all the complaints surrounding it (including my own) and merely fell over the cliff when an imp — in the form of my brother wanting me to install Fedora 18 with Mate on it — pushed me, or whether it was the other way around.

I went through a lot to get the desktop to a level at which I had become accustomed under Fedora 14. I took a lot of notes; I would need them in the next few days for my brother’s system. Indeed, they were useful; the install went rather well, in no small part due to my notes. Incidentally, it all felt like a fresh reinstall. Yet, the whole transition was smooth, and I hadn’t forgotten anything. (Having a CentOS box with a Gnome 2 desktop helped. 🙂 )

Two things presented themselves as relative problems, one right away, and one that took several weeks to develop.

The first problem was a general set of issues. I was hoping for what amounted to the Gnome 2 desktop, which (for me) reached its apex in Fedora 14. The general environment and look and feel of Gnome 2 were there, but a number of packages, some might call them secondary, weren’t: most notably PackageKit (a gui application for adding and removing software), xsane for scanners, and hplip — whose absence removed the ability to use my scanner until I reinstalled it — among others. These were trivial absences, since installing them is easy; curiously, though, PackageKit still wouldn’t show up in the menus. Even that was a minor absence, since I rarely use PackageKit and normally prefer command-line installations and updates; but it was still a mild nuisance, since I occasionally like to see what is available in the repositories, along with the package descriptions. I also found the themes different from what I was used to — buttons, most notably for me in LibreOffice, had changed. I struggled to remember how much of what I was “accustomed to” had actually stayed constant for a long time, and how much had changed however many times since I started using Fedora at F8, leaving me numb to any such changes.

But the other problem was a bit more insidious in its latency: after but a few weeks, Mate felt old. I daresay stale. And it felt unloved. I suppose that after a year and a half of using Gnome 3 — with the familiarity it did retain with Gnome 2, and with the extensions having smoothed out many complaints — I had become a convert.

So, what did I do?

yum groupinstall gnome-desktop
yum groupremove mate-desktop
yum groupinstall gnome-desktop (to reinstall what was removed by the previous command)
yum install gnome-shell-extensions*
yum install gnome-tweak-tool
yum remove xscreensaver*
reboot

… and here I am. Happy again. No, I’m not sure I want to say that. 🙂

Installing Fedora 18

This past week I installed Fedora 18 on a Frankenputer for my brother — yes, *that* brother, formerly my linux expert. I say “formerly” because I’ve been exclusively using a linux desktop at home since 2006, while over the same period he had gravitated away from the linux desktop, back to “another” system.

After having swiftly changed my own Gnome 3 desktop to Mate this past week in order to work out the kinks, I was ready to install his system. I expected, or at least hoped, that this setup would be a bit easier than my experience under Fedora 17, since under F18 Mate is an officially supported desktop, while under Fedora 17 it is merely fully available (and functional) from the Fedora repositories. Interestingly, it was an identical experience — apart from having done a “yum groupinstall mate-desktop” on my system versus selecting it in anaconda under F18, I still had to add a bunch of packages, tweak here, adjust there, etc., in the same ways. A quick look at version numbers between the two systems suggests that many of the packages are the same versions, just recompiled for each version of Fedora.

My brother had spent hours going through four junkers — interestingly, all having come from me over time, including one on which we had installed Fedora 14 a couple of years ago — without being able to get any of them to work. At the last minute we managed to secure two more junkers: essentially stripped carcasses, cast-offs from a local guy who does a lot of business doing virus & spyware removals, “reinstalls”, selling good-quality used laptops, and dealing in spare parts, both to locals (such as my brother and me) and over the internet through the likes of eBay. We started working on the install: spare parts from other junkers, as well as a new 2TB drive, went into “Door #1”.

“Door #1” appears to be an Intel Core 2 Duo or some such — so far the most advanced processor with which I’ve ever knowingly worked, at least up close. It is clearly a 64-bit machine, and I’ve caught the 64-bit religion, which is an easy religion to keep under linux, since just about *everything* is available in 64-bit builds. A 64-bit Fedora Net Install ISO is downloaded and burned.

Frustratingly, “Door #1” proves to be a doorstop: it powers up, rather loudly, but can do little else beyond turning on and making a lot of noise in the process — it won’t even get through the BIOS. At first glance it’s little wonder that our local tinkerer had stripped out the parts he thought he could sell on eBay, or otherwise use to build his own Frankenputers, in a reverse “2 for 1” deal. Time is running out before the local stores close, so we move on to “Door #2”.

“Door #2” again appears to be a dual core or some such, and again clearly a 64-bit machine. The aforementioned CD is thrown in and things boot up, until the system hangs — a few times — at the point where the graphical part of Anaconda is supposed to kick in. After a few stalled attempts, we download and burn the 32-bit version, and everything works. Phew, just under the wire.

The visual changes in Anaconda made things seem more direct. Setup goes through cleanly.

Then I go through my list of things to install and remove, some of which dates back to my install of Fedora 12 — yes, I still have some notes from then (paper works!) — while other parts date back to the past week or so, when I installed Mate on my machine and learned what I wanted, needed, didn’t want, and so on.

The whole process took about 6 to 7 hours.

And, apparently, my brother — the master surpassed by his student — tried things out and loves it, after a few years of not having a linux desktop. Maybe there’s renewed hope for him.

Go figure.

Fedora Life Spans

As a quick post, I am presenting my table here of typical Fedora lifespans.

Surprise, surprise — or, if you prefer, surprisingly — over the years Fedora has actually been doing a good job of keeping to its colloquially described 13 month lifespan, despite individual lifespans varying by almost +/- 20% from the average (as of Fedora 16), releases often being delayed by a week or two or more, and, in the case of Fedora 18, by two months! In fact, it has kept to this average rather closely: as of Fedora 16, the cumulative averages have stayed within 2% of the overall average since Fedora 5. For now I’ve estimated the lifespan of Fedora 16, which I’ll correct when the official number comes out. We’ll see how the two month delay has affected, or will affect, the scheduled release of Fedora 19, and, as the case may be, Fedora 20 and so on.

Each Fedora release’s End of Life (EOL) is scheduled for one month after the release of the second following version; e.g., Fedora 12’s end of life was one month after the release of Fedora 14, and so on.

So — and here I’m speculating — if the lifespans of Fedora 1 and Fedora 2 are any indication, Fedora presumably only settled into the “release every six months or so” schedule, and/or the definition of EOL as one month after the release of the second following version, somewhere around Fedora 3, or possibly Fedora 4. (Although apparently Red Hat Linux, as mentioned here, had a release schedule of about every 6 months too — and an erratic lifespan of 18 months, 3 years or 5 years, depending on what appears to have been whim, though it was probably more along the lines of support contracts tied to specific releases, public reception of a given release, or a given release’s perceived technical excellence and value, etc.)
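The EOL rule above is simple date arithmetic. As a sketch (GNU date assumed; the example derives Fedora 12’s EOL from Fedora 14’s release date of 2010-11-02):

```shell
# EOL(Fedora N) = release date of Fedora N+2, plus one month.
f14_release="2010-11-02"
date -u -d "$f14_release + 1 month" +%Y-%m-%d   # Fedora 12's EOL: 2010-12-02
```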

So enjoy the Table.

My problems / Gripes with Gnome 3

Background: regular readers of my blog — the few of you out there 🙂 — know I use Fedora and CentOS. Once again, Fedora is an interesting case: as a pretty strict rule, packages appearing in Fedora are as close to the upstream product — the software as it appears on the original project’s website — as is practical; generally, the only changes are those necessary to make them work under Fedora. So, generally, if you were to download the sources from www.thisismyawsomelinuxapp.com and compile them yourself without tweaking them (beyond making them work, of course), then that’s what the software will probably look like, and how it will work, under Fedora.

Generally, Gnome 3 has been a mixed bag. Some things are interesting — I won’t say improvements; but I think that there are interesting additions (G2 and mobile device devotees will call retrogrades) that I’m willing to welcome, or at least I find acceptable given a paradigm change. I particularly like the hot corner that brings up all of the open windows. Other things are six of one / half dozen of the other, such as the panel/dock on the left of the activities screen.

Here are some specific gripes I have about Gnome 3, based on my experiences with Gnome 3.0.x and 3.2.x on out-of-the-box Fedora 15 and Fedora 16 installs:

– switching between windows — the default alt-tab switches between applications, not windows. To switch between windows, I have to hold down the alt key, use the mouse to choose the application, wait for it to open another pane with all of the instances of that application, then choose the instance with the mouse — which can be difficult unless I were to have a 50′ screen. So it’s not important that I switch, let alone easily, between two spreadsheets, or two pdf’s, or two documents in LO writer, right?
– solved on my F16 machine by “yum install gnome-shell-extensions-alternate-tab”. Needs to be activated by “gnome-tweak-tool”, listed as “Advanced settings” under the Applications menu — see below, date and time gripe
– the above solution kept on crashing my f15 machine, so I removed it.

– opening a new instance of an application — Linus’ well-publicized bug: you go to the Activities screen and click on an application — say, in Linus’ case, the terminal — and the existing instance is brought forward. In order to open a new instance, you have to choose File/New Window. Valid in and of itself, but removing one of the many ways to do the same thing doesn’t make anything more efficient. It is partly addressed by the fact that you can right-click on a launch icon and choose either the existing instance or a new instance; but that works out to being the same gripe.
– the both over- and under-sensitive upper left hand corner: when you move the mouse to the upper left hand corner, you’re supposed to be able to open up the Activities screen. In Fedora, it’s too sensitive when I don’t want to open it and my mouse just happens to be in the area, such as when I’m heading for the File menu of a given application; and then, when I do want to take advantage of that cool function, boy is it slow in figuring out that it’s supposed to move to the Activities screen.
– Activities screen — closing windows. When you hover the mouse over a window, a little x in a circle appears in the upper right hand corner of that window icon, allowing you to close it. When you have enough windows, it’s real easy to accidentally click on it instead of on the icon itself (to open the window) unless I were to have a 50′ screen.
– Nautilus — when you have a file highlighted, an “announcement” window appears at the bottom stating which file you have selected — blocking the easy selection, via the mouse, of the last visible file when nautilus is maximized. Obviously you can select that file by moving the highlight down with the down key, but then the only way to know the filename is to read the annoying “announcement” window, and you often can’t see the other file information (last saved, time, file size, etc.).
– notifications — lots of things get a notification, like “you just printed a file” or “the file you just opened is ready”, and they stay in the notification bar available from the lower right hand corner until you manually remove them all, individually.
– adding the date to the time at the top (Correctable by “yum install gnome-tweak-tool” F16)

really minor gripe:

– in order to turn off or reboot the computer, you have to highlight the “suspend” option in the status menu off the upper right hand corner and hold down the alt key. Something I can live with, but there anyway.
– solved by “yum install gnome-shell-extensions-alternative-status-menu”. Needs to be activated by “gnome-tweak-tool”, listed as “Advanced settings” under the Applications menu — see date and time gripe

Generally, at least specifically to F15:

– When I unplug my laptop to move it to a different location, using the battery, the system goes into hibernate, and doesn’t even ask if that’s what I really want to do. (Correctable by yum install gnome-tweak-tool, F16, which allows you to decide what the computer will do when AC power is lost.)

And here’s a gripe about Evolution, going back a few years, which has absolutely nothing to do with Gnome 3, or Gnome 2, or (presumably) even Gnome at all:

– when you open up a daughter window, the basic evolution program engine is still needed. This effectively makes the main window barely “first amongst equals”, instead of being “the program”, from the user’s perspective. As such, if you close the main window but not a daughter window, the program engine is still operating. In my case that means email still gets popped and removed from the server — because, when I use my email client, I want it to pop my email and then erase it from the server, so that when I go to webmail I don’t have, what, 100 pages of old email to wade through — and that mail is then no longer available by webmail. This is a human-interface bug: at the very least, when closing the main window, it should ask “do you want to shut down all evolution functions, or just this window?”

Bugzilla — again, not specifically a Gnome problem:

Traditionally, when ABRT is activated by a crash and I get to the point of selecting to report via Bugzilla, I get messages about the wrong settings being in place and warnings that the reporting will likely fail. I found out a few years ago that this is generally due to the lack of the relevant debug packages for the crashed program, hence the lack of a sufficiently “useful” backtrace. While conceptually I understand the need for a proper backtrace, so that as much detailed information as possible is available, this presents a real conundrum: I have occasionally gone to the trouble of installing one or two of the relevant packages — after a crash, on realizing this conundrum — and noted that they slow down the system significantly, and installing all of them is impractical. Hence, without the appropriate packages, a bugzilla report will fail. Yet, under the current circumstances, the average (at least desktop) user is unlikely to know which ones they are likely to need, and Fedora loses out on valuable crash information that would help solve a bunch of problems.

What do I like about G3:

Most of these are indifferences (ie. I don’t much care whether they’re along the lines of G2 or G3), but I’m willing to give them a thumbs up at least on that basis:

– nautilus does two panes, although I think that it probably did it before. A certain other system doesn’t; you can only either move things on the directory tree on the left (which you can do, sort of, in nautilus) or between two windows.
– Somehow the automounter for things like memory sticks seems a bit smoother and polished under Gnome 3 than under Gnome 2.
– I have actually always found the dock — and its placement in the left hand column — intuitive. Funny, I don’t find the dock at the bottom of XFCE anywhere near as intuitive (although I suppose it could easily be moved, were I to want that); that’s the desktop on my CentOS server, from the days, a few months ago, when the machine itself was a celeron 1.0 with 256 megs of RAM and found even that hard to handle — G2 ran it into the ground within minutes. The only drawback: the dock is more intuitive and useful than Gnome 2’s panels, but in Gnome 2 I had already been putting launchers on the upper panel for years, as have other people. It still gets the thumbs up, though.

How to put XFCE on CentOS 6

This post follows my having replaced the gnome desktop on my centos-6 home server with XFCE, because:

– the machine is probably over 11 years old and only has 256 megs of memory
– the machine is not likely to be easily upgraded, nor replaced
– the change is worth it anyway, given the light load I put on the machine
– gnome was so slow that it even seemed to progressively grind to a halt over a short period (a week or less, it seemed)
– questions online about how to replace gnome with XFCE under centos were inconclusive regarding how to make XFCE the default desktop on CentOS.

No, I don’t know how to make them co-exist; I tried a bit, and after three or four commands decided that, since I didn’t need Gnome anyway, replacing it completely with XFCE was the best way to go in my case.

Required:

– an installed and working CentOS 6 setup on the “target machine”
– an internet connection
– an operating ssh server on the target machine — including, if you’re controlling things through the internet, appropriate settings to receive the connection through the internet (another topic)
– root privileges on the target machine

Optional, but likely to be highly useful (as it was for me, given the “cart-before-the-horse” approach I used):

– a second computer, “the head”, with an ssh client, able to connect to the target machine via ssh either locally on the same network, or through the internet.
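To ssh in from the head over the local network, you need the target’s IPv4 address, which shows up in the target’s ifconfig output. A sketch of pulling it out mechanically, with a made-up sample of the old “inet addr:” output format standing in for a live machine:

```shell
# Extract the IPv4 address from old-style ifconfig output; the sample text
# below is a stand-in for running "ifconfig eth0" on the target machine.
sample='eth0      Link encap:Ethernet  HWaddr 00:11:22:33:44:55
          inet addr:192.168.1.42  Bcast:192.168.1.255  Mask:255.255.255.0'
echo "$sample" | awk -F'[: ]+' '/inet addr/ { print $4 }'
```

The interface name, MAC and addresses above are invented for illustration; on a real box, just run ifconfig and read the “inet addr” line.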

This process can be dangerous and will probably require at least one reboot; ideally you should have physical access to the machine in case a hard power-down and repowering is required. All critical processes should be made less critical beforehand (ie. user announcements for downtime, transfers to another server, etc.) and a backup should be performed.

I am presuming that on the target machine root login has been disabled, and that you will be logging in via “mere mortal” accounts and then elevating to root. Should you be using sudo on the target machine, the account into which you will be logging in should have appropriate sudo privileges, and you should adjust the instructions accordingly (that much I won’t do for you; I don’t like sudo. “Don’t be afraid of root. Respect it, but don’t be afraid,” as my brother says.) I am also presuming that you are controlling the target machine using the head.

– using the head, ssh into the target computer — make sure that you are logged into an account on the head which also exists on the target. If you are on the same network, determine the target’s IP address by running “ifconfig” at the command line on the target machine itself, and look for the number with a “192.168.***.***” format after the “inet addr” tag.
– Elevate to root (“su” )
– Install the Fedora epel repository on the target computer.
– visit http://fedoraproject.org/wiki/EPEL/FAQ#How_can_I_install_the_packages_from_the_EPEL_software_repository.3F
– enter the following if you are running Red Hat Enterprise Linux 6 / CentOS 6 / Scientific Linux 6:
“rpm -Uvh http://download.fedora.redhat.com/pub/epel/6/i386/epel-release-6-5.noarch.rpm”
– enter the following if you are running Red Hat Enterprise Linux 5 / CentOS 5 / Scientific Linux 5:
“rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm”
– Uninstall the gnome desktop.
– “yum groupremove gnome-desktop”
– Install the xfce desktop.
– “yum --enablerepo=epel-testing groupinstall xfce-desktop”
– add a line about changing the desktop type
– “nano /etc/sysconfig/desktop”
– add the following lines:
” DESKTOP=XFCE”
” DISPLAYMANAGER=XDM”

– Reboot the target machine with “reboot”. The head will lose its connection to the target machine; the target should then reboot, and the xfce desktop should appear on it.
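Before rebooting, it’s worth confirming that the desktop-selection file reads back as expected. A minimal sketch, written to a temporary file so it can be tried without root (on the real target the path is /etc/sysconfig/desktop, written as root):

```shell
# Write and read back the desktop selection; substitute /etc/sysconfig/desktop
# (as root) on the actual target machine.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DESKTOP=XFCE
DISPLAYMANAGER=XDM
EOF
cat "$cfg"
```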

Problems:

In my case, I had installed the XFCE desktop environment first, and some of its dependencies were removed when I uninstalled the gnome environment; I figured this out when, after a reboot, the screen came up blank. I therefore ssh’ed from the head into the target and reinstalled the xfce desktop as above, which installed the missing dependencies. I also installed the evince reader (“yum install evince”), which has nautilus as a dependency (and as such installs it; so far the only real difference between nautilus and Thunar is that the latter doesn’t support dual panes). Both seem to work nicely with XFCE, since it uses the same libraries as Gnome.

My desktop is now a doorstop, and Fedora 15

Back in July, I thought it was time to upgrade my computers to F15. So, I took two computers over to my brother’s and we did backups, and proceeded to reformat my desktop.

First, there were a couple of problems, and we had to reinstall a few times.

Then, disaster struck: as we were trying to restore the backup, the hard drive with the backups died.

In the meantime, I eventually updated my laptop using another backup medium. F15 is interesting with Gnome 3: to say the least, it’s different, and it changes the paradigm. There are a bunch of things I haven’t gotten around to figuring out. However, the most curious thing is that, for all its differences from Gnome 2, it’s still intuitive and generally easy to get around. It’s a curious feeling that my only real “complaint” is that it’s different. Six of one, half a dozen of the other applies here.

So when I finally got my desktop back after about five reinstalls, the first thing I noticed was that my beloved box can only use Gnome 3’s Fallback Mode. Then I started noticing a funny thing: everything would freeze when the screen saver went on. I think it may be a video card issue, or a memory stick issue; as long as the screen is active, it’s the same old computer. However, when the screen saver is activated — not the screen lock, but when the signal to the screen stops — everything locks up; having to reboot it every time I wanted to use it made it useless.

After months of using my laptop only, and eventually reformatting my server once CentOS 6 finally came out, I decided to take the server box, switch out its 500 gig HD, and switch in the desktop’s HD. The funny consequence is that the server box can handle Gnome 3. However, the bells are tolling for the 80 gig HD; the system monitors are saying that there are too many errors on it. I’ll eventually have to add it to my pile of hard drives with the word “evil” written on them, and either find another old hard drive to put in, or buy a new one. As it is, it appears that the 1TB drive I bought as a second backup medium is an external-only device — although I’ve been known to be wrong on many occasions. 🙂

The 500 gig HD is now in an even older Celeron 1.1GHz box I found on the street; the server has been renamed to reflect where I found it. And the old desktop … I guess I’ll either have to diagnose it or just get it recycled.

The Mac Trojan, the solution, and what I think about the millions of dollars it represents

A few weeks ago it came out that the Mac had a real trojan horse in the wild. (Here’s my archive.)

As I understand it, it was the result of a simple vulnerability, by happenstance a major beef I have with Ubuntu as well: some source — a website, a piece of software you install however you do it, or whatever — introduces a simple pop-up that says “Your machine has been compromised! Click here to remove it!” The dutiful lamb, er, user, clicks “Ok”, then — and here’s the clincher — the downloaded “cleaning” software, which is actually the trojan horse, needs administrator access to install (or “root” access in unix parlance, the base of the Mac, and effectively the same thing with any linux distro), and the user obliges with their password. They do this of course understanding that it’s important to keep their machine safe from malware, and that when they’re doing certain things, they have to enter their password. “Trust me, I’m reputable computer software trying to protect you, I know what I’m doing. Professionals programmed me.” Then of course they get the user to hand out money for “licences” to keep their computer “protected” — as in, they get the “licence” money, and they send a command to the trojan to lie dormant and not do any harm to the system. (In some commercial districts, especially with small Mom & Pop style shops, it’s called “Protection” money; the cops call it “Extortion”.)

For the non-technically inclined, this works through a simple process:

– Under any Mac / unix / linux system, there is one all-powerful account called root.
– Switching to root to install software is occasionally a pain, so Macs (and Ubuntu) rely heavily on a command that is so tightly integrated into the system that it lets certain users transparently, when called upon, install any kind of software, good or bad, by simply entering their own password — on the presumption that this select group, listed in a special administrators’ group, is trusted with system administration.

Here’s a bit more on root:

– This account literally is, within the limits of what that instance of the OS can do on the installed piece of hardware, the top god. For the purpose of this conversation, it’s Zeus, the chief and most powerful god in Greek mythology, above all other gods.
– In such a system, the normal user — including the principal user of a computer with only 1 to 5 users (the typical home computer) — usually isn’t even a lesser god. They’re mere mortals, with limitations and unable to affect much beyond their own account.
– On the other hand, the root user, noticing that user X is consuming too many resources, can decide to put further limits on that user, or any other — but not vice versa, nor can one mere mortal do so to another. I have such a principal “mere mortal” account on all my computers, and I of course know the root passwords, so I can become root on those computers and do whatever I like on them; usual practice, of course, dictates that one temporarily logs into root to do whatever is necessary as root, then logs back out into the “mere mortal” account to do day-to-day stuff. Such as write this piece.
– One command in Mac / unix / linux is the “sudo” command. This allows defined “mere-mortal” users to elevate their privileges to those of the root user on a generally one-time and contextual basis, which ceases when the task is complete and has to be re-invoked any time they want or need to use said root privileges again. This overall makes them lesser gods: While they indeed do have Zeus’ powers, these powers are limited to the task and the moment at hand, allowing people to maintain their systems while avoiding carelessly doing damage, all in a convenient package. Therefore, the user can make necessary changes, adjustments, updates, and the like, but, since they are doing it as themselves with the necessary privileges only when the situation calls for it, they usually, under normal, routine circumstances — acting in the capacity of a mere mortal — avoid doing system-wide damage, because, well, they aren’t root. Normally a simple command would either affect only their own account, but no one else’s, or, depending on the command, the system would (or should) say “you need to be root to do that”. The problem I see here is that on a Mac / Ubuntu / other similar system, as I explain below, sudo is so tightly integrated into the system that it’s normally presumed that since the user in question is typing in the command, they actually are fully aware of what the command does, and that they really want to do it, so the system automatically asks for their password, invoking Zeus’ — root’s — powers. (BTW, this is why it’s also dangerous to do day-to-day stuff as root — the system presumes that root knows what they’re doing. Hence why, as mentioned above, you only log into root to do rooty stuff, then you log out again to do day-to-day routine stuff.)
– Further, there is a certain security element to it: the system logs who invoked the privileges in this fashion, so that Zeus, er, root, can check up on his underlings. Incidentally, anyone on the system can be in the list of people with “sudoer” privileges, and how they get it, when they get it, why they get it, for what purpose(s), and just about any other condition, can be set by root.
– Mac OS X, Ubuntu and its derivatives, and other similar systems, depend on sudo; in fact, the root user on both systems is disabled by default, and sudo is heavily integrated into the system: The principal user by default is the de facto root user, since their password is all that’s needed to do rooty things when such things are required — unless a savvy principal user makes another user a sudoer, which can be done quite easily, as explained above. To be fair, on any unix / linux system you can severely limit access to the root user, and set it up such that the only way to use root privileges is by sudo; it’s just an active choice made by Ubuntu and Mac OS X. Don’t despair: on an Ubuntu system, it is also trivially easy to re-enable the root user and remove the principal user’s (and other users’) sudo privileges; I have no clue regarding Mac OS X, but I imagine that it can be done. And, no doubt, void your warranty and support privileges at the same time.
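To make the two models concrete, here’s a rough command-line sketch. Treat the specifics as assumptions rather than gospel: the group name and log file locations vary by distro (Fedora uses the wheel group and /var/log/secure; Ubuntu uses an admin/sudo group and /var/log/auth.log).

```shell
# The sudo model: a "mere mortal" invokes root's powers for one task,
# authenticating with their OWN password; the elevation ends with the command.
sudo yum update                  # package update on a Fedora-era system

# The traditional model: become root outright with the ROOT password,
# do the rooty stuff, then drop back down to being a mere mortal.
su -                             # prompts for root's password
# ... administrative work as root ...
exit                             # back to the day-to-day account

# Who counts as a sudoer is just a rule in /etc/sudoers (edit it with
# visudo, never directly). A typical rule handing a whole group Zeus' powers:
#   %wheel  ALL=(ALL)  ALL

# Every invocation is logged, so root can check up on the lesser gods:
#   /var/log/secure on Fedora, /var/log/auth.log on Ubuntu

# Re-enabling the disabled root account on an Ubuntu-style system is as
# simple as giving root a password:
sudo passwd root
```

These commands need administrative rights to actually run, so read them as a sketch of the workflow rather than something to paste blindly.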

So, back to the subject at hand:

What happened with this trojan horse is that the wolf (the trojan’s programmer(s)), knowing that Little Red Riding Hood (the users) has been trained to protect their system from attacks, keep it updated, and trust the computer when it says “you’ve been attacked”, dressed up as Grandma and said, “Trust me. I’m trying to protect you.” Said LRRH users obliged. And, as I mentioned earlier, those who control the trojans get the “licence” money, send a command to the trojan to lie dormant, and as long as the money keeps on coming, no harm comes to the system.

Can you blame the users?

For the purpose of this piece, I am forced to say “No.” At least to the extent that the culture of the systems they are using is blinding them. (Even under linux you’re expected to keep your system up to date to avoid such difficulties.) “Oooh, we’re easier to use. Oooh, we’re pretty. Oooh, you want to use your computer, not maintain its innards, just like you don’t need to be a mechanic to drive your car or ride your bicycle.”

Of course it’s a two way street: The fact is, computers these days are complicated, because the things we ask them to do are complicated. Which means that, when it comes to computers at least, a quote by Admiral James T. Kirk, to the young and inexperienced Lieutenant Saavik, after taking control of the bad guy’s ship by essentially using the ship’s root password (the ship’s “prefix number”), taking down defences, and causing critical damage, comes to mind: “You have to learn why things work on a starship.”

sudo is not evil, much as I might think or want you to believe it is. However, as I’ve said earlier, it likely could be — and here’s the proof of concept — the downfall of any system that depends on it. The main reasons it isn’t evil are that next year someone will find another weakness to exploit, and that it is quite convenient to use. Maybe in a fashion directly in line with my “holier than thou” approach to commoners’ computing (r).

And in other corners …

This trojan resulted in the proposition — so I’ve heard, so this is pure speculation at this point — that the App Store concept could be applied real hard, to the Mac. As it is, the App Store already exists for the Mac; I actually think that the App Store for the Mac is a good idea: It’s a repository of trusted software that will work — and knowing Apple, spectacularly well — on the Mac, instead of downloading it willy-nilly from anywhere on the internet. However, the idea here is that it would be to the exclusion of all other sources. And here I thought that Apple was pioneering in the Mac world the repository system, hopefully to be followed by others, just like in linux.

The upside:

– You can get all your software from one place, and it would be (presumably) safe.
– Apple would digitally sign each piece in a couple of ways such that you could only get usable, safe software from there.
– Presumably, Apple will run all software through thorough testing so that the chances of it being infected with a virus or it being or rendering your computer vulnerable to compromise would be remarkably low. (When was the last time you heard of an iPod Touch / iPhone / iPad virus or trojan?)

The downside:

– You won’t even be able to compile your own software to run on your own machine (oh sure, I imagine in a virtual testing platform at least conceptually similar to the platform used to develop iPod Touch and iPad apps).
– Like with the iPod Touch, I imagine that you’ll never know whether you’ll ever be able to run your software — of which I imagine many apps will be important to a good number of developers, if only internally, and not just be yet another tip calculator or yet another Tetris reimplementation — until it shows up in the App Store. Apple isn’t stupid, though; I’m sure that with the current App Store money talks many languages, and that many commercially-backed apps get through a little more easily and assuredly when the developers pay a fee guaranteeing its appearance in the App Store, or a higher-than-average commission per download/sale.
– Of course, you will have to fork over the source code to Apple. And, only Apple will ever truly have full access to the source code of the final product, so you won’t know whether they’ve modified it (admittedly, sometimes no doubt for the better), how they’ve modified it, or whether they’ve introduced bugs, vulnerabilities, or back doors. So far, nothing different from the way the App Store for the iPod Touch or the iPad works.

But here’s what really gets to me:

So you want to develop some internal software that will give you a competitive edge over your competitors? Say, a different, possibly revolutionary analysis scheme for the metrics in your industry? And, you use Macs because you consider them to be either superior to other platforms, or their use otherwise adds some inherent value to your operations?

I see a money stream for Apple here. No, a cash cow. “OK, you *must* submit the source code to us. We’ll compile and digitally sign it, and make sure it works properly on your machines. We’ll review the code, identify and remove bugs, and even suggest better code and functionality. And only your operation’s computers can install the software; we’ll password protect access to the software. For a fee, of course. Oh, you don’t trust us? OK, here’s a Mac server that you can have in your server room. You’ll get a control panel. Of course, the server will ultimately still be controlled by us only, the code will still have to be reviewed by our engineering team, and your ‘submit’ button will merely inform our team to start looking at things, and we’ll still control what goes into the final, compiled code, including back doors and all sorts of unknown blobs. Possibly some critical functions of the ‘revolutionary analysis scheme’ being disabled, removed, massively modified, or replaced with inferior substitutions. For a massive fee, of course.”

Sort of adds an ironic twist to the notion of “proprietary software”, doesn’t it?

Seems to me that the only thing that will keep Apple clean on this one is that it’s actually in Apple’s interest to be a clean and legitimate player. Apple has successfully built a business based not on being the biggest, the greatest, the cheapest, or having quite the latest or greatest technology (sometimes things are slightly a step behind). Or even making the biggest profits. They’ve built a business on delivering a clearly superior user experience and tightly integrating the software and technology; for instance, there’s actually something to the iPad’s function of closing instances of web browsers above a certain number, since instances left open (sometimes as Orphan Annies or Captain Dunsels) may not be performing any useful function to the user, yet would be consuming system resources such as memory, processor time, or battery power that could deprive other processes of necessary resources, and ultimately diminish the user experience. This idea has merit; every once in a while I have to make a point of closing down some windows simply because there are too many open and they’re slowing down my system.

Of course, such a value-added division — that of reviewing software over a diverse cross-section of industries and making them work really well on a given platform — would mean that they would develop lots of expertise. And, it would naturally make Apple quite the intellectual powerhouse. Imagine, Apple Medical Consulting Services. Apple Financial Expertise. Apple Engineering Software. Apple Human Resources Management. They actually would accumulate this expertise.

But what if the software that, in keeping with their business practices and policies, is necessarily in their care proves to be less than optimal — not because of the submitted source code, but because of Apple’s actions? Of course these are concerns that any company worries about every day; the issue here is the monopoly that they would be creating.

Now let’s not blow things out of proportion: Macs are remarkably secure. So is Ubuntu, and by and large any typical linux distro — certainly any of the mainstream distros, and any other that is “properly” designed and maintained. (I bet that you could take a dot-com era distro and actively administer it, and it would be relatively secure.)

But it seems that Apple has brought and will be bringing the repository model, which flourishes under linux, to a logical extreme, and will generally make billions more than Red Hat ever will or even could. And, will no doubt exploit the model for even more billions. But at what cost?

What — jumping the fence for a moment here — do only evil, maniacal control freaks have a monopoly on knowing how to build safe, high-quality software?

Or maybe, just maybe, is what makes the likes of Apple, MS, and Red Hat so successful the fact that they are able to command the sales revenues required to attract highly talented teams of programmers and other experts?

And — now coming back to the other side of the fence — what about the added value that volunteer programmers and other volunteer contributors bring to their software?

Realize that in a linux distro, the distributor leverages open-source software — with varying amounts of both paid-professional and volunteer (both otherwise professional) contributor content — to make their distro. The underlying OS part of OSX is based on Darwin, which is a direct derivative of FreeBSD, which again has varying amounts of both paid-professional and volunteer (both otherwise professional) contributor content. And even MS has a certain amount of BSD-derived code in it, for things such as the networking code, and probably elsewhere.

So, all this makes me wonder a bunch of things:

On trojans, malware, spyware, viruses, and social engineering:

I hate ’em all. I hate that there is an industry out there whose basis for legitimacy lies somewhere between software that is not optimal because it’s real bric-a-brac, and users who don’t use a bit of common sense. There will always be people trying to get the better of you; it’s that people fall for the charms of charlatans — who have little defence against common sense — that bugs me, be it the trojan authors or the software writers who figure that people won’t know better regarding what they buy.

On sudo:

Well, the problem there really lies somewhere between the keyboard and the chair; using sudo has its advantages and disadvantages, just like logging straight into root does. That tightly integrating sudo can arguably aggravate things by hiding things and making them “easier” for the user doesn’t change the fact that if a user just always logs in as root, they can do whatever they want, including wiping the whole directory tree. People have to understand *why* the computer works and does what it does, and why it is asking for a password, whether it’s their own (for sudo) or it’s the root password.

On repositories and safer, better software:

How is the repository system going to benefit the Mac? Organized and even better-quality software (presumably). It’s about time that Apple used the repository system. It’s about time that Windows adopted it too. It’s about time that a few people set up repositories, possibly competing, for Windows; imagine the lineups for software of all kinds, found in a single location, that has been reviewed, works, is relatively safe and relatively virus free. You could use the iPod App Store model with prices ranging from $0.00 to $1,000,000.00 or more, or a Netflix approach of a flat fee per month for unlimited access, or advertiser sponsorship, or some other revenue stream you dream up, or any combination of the above.

On monopolization and evil software empires:

Is Apple really all that evil? They don’t have *that* much of the desktop market share. Plenty more people buy MS — in the consumer market, either you buy a Mac, or you buy a computer that has MS by default. A few of us, roughly comparable in numbers to the Mac crowd, depending on who you believe, use another OS sporting penguin stickers.

So I’m just having a knee-jerk reaction to the idea that Apple will probably become a monopoly over a really large cross-section of the economy.

Gnome 3 in Fedora 15 Freezes up in Fallback Mode

I have been going through some reformats over the past month or so: About five times on my then F14 desktop — three times until I finally ditched the hard drive which was obviously dying, the fourth time on the “new” hard drive, and the fifth to explore the following problem as well as remove a poor choice of keyboard layout. The French in France must really be confused when using computers in North America (I was in Paris back in 2005); their keyboard layout is so weird. You have to realize that a French Canadian keyboard is not so different from a standard US keyboard, certainly compared to the French from France keyboard.

I have what would be considered an older computer; I’ve seen a date of about 2003 on the motherboard. As such, the video card on it is too old to use Gnome Shell in Fedora 15 and uses the Fallback Mode. So I figured that when the computer appeared to freeze after a couple of hours — no response to the keyboard or mouse and the like — I might have a memory stick problem causing the freeze, something I’ve seen before. However, two things are nagging at me:

1) During the install process, while I either slept or was at work all day, the computer would sit around idle without freezing up, and I just picked up where I would have had I been babysitting the computer during the install;
2) After it froze up yesterday evening at what appeared to be a normal screensaver cycle, I did a hard reboot using the power button, and on a whim I turned the “lock” (the desktop) option in the screensaver settings screen to OFF. This morning after the computer was on the whole night, all I did was turn on the screen, and what do you know, I can use the computer like normal, with no freeze up.

As a control, my laptop, sitting beside my desktop — the former of which is recent enough to take advantage of Gnome Shell — does not lock up with the “lock” option set to “on”.

I wonder what the deal is? A bad video card? Bad code? An incompatibility between my particular hardware and the code — presumably the screensaver code, but I suppose it could be any other?

I was doing some looking around last night on the subject, before I saw the results of the little “off” switch. Now that I’ve done an experiment and have some results, I can perhaps do one or two more and determine whether a bug report is warranted.
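In case anyone wants to repeat the experiment, the switches I flipped in the settings screens can also be flipped from a terminal. This is a sketch assuming a Gnome 3 era desktop where these gsettings schemas exist — key names can differ between versions, so check with gsettings list-keys first:

```shell
# Turn off the screen lock (the "lock" switch I set to OFF):
gsettings set org.gnome.desktop.screensaver lock-enabled false

# To test whether the blanking itself is the trigger, stop X from ever
# cutting the signal to the screen:
xset s off     # disable the X screensaver
xset -dpms     # disable display power management (the signal cut-off)

# And when it does freeze, the X log may hold a clue after the reboot:
#   less /var/log/Xorg.0.log
```

The xset changes only last until the next restart of the X session, which actually makes them handy for this kind of one-night experiment.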

Canada Day and my beer

I’m just about finished cleaning and sorting all the beer bottles from yesterday’s big Canada Day festivities in Montreal West, Quebec.

For the past 14 years I’ve held what I believe to be the most critical job — certainly when it comes to efficiency, productivity, and morale — to the success of the event. Hence, with all due respect to the following people, as well as Paula and Joan and all the other critical volunteers without whom I wouldn’t be able to hold such a prestigious position:

It’s more important than the Parade Marshal’s job. They just have to dress up, wave a big stick, and walk at the front of the parade.

It’s more important than the Mayor’s job. They just have to make a speech and lead everyone in singing “O Canada”.

It’s more important than the job of the nice guys who set up and light the fireworks. Hey, it’s the fireworks themselves that do the real job there, anyway.

It’s certainly on par with the fantastic people who run the beer tent (Hi Wayne and Sam!)

With this last we’re getting into the critical area: The fantastic people who run the barbecues, cooking all the food for the public to come and consume. I’m one of this group. But, my job is more important than cooking burgers, hot dogs, or buns, or cutting up all the tomatoes and onions and the like in preparation for the evening.

I serve the beer to, and only to, this fine crew of people who run the barbecues. Heck, I even get to serve the Mayor. (Glad you liked my beer, Mr. Masella!)

Every year for the past 22 years, with about five exceptions, I’ve been involved one way or another at the Montreal West Canada celebrations volunteering to make the event happen. For the past 18 years (plus the first year), I’ve been involved with the barbecues. For the past 14 years, I’ve held the above-mentioned prestigious position.

I love it. I love serving people. I love the accolades. I love the attention. I love bragging in the admittedly deluded way that I am right now that I hold the most important position of the day. And, for the past four years, I love all the extra compliments I get about supplying my own beer. The best part? This year I had three varieties of beer, instead of one variety the first year, and two in the intervening years. And, it seems from the roughly equal distribution of how much beer I have left from each variety, that all three were roughly as popular as each other.

This year I had 33 x 1.14L bottles of my beer, plus of course the corresponding extra regular sized bottles to go along with it. Overall I made about 75L of beer with Canada Day in mind, knowing that I’d have plenty left over of course. That I served the 33 bottles plus another 24 regular bottles says something about how large and thirsty my group is, considering that I also serve wine, water, soft drinks and the like.

One of the things I also found out last night — contrary to my experience last year with only about 20 such bottles — is that serving out of these 1.14L bottles is a charm, compared to having to bottle that amount of beer in regular bottles, cap them all, and then serve them individually. Although admittedly this last part is actually not necessarily the hardest part. But serving 3-4 beers out of a single bottle proved to be easy and convenient. And keeping track was, in practice, quite easy: The big cooler had bottles that either had an elastic around the neck or didn’t, and the third cooler had the third kind of beer.

And here’s the other part of what has me hyped about this post: The numbers.

33 x 1.14L bottles of beer served — about 37.6L
24 x 341mL bottles of beer served — about 8.2L more served
total of 45.8L of beer served just to the BBQ crew

This is the equivalent of about 130 beers served, if you take out the one 1.14L bottle that didn’t carbonate and was served to the grass. This is pretty strong — if there’s a downpour, I usually serve in the area of 80 beers. If it’s nice like it was yesterday, I usually serve about 100 to 120 beers; one year, I figure I served as much as 160 beers.

Now of that, I had made, as I said earlier, about 75L of beer for the event. So that’s about 61% of the beer I made for the occasion.

And more numbers:

After having collected all sorts of beer bottles off the side of the road, in bushes, and just about anywhere else that my travels take me, today I’ll be returning about 161 SURPLUS empty beer bottles that I’ve collected over the past year. That doesn’t include the 90 that are still full, but then again last year at this time I made a similar bottle return and kept to the order of 80 to 90 such bottles that were either full or empty — in order, of course, to be able to have enough bottles for the following batch of beer.

And of course, the above-mentioned 33 x 1.14L bottles won’t be returned; I’ll be keeping them for next year’s Canada Day beer!