Jan 23, 2014

How to Install Virtualbox 4.3 + Resizable Guest Video Resolution on Ubuntu 13.10 GNU/Linux

While I really like the FOSS KVM (Kernel-based Virtual Machine), KVM can be just a bit daunting for the more GUI-oriented types.

So, for those newer to virtualization, I usually recommend trying Oracle's Virtualbox virtual machine software from virtualbox.org.

The Ubuntu repositories have Virtualbox available, but only an older version. This post provides instructions for installing the newer Virtualbox 4.3 on your Ubuntu 13.10 GNU/Linux box. I've also outlined the steps needed to install the packages inside the guest Ubuntu 13.10 virtual machine so the video resolution resizes with the window. These steps also enable the Ubuntu guest to use larger or even fullscreen resolution.

For the purpose of this writeup, I use Ubuntu 13.10 GNU/Linux as both the HOST operating system, and the Virtualbox GUEST operating system. This posting assumes that you know how to install and configure Ubuntu GNU/Linux as the guest OS.

This first set of steps sets up your Host Ubuntu 13.10 with the newer version of Virtualbox, version 4.3. Open the terminal from the Ubuntu menu, or hit CTRL+ALT+T, then run these commands to add the Virtualbox repository for Ubuntu 13.10 Saucy Salamander and install Virtualbox 4.3:

wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -

sudo sh -c 'echo "deb http://download.virtualbox.org/virtualbox/debian saucy contrib" >> /etc/apt/sources.list.d/virtualbox.list'

sudo apt-get update && sudo apt-get install dkms virtualbox-4.3

If the above commands were successful, you should now be able to find Virtualbox in your Ubuntu menu. Launch it, then install Ubuntu GNU/Linux 13.10 into a virtual machine. Once you have logged in and completely updated the software packages for your Guest Ubuntu GNU/Linux machine, take a baseline snapshot of the Guest Ubuntu OS.
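As a side note, snapshots can also be taken from the host's terminal with the VBoxManage tool that ships with Virtualbox. This is just a sketch - the VM name "Ubuntu-13.10-guest" is a placeholder for whatever you named your guest:

```shell
# Build the snapshot command; "Ubuntu-13.10-guest" is a placeholder
# for your actual VM name (list yours with: VBoxManage list vms).
vm="Ubuntu-13.10-guest"
snapshot_cmd="VBoxManage snapshot \"$vm\" take \"baseline\""
echo "$snapshot_cmd"    # run the printed command once your VM exists
```

VBoxManage list vms shows the exact names Virtualbox knows your machines by.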

The next set of steps is run inside the Guest Ubuntu OS to add mouse pointer integration. These steps also allow larger, re-sizable, and fullscreen video resolution for the Ubuntu guest. Run these commands in the Terminal (**or in an alternate run-level) inside the Ubuntu guest OS to install the baseline software building components:

sudo apt-get install build-essential linux-headers-$( uname -r ) dkms

I suggest taking a VM snapshot here, just in case things go wrong during the next steps. Next, add the Virtualbox repository and install the Virtualbox guest additions iso.

wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -

sudo sh -c 'echo "deb http://download.virtualbox.org/virtualbox/debian saucy contrib" >> /etc/apt/sources.list.d/virtualbox.list'

sudo apt-get update && sudo apt-get install virtualbox-guest-additions-iso

I suggest taking a VM snapshot at this point. Also, reboot the Ubuntu guest OS so the driver modules can load. After you've rebooted the Guest OS, re-open the Terminal and run the command below to install the final packages needed for resizing the Ubuntu Guest video resolution.

sudo apt-get install virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11

Next, reboot the Ubuntu guest OS to allow the newly installed modules to load.

Voila, mouse integration should work, and you should be able to re-size your Guest OS VM window and the video resolution should change size dynamically.

**As an extra tip, if you ever need to enter alternate run-levels (alternate TTYs) in your Guest VM OS: simply replace

As another extra tip, if your Ubuntu Guest seems rather slow because of Unity, you can try the gnome-session-fallback interface instead. Select it by clicking the Ubuntu symbol near your username on the login screen. Here's how to install it:

sudo apt-get install gnome-session-fallback

Shannon VanWagner


Oct 12, 2013

Ubuntu 13.10 - Enable "Control+Alt+Backspace to terminate the X server"

Having loaded up Ubuntu GNU/Linux 13.10 on my computer which previously ran the 13.04 version, I have to say that I am quite impressed with the speed improvements and polish this new version brings to this already awesome operating system!! Way to go Canonical and Ubuntu developers!!

One of the things I really love about GNU/Linux is the power I have to customize things in the system to my liking. Rather than force changes down your throat like the other operating systems out there, GNU/Linux gives you the power to choose!! You have the power!!

Usually, after installing the base system, I come back and tweak things a bit to my liking. After all, per the license - I already own this copy of Ubuntu GNU/Linux that's installed on my computer, so I might as well customize it to my liking as much or as little as I like!! This is what Freedom in computing is all about!!

Of course, the developers behind Ubuntu have made it so I don't need to customize anything at all if I don't want to, and it's still very easy to use this awesome Ubuntu GNU/Linux anyway!! But when I do want to customize, I can do as little or as much "tweaking" as my geek appetite calls for.

Something I always customize in my Ubuntu GNU/Linux is having Control+Alt+Backspace "kill the session". This makes it easy to jump out of my logged-in session if I've launched something that failed or otherwise screwed things up so I can't operate in the current session.

In Ubuntu 13.04, the aforementioned option took only a few short steps to enable: click the gear menu by the clock, then System > Preferences > Keyboard, then the Layouts tab, then click Layout options, then under "Key sequence to kill the X server", put a check by "Ctrl+Alt+Backspace".

In Ubuntu 13.10, it seems the option has been moved around again, because they've polished up the menus a bit more. But, unlike some other systems out there that don't let you easily change things around, in Ubuntu GNU/Linux we can still change this setting (and much more) rather easily. So here's how I enabled "Use Control+Alt+Backspace to terminate the X server?":

1.) Hit Control+Alt+t to launch the terminal

2.) Type or paste the command below into the terminal, hit enter, then authenticate with your user password if necessary (below is a one-liner):

sudo dpkg-reconfigure keyboard-configuration

3.) Hit Enter 5 times, or as necessary, to accept the already-set values for the keyboard settings and arrive at the screen for changing the "Control+Alt+Backspace" behavior.

4.) At the screen that says, "Use Control+Alt+Backspace to terminate the X server?", hit the left arrow key to select yes, then hit Enter to save the setting and complete the kbd configuration.

Voila! At this point, if you hit Control+Alt+Backspace, you will be exited from your current session! Just make sure you save any open work before testing this out, as it will not save anything automatically.
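Under the hood, dpkg-reconfigure writes the option into /etc/default/keyboard as terminate:ctrl_alt_bksp in XKBOPTIONS. Here's a sketch of the equivalent non-interactive edit, run against a scratch copy so you can inspect it first (note: the sed shown would overwrite any other XKBOPTIONS you already have):

```shell
# Demonstrate the edit on a scratch copy of /etc/default/keyboard.
cfg=$(mktemp)
printf 'XKBOPTIONS=""\n' > "$cfg"   # default: no options set
# Set the zap option (this replaces the whole XKBOPTIONS value):
sed -i 's/^XKBOPTIONS=".*"/XKBOPTIONS="terminate:ctrl_alt_bksp"/' "$cfg"
cat "$cfg"
```

To do it for real, point the sed at /etc/default/keyboard with sudo, then re-run sudo dpkg-reconfigure -f noninteractive keyboard-configuration to apply it.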

User Customizations - Another reason to love GNU/Linux!! Get Yours!!

Shannon VanWagner


Feb 2, 2013

How to install xrdp on Ubuntu 12.04 Precise Pangolin

I absolutely love a system that can stay connected! With Ubuntu GNU/Linux, there are many ways to get connected remotely to your computer and get your Linux on!

For instance, here is a list of several ways you can stay connected:
Desktop Sharing (built-in VNC, see help)

FreeNX (requires install)
NX Free (requires install)
ssh -X (requires install and local xserver)

The default "Desktop Sharing" functionality in Ubuntu 12.04 is good for some things, but since it's based on VNC and doesn't have any additional security layer, it's not necessarily a secure method for connecting to remote machines. Also, people trying to connect from their windows systems will have to obtain/install a VNC client to use for the connection. This can be a problem in some environments.

This is where xrdp comes in. xrdp.org is free open source software that can be easily installed on an Ubuntu 12.04 GNU/Linux machine via the package management system. Since it uses the RDP protocol, xrdp is a relatively secure method of connection. Other benefits are that your windows users can easily connect with their built-in remote desktop client mstsc.exe, and your Linux users with rdesktop.
In this post I'm going to outline how to quickly install the xrdp packages and connect with mstsc.exe (windows) or rdesktop (GNU/Linux).

1.) Enable the community repository in Ubuntu - click to open the "Ubuntu Software Center". Then, on the top menu (top of screen), click Edit > Software Sources, put a checkmark by "Community-maintained free and open-source software (universe)", then click Close.

2.) Update your sources and install xrdp using apt-get. Open the terminal with ctrl+alt+t, then type or paste these commands, hit enter, authenticate, then confirm to install the xrdp packages.

  sudo apt-get update && sudo apt-get install xrdp

Note: Depending on how quickly your machine can update the sources, the sudo apt-get update command may fail after closing the Ubuntu Software Manager because there may be a process running that is updating the sources already in the background. If this happens, wait longer, then try the command again.
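If you'd rather not guess at "wait longer", you can poll the dpkg lock until it's free before re-running the update. A sketch (assumes the fuser utility from the psmisc package):

```shell
# Poll until no process holds the dpkg lock (give up after ~1 minute).
tries=0
while fuser /var/lib/dpkg/lock >/dev/null 2>&1 && [ "$tries" -lt 12 ]; do
    echo "package system busy - waiting..."
    sleep 5
    tries=$((tries + 1))
done
echo "done waiting (tries: $tries)"
```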

3.) Add this special .xsession file entry into the /home/(username) directory you intend to login with (e.g., /home/shannon/.xsession) to improve performance by converting your session to ubuntu-2d:

  echo "gnome-session --session=ubuntu-2d" > /home/YOURUSER/.xsession

4.) Reload the configuration into the xrdp server process:

  sudo /etc/init.d/xrdp restart

That's it. Now you can connect to your xrdp host with ease from GNU/Linux using:

 rdesktop ip-or-host-of-your-xrdp-server

Or from windows, with:

  windows-key+R, mstsc.exe /v:ip-or-host-of-your-xrdp-server

Tip: If you find that you're having problems changing network settings via your xrdp session, it's because of the protection configured to secure the machine to console users only via policy-kit.

See this article for more information and a workaround:

Spoiler: Basically you're modifying all "no" values in the allow_inactive attributes to "yes" in

Shannon VanWagner


Aug 3, 2012

Enable CentOS 5.8 GNU / Linux Authentication on Windows Domain

If you ever find yourself having to configure your CentOS 5.8 GNU/Linux machines to allow active directory windows users to log in to them, this post will help.

While there are a few ways to set this up - e.g., likewise-open (see beyondtrust.com), centrify (centrify.com), the built-in System > Authentication graphical controls in CentOS, etc. - the method in this post focuses on touching just a few config files to enable active directory authentication. K.I.S.S. is the way I like to roll.

The authentication methods below assume that you have already enabled Services for Unix on your active directory server and that the users who will be logging in to CentOS have their Unix tab (in ad Users and Computers) populated with values.

The authentication methods outlined here use LDAP and Kerberos. LDAP brings in the UID/GID information (from the Unix tab in ad) for the user, and Kerberos provides the username/password authentication piece.

With the default install of CentOS 5.8, it's amazingly simple to setup authentication to your active directory for Unix-enabled ad users.

Here are the steps for enabling your CentOS 5.8 GNU/Linux computer to authenticate with active directory:

1.) Create a special user in active directory (e.g., ad-guest-01). Once you've created the user, add it to the group "Domain Guests", make it the Primary group, and remove all other group memberships (e.g., Domain Users should be removed).

2.) Make changes to the following configuration files on the CentOS 5.8 GNU/Linux machine as shown below:

#/etc/ldap.conf for connecting with Win-Server w/SFU Enabled #
base dc=yourcompany,dc=com
uri ldap://yourADserver.yourcompany.com ldap://yourADserver.yourcompany.com/
binddn ad-guest-01@yourcompany.COM
bind_policy soft
scope sub
pam_min_uid 1000
bind_timelimit 5 
timelimit 5
idle_timeout 3600
ssl no
referrals no
nss_base_group dc=yourcompany,dc=com?sub?&(objectCategory=group)(gidnumber=*)
nss_map_objectclass posixAccount user
nss_map_objectclass shadowAccount user
nss_map_objectclass posixGroup group
nss_map_attribute gecos cn
nss_map_attribute homeDirectory unixHomeDirectory
nss_map_attribute uniqueMember member
nss_initgroups_ignoreusers root,ldap

#/etc/krb5.conf for connecting with Win-Server w/SFU Enabled #
#  Tip: You can use predefined DNS names for your kerberos,
#+ ldap (ad) servers to make future ad dc hostname changes
#+ less painful.

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = YOURCOMPANY.COM
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 forwardable = yes

[realms]
 YOURCOMPANY.COM = {
  kdc = yourADserver.yourcompany.com:88
  kdc = yourADserver
  admin_server = yourADserver.yourcompany.com:749
 }

[domain_realm]
 yourcompany.com = YOURCOMPANY.COM
 .yourcompany.com = YOURCOMPANY.COM

[appdefaults]
 pam = {
   debug = false
   ticket_lifetime = 36000
   renew_lifetime = 36000
   forwardable = true
   krb4_convert = false
 }

#/etc/nsswitch.conf for Win-Server w/SFU Enabled Auth#

passwd:     files ldap
shadow:     files ldap
group:      files ldap

#hosts:     db files nisplus nis dns
hosts:      files dns

# Example - obey only what nisplus tells us...
#services:   nisplus [NOTFOUND=return] files
#networks:   nisplus [NOTFOUND=return] files
#protocols:  nisplus [NOTFOUND=return] files
#rpc:        nisplus [NOTFOUND=return] files
#ethers:     nisplus [NOTFOUND=return] files
#netmasks:   nisplus [NOTFOUND=return] files     

bootparams: nisplus [NOTFOUND=return] files

ethers:     files
netmasks:   files
networks:   files
protocols:  files
rpc:        files
services:   files

netgroup:   files ldap

publickey:  nisplus

automount:  files ldap
aliases:    files nisplus

#/etc/pam.d/system-auth-ac for Win-Server w/SFU Enabled  Auth#
# This file is auto-generated.
# User changes will be destroyed the next time authconfig is run.
auth        required      pam_env.so
auth        sufficient    pam_unix.so nullok try_first_pass
auth        requisite     pam_succeed_if.so uid >= 500 quiet
auth        sufficient    pam_krb5.so use_first_pass
auth        sufficient    pam_ldap.so use_first_pass
auth        required      pam_deny.so

account     required      pam_unix.so broken_shadow
#The line below allows local user login
account     sufficient    pam_localuser.so
account     sufficient    pam_succeed_if.so uid < 500 quiet
account     [default=bad success=ok user_unknown=ignore] pam_ldap.so
account     [default=bad success=ok user_unknown=ignore] pam_krb5.so
account     required      pam_permit.so

password    requisite     pam_cracklib.so try_first_pass retry=3
password    sufficient    pam_unix.so md5 shadow nullok try_first_pass use_authtok
password    sufficient    pam_krb5.so use_authtok
password    sufficient    pam_ldap.so use_authtok
password    required      pam_deny.so

session     optional      pam_keyinit.so revoke
session     required      pam_limits.so
#The line below triggers creation of home-dir upon user first login
session     optional      pam_mkhomedir.so
session     [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session     required      pam_unix.so
session     optional      pam_krb5.so
session     optional      pam_ldap.so
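Once the files above are in place, you can sanity-check the NSS/LDAP side with getent before attempting a full login. 'jdoe' below is a hypothetical account name - substitute a real Unix-enabled ad user:

```shell
# 'root' resolves from local files, proving NSS itself works:
getent passwd root
# A Unix-enabled ad user ('jdoe' is a hypothetical name) should
# resolve via LDAP once ldap.conf and nsswitch.conf are correct:
getent passwd jdoe || echo "jdoe not resolved yet - check /etc/ldap.conf"
```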

If you want your windows users to be able to run sudo, run visudo as root, then add:

%Domain\ Users ALL=(ALL) ALL

Note: The above setting is just an example of how to add FULL CONTROL for the ad-based "Domain Users" to the sudoers file. Changes to the sudoers file can be more finely tuned to only allow certain programs. If user restriction is a concern for your situation, I suggest you research "sudoers" and find the more granular settings that are appropriate for your needs.

Speaking of security, something else to consider: if a user can become root with sudo -s on the machine, they can then su and be seen as that user as far as the local machine is concerned. You can force them to authenticate (even as root) by commenting out the line below in /etc/pam.d/su, but if they are root - they can still change it back:
vi /etc/pam.d/su
#auth sufficient pam_rootok.so

That's it. Reboot your CentOS, then you should be able to login as your windows user on the box. Feel free to leave a comment below with any suggestions or questions.

Shannon VanWagner


Aug 1, 2012

How To Install Clearcase 7.1.1 on CentOS 5.8

First of all, if you're going to have to use source control, get something FOSS - like git, or subversion, or mercurial, etc. Here's a great list on wikipedia.org

Otherwise, if you're one of those poor bastards tasked (like me) with installing the less-than-FOSS IBM Rational Clearcase (c) (version 7.1.1) on a CentOS 5.8 GNU/Linux machine, you've come to the right place for some notes on a real installation.

Basically, IBM Clearcase does not include support for CentOS. To make things worse, the IBM installer will actually fail the install with an "unsupported operating system" error. Why the installer doesn't offer a "try anyway" option is beyond me, but since it doesn't, we have to resort to other means.

Luckily, it is rather easy to work around the "unsupported operating system" problem. To install Clearcase on CentOS 5.8, we simply have to trick it into thinking our CentOS is actually Red Hat Enterprise Linux. In this post, I'm providing an overview of how the installation process worked for me.

Disclaimer: This is an experimental procedure only. By using these methods, you accept full responsibility for any subsequent damages that might happen to your system by using these instructions.

And now for the installation details:

First, on the computer you'll use for testing Clearcase, install CentOS 5.8 GNU/Linux (32-bit version in this example). There are no special requirements to this step, except that upon your first login, you should run all system/security updates. After running the updates, you should reboot to ensure you are booted to the latest installed kernel.

Related terminal command for system updates:
yum update && yum upgrade -y

And now, let's get the system ready for the install of Clearcase 7.1.1:
Note: These commands assume you are running the terminal as root, use this command to become root:
su - 

Next, check to see if you are running the PAE kernel so we can decide which dependency packages to install for the Clearcase MVFS module build. Run the command below and take note of the result:
uname -r

Example result for a machine running the PAE kernel (note the "PAE" suffix):

2.6.18-309.11.1.el5PAE
Now, let's install the dependencies needed to build the mvfs module for Clearcase:

If you have the PAE kernel:
yum install compat-libstdc++-33 \
gcc glibc-devel glibc-headers kernel-headers \
kernel-devel kernel-PAE-devel -y

If you do NOT have the PAE kernel (the command above would work fine too):
yum install compat-libstdc++-33 gcc glibc-devel \
glibc-headers kernel-headers kernel-devel -y

In this step, we'll set up the trickery needed to make the CentOS system mask itself as Red Hat for the installation of Clearcase 7.1.1:

First, open the terminal, and make a backup of your redhat-release file:
cp /etc/redhat-release /etc/redhat-release.original

Then, edit /etc/redhat-release as follows:
vi /etc/redhat-release

Insert this text:
Red Hat Enterprise Linux ES release 5

Save the file and close it.
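The backup-and-edit above can also be scripted. Here's the same masking step as a sketch, run against a scratch file so you can dry-run it first - point the release variable at /etc/redhat-release (as root) to do it for real:

```shell
# Dry-run of the masking step against a scratch file standing in for
# /etc/redhat-release.
release=$(mktemp)
echo "CentOS release 5.8 (Final)" > "$release"   # what CentOS ships
cp "$release" "$release.original"                # keep a backup
echo "Red Hat Enterprise Linux ES release 5" > "$release"
cat "$release"
```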

Now, let's Mount the VOB storage folder on your clearcase server using NFS so the MVFS will be able to mount the VOB folder. To do this, create a mount point (directory):
mkdir /home/clearcase

Then, modify /etc/fstab to mount the clearcase folder you just created:
vi /etc/fstab

Add a line to mount the server's files (use your Clearcase server's hostname or IP):
ccaseserver:/home /home/clearcase nfs defaults 0 0

Then, re-process the entries from /etc/fstab with:
mount -a

Then, test to ensure you can see the files on the server with this command (should not produce an error):
ls /home/clearcase

Unzip and install the IBM installation agent (installer version 1.3.3 for this example), cd into the dir, then run the install script:

unzip -d extracted agent.installer.linux.gtk.x86_1.3.3.zip

cd extracted


Then, use the graphical interface to install the IBM Installer (Note: You should restart the installer with the button provided at the last step of the process).

Now for the install of Clearcase. From the IBM Installation Manager, click File > Preferences > Add Repository, then browse to the "Disk 1" dir of your Clearcase 7.1.1 installation files and add the diskTag.inf as a repository. Click OK as necessary to get back to the IBM Installation Manager screen, then click the Install button to install Clearcase.

As for the installation steps, I used the defaults pertaining to my environment, except I also added the "ClearCase Full Function Installation" package, and I tested to ensure the "kernel source" build directory was accessible on the MVFS Module page. To test this yourself, run ls of the directory in the terminal. The result should be an existing directory with a list of files:
ls /lib/modules/2.6.18-309.11.1.el5PAE/build

which, for me, is the same as:

ls /lib/modules/$(uname -r)/build
If you get an error at this step, check to ensure the dependency packages are installed (per the step above).
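That check can be wrapped in a quick test so the failure mode is obvious:

```shell
# Verify the kernel build tree needed by the MVFS module build exists.
builddir=/lib/modules/$(uname -r)/build
if [ -d "$builddir" ]; then
    echo "kernel build tree found: $builddir"
else
    echo "missing: $builddir - install kernel-devel (or kernel-PAE-devel)"
fi
```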

If for some reason the MVFS module doesn't get built by the installer, you may see a message like "... albd_server MVFS module could not be found ..." when restarting clearcase with this command:
/etc/init.d/clearcase restart

If you experience the above error, you can try building and installing the MVFS module by hand with these commands:
/etc/init.d/clearcase stop
cd /var/adm/rational/clearcase/mvfs/mvfs_src
make install

Finally, to test that the MVFS module is installed and running, perform these commands:
/etc/init.d/clearcase restart

/opt/rational/clearcase/bin/cleartool lsvob

The result should list out your VOB directories with a * (to show mounted) to the left. Example:
* /vob_storage/MyVob.vbs

So, that's it. Hopefully something here will help someone with their setup. If you have any questions or comments, please leave them below.

Shannon VanWagner


Jul 17, 2012

Fun with Bash Double Brackets, Regular Expressions, Case Matching, and Digits

After some quick searching and not finding the answer, I decided to write this up for my own reference.

My original question was: how do I form a double-bracketed if statement, using "=~" to check a variable against a regular expression for the upper- or lowercase version of a specific search string in bash? After working it out, I think I've got it. See below.

For instance, in the example script below, the user is asked to answer yes or no, the value entered is then checked to "loosely" match a predefined value. In this case, yes/y (with any combination of case) will match the Regular Expression.

This example shows how to formulate your bracketed regular expression to match any variation in case (or even a single-character answer, e.g., y OR n). I'm finding there are differences between how regular expressions work inside the double brackets and how grep uses them.

For instance, notice in the experimental script below how single quotes are not used in the bracketed expression, and how the "|" for the OR case is not escaped with \. Also, in the example that checks for exactly 6 digits, no "\" is used to escape the curly braces.


while [ 1 ]
do
    echo "Are you ready to get started? Enter: Yes|No"
    read result

    #Test the result for yes/no (or variation)
    if [[ "$result" =~ ^[Yy][Ee][Ss]$|^[Yy]$ ]]
    then
        echo "Good. Grab your stuff and let's roll!"
        break
    elif [[ "$result" =~ ^[Nn][Oo]$|^[Nn]$ ]]
    then
        echo "You answered No. Ok, we'll wait until later."
        break
    elif [[ "$result" =~ ^[Ee][Gg][Gg]$ ]]
    then
        echo "Congrats! You found the Easter Egg!"
        while [ 1 ]
        do
            echo "Enter a 6 digit number:"
            read theresult
            if [[ "$theresult" =~ ^[0-9]{6}$ ]]
            then
                echo "Nice work! Bye!"
                break
            else
                echo "Input not recognized, please enter 6 digits."
            fi
        done
        break
    else
        echo "Input not recognized. Please enter Yes/No."
    fi
done

exit 0

So there it is, just a quick example of how bracketed regular expressions can be used to test for specific values in bash. Hopefully this little example will help somebody save a bit of time having to research for this information.
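As a side note on shortening these patterns: on bash 4 and later, the ${var,,} expansion lowercases a value, so the regex only needs one case. A small sketch:

```shell
# Lowercase the input once, then match a single-case pattern.
result="YeS"
if [[ "${result,,}" =~ ^(yes|y)$ ]]; then
    answer="matched"
else
    answer="no match"
fi
echo "$answer"
```

This prints "matched", since ${result,,} expands "YeS" to "yes" before the =~ test runs.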


Shannon VanWagner


Jun 27, 2012

How To: Update Your Ubuntu GNU/Linux sources.list the Geeky Way

Here's my geeky tip for updating your /etc/apt/sources.list on Ubuntu GNU/Linux.

This tip is especially useful around April/October when the new Ubuntu releases are freed into the wild and the main servers are very busy.

I know what you're saying: this can easily be done from the Ubuntu Software Center via the Edit > Sources menu. Yes, that's true, but it's not a very geeky (or terminal-fast) thing to do, now is it? Besides, I like it better when I can initiate the sources update myself with sudo apt-get update, vs. having the software centre do it on exit.

To change your sources.list package server setting from the command line:

1.) Open the Terminal. Simply hit CTRL+ALT+T.

2.) Run this command to update your sources.list file:

sudo sed -i.backup 's/us.archive.ubuntu.com/mirror.pnl.gov/g' /etc/apt/sources.list

3.) Run this command to see if your change took effect (you should see mirror.pnl.gov instead of us.archive.ubuntu.com on update).

sudo apt-get update

Related Notes:

a.) Edits your sources.list file in place (makes a backup of your current sources.list as /etc/apt/sources.list.backup). Keep in mind if you run the command twice - the backup will be overwritten.

b.) Assumes your Ubuntu was installed in the USA(can probably swap the us for your country code) - hence the us.archive.ubuntu.com original setting.

c.) Assumes you want to replace your current package source with mirror.pnl.gov (that one is fast for me here in Seattle, WA). See your list of options for package servers by running this command:

cat /usr/share/update-manager/mirrors.cfg

Alternatively, check out https://launchpad.net/ubuntu/+archivemirrors for a list with speeds and other information.

Feel free to leave your suggestions for the better way below. Thanks!


Shannon VanWagner

Jun 19, 2012

How To Use xargs To grep (or rm) a Million Files

Sometimes even when a tidbit of technology one is studying is already very well documented, one still seeks to test it out for oneself to get a solid sense of the true behaviour of the subject. Plus, if you're like me, writing about a particular subject has the added benefit of committing it to memory.

And so it is for the reason of teaching myself that I document these already well-known points about grep and xargs.

Of course, as a side-effect, if another out there ends up learning from my writings too, that would be perfectly fabulous in my eyes as well.

Basically, the question in my mind is this: How do I successfully grep(search) for something in a directory that contains hundreds of thousands, or perhaps more individual files?

To illustrate an example: Using the grep command by itself to search through hundreds of thousands of files provides the following result on my Ubuntu 12.04 GNU/Linux system. The below directory contains 200,000 files.

$ grep 'rubies' *
bash: /bin/grep: Argument list too long

So why would I receive the "Argument list too long" error here? The key is the total number of characters in the argument list when using grep * in a directory with a large number of files (as in the example above). Take a look at this example, which counts and displays the number of characters in the arguments passed to echo.

$ echo *|wc -c

The above command uses echo to enumerate all the names of the files in the current directory with the wildcard "*". The results are then piped to the word counting (wc) program, which shows the number of characters via (-c).

So as you can see, when applying "*" to a command, it's not really the number of files retrieved as arguments that's the problem, but the combined length of all the filenames in the directory when they're globbed together and passed as arguments via the "*" wildcard.

If the number of characters you retrieve with the command above is greater than the pre-set "ARG_MAX" value on your system, that's when you will get the "Argument list too long" error with a command being used to process a great number of files.

Here's one example of how to find the ARG_MAX value:

$ getconf ARG_MAX

Obviously, if the number of characters submitted to my grep command is greater than the number shown for the ARG_MAX setting, I will not be able to process a command that uses * with that size of argument.
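Putting the two commands together, a quick script can tell you whether the current directory's glob would blow past the limit (a rough check - the real limit also accounts for the environment):

```shell
# Compare the byte length of the expanded glob against ARG_MAX.
args_len=$(echo * | wc -c)
max=$(getconf ARG_MAX)
echo "glob bytes: $args_len  ARG_MAX: $max"
if [ "$args_len" -ge "$max" ]; then
    echo "this glob would overflow the argument list"
else
    echo "this glob fits"
fi
```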

So, the answer to deal with this argument list problem, is to use GNU xargs from the Free Software Foundation.

Here's an excerpt from the xargs man page:

       xargs - build and execute command lines from standard input

       This manual page documents the GNU version of xargs.  xargs reads items from the
       standard input, delimited by blanks (which can be protected with double or  sin‐
       gle  quotes  or  a  backslash) or newlines, and executes the command (default is
       /bin/echo) one or more times with any initial-arguments followed by  items  read
       from standard input.  Blank lines on the standard input are ignored.

       Because  Unix  filenames can contain blanks and newlines, this default behaviour
       is often problematic; filenames containing blanks and/or newlines are incorrect‐
       ly  processed  by xargs.  In these situations it is better to use the -0 option,
       which prevents such problems.   When using this option you will need  to  ensure
       that  the  program which produces the input for xargs also uses a null character
       as a separator.  If that program is GNU find for  example,  the  -print0  option
       does this for you.

(For the complete manual, please see http://www.gnu.org/software/findutils/ )

In this writeup, I want to focus on details of the second paragraph. Specifically, I wanted to document some tests that show why you should use the find command with the -print0 setting along with the xargs -0 setting together to overcome problems like spaces in filenames, and to overcome the "Argument list too long" error.

Anyways, here's how you can see how things respond, and which way is the wrong way vs. the right. DISCLAIMER: These tests are experimental only, and I cannot be held responsible for any damage you cause to your machine while testing these commands for yourself. So make a backup of your important data and use caution when entering the commands.

Let's start by making 200,000 files (a task that took my computer about 8.3 seconds). Then we'll cd into the new directory.

mkdir dirWith200KFiles
cd dirWith200KFiles

Now, create 200,000 files (named file-1 thru file-200000), and echo some text into them (with just a few taps of your fingers). Note: this same process will work for a million or more files, e.g., just replace {1..200000} with {1..1000000}.

for eachfile in {1..200000}
do
    echo "yes there is something here" > file-$eachfile
done

Now, let's hide a gem in one of the files so we can search for it with grep later.

echo "rubies diamonds and gold" >> file-78432

And, let's add a file with spaces in the name so we can break some commands with that too.

echo "spaces in filename" > "myfile spaces inname"

At this point we can conduct a search with grep, and experience what might happen when one is trying to find a gem in such a large set of files and in a file with spaces in the name.

$ grep 'rubies' *
bash: /bin/grep: Argument list too long

So, in the above example, grep fails because of "Argument list too long". To resolve the problem, see the CORRECT example below.
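For context, the "Argument list too long" error is enforced by the kernel's ARG_MAX limit, which caps the combined size of the arguments and environment handed to a new process. You can check the limit on your own box:

```shell
# Maximum combined length (in bytes) of command-line arguments plus
# environment variables allowed when starting a new process
getconf ARG_MAX
```

Expanding 200,000 filenames with * easily blows past this limit, which is exactly what grep ran into above.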

A CORRECT way to use xargs with grep:

$ find . -type f -print0 | xargs -0 grep 'rubies'
./file-78432:rubies diamonds and gold

In the above example, the find command searches the current directory for regular files (-type f) and, because of -print0, terminates each filename with a null character instead of a newline. The xargs -0 command then splits its input only on those null characters (the -0 option is required when the input comes from -print0), so spaces in filenames are harmless, and it runs grep 'rubies' on the files in batches that never exceed the argument-length limit. As you can see in the output, this is how it's supposed to work.
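As an aside, GNU find can also do the batching itself with the -exec ... {} + form, which (like -print0 with xargs -0) never lets the shell word-split the filenames:

```shell
# find appends the matched filenames to grep's command line itself,
# in batches that stay under the argument-length limit, so spaces in
# names are safe and no xargs is needed
find . -type f -exec grep 'rubies' {} +
```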

Here are a few variations with explanations that show how these are the WRONG way to use the grep and xargs commands.

$ find . -type f | xargs -0 grep 'rubies'
xargs: argument line too long

In the above example, find emits newline-separated filenames, but xargs -0 splits its input only on null characters. Since no null characters ever arrive, the entire output of find gets treated as one gigantic argument, which exceeds the argument-length limit and causes the error.

$ find . -type f -print0 | xargs grep 'rubies'
xargs: Warning: a NUL character occurred in the input.
It cannot be passed through in the argument list.
Did you mean to use the --null option?
xargs: argument line too long

In the above example, the find command terminates each filename with a null character (because of -print0), but without the -0 option, xargs doesn't expect null bytes in its input, so it stops with an error (and helpfully suggests the --null option, which is a synonym for -0).

And finally, the most chaotic example, one that has real potential to cause problems, especially if you're using xargs to do something more destructive than grep, e.g. rm (remove) files:

$ find . -type f | xargs grep 'rubies'
grep: ./myfile: No such file or directory
grep: spaces: No such file or directory
grep: inname: No such file or directory
./file-78432:rubies diamonds and gold

In the above example, xargs splits find's newline-separated output on whitespace, so the filename "myfile spaces inname" arrives at grep as three separate (nonexistent) filenames. The grep command still succeeds in finding the correct result in the end, but with a more destructive command than grep, the damage could already be done.
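You can watch this word-splitting happen directly, without creating any files, by feeding xargs a single name that contains spaces (using the same hypothetical filename as above):

```shell
# Default xargs splits on whitespace: the single name becomes 3 arguments
printf 'myfile spaces inname\n' | xargs -n1 echo
# prints three lines: myfile / spaces / inname

# NUL-terminated input plus -0 keeps the name intact as one argument
printf 'myfile spaces inname\0' | xargs -0 -n1 echo
# prints one line: myfile spaces inname
```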

Here are some of the same tests using the command "rm" instead:


CORRECT:

$ find . -type f -print0 | xargs -0 rm

WRONG (In my case of deleting 200K files anyway):

$ rm *
bash: /bin/rm: Argument list too long


WRONG (the filename with spaces breaks it):

$ find . -type f | xargs rm
rm: cannot remove `./myfile': No such file or directory
rm: cannot remove `spaces': No such file or directory
rm: cannot remove `inname': No such file or directory

So there it is. Problem solved. I'm definitely not saying this is the only way to do it, but now you can search for text in large sets of files like never before.

Credit to the site below for showing more information on ARG_MAX

Shannon VanWagner

Jun 8, 2012

Ubuntu 12.04 GNU/Linux + HP 8100 or Ricoh Aficio MP 3500 = Printing Success

Here's a quick write-up on my real-life experience with adding the HP LaserJet and Ricoh Aficio MP 3500 as printers in Ubuntu GNU/Linux.

I chose to make a note of this simple task because I was tripped up by it at first. The problem? The default setting caused nothing but garbage at the printer. After some simple trial and error, I figured out that I needed to switch the driver settings as noted below.

Ricoh Aficio 3500 Driver:
Ricoh Aficio MP 3500 PXL

HP Laserjet 8000 series Driver:
HP Laser Jet 8000 Series pcl3, hpcups 3.12.x

Adding a printer in Ubuntu GNU/Linux 12.04 is really easy, simply follow these steps:
  1. Click the Power icon > Printers > Add +
  2. Expand the Network Printer section > click AppSocket/HP JetDirect
  3. Enter the hostname or IP address for the printer, then click Forward (and pause while the system attempts to detect the printer)
  4. Select the printer from the database (or leave as detected) > click Forward
  5. If you're not given the option to select a specific driver, accept the default; you can then come back and open Printers > Properties (for the printer you want to modify) and set the driver that way.
Basically, if your printer is not working with the default setting (usually postscript), I suggest trying the pcl3 or pxl drivers instead.
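If you prefer the terminal, the same driver switch can be sketched with the CUPS admin tools. The queue name and PPD path below are placeholders, not the exact strings from my setup, so adjust them for your printer:

```shell
# List the configured print queues
lpstat -p
# Point an existing queue at a different driver via a PPD file
# (queue name and PPD path are placeholders)
sudo lpadmin -p Ricoh_Aficio_MP_3500 -P /path/to/Ricoh-Aficio_MP_3500-PXL.ppd
```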

Typically, printing works great with the default settings in Ubuntu anyway; I just wanted to point out that if it doesn't, you should try switching to an alternate driver.

Hopefully this helps someone out there. Please feel free to leave your on-subject, constructive comments below.

Note: If you're looking for the *.ppd file for the Ricoh Aficio MP 3500 PXL (it can be imported as a printer driver), see this link.


Shannon VanWagner

May 29, 2012

Simple Bash Script to Reverse Your Name and Output it

Just a quick script to reverse some input, using Bash and the FOSS "rev" program. It's amazing how easy it is to manipulate things with Bash. I love it!

Bash script version:
#!/bin/bash
#Simple script to reverse an input name

echo "Enter your name";read myname

echo "Your name spelled backwards is: $( echo "$myname" | rev )"

exit 0

One-liner version:
echo -n "what is your name?";read name;echo "$name" |rev

Or how about this more ridiculous example (one that doesn't use the "rev" program):
#!/bin/bash
#Simple script to reverse an input name

echo -n "Enter your name:"
read myname

numChars=$(echo -n "$myname" | wc -c)
revname=""
while [ $numChars -gt 0 ]
do
    revname=$revname$(echo -n "$myname" | cut -b$numChars)
    numChars=$(expr $numChars - 1)
done

echo "Your name spelled backwards is: $revname"

exit 0

Ridiculous one-liner version (one that doesn't use the 'rev' program):
echo -n "Enter your name:";read myname;numChars=$(echo -n "$myname" |wc -c);revname=;while [ $numChars -gt 0 ];do revname=$revname$(echo -n "$myname"|cut -b$numChars);numChars=$(expr $numChars - 1);done;echo "$revname"

May 25, 2012

Simple Bash Script to Remove Clearcase Views (experimental)

So I just wanted to take a minute to say how awesome GNU Bash is, and that you can do a great many things with it. One of those things is running commands from the (mostly) cross-platform-compatible IBM Rational Clearcase cleartool.

Since I was working on cleaning up some old user views, I thought I'd just make a quick writeup on a script for helping with this task. I did it because I like writing scripts in Bash and I'm working (very often) to get better at it. Please note, the script is experimental, and just meant for tinkering purposes. That said, I can't be held responsible for any damage you do to your systems as a result of using this script.

The script takes the username as a parameter. It will search for views owned by that username, and will de-register and de-tag them. Deleting the actual files at the view path is NOT done by this script. Instead, the script records the server name and view path into a file named "ViewsToDeleteReport..." for you to sort through when doing the final removal.

The reason I left the deleting of the files off is two-fold: 1.) it's a bit tricky to remove the files from multiple view servers at once, and 2.) for safety - if you somehow remove the wrong view from clearcase, you can easily re-register the existing view files as a view again. I certainly hope someone can find something useful here.

#!/bin/bash
#Delete clearcase Views (clearcase parts, not physical data parts) Script
#License GPL v2.0 - More information is available at fsf.org
#By: Shannon VanWagner
#TODO - read multiple users based on file input

ct=/opt/rational/clearcase/bin/cleartool
delViewReportName="ViewsToDeleteReport"

if [ -x $ct ]
then
  rtimestamp=$(date +%Y_%b_%d_%H_%M-%S)
else
  echo "Fatal Error(31) no cleartool executable at /opt/rational/clearcase/bin/cleartool, script halted"
  exit 31
fi

reportFilesToDelete() {
#Expects timestamp:$1, server:$2, path:$3
if [ $1 ] && [ $2 ] && [ $3 ]
then
  rname="${delViewReportName}_$1.txt"
  echo $2 $3 | tee -a $rname
else
  echo "Missing timestamp, servername, or path from function call"
fi
}

deleteUserViews() {
for item in $($ct lsview -long|egrep -B10 -i "view owner: \/$1|view owner: $1"|grep -i 'tag:'|cut -d" " -f2)
do
  if [ $item ]
  then
    vuid=$($ct lsview -long "$item"|grep -i 'view uuid:'|cut -d" " -f3)
    vpath=$($ct lsview -long "$item"|grep -i 'view server access path:'|cut -d" " -f5)
    vserver=$($ct lsview -long "$item"|grep -i "view on host:"|cut -d" " -f4)

    #Process clearcase view removal
    echo "Removing clearcase references for view:$item"
    $ct endview $item > /dev/null 2>&1
    $ct rmview -force -uuid $vuid -all > /dev/null 2>&1
    $ct unregister -view -uuid $vuid > /dev/null 2>&1
    $ct rmtag -view $item > /dev/null 2>&1

    #Call report output for view deletion info
    reportFilesToDelete $rtimestamp $vserver $vpath
  else
    echo "Fatal Error(43) Base cleartool lsview failed, script halted."
    exit 43
  fi
done
}

# main
if [ $1 ]
then
  deleteUserViews "$1"
else
  progname=$(basename $0)
  echo -e " What user views shall I delete? \n Script Usage: ./$progname username"
  #29 no parameter error code - defined by me
  exit 29
fi

echo "Done"
exit 0

Note: If you're looking for a FOSS solution for your source code repository instead, I would recommend you try "git" from http://git-scm.org.


Shannon VanWagner

May 10, 2012

How to install FreeNX on Ubuntu 12.04 Precise Pangolin

Ask anyone who knows me and they will tell you that I am a vocal (and perhaps tireless) advocate of FOSS/GNU/Linux. I loves me some FOSS and GNU/Linux and I really like to help others with it as well!

So after writing my post: How to install NX Free Edition on Ubuntu 12.04 Precise Pangolin, I will follow it up with a story about FOSS. In this post, I will guide you through some easy instructions for installing the GPL,FOSS NX server variant called FreeNX, and the FOSS "qtnx" client that is used to connect to the FreeNX server.

The main difference is that, unlike NX Free Edition, which is licensed as proprietary (2-connection limit), FreeNX and qtnx are completely Free Open Source Software (FOSS, GPL)! Sounds good, right? It's music to my ears. After all, this FOSS/GNU/Linux stuff makes the technical world go around for everyone.

Update 01/02/13: Want to try something easier? Simply install xrdp:
1.) sudo apt-get install xrdp
2.) Add this to the home directory (~) of the user you plan on logging in with:
echo "gnome-session --session=ubuntu-2d" > ~/.xsession
3.) sudo /etc/init.d/xrdp restart
4.) Connect to your xrdp host from GNU/Linux with rdesktop, or from Windows with Windows-key+R, mstsc /v:

So here are the simple instructions (and a couple of tweaks) that I used to install FreeNX on Ubuntu 12.04 GNU/Linux:

First, you need to add the freenx-team PPA for Ubuntu 12.04 GNU/Linux. Hit CTRL + ALT + t to get your Terminal, then type or paste in the command below, then hit Enter, then hit Enter to confirm the addition of the new source:
sudo add-apt-repository ppa:freenx-team
Next, update your sources list, then install the FreeNX server software (there are two commands below, the 2nd only runs if 1st is successful). After verifying that no important packages will be removed, hit Y then enter to install FreeNX server:

sudo apt-get update && sudo apt-get install freenx

Next, as noted in the community documentation for installing FreeNX - download the missing nxsetup script, untar it, then copy it to /usr/lib/nx (the command below is one entire line that runs 3 commands and ends with /usr/lib/nx):
wget https://bugs.launchpad.net/freenx-server/+bug/576359/+attachment/1378450/+files/nxsetup.tar.gz && tar xvf nxsetup.tar.gz && sudo cp nxsetup /usr/lib/nx
Now, run the nxserver setup script. I chose to use the parameter to install the default NoMachine provided encryption keys during this command so the NoMachine win-clients can connect as well as qtnx:
sudo /usr/lib/nx/nxsetup --install --setup-nomachine-key
At this point, you have the FreeNX server installed; now you'll want to configure it so client sessions use the ubuntu-2d session:
echo -e "\n#Use unity 2d for client sessions\nCOMMAND_START_GNOME='gnome-session --session=ubuntu-2d'"|sudo tee -a /etc/nxserver/node.conf

Next, restart the FreeNX server to ensure it takes in the .conf file:
sudo /etc/init.d/freenx-server restart
That's it for the FreeNX server; now let's move on to the client. First, we'll install the 'qtnx' package on Ubuntu 12.04 so we have a client application to access the FreeNX server. Launch the Ubuntu Software Center, then click 'Edit > Software Sources' from the top menu. Then place a check by "Community-maintained free and open-source software (universe)". Also, uncheck "Cdrom with Ubuntu 12.04" if it's checked. Then close the Software Sources dialog and the Ubuntu Software Center.
Now, run the commands to update your sources and install the qtnx application from the terminal (CTRL + ALT + t):
sudo apt-get update && sudo apt-get install qtnx
Ok, from the Unity menu or the CLI, start the 'qtnx' application. Enter the username/password for a user on the server, set the speed to LAN, then click Configure. On the configuration dialog, set a name for the profile, the hostname (or IP address) of the FreeNX server machine, the client resolution (I used 1024x768), the network speed (LAN), and set the platform type to GNOME (see the example screenshot below).

Note: For the non-GNU/Linux clients, you can use nomachine.com's NX Client Free to connect to the server. Just use GNOME as the session.

That's it! Get your FreeNX connection on!

As for the version I tested with, it's: nxserver --version 3.2.0-74-SVN OS (GPL, using backend: 3.5.0)

Extra Tip: If for some reason your client won't connect, try deleting the entries in the ~/.nx directory. I'm not sure why this helps, but it seemed to work for me.

Extra Tip 2: If your clients are seeing the Network Manager 'Edit' buttons as greyed out while connected - have a look at this workaround: http://ubuntuforums.org/showthread.php?t=1616355

Update 07-15-2012: Test out the script I wrote to set this up automatically. Download from HERE. Be sure to test the sha256sum of your downloaded file to ensure authenticity. The result should be: 268a735ee24171073ff97c81a320db7022c88a0597f2902f8d181b686dfbf6b9

Additional resources from and Credit to:


Shannon VanWagner

Apr 30, 2012

Easy 'mail by smarthost' SMTP server in Ubuntu 12.04 GNU/Linux

After being tasked with setting up some servers that need to use a local MTA (Mail Transfer Agent) (via SMTP) on our internal network, I found the setup for the Ubuntu 12.04 GNU/Linux exim4 MTA to be pleasingly simple. Easy Peasy, works for me!

Assuming you already have a main mail server in your organization that you can use as a "smarthost" relay, run through these simple steps and you will be up and running with a local SMTP server on your Ubuntu GNU/Linux box in no time.

1.) Install the MTA package on your Ubuntu 12.04 GNU/Linux box:

sudo apt-get install exim4-daemon-light

2.) Configure the MTA with this command and steps, replacing somedomain.com with your mail domain name:

sudo dpkg-reconfigure exim4-config

  • Set postmaster email: postmaster@somedomain.com
  • Select 'mail sent by smarthost; no local mail'
  • Set somedomain.com at the "System mail name" screen
  • Set defaults(hit enter) until you get to the step below
  • Enter mail.somedomain.com for the "IP address or host name of the outgoing smarthost:"
  • Set defaults all the way to the finish

That's it! exim4 should restart and you'll be ready to test. Now wasn't that easy?
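For unattended installs, the same answers can in principle be preseeded through debconf instead of the interactive dialog. The template key names below are my best reading of the exim4-config package, so verify them first with `sudo debconf-get-selections | grep exim4` (from the debconf-utils package) before relying on this sketch:

```shell
# Hypothetical preseed sketch - key names assumed from exim4-config templates
sudo debconf-set-selections <<'EOF'
exim4-config exim4/dc_eximconfig_configtype select mail sent by smarthost; no local mail
exim4-config exim4/mailname string somedomain.com
exim4-config exim4/dc_smarthost string mail.somedomain.com
EOF
# Re-run the package configuration non-interactively to apply the answers
sudo dpkg-reconfigure -f noninteractive exim4-config
```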

Now check if the smtp server is listening on port 25 (smtp) with these terminal commands:

netstat -ano |grep :25

You should see something like:
tcp        0      0*               LISTEN      off (0.00/0/0)

Test the setup by sending an email to yourself from the Terminal:

echo "Yay - SMTP works" | mail -s "Test email" youremail@somedomain.com

That's it! Now you're free to go and get the beverage of your choice and drink to the awesomeness of the makers of FOSS/GNU/Linux/Ubuntu.


Shannon VanWagner

Apr 25, 2012

How to install NX Free Edition on Ubuntu 12.04 Precise Pangolin

Screenshot of NX client connection to Ubuntu 12.04 with NX Free
Ubuntu GNU/Linux comes pre-loaded with the capability to remotely connect to the graphical desktop of your machine by means of a "Desktop Sharing" utility based on the VNC protocol. Clients can connect with a VNC viewer, e.g., tightvnc, vncviewer, etc.

While this may be a viable option for others, VNC has a few drawbacks that sent me looking for something a little more suited for my situation. Namely, I needed a speedy connection, and security.

One problem with VNC is that it's a non-encrypted and therefore non-secure protocol. The workaround for this is to configure the server to tunnel VNC client connections through SSH sessions. Unfortunately, doing this requires extra configuration on both the server and the client.

Another problem with VNC (at least one that I've experienced) is the laggy connections, which can make the user experience less than optimal. So in my search for a better alternative, I found "NX Free Edition" by www.nomachine.com.
NX Free Server delivers the X Window session to clients via the encrypted SSH (Secure Shell) protocol, and in my experience it does it much faster and snappier than VNC. The only drawback of NX Free Edition is the license, as it is proprietary.

Although "NX Free" edition is said to be "free forever". Looking at the license file in the .deb package, it appears there are a number of GPL-covered items there, and then some items with the proprietary license. Checkout the license for NX Free for yourself here.

On the subject of NX servers, there is a FreeNX server PPA for Ubuntu 12.04 (and other LTS versions), named ppa:freenx-team. Unfortunately, at the time of this writing, the packages from ppa:freenx-team didn't seem to work properly for me. There was an error message about the esound dependency package not being available for install.

So, instead of FreeNX, we'll install NX Free Edition with the provided .deb installers instead. I used the 64-bit versions in my tests. Apparently, the big difference between 'FreeNX' and 'NX Free Edition' is that FreeNX is wholly FOSS and has no connection limits whereas the NX Free Edition is only partially FOSS and is limited to 2 client connections (according to the license page at the link above).

Before installing the NX Free Edition packages from www.nomachine.com, first install the openssh-server package from the Ubuntu repositories.

sudo apt-get install openssh-server

After the pre-requisite has been installed, download and install the 3 NX Free Edition components from the "NX Free Edition for Linux" section at www.nomachine.com (packages are installed in this order: client / node / server). Example:
sudo dpkg -i nxclient_3.5.0-7_amd64.deb
sudo dpkg -i nxnode_3.5.0-7_amd64.deb
sudo dpkg -i nxserver_3.5.0-9_amd64.deb

As for connecting to the NX Free Edition server, simply download the client from www.nomachine.com that works with your platform. Versions are available for all three of the major operating systems.

As of this writing, the regular Ubuntu session provided by the NX Free server doesn't quite work as expected. Not to worry, the 'ubuntu-2d' session does work well. I'm working on getting the appropriate server-side configuration for this so the setting won't be required at the client, but in the meantime, the workaround is to configure the NX Free client Session setting as follows:
Application > "Run the following command": gnome-session --session=ubuntu-2d

Options > Enable 'New virtual desktop'

That's it. NX Free Edition works great and it's one solution to the problem of needing more security and speed over the default VNC client in Ubuntu GNU/Linux. Here's a screenshot of the client connection.

Feel free to leave your comments below. If you are using Ubuntu 11.10 and are having problems with Unity at the client, see this link for the workaround. For more information concerning FreeNX on Ubuntu, see this link.

Looking for the FreeNX Server installation instructions? See my post "How to install FreeNX on Ubuntu 12.04 Precise Pangolin" instead. Cheers!

Shannon VanWagner

Apr 5, 2012

On Helping Others Get their GNU/Linux & Consider Doing So

So one day I'm looking at my Google + page and I get this notification of a message:

"Can you help me to configure chip ralink rt2870 on Ubuntu(GNU/Linux), please?"

I really can't imagine at all, I mean I am absolutely dumbfounded as to why this person would contact me.. Really, it's not like I post (on average) 5 stories about GNU/Linux a day or anything... haha..

Turns out the driver this person needed was one of the types where it hadn't made it into the Linux kernel just yet, but the source code was out there. And so a module had to be built from source and installed on the machine to make the wireless adapter work.

Usually these types of problems are relatively easy to get fixed, because: a.) GNU/Linux is open source, so bugs can get fixed (or worked around) by anyone with the technical know-how, and b.) there are kind people out there who take their own time to post specific step-by-step instructions to repair the problems. However, sometimes finding the correct "fix" to match your specific hardware configuration can be tricky. GNU/Linux has a great many tools to detect what type of hardware is in the machine, but (luckily) there are many different types of hardware out there.

Aside from finding a fix that applies to your specific hardware, another problem (and this can apply to any OS) is that you can get into situations where, if you don't cleanly de-install your previous attempts at a fix, the residual clutter can mess things up for anything new you try to install. When this happens, a crucial ingredient to success can sometimes be lost: the "faith" that the problem can actually be fixed in the first place.

So, as it turns out, this person needed to remove the older (and incorrect) driver version he had installed (sudo make uninstall from within the source folder), then start fresh: rebuilding the driver from the correct source, installing the driver, and configuring things correctly for use. I'm fairly certain this person could have knocked this out, had they de-installed the incorrect version they had on their machine and taken a few more steps.

So I respond:

"Which rt2870? Is it the USB stick? Also, which Ubuntu, is it 10.04? If both true then according to this article, you need to blacklist a module, along with a few other seemingly ugly things: http://linuxforums.org.uk/index.php?topic=852.0 (Thanks to: Mark Greaves for posting there ) and http://ubuntuforums.org/showthread.php?t=960642 (Thanks to: Nilsa5 for posting there). But, we'd want to see exactly what you have in terms of wireless adapter and version of Ubuntu before we jump in. You can see the version of Ubuntu with the terminal command 'cat /etc/issue', and the kernel with 'uname -r' or 'uname -a' to show whether you have 32bit or 64bit, and if you have a built-in wireless adapter 'sudo lspci |grep -i network' or USB adapter 'sudo lsusb |grep -i network'. Also, you could see which module is loaded with 'sudo lsmod | grep rt2870sta' or 'sudo lsmod | grep rt2800usb'."

And after some back and forth, I figured we could save time if I were just to connect to this person's machine and help him fix the problem directly. This is another place where the free stuff comes in, this time it's teamviewer.com (Teamviewer is one of those cross-platform-compatible applications where I could control the remote computer and they could see what I am doing at the same time). There are definitely FOSS alternatives to this, like VNC server/client (some setup required), or we could have used Google Chrome Remote Desktop instead of course. But to me, the main thing is that I only use tools that are cross-platform-compatible(this is a must), and in this case, free of charge.

How awesome is it that there are FOSS/GNU/Linux-supporting individuals and companies out there that continue to help make GNU/Linux better for all of us? Very awesome indeed, I say. And also, how great is it that some other FOSS-supporting companies (and a few Freeware ones too) are outright handing us all the tools we need to provide one another support for technical issues for free?! Fabulous! How cool is that? So by helping this person get their wireless up and running, that's how I contribute to GNU/Linux/FOSS myself. This brings me great satisfaction, not only as a technologist, but it's also a great feeling to be able to help someone else free themselves from the dungeons of the coercive monopolists and their restrictive software.

So anyway, when I get connected to this person's computer, I am a bit surprised that my left click on the mouse was reversed to the right click! And the person apparently had the language setup for something totally different than mine. Talk about adding a layer of difficulty! What a nice delicious challenge! So then I bring up the gedit on his machine and type to him a message, and it's in this foreign language.. So I'm thinking, that's not going to work.. so then I open his web browser and navigate to google.com/translate (another awesome FREE tool), and we proceeded to use that to communicate, right there on his computer, for the rest of the time.

So, working at the command line, I'm already knowing what commands to use, and so I'm cranking away, de-installing the older driver, checking to ensure dependencies are installed, compiling the new driver, installing the new driver. Then, after some rmmod, insmod, and reboot between.. voila! The driver is finally working! The person, having watched how easy it was, and now seeing that his wireless was working perfectly, was elated and Thanked me profusely. This is a very delightful aspect of FOSS in my eyes, to be helping others and not having to call into some paid-for "support case" because the proprietary OSes come with no warranty.

Recapping it all, the point I wanted to make is how very proud I was to be able to help a fellow human being with their GNU/Linux. If you have tried GNU/Linux and have figured out something worth sharing, I suggest you do so as well. But you don't have to be a technical person to do good things for FOSS/GNU/Linux. Nope. You can help by simply telling others your stories. Post them on your blog, mention them in comments, correct those nay-sayers, yell it from the rooftops! Also, I want to say that I am grateful for FOSS/GNU/Linux, all the people that make those possible, and also for the free tools like Google+, Google Translate, Teamviewer, etc. (the list is extensive). Helping others (and ourselves) is the spirit of Technology! Don't let some profiteering, coercive monopolist change your mode of thinking.

Shannon VanWagner

FREE YOURSELF, Use GNU+LINUX+FOSS! gnu.org | fsf.org | linux.com | getgnulinux.org | ubuntuguide.org | whylinuxisbetter.net | documentfoundation.org | humans-enabled.com | ubuntu.com | distrowatch.com | makethemove.net | livecdlist.com | code.google.com/opensource | sourceforge.net