Archive for January, 2010

Helpful Resources

01.23.10

Posted by adamlinuxhelp  |  No Comments »

Linux offers something for everyone; it’s not just for geeks.

Not everyone who uses Linux is a programmer or a geek who feels compelled to rebuild software applications.  To this day, I still have not recompiled a Linux kernel to improve performance.  But hey, enough advanced talk for now.  This post is dedicated to the vast resources for learning Linux that are out there (and have been out there for a long time).

Some suggestions for learning about (and using) Linux.

  • Listen to Linux-oriented podcasts.  There are many Linux podcasts, but one I recommend for newcomers is the Linux Reality Podcast by Chess Griffin.  I listened to these episodes while commuting on a bus.  Visit the Linux Reality RSS Feed (links and descriptions of each episode).  See my post on how to download multiple audio files with the command line.  Podcasts are an easy-to-follow resource available on your schedule.  If something sounds confusing, you can always google the phrase or method that doesn’t make sense.  Here is a link to various Linux podcasts.
  • Visit forums (not newsgroups).  You’ll encounter the phrase “Google is your friend”.  Well, if Google is a friend, then google.com/linux is an even better friend.  Chances are good that your questions about Linux have already been asked.  Viewing the online forums can show you just how big and (most of the time) helpful the Linux community can be.  Don’t overlook e-mail discussion groups such as Yahoo Groups.
  • Tinker, experiment, and have fun at your own pace.  Browse the computing/programming section of the major book stores and you’ll find lots of Linux books.  I highly recommend The Linux Phrasebook by Scott Granneman for a few reasons.  Firstly, it’s packed with helpful tips that I still use today.  Secondly, it’s small and portable.  Lastly, it’s a Linux book as opposed to an Ubuntu or Red Hat Linux book (those are fine books, but they are likely written with a specific OS version in mind and thus can go out of date).  The Linux Phrasebook’s subtitle is “Essential Code and Commands”, making it a “general” resource that can help you get familiar with almost any Linux distribution.

The Linux find command

01.20.10

Posted by adamlinuxhelp  |  1 Comment »

Using the find command to find and move files

Today I’d like to discuss one of my favorite shell commands: find.
find is awesome.  It locates files on your Unix/Linux system based on the options and criteria you give it.

Many of us are familiar with using the Microsoft Windows “Search Tool” in XP to Find Files or Folders, files by date, etc.  Finding a file this way involves populating text fields, drop-down menus, and (possibly) date-range selectors to find the files you’re looking for.

In unix and linux, the find command is used like this:

find [directory to search] [options] [actions]

Here’s a find command example.  Let’s say we want to find .mp3 files within your “Music” folder (/home/yourName/Music).  For this example, let’s pretend that you have the following files: song1.mp3, song1.wav, song2.mp3, song2.wav, song3.mp3, and song3.wav.

To find files in the Music directory, the command is below.  (Don’t type the dollar sign; it’s the prompt.  Start typing at the word “find”.)

$ find Music/ -type f [press ENTER]
Music/song1.mp3
Music/song1.wav
Music/song2.mp3
Music/song2.wav
Music/song3.mp3
Music/song3.wav

We used the “-type f” option to match regular files and not directories.  The output (above) shows that we have media files (mp3 and wav) in the Music folder.  Remember, we’re not interested in listing files so much as performing a ‘find’ on them.

On a Linux system you can use a GUI program to find files, but the find command is faster.  With its many options, find’s power is in its flexibility.  Flexibility to do what?  Finding files is finding files, right?  Yes, I suppose so.

But what if you wanted to perform an action on files that you find? In Windows you’d have to first find them, navigate to them, and then (possibly) perform actions on them one file at a time.  While the steps are few, they can be manual and tedious, thus begging for a shortcut.

Let’s take our example a step further.  Let’s filter the command to only find ‘mp3’ files.  We’ll add the “-name” option to match a wildcard pattern for files ending in mp3.

$ find Music/ -type f -name "*mp3"
Music/song1.mp3
Music/song2.mp3
Music/song3.mp3

If you wanted to find only the .wav files, then you’d change *mp3 to *wav.  Now let’s use find to move files to another folder.  For the sake of neatness, let’s create 2 sub-folders (also known as sub-directories) to hold mp3 files and a separate sub-folder for wav files.

To create a new directory use mkdir [folderPath].  If you do not specify a folder path, the shell creates the directory within the current working folder.  More on “current working folder” later.  For now, let’s assume that you’re in the “/home/yourName” folder.

Creating the folders “Music/mp3” and “Music/wav” would normally take two nearly identical commands.  However, you can create both sub-directories in the “Music” folder with the single command below.

mkdir Music/{mp3,wav}

The braces tell the shell to expand a comma-separated list of names; each item between the commas becomes a new directory.  This is the same as issuing mkdir Music/mp3 and mkdir Music/wav.
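If you’re unsure what a brace expression will expand to, you can preview it safely with echo before running mkdir.  This sketch assumes a bash shell, which is what performs the brace expansion:

```shell
# Preview the expansion without creating anything:
echo Music/{mp3,wav}
# prints: Music/mp3 Music/wav

# Satisfied? Then create the directories for real:
mkdir Music/{mp3,wav}
```

This preview trick works for any brace expression, so it’s a handy habit before commands that change things.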

With the sub-folders created, we have a neater storage arrangement for our music files.  But we’re not done yet.  Remember that our “Music” folder currently stores both mp3 and wav files.  Let’s move each type into its respective sub-directory.  To make this happen we add an action to our command known as “-exec” (followed by the “command terminator” \;) and the mv command to move the files, as seen in the example below:

mv fileOldLocation fileNewLocation

Note: mv also renames files and can overwrite them, so use caution because mv actions cannot be undone.  For safety, use mv -i, which prompts you interactively in the event of a naming conflict or unintentional overwrite; it may save you anguish.

We then add braces {}, BUT braces behave differently in this context.  When the braces are part of -exec, they stand for each file that find locates.  Also, when moving a file to another directory, put the trailing slash / on the target directory name.

To move the mp3 files from directory level “Music” to “Music/mp3” issue this command:

$ find Music/ -type f -name "*mp3" -exec mv -i {} Music/mp3/ \;

Since there weren’t any files in Music/mp3, we didn’t get any warnings from mv -i.
Result: mp3 files moved from /home/yourName/Music to /home/yourName/Music/mp3/

To check it, you can use the ls command in the shell to “list” the files in a given directory:

$ ls Music/mp3/
song1.mp3  song2.mp3  song3.mp3

To move the wav files from directory level “Music” to “Music/wav” issue this command:

$ find Music/ -type f -name "*wav" -exec mv -i {} Music/wav/ \;
$ ls Music/wav/
song1.wav  song2.wav  song3.wav

Summary

  • find (and its options) allows you to find files and perform actions on the found items
  • mkdir allows you to create new directories
  • ls lists files (and directories, subdirectories) within folder(s)
  • mv moves or renames files
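Everything in this post can be rehearsed end-to-end in a throwaway directory, so nothing in your real Music folder is touched.  This sketch (a bash script) adds -maxdepth 1, which isn’t in the commands above; it keeps find from descending into the new sub-folders, making the script safe to run a second time:

```shell
#!/bin/bash
# Rehearse the whole find/mkdir/mv workflow in a throwaway directory.
tmp=$(mktemp -d)                           # scratch area under /tmp
mkdir "$tmp/Music"
touch "$tmp/Music"/song{1,2,3}.{mp3,wav}   # the six sample files
cd "$tmp"

mkdir Music/{mp3,wav}                      # both sub-folders in one command

# -maxdepth 1 keeps find from descending into the new sub-folders.
find Music/ -maxdepth 1 -type f -name "*mp3" -exec mv -i {} Music/mp3/ \;
find Music/ -maxdepth 1 -type f -name "*wav" -exec mv -i {} Music/wav/ \;

ls Music/mp3/   # song1.mp3  song2.mp3  song3.mp3
ls Music/wav/   # song1.wav  song2.wav  song3.wav
```

Once the rehearsal looks right, the same two find commands work on your real Music folder.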

Download several files: part 2

01.14.10

Posted by adamlinuxhelp  |  2 Comments »

In an earlier post we used wget to download a single image file, and then used it to get all of the ‘gif’ and ‘jpg’ files with a single command.  Multi-download commands of this type are helpful when you know the URL and exact directory where the image files exist.  Let’s now take it a step further, and get lazy too.  Lazy?  Yes, lazy.  Since we’re looking to use Linux for time-saving shortcuts, the less work we have to do to accomplish our task, the better.

As previously mentioned, I like podcasts.  Podcasts are (usually) available via an RSS feed in the form of a web URL.  Programs such as iTunes, Amarok, and Rhythmbox use feed URLs to get info about the available audio files, and you can download the files manually or set up preferences that do this for you.

We’re going to look at this from a “get me all the files—now” approach using the Linux command line.

To perform a multi-file “unattended” download…

  1. Make sure that “lynx” (a terminal-based web browser) is installed.  To check if lynx is installed, type which lynx at the prompt.  If the shell responds with nothing but the next prompt, then it’s not installed.  To install lynx on a Debian-based OS such as Ubuntu (or similar), type “sudo aptitude install lynx” at the prompt.  If you’re using a Red Hat-based system, type “yum install lynx” (as root) to accomplish the same.  When lynx is installed, the shell will return the executable path of lynx (it might appear as /usr/local/bin/lynx) when you type “which lynx” at your prompt.
  2. Make sure you have wget installed.  In the terminal, type “which wget” and see what the shell returns.  If it’s not on your system, then install it.  Steps one and two only have to be done once, if at all.  I think wget will be there, but lynx is probably not included out of the box at install time.
  3. Have a URL (or RSS feed URL) where the desired files exist.
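The checks in steps 1 and 2 can be rolled into a small helper function.  check_tools is a made-up name for illustration, not a standard command; it relies on the POSIX command -v builtin, which does the same job as which:

```shell
# Report whether each named program is on the PATH.
# check_tools is a hypothetical helper, not a standard command.
check_tools() {
  for cmd in "$@"; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd: installed at $(command -v "$cmd")"
    else
      echo "$cmd: NOT installed"
    fi
  done
}

check_tools lynx wget
```

Drop the function into your ~/.bashrc and you can check any list of programs in one line.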

Here’s our practical example.  Let’s download all the mp3 files at Steven J. Cohen’s “Doctor Who” RSS Feed. You should view this link in a web browser to make sure that the page/feed is still there.

Time for a “trial-run” (this next command will not download, just list the mp3 files at the Feed URL).

lynx -dump http://www.stevenjaycohen.com/audio/drwho/feed | egrep -o "http:.*mp3"

lynx -dump [URL] returns a numbered list of web links from a given web page (for the complete HTML source, use lynx -source [URL]).  Since we only want the links (and not the numbering), we filter this list using the UNIX pipe character “|” and the search tool egrep -o [pattern].  We put in “http:.*mp3” as our pattern, which captures any link that starts with http and ends with mp3 (the . matches any character and the * means “any number of them”).  A word of caution: it’s ALWAYS a good idea to do a trial run so that you have an idea of what you will request for download and whether your command will build the list properly.  This is a very important preliminary step.

Now, let’s do this for real.  The following command downloads files into the current directory of the shell.  So if you execute the command from “/home/myUserName/music” then the files get saved into “music”.

lynx -dump http://www.stevenjaycohen.com/audio/drwho/feed | egrep -o "http:.*mp3" | xargs -n1 wget

And that’s it.  The shell shows progress of each file as it downloads.  When it’s done with the first file, it downloads the next one, and so on.  It runs unattended, allowing you to do other things with your time.

To perform the “unattended” download of all the files in the list, we needed another pipe and another command known as “xargs”.  Why xargs?  Sometimes the shell runs into the problem of having “too many arguments” in its list to act on.  xargs is your friend should this happen.

xargs [options] [command].  The option and the command work together as follows: the “-n1” option runs “wget” once for each URL in the list produced by the “lynx -dump” part of the command.  Like many things in the shell, there’s usually more than one way to do it.
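You can see exactly what xargs -n1 does by substituting a harmless command for wget.  In this sketch, echo stands in for wget, and url1/url2/url3 are placeholder strings rather than real addresses:

```shell
# Each line of input becomes one separate run of the command:
printf 'url1\nurl2\nurl3\n' | xargs -n1 echo downloading
# prints:
# downloading url1
# downloading url2
# downloading url3
```

Swap echo back to wget and the same three lines become three sequential downloads.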

Download several files: part 1

01.14.10

Posted by adamlinuxhelp  |  No Comments »

How to use wget; download many files with one command.

A typical way to download a file is to “right-click” on it and “save as” to a folder on your computer.  Downloading a few files this way is not tedious.  But if an audio book has 25 to 30 files, you can bet I don’t want to do those manual moves over and over again.

Using a terminal, there’s a faster way to download files.  I’ll now introduce one of my often-used commands: wget.  This command has many useful options.  For example, you can download files, set up custom directory structures for your download(s), or see if a file exists without actually downloading it.

Using the command (in simple terms): Open a terminal and type wget [options] [urls] at the prompt (usually a dollar sign).  You can use one or several URLs.  Options are (well…) optional.

Here’s a practical example where you can download a gif image from the O’Reilly site linked below.  When you open a Linux terminal, you are usually in your user’s “home” directory.  This is fine for the purpose of this example.  Issue the command

wget http://oreilly.com/catalog/covers/0596009305_bkt.gif

Here’s what will happen: the file 0596009305_bkt.gif gets downloaded and saved to your home folder.  Cool, right?  But it was a bit of work (typing) just to download one file.  How does this save me time?

Yes, the above example is overly simplified.  You can, if you wish, download all of the “.gif” and “.jpg” files from a given web address with the example below.  It’s a time-saving single command, borrowed from the commandlinefu website mentioned in the “cool and advanced uses of wget” link below.

wget -r -l1 --no-parent -nH -nd -A".gif,.jpg" http://example.com/images

*Change “example.com/images” to a valid web address.  The options above, explained:

  • -r for “recursive”
  • -l1 : only get files one level deep, i.e. in the “images” directory (don’t dive into subdirectories)
  • --no-parent and -nH and -nd : ignore directory structure (no directories; just get the files)
  • “-A” is the “accept list” for files of the given types (.gif and .jpg).  It’s case-sensitive, so it would not download files ending in “.JPG”; if you need those too, specify -A”.gif,.jpg,.JPG”

You can find more wget info and options here.  For really cool and advanced uses of wget, see this page.

I’ll post another awesome usage of wget in another post.  Thanks for reading.

Useful Commands: introduction

01.12.10

Posted by adamlinuxhelp  |  No Comments »

Learning useful shell commands helps save time & effort.

Do you have to use the command line?  No, you don’t have to use it.  I didn’t use it much when I started using Linux.  But, in my opinion, learning useful shell commands helps you get the most out of Linux.

What are some benefits of using the Command Line?

  • When you need help from the Linux community, many helpful solutions are expressed as commands to be run in the terminal.  It’s done this way for simplicity, accuracy, and consistency.  Many graphical-based (GUI) programs are “front-ends” where a user triggers events (via menu choices & button clicks) that execute terminal-based commands in the background.
  • Many jobs in the IT and web development field require candidates to be comfortable on the command line.  This implies the ability to issue shell commands and use console-based text editors such as vi and emacs.  Some jobs will also require you to understand (and perhaps troubleshoot) pre-written shell scripts in many languages.
  • For repetitive tasks, using the command line is just plain faster.  Why wait for a GUI program to open, click on things, or browse for file(s) to manipulate one file at a time?  You could simply type one or more commands [and options] into a prompt to accomplish the same.  When you find yourself issuing the same commands a few times, it makes sense to save these command calls in a text-based file (a shell script) to make the process even faster.  More on this to come.
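As a small taste of the shell scripts mentioned above, here’s a minimal example.  The filename status.sh is just an illustration; save a few oft-repeated commands in a text file and you can rerun them all anytime:

```shell
#!/bin/bash
# status.sh - a few oft-repeated commands saved in one reusable file.
# Run it with: bash status.sh
echo "Today is $(date)"
echo "Current folder: $(pwd)"
echo "Files here: $(ls | wc -l)"
```

Once it exists, running the script is one short command instead of retyping every line.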

Your choice: part 1

01.10.10

Posted by adamlinuxhelp  |  No Comments »

Linux offers a freedom of choice.  Part 1 of ??

  1. Choice to create an operating environment in countless variations.  Of course other operating systems offer choice in customizing your work space.  Changing things like your desktop colors, fonts, and font sizes is possible in Mac and Windows, etc.  With Microsoft Windows, you’re stuck with “Explorer” as your file manager.  On a Mac it’s called the Finder.  In Linux there are many file managers to pick from.  Another example is the desktop environment.  Say you wanted a simple layout without all the “bloat”.  Linux offers several “minimalist” desktops where the fancy eye-candy is gone, leaving you with a clean interface that uses fewer system resources (such as RAM), allowing for much faster response times.
  2. Choice of how many virtual desktops you use (Mac OS-X has “Spaces”, which does essentially the same thing).  What are “virtual desktops”?  They are workspaces.  In Windows, it’s easy to clutter your desktop when you have a lot of file folders and applications open.  It becomes tedious to remember which ones to use and which ones to minimize.  And even with the advent of “ALT+TAB” to cycle through your folders and apps, the more things you have open, the longer it takes to get at what you need at any given moment.  Imagine grouping your web-based apps and folders in their own “area 1”, having your word processor and spreadsheet open in “area 2”, and having some mp3 music files playing in “area 3”.  The ability to flip back and forth between these “areas” greatly reduces desktop clutter, allowing you to get things done more efficiently.

Save time on tasks

01.08.10

Posted by adamlinuxhelp  |  No Comments »

Learning basic Linux commands helps save you time when working on repetitive tasks.

Imagine you had a folder of images in *.bmp format.  File sizes of bmps are larger than jpg files because they contain more information.  To convert these bmp files to jpgs you could open a .bmp file in a graphics program (such as Photoshop), select processes (such as ‘save for web’), set the format to convert to, and finally, save the file “as” filename.jpg.  Converting each image takes about five to six “manual” steps, depending on how you open the source files and where you save the destination files.  If you had a folder with 100 images, it becomes tedious repetition.  There are better ways to do this.

You could create a batch process or “macro” in your graphics app that lets you record the individual steps performed on a single image, and then point the macro at a folder of source images and a folder for the converted images.  Photoshop has “batch processing” that handles this fine.  Other programs (such as the GIMP) are “scriptable”, but some knowledge of Python is probably a must.

So we now have a way to convert images en masse with a GUI app, but is it the most efficient way?  Is it reusable?  It is reusable, but you’ll probably have to reset the macro/batch settings if your source and destination folders change the next time you have to convert a lot of images.

Is there a quicker way to accomplish more or less the same thing?  There is, in my opinion, a better way to perform mass image conversion in a predictable, reliable way.  It requires the command-line terminal and an application known as “ImageMagick“.

Using Linux (the bash shell and ImageMagick), here are the 2 steps:

1. Use the “cd” command to get to the directory that has the bmp images
2. Issue this command:  mogrify -format jpg *.bmp
The above command was found at http://www.ofzenandcomputing.com/zanswers/1016 and I thank them for posting it.  It accomplished in one line the same work as a 50-line shell script.
Result: All of your bmp files have been converted and saved-as jpgs in the same directory
Warning: the “mogrify” command usually replaces the source file, but when changing formats, ImageMagick is intelligent enough not to destroy the original bmp file.

2a. To store the new files in a folder other than the source (e.g. “bmpFiles/jpgs”), add the -path option in front of -format.  The command becomes:  mogrify -path jpgs/ -format jpg *.bmp
The folder specified by -path must first exist, or the command will fail.
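The renaming logic behind the conversion can also be sketched as a plain shell loop, which is handy as a dry run before touching real files.  The echo makes this print the commands instead of running them; dropping the echo (with ImageMagick’s convert installed) would perform the real conversions one file at a time:

```shell
# Dry run: print the convert command that would run for each .bmp file.
# ${f%.bmp} strips the .bmp suffix so .jpg can be appended.
for f in *.bmp; do
  echo convert "$f" "${f%.bmp}.jpg"
done
```

Of course, mogrify does the same job in one command; the loop just shows what happens under the hood.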

Save some money

01.08.10

Posted by adamlinuxhelp  |  No Comments »

Linux saves money by allowing reuse of an old computer.

Let’s say you have an old (PC-style) machine lying around, and you enjoy “tinkering” with computers.  You’d like to learn more about things such as partitioning, dual-booting, bash shell scripting, or hosting a local PHP-based website with database connections.  Having an old computer available lets you do this without messing with your “main” computer.  Remember, if it’s PC-style (Gateway, IBM, HP, or Dell), Mac OS-X cannot be used.  Why throw away a working machine just because the latest Microsoft operating system won’t run on it?  So what are some options?

  1. Purchase a licensed copy of Microsoft Windows (remember, old machines probably cannot run Vista or Windows 7).  If the machine is really old, it may not even be able to run Windows XP.  Even if it could run XP, do you really want to use an outdated or no-longer-supported OS?
  2. Obtain a pirated copy of Microsoft Windows.  I don’t condone this approach, but it happens.  Even though your experimental machine is old, it deserves a stable operating system.  Think about it.  Your OS should be fully functional so you can perform updates & backups without worrying about crashes or losing your work.
  3. Download & burn a few Linux LIVE CDs.  Use that old machine for something awesome without erasing anything on the HD.  Go to DistroWatch, read some info, and check out some screenshots.  If a distro appeals to you, then download and burn the .iso; the (monetary) cost per Live CD is one blank CD.  Let the LIVE CD attempt to detect all of the hardware (this is important if it’s a laptop, as you’d want to ensure that the wifi is working).  It might be best to stick with the more “popular” distros at first.  Most distros are “based on” or “derivatives of” major Linux distributions such as Red Hat, Debian, and (more recently) Debian-Ubuntu.  “Debian-Ubuntu” means that Ubuntu is the base, and Ubuntu is in turn based on the Debian distribution.

Partitioning for Linux

01.04.10

Posted by adamlinuxhelp  |  No Comments »

Partitioning is a drive-setup process where you designate areas of your hard drive as “mount points”.

Depending on the situation, you can partition the drive as you’re installing Linux, or set up the drive first (using a utility CD such as GParted).  For a single-boot setup, I suggest using the install CD’s partitioning tool if you’re wiping out the old operating system or replacing one Linux distro with another.  If you’re adding Linux (to create a dual-boot machine with MS Windows or Mac OS-X), then you should attend to the partitioning chores first.  More on dual-booting later.

Partitioning can intimidate newcomers, but fear not.  The Linux install process is flexible, and you don’t have to manually create partitions for single boot setup.  The install CD may offer suggestions (depends on distro) or at the very least have an “automatic partitioning” feature that works fine.

Manual partitioning, on the other hand, is worth learning.  Even the basic “3 partition” scheme (/, swap, and /home) offers the advantage of not losing your documents if you replace your distro.  While you should make frequent backups of your files anyway, common advice from the Linux community suggests keeping /home on its own partition for that very reason.  Find a partitioning tutorial here.
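As a concrete illustration, the basic 3-partition scheme might look like the sketch below.  The device names and sizes here are assumptions for illustration only; your drive, distro, and needs will differ:

```
/dev/sda1   /       15 GB   the root partition: the OS and installed programs
/dev/sda2   swap     2 GB   swap area: commonly sized around 1-2x your RAM
/dev/sda3   /home   rest    your documents and settings, kept across re-installs
```

With a layout like this, a later re-install can reformat /dev/sda1 while leaving /dev/sda3 (and your files) untouched.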

Single-boot Linux

01.03.10

Posted by adamlinuxhelp  |  No Comments »

Single-boot or “single boot” is when only one (1) operating system will be installed on your hard drive.  A single-boot machine is a simple way to install Linux.  The single-boot scenario typically consists of either

  • Completely wiping out the current Operating system and installing Linux, or
  • Installing Linux on a new and blank hard drive

I’ve performed both, and as stated, the process is simple.  You put in your Linux Live or Install CD and follow the prompts.  Many distros offer graphical installers which take you step by step and typically ask you to confirm all the choices (default or custom) before any changes are written to the hard drive.  Some distros offer text-based installers, which can be a bit too challenging for the new Linux user.

More on this topic later, but a word on partitions.  A partition is a “chunk” or “area” of your hard drive.  If you plan on trying other distros and still want a single-boot setup, you should definitely consider creating a minimum of 3 partitions, with the swap area, the root partition, and “/home” as the 3 separate partitions.  A separate /home partition allows you to keep all of your documents when installing the next distro.  For more detail about partitioning follow this link.