

Wednesday, December 3, 2014

How to change system timezone

When you initially install Linux, you specify the machine's timezone. After the install, you can manually change the timezone. The following procedure applies to Debian and Ubuntu systems.

Before you change the timezone, let's find out what timezone your system is currently in.

$ date
Tue Dec 2 13:53:11 PST 2014

The above date command tells you that the system is on PST (Pacific Standard Time).
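If you only need the timezone fields, date can print them directly. A quick sketch using date's format specifiers:

```shell
# %Z prints the abbreviated timezone name (e.g., PST);
# %z prints the numeric UTC offset (e.g., -0800).
date +%Z
date +%z
```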

You can change the timezone interactively or through batch processing.

Interactive setup

The following command guides you through 2 screens to configure the timezone.

$ sudo dpkg-reconfigure tzdata

The advantage of specifying the timezone interactively is that you don't have to know the exact name of the timezone. The program will guide you to select your target timezone. But, if you want to automate the process through a shell script, please follow the batch method as explained below.

Batch setup

  1. Identify the name of the target timezone.

    Timezone data files are stored in the /usr/share/zoneinfo directory tree. Each continent has a corresponding subdirectory, e.g., /usr/share/zoneinfo/America. Each continent subdirectory contains timezone files named by cities in the continent, e.g., /usr/share/zoneinfo/America/Vancouver.

    $ ls /usr/share/zoneinfo/America

    Note the city where your system is located (or the nearest city in the same timezone). The timezone identifier is the concatenated continent and city names, e.g., America/Vancouver.

  2. Specify the timezone in /etc/timezone.
    $ sudo sh -c 'echo America/Vancouver > /etc/timezone'
  3. Run configure program.
    $ sudo dpkg-reconfigure -f noninteractive tzdata
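The batch steps above can be wrapped in a small script. Below is a minimal sketch; the zone-name validation is my own addition, and the commands that actually change the system are left commented out so the script is safe to run as-is. Substitute your own zone for America/Vancouver.

```shell
#!/bin/sh
# Sketch: set the system timezone non-interactively (Debian/Ubuntu).
# America/Vancouver is only an example; substitute your own zone.
TZNAME="America/Vancouver"

# Sanity check: the zone must exist in the zoneinfo tree.
if [ -f "/usr/share/zoneinfo/$TZNAME" ]; then
    echo "OK: $TZNAME is a valid timezone name"
else
    echo "Unknown timezone: $TZNAME" >&2
fi

# The actual change requires root; uncomment to apply:
# echo "$TZNAME" > /etc/timezone
# dpkg-reconfigure -f noninteractive tzdata
```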

Monday, November 24, 2014

Free on-line Introduction to Linux course

In August 2014, more than 300,000 people registered for the first offering of the Introduction to Linux course. This popular Massive Open Online Course (MOOC) is taught by the Linux Foundation, and hosted on edX. The same course starts again on January 5, 2015.

The course is designed for people who have limited or no previous exposure to Linux. Despite that, I have enrolled, figuring that I will pick up some new knowledge anyway. Because the course is self-paced (and free), if it proves too easy, I will simply skip ahead.

If you are interested, please go enroll at edX today.

Thursday, November 20, 2014

How to split an image for visual effects

Suppose that you've just taken a panorama photograph with your fancy digital camera.

You can display the picture as is on your blog. Or you can be a little bit more creative. How about splitting it up into 3 rectangular pieces?

Or even into 2 rows like the following.

To crop a photo into rectangular pieces, use the convert program from the ImageMagick software suite. If your system runs on Debian or Ubuntu, install ImageMagick like this:

$ sudo apt-get install imagemagick

The original panorama image (P3190007.JPG) is 4256 x 1144 pixels (width x height). The following command crops the image into tiles of 1419 x 1144 pixels. The output files are prefixed with tile_ and numbered sequentially from 0 (tile_0.JPG, tile_1.JPG, etc.).

$ convert -crop 1419x1144 P3190007.JPG tile_%d.JPG
$ ls -al tile*
-rw-r--r-- 1 peter peter 337615 Nov 19 21:45 tile_0.JPG
-rw-r--r-- 1 peter peter 300873 Nov 19 21:45 tile_1.JPG
-rw-r--r-- 1 peter peter 315006 Nov 19 21:45 tile_2.JPG
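The 1419-pixel tile width comes from dividing the 4256-pixel image width by 3 columns and rounding up. A quick shell arithmetic check:

```shell
# ceil(4256 / 3) = 1419: the width of each of the 3 tiles
# (the last tile may come out a pixel or two narrower).
WIDTH=4256
COLS=3
TILE=$(( (WIDTH + COLS - 1) / COLS ))
echo "$TILE"    # prints 1419
```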

The convert program can automatically calculate the width and height dimensions of the output tiles. You simply tell it the number of columns and rows. For example, '3x1@' means 3 columns and 1 row.

$ convert -crop 3x1@ P3190007.JPG tile_%d.JPG

If you want to stitch the component images back together, execute the following command:

$ convert tile_*.JPG +append output.JPG

The +append parameter tells convert to join the images side by side. If, for whatever reason, you want to stack them up vertically, specify -append instead.
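Here is a self-contained round trip of the split-and-stitch workflow, using a generated gradient image as a stand-in for the panorama. The sample size and filenames are my own; only the -crop, +repage, and +append usage comes from above.

```shell
# Requires ImageMagick; exit quietly if it is not installed.
command -v convert >/dev/null || exit 0

convert -size 300x100 gradient:red-blue sample.png    # stand-in panorama
convert sample.png -crop 3x1@ +repage tile_%d.png     # 3 columns, 1 row
convert tile_0.png tile_1.png tile_2.png +append restored.png

identify -format '%wx%h\n' restored.png               # back to 300x100
```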

Saturday, October 25, 2014

Tools for checking broken web links - part 2

Part 1 of this 2-part series on Linux link checking tools reviewed the tool linkchecker. This post concludes the series by presenting another tool, klinkstatus.

Unlike linkchecker which has a command-line interface, klinkstatus is only available as a GUI tool. Installing klinkstatus on Debian/Ubuntu systems is as easy as:

$ sudo apt-get install klinkstatus

After installation, I could not locate klinkstatus in the GNOME menu system. No problem. To run the program, simply execute the klinkstatus command in a terminal window.

For an initial exploratory test run, simply enter the starting URL for link checking in the top part of the screen, and click the Start Search button.

You can pause link checking by clicking the Pause Search button, and review the latest results. To resume, click Pause Search again; to stop, Stop Search.

Now that you have reviewed the initial results, you can customize subsequent checks in order to constrain the amount of output that you need to manually analyze and address afterward.

The program's user interface is very well designed. You can specify the common parameters right on the main screen. For instance, after the exploratory test, I want to exclude certain domains from link checking. To do that, enter the domain names in the Do not check regular expression field, separating multiple domains with the OR operator (the vertical bar '|').

To customize a parameter that is not exposed on the main screen, click Settings, and then Configure KLinkStatus. There, you will find more parameters such as the number of simultaneous connections (threads) and the timeout threshold.

The link checking output is by default arranged in a tree view with the broken links highlighted in red. The tree structure allows you to quickly determine the location of the broken link with respect to your website.

You may choose to recheck a broken link to determine if the problem is a temporary one. Right click the link in the result pane and select Recheck.

Note that right-clicking a link brings up other options such as Open URL and Open Referrer URL. With these options, you can quickly view the context of the broken link. This feature would be very useful if it worked. Unfortunately, clicking either option fails with the error message: Unable to run the command specified. The file or folder http://.... does not exist. This turns out to be an unresolved klinkstatus bug. A workaround is to first click Copy URL (or Copy Referrer URL) in the right-click menu, and then paste the URL into a web browser to open it manually.

The link checking output can be exported to an HTML file. Click File, then Export to HTML, and select whether to include All or just the Broken links.

Below is a final note to my fellow non-US bloggers (I'm blogging from Canada).

If I enter the blog's .com address as the starting URL, the search is immediately redirected to the country-specific address (.ca in my case), and stops there. To klinkstatus, the .com and .ca addresses are 2 different domains, and when the search reaches an "external" domain, it is programmed not to follow links from there. To correct the problem, I specify the country-specific address as the starting URL instead.

Monday, October 20, 2014

Tools for checking broken web links - part 1

With a growing web site, it becomes almost impossible to manually uncover all broken links. For WordPress blogs, you can install link checking plugins to automate the process. But these plugins are resource intensive, and some web hosting companies (e.g., WPEngine) ban them outright. Alternatively, you may use web-based link checkers, such as Google Webmaster Tools and the W3C Link Checker. Generally, these tools lack advanced features, for example, the use of regular expressions to filter the URLs submitted for link checking.

This post is part 1 of a 2-part series to examine Linux desktop tools for discovering broken links. The first tool is linkchecker, followed by klinkstatus which is covered in the next post.

I ran each tool on this very blog "Linux Commando" which, to date, has 149 posts and 693 comments.

linkchecker comes in both command-line and GUI versions. To install the command-line version on Debian/Ubuntu systems:

$ sudo apt-get install linkchecker

Link checking often results in too much output for the user to sift through. A best practice is to run an initial exploratory test to identify potential issues, and to gather information for constraining future tests. I ran the following command as an exploratory test against this blog. The output messages are streamed to both the screen and an output file named errors.csv. The output lines are in the semicolon-separated CSV format.

$ linkchecker -ocsv | tee errors.csv


  • By default, 10 threads are generated to process the URLs in parallel. The exploratory test resulted in many timeouts during connection attempts. To avoid timeouts, I limit subsequent runs to only 5 threads (-t5), and increase the timeout threshold from 60 to 90 seconds (--timeout=90).
  • The exploratory test output was cluttered with warning messages such as access denied by robots.txt. For actual runs, I added the parameter --no-warnings to write only error messages.
  • This blog contains monthly archive pages, e.g., 2014_06_01_archive.html, which link to all actual content pages posted during the month. To avoid duplicating effort to check the content pages, I specified the parameter --no-follow-url=archive\.html to skip archive pages. If needed, you can specify more than one such parameter.
  • Embedded in the website are some external links which do not require link checking. For example, links to I can use the --ignore-url=google\.com parameter to specify a regular expression to filter them out. Note that, if needed, you can specify multiple occurrences of the parameter.

The revised command is as follows:

$ linkchecker -t5 --timeout=90 --no-warnings --no-follow-url=archive\.html --ignore-url=google\.com --ignore-url=blogger\.com -ocsv | tee errors.csv

To visually inspect the output CSV file, open it using a spreadsheet program. Each link error is listed on a separate line, with the first 2 columns being the offending URLs and their parent URLs respectively.

Note that a bad URL can be reported multiple times in the file, often non-consecutively. To make inspecting and analyzing the broken URLs easier, sort the lines by the first, i.e., URL, column.
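That sorting step can be done in the shell as well as in a spreadsheet. A sketch with made-up sample rows (the real errors.csv has more columns, but the offending URL is always first):

```shell
# Build a tiny sample in linkchecker's semicolon-separated format:
# bad URL;parent URL;result  (sample data is fabricated for illustration)
cat > errors.csv <<'EOF'
http://example.com/b;http://parent1;404
http://example.com/a;http://parent2;404
http://example.com/b;http://parent3;404
EOF

# Sort on the first (URL) column so repeated bad URLs group together.
sort -t ';' -k 1,1 errors.csv > sorted.csv
cut -d ';' -f 1 sorted.csv
```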

A closer examination revealed that many broken URLs were not URLs I had inserted in my website. So, where did they come from? To solve the mystery, I looked up their parent URLs. Lo and behold, those broken links were actually URL identifiers of the comment authors. Over time, some of those URLs had become obsolete. Because they were genuine comments, and provided value, I decided to keep them.

linkchecker did find 5 true broken links that needed fixing.

If you prefer not to use the command line interface, linkchecker has a front-end which you can install like this:

$ sudo apt-get install linkchecker-gui

Not all parameters are available on the front-end for you to modify directly. If a parameter is not on the GUI, such as skipping warning messages, you need to edit the linkchecker configuration file. This is inconvenient, and a potential source of human error. Another missing feature is the ability to pause link checking once it is in progress.

If you want to use a GUI tool, I'd recommend klinkstatus which is covered in part 2 of this series.