Monday, July 13, 2020

Useful command-line date utilities

My earlier post, Fun with Date Arithmetic, shows how to use the date command to compute a date that is a certain number of days in the future or the past. This post expands on how you can manipulate dates on the Linux command line using the cal and dateutils programs. These commands can give you results that GUI calendars can't.

cal

While the date command works in units of days (say, displaying the date that is 3 days from today), the cal command works in units of months (say, displaying the month that is 2 months away).

Running cal without any argument displays the calendar for the current month.

$ cal
     July 2020
Su Mo Tu We Th Fr Sa
          1  2  3  4
 5  6  7  8  9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25 
26 27 28 29 30 31

While GUI calendar programs are well capable of displaying single-month calendars, they are no match for the cal command when it comes to simultaneously displaying multiple months.

To display the current month and, say, the 2 months that follow, use the -A2 argument:

$ cal -A2
                            2020
        July                 August              September
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
          1  2  3  4                     1         1  2  3  4  5 
 5  6  7  8  9 10 11   2  3  4  5  6  7  8   6  7  8  9 10 11 12
12 13 14 15 16 17 18   9 10 11 12 13 14 15  13 14 15 16 17 18 19 
19 20 21 22 23 24 25  16 17 18 19 20 21 22  20 21 22 23 24 25 26
26 27 28 29 30 31     23 24 25 26 27 28 29  27 28 29 30                             
                      30 31

Displaying past months is equally easy with the -B argument. In fact, you can combine -B and -A to, for example, display the previous and the next month as follows:

$ cal -A1 -B1
     June 2020             July 2020            August 2020
Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa  Su Mo Tu We Th Fr Sa
    1  2  3  4  5  6            1  2  3  4                     1
 7  8  9 10 11 12 13   5  6  7  8  9 10 11   2  3  4  5  6  7  8
14 15 16 17 18 19 20  12 13 14 15 16 17 18   9 10 11 12 13 14 15
21 22 23 24 25 26 27  19 20 21 22 23 24 25  16 17 18 19 20 21 22
28 29 30              26 27 28 29 30 31     23 24 25 26 27 28 29
                                            30 31

cal has a special shortcut for displaying the above combination, namely the current month together with the months immediately before and after it:

$ cal -3
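
The output is the same 3-month spread as above. cal also accepts an explicit month and year if you want to jump directly to an arbitrary month. For example, to display September 2020:

$ cal 9 2020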

dateutils

While date is a nifty program to do date arithmetic, dateutils is a more versatile collection of tools for date manipulation.

To install the dateutils program on Debian,

$ sudo apt install dateutils

dateutils.dadd

Use the dateutils.dadd sub-command to do date arithmetic. The program takes 2 inputs: a date, and a duration before or after that date, specified in years, months, weeks, or days. For instance, to output the date that is 1 week and 2 days in the future from today, say 2020-07-10:

$ dateutils.dadd today +1w2d
2020-07-19

You can replace the special keyword today with a specific date. For instance, to compute the date that is 1 year, 2 months and 3 days in the past from 2020-07-10:

$ dateutils.dadd 2020-07-10 -1y2m3d
2019-05-07

dateutils.ddiff

Given 2 dates, dateutils.ddiff computes the duration between them. The default output unit is the number of days. You can customize the output units by specifying a format using the -f argument.

$ dateutils.ddiff 2020-07-02 2020-08-21 -f "%y years %m months %w weeks %d days"
0 years 1 months 2 weeks 5 days
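
dateutils bundles more tools than the 2 shown here. One handy example, as a quick sketch: dateutils.dconv converts a date from one format to another, with -i and -f specifying the input and output formats respectively.

$ dateutils.dconv -i '%m/%d/%Y' -f '%Y-%m-%d' 07/13/2020
2020-07-13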

Related posts:

• Fun with Date Arithmetic

Monday, July 6, 2020

Use Certbot to renew Let's Encrypt TLS certificates

You created a new website or perhaps even configured an SMTP mail server. You patted yourself on the back because you did not forget about securing the web and mail services. Specifically, you set up a free TLS certificate from Let's Encrypt. So, it is time to put your feet up and admire the good work you have done, right?

Not yet. A TLS certificate from Let's Encrypt expires every 90 days, and is renewable only after 60 days. Doesn't this scream for automation?

Assuming that you have shell access to the host server, this blog post explains how to use Certbot to automate the renewal of Let's Encrypt certificates, and points out some gotchas to avoid.

Certbot

The recommended way to deploy Let's Encrypt certificates on a Linux system is to use the certbot tool. This tutorial assumes that you have successfully used certbot to obtain and install a Let's Encrypt certificate.

The Let's Encrypt ecosystem with certbot is designed with automation in mind. When you install certbot on various Linux distributions such as Debian, Ubuntu, Fedora, CentOS, etc., the mechanism for certificate renewal is already put in place. What you need to do is make sure that the timer for certbot is enabled.

$ systemctl status certbot.timer
● certbot.timer - Run certbot twice daily
   Loaded: loaded (/lib/systemd/system/certbot.timer; enabled; vendor preset: enabled)
   Active: active (waiting) since Sat 2020-06-27 10:20:51 PDT; 3 days ago
  Trigger: Wed 2020-07-01 12:14:52 PDT; 2h 1min left

To enable the timer,

$ sudo systemctl enable certbot.timer 

Certbot is scheduled to automatically run twice daily to check if certificate renewal is needed. A certificate is only renewed, however, if expiry is impending—within 30 days before expiration.
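
To see exactly when the next check will run, you can list the timer:

$ systemctl list-timers certbot.timer

The NEXT and LEFT columns show the upcoming trigger time.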

For your peace of mind, you can verify the current status of your TLS certificates using the web tools SSL Test and crt.sh.
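
If you prefer the command line, a quick way to read the validity dates off the certificate your server is actually serving (replace example.com with your domain):

$ echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -dates

The output lists the certificate's notBefore and notAfter dates.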

Gotcha # 1: Refreshing certificates

Renewing a certificate before it expires is only half the battle. The other half is getting the web server (and email server, if applicable) to use the new certificate. The obsolete certificate remains in use until the web server (and email server) is reloaded.

Certbot provides a hook interface to run scripts before or after a certificate is renewed. I used the Deploy hooks and the Pre-hooks to automate the reloading of web and email server programs.

Deploy hooks

A Deploy hook is run after the successful renewal of a certificate. Deploy hooks are placed in the /etc/letsencrypt/renewal-hooks/deploy directory.

If there is more than 1 script in the directory, the scripts are executed in alphabetical order based on their filenames.

I created 2 Deploy scripts, 01-reload-nginx.sh and 02-reload-postfix.sh, to reload NGINX and postfix respectively. Both scripts should be executable (file permissions set to 770).
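
For example, to set those permissions on all the Deploy scripts:

$ sudo chmod 770 /etc/letsencrypt/renewal-hooks/deploy/*.sh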

A typical script to reload NGINX is as follows.

$ sudo cat /etc/letsencrypt/renewal-hooks/deploy/01-reload-nginx.sh
#! /bin/sh
set -e
/etc/init.d/nginx configtest
/etc/init.d/nginx reload 

To switch to the new certificate for postfix, both postfix and dovecot need to be reloaded.

$ sudo cat /etc/letsencrypt/renewal-hooks/deploy/02-reload-postfix.sh
#! /bin/sh
set -e
/etc/init.d/postfix reload
/etc/init.d/dovecot reload

Pre-hooks

Pre-hooks are scripts to be run when a certificate is due for renewal, i.e., within 30 days prior to expiration, and before the renewal is actually performed. Those scripts are placed in the /etc/letsencrypt/renewal-hooks/pre/ directory.

I created a Pre-hook script named 01-notify-renewal.sh to email me when a certificate is due for renewal.

$ sudo cat /etc/letsencrypt/renewal-hooks/pre/01-notify-renewal.sh 
#! /bin/sh
set -e
echo 'Message' | /usr/bin/mail yourEmail@example.com -s 'Subject' 

Manual execution

After auto-renewal is set up, there is little need to manually renew a certificate. But you have the option to do so. To renew manually:

$ sudo certbot -q renew

The -q option suppresses all output except errors.

Recall that a certificate will only be renewed if it is within 30 days of expiry. You can override this restriction by specifying the --force-renewal option. Use it with caution, however (see gotcha # 2).

$ sudo certbot -q --force-renewal renew

Gotcha # 2: Rate limit

Renewal is subject to a Duplicate Certificate limit of 5 per week. Please read the rate limits documentation to be acquainted with what counts toward this limit.

If the rate limit for a certificate is exceeded, renewal is temporarily suspended until the rate limit resets—on a sliding basis—after a week. For instance, if you renew a certificate 3 times on Monday and twice more on Friday, renewal is suspended until the following Monday.
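
If you want to test your renewal setup without burning through the production rate limit, certbot's --dry-run option exercises the full renewal flow against the Let's Encrypt staging environment instead:

$ sudo certbot renew --dry-run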

Thursday, July 2, 2020

How to generate and read QR code on Linux

QR code, short for Quick Response code, was initially created to improve on bar codes used in inventory management. Nowadays, QR codes are ubiquitous, on posters, billboards, web pages, etc. This post will illustrate how to generate and read QR codes using the Linux command line interface (CLI).

The programs you will need to generate and read QR codes are qrencode and zbarimg respectively. (If you want to work with a GUI tool, there is QtQR.) To install the 2 programs on Debian:

$ sudo apt install qrencode zbar-tools 

Background

A QR code is a matrix of square dots (or 'modules' in QR-speak). QR codes come in 40 versions of increasing data capacity. Version 1 measures 21 × 21 modules, and each higher version adds 4 modules per side, ending with version 40 at 177 × 177 modules.

The exact maximum data capacity of a version depends on several factors, including the type of characters stored (e.g., numeric vs alphanumeric) and the level of error correction desired. At the Medium error correction level, version 1 can store up to 20 alphanumeric characters; version 40, 3,391.

Fortunately, as we'll see next, the qrencode utility supplies good defaults, and hides most of the gory details from you.

QR code generation

In its simplest form, qrencode takes the input string to be encoded and outputs the PNG graphic to a file. The following command encodes the URL for this website.

$ qrencode -o webURL.png  'https://linuxcommando.blogspot.com/'

You can specify different parameters to fine-tune the QR code. Use the -l parameter to change the error correction level from the default L for Lowest to M for Medium, Q for Quite High, or H for Highest. In addition, you can explicitly specify the version to use, the size of the modules and the margin, etc. The following example generates a version 2 QR code for the same website at the Highest error correction level.

$ qrencode -o webURL.png -l H -v 2 'https://linuxcommando.blogspot.com/'

Besides URLs, marketers typically encode information such as phone numbers and email addresses.

$ qrencode  -o webPhone.png  '(604)555-1234'
$ qrencode  -o webEmail.png 'spanish3rdlanguage@gmail.com'

Many QR code scanners will automatically open the associated app upon scanning a QR code of a special format, e.g., a browser for URLs, email client for email addresses, and phone app for telephone numbers.
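
To nudge the scanner explicitly, you can encode the standard mailto: and tel: URI schemes instead of bare strings; the address and number below are placeholders:

$ qrencode -o webEmail2.png 'mailto:spanish3rdlanguage@gmail.com'
$ qrencode -o webPhone2.png 'tel:+16045551234'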

QR code scanning

The Linux program zbarimg decodes the QR code stored in a file. To invoke it, simply provide the name of the input file that contains the QR code.

$ zbarimg webURL.png
QR-Code:https://linuxcommando.blogspot.com/
scanned 1 barcode symbols from 1 images in 0 seconds
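
A quick way to sanity-check the whole pipeline is a round trip, encoding a string and immediately decoding it. The -q flag suppresses zbarimg's summary line:

$ qrencode -o /tmp/test.png 'hello world' && zbarimg -q /tmp/test.png
QR-Code:hello world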

If you specify the -d parameter, zbarimg will display the QR code in addition to the decoded information.

The default camera app on recent Android or iOS phones can also function as a QR code scanner. To scan, run the camera app and point it at the QR code.

Tuesday, June 23, 2020

Adding Google Analytics tracking code to a WordPress website

My previous post revealed how to piggyback a new WordPress website on an existing WordPress instance using what is known in WordPress-speak as multisite. In this post, I'll walk through how to embed the Google Analytics tracking code, aka the Global Site Tag, in the new website.

Website administrators want to know who their users are, and when, how, and what they do on their websites. Google Analytics can provide that information if the proper tracking code is present on the web pages.

There is more than 1 way to insert the tracking code, including using the Google Tag Manager or the WordPress plugin MonsterInsights. This post details a manual method of directly embedding the tracking code in a WordPress theme, assuming that you have already set up a Google Analytics account for the target website. In addition, I assume that you have created and activated a child theme for your website. If you are hosting multiple websites within the same WordPress instance, I assume you have a separate child theme for each site.

  1. Login to Google Analytics, and select the account corresponding to the target website.
  2. Click Admin on the LHS menu bar.
  3. Click Tracking Info and then Tracking Code in the middle column.
  4. Copy the Global Site Tag script to be pasted next in the WordPress theme.
  5. SSH into the web host, and copy the header.php file from the parent theme to the child theme.
    $ cp /var/www/example1.com/wp-content/themes/twentyseventeen/header.php  /var/www/example1.com/wp-content/themes/twentyseventeen-child/header.php
  6. Paste the tracking code script.
    The script should be inserted in the header.php file in your child theme directory, say /var/www/example1.com/wp-content/themes/twentyseventeen-child, just before the call to wp_head() at the end of the header specification. A sample placement is sketched right after this list.
  7. Navigate back to Tracking Info/Tracking Code on Google Analytics and click Send Test Traffic.
    A new session of your website pops up in the browser.
  8. Navigate to the Reports section on the LHS menu bar, click Realtime, then Overview.
    You should see the just-opened session counted in the number of active users on site.
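
For reference, here is a sketch of how the tail end of the child theme's header.php might look after pasting. The actual script is whatever you copied in step 4; UA-XXXXXXXXX-X below is a placeholder tracking ID:

<!-- Global Site Tag (gtag.js) - Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-XXXXXXXXX-X"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'UA-XXXXXXXXX-X');
</script>

<?php wp_head(); ?>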


Sunday, June 14, 2020

How to migrate single site WordPress to multisite

The scenario

I had an existing WordPress website, say example1.com, that was hosted on a DigitalOcean VPS running LEMP (Linux, NGINX, MySQL, PHP). The website supported HTTPS using a TLS certificate issued by Let's Encrypt.

I wanted to start a new WordPress website, say example2.com. Barring a miracle, example2.com would initially have minimal traffic.

I decided that the new site would run on a virtual host on the same VPS, using WordPress's multisite feature.

WordPress multisite

Hosting multiple websites/domains on the same VPS can be a double-edged sword. Suffice it to say, the advantage is economy of scale, and the disadvantage, putting all one's eggs in one basket.

The multisite model in WordPress can be summarized as '1 instance, 1 database'. The multiple websites share the same WordPress DocumentRoot directory (/var/www/example1.com) and the same WordPress MySQL database.

Within the single database, site-specific information is stored in tables identified by the blog IDs. For instance, the wp_posts table for example1.com retains the same name in multisite. However, the corresponding table for example2.com is named wp_2_posts (the 2 in the name refers to the official blog ID).

Multisite introduces a new level of complexity in administration. Seeing multisite in action is the best way to know what you are getting into before actually migrating your production website.

Trialing the migration

Setting up a separate VPS with the same configuration as the production system is the best option for conducting a trial migration. Notwithstanding, I opted for a poor man's platform to test the migration: my home workstation.

I could not completely replicate the production environment at home. Most notably, there is no HTTPS on the home machine, because there are no TLS certificates for it.

To reflect the change from HTTPS to HTTP, I modified 2 WordPress administrative options, siteurl and home, by running the following SQL commands under MySQL:

update wp_options set option_value='http://example1a.com' where option_name = 'siteurl';
update wp_options set option_value='http://example1a.com' where option_name = 'home';

Note that instead of reusing the names example1.com and example2.com, I renamed them to, say, example1a.com and example2a.com respectively. In addition, I configured local DNS on my home workstation to map example1a.com and example2a.com to localhost's IP address, so that I can access both the production and the trial websites at the same time. Add the following lines to the /etc/hosts file:

127.0.1.1   example1a.com
127.0.1.1   example2a.com
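
To verify that the mapping is in effect:

$ getent hosts example1a.com
127.0.1.1       example1a.com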

The rest of this post will detail the steps to convert WordPress from hosting a single site to hosting multiple sites.

Configuring system

  1. Configure DNS.

    Register example2.com with a domain name registrar and add the appropriate DNS records.

  2. Obtain Let’s Encrypt TLS certificate for second domain.

    I assume certbot is already installed, certbot.timer enabled, and port 443 open.

    Although it is possible to bundle multiple domains (example1.com and example2.com) in a single certificate, it is recommended that you create a separate certificate for each unique domain name.

    $ sudo certbot certonly --webroot -n --agree-tos -m sysadmin@example2.com -w /var/www/example1.com -d example2.com 
    

    -m: the email address for the certificate contact.

    -w: the DocumentRoot for example2.com, which is the same as example1.com's.

    -d: the domain.

  3. Install certificate.

    Link the private key and the certificate generated by Let’s Encrypt to their respective expected TLS locations.

    $ sudo ln -s /etc/letsencrypt/live/example2.com/privkey.pem /etc/ssl/private/key2.pem 
    $ sudo ln -s /etc/letsencrypt/live/example2.com/fullchain.pem /etc/ssl/certs/cert2.pem
    

    Note that the names key2.pem and cert2.pem must be different from their counterparts for example1.com. Make a note of their names as you will need them later.

  4. Configure website.

    Create /etc/nginx/sites-available/example2.com.conf by copying example1.com.conf and making the necessary changes.

    My skeleton example2.com.conf file looks like the following. The lines relevant to the migration per se are called out in the notes after the listing.

    server {
        listen 80;
        return 301 https://$host$request_uri;
    }
    server { 
        listen 443 ssl; 
        ssl_certificate     /etc/ssl/certs/cert2.pem;
        ssl_certificate_key /etc/ssl/private/key2.pem;
        root /var/www/example1.com; 
        server_name example2.com *.example2.com;
        index index.html index.php;
        location / {
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }
        location ~ \.php$ {
          fastcgi_pass unix:/var/run/php/php7.3-fpm_example1.com.sock; 
        }
        location ~* /(?:uploads|files)/.*\.php$ {
          deny all;
        }
        location = /robots.txt {
          allow all;
          log_not_found off;
          access_log off;
        }
        add_header X-XSS-Protection "1; mode=block";
        add_header X-Content-Type-Options nosniff;
        add_header X-Robots-Tag none;
        add_header X-Download-Options noopen;
        add_header X-Permitted-Cross-Domain-Policies none;
        add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload";
        add_header Referrer-Policy no-referrer;
        add_header X-Frame-Options "SAMEORIGIN";
    }
  5. Notes:

    • The location of the certificate (cert2.pem) and key (key2.pem) need to be specified.
    • The DocumentRoot location is the same as example1.com.
    • The relevant server names are specified for this website (example2.com).
    • PHP handling is listening to the same socket as example1.com (/var/run/php/php7.3-fpm_example1.com.sock).
  6. Enable website.
    $ sudo ln -s /etc/nginx/sites-available/example2.com.conf /etc/nginx/sites-enabled/example2.com.conf
  7. Reload NGINX.

    Test the syntax of the file changes before actually reloading the configuration files.

    $ sudo nginx -t
    $ sudo systemctl reload nginx
    

Configuring WordPress

  1. Install wp_cli.

    Although one can handcraft the necessary lines in the WordPress configuration file (/var/www/example1.com/wp-config.php), I’d recommend using the command-line program wp-cli (https://wp-cli.org/). To install, run this command sequence:

    $ curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar
    $ chmod +x wp-cli.phar
    $ sudo mv wp-cli.phar /usr/local/bin/wp
  2. Convert to multisite.
    $ wp core multisite-convert --subdomains --path=/var/www/example1.com
    

    The above command can be run while the website is up because it only statically inserts the following lines into wp-config.php.

    define( 'WP_ALLOW_MULTISITE', true );
    define( 'MULTISITE', true );
    define( 'SUBDOMAIN_INSTALL', true );
    $base = '/';
    define( 'DOMAIN_CURRENT_SITE', 'example1.com' );
    define( 'PATH_CURRENT_SITE', '/' );
    define( 'SITE_ID_CURRENT_SITE', 1 );
    define( 'BLOG_ID_CURRENT_SITE', 1 );
    
  3. Patch ‘blocked cookie’ bug.

    Unless the bug is patched, you cannot log in to your new website (example2.com). The error message from Firefox is ‘Cookies are blocked or not supported by your browser. You must enable cookies to use WordPress.’

    To patch, edit /var/www/example1.com/wp-config.php, and insert the following define statement anywhere above the “That’s all” comment line.

    define('COOKIE_DOMAIN', false);
    ...
    /* That's all, stop editing! Happy publishing. */
    
  4. Restart the PHP-FPM and NGINX daemons.
    $ sudo systemctl restart php7.3-fpm
    $ sudo systemctl restart nginx
    

Creating new site

  1. Login to the original WordPress website example1.com using the URL (https://example1.com/wp-login.php).

    The ID to use to login is the same admin ID for example1.com. In multisite, this admin ID is promoted to super-admin status, capable of administering all domains within the network.

  2. Click My Sites and select Network Admin and then Sites.
  3. Click Add New.
  4. Enter the required data, and click Add Site.
  5. The Site Address field is where one would expect to enter the full URL, except that it accepts only a subdomain, such as example2, which it then concatenates with the primary domain to form example2.example1.com. That is not what I really wanted, namely https://example2.com/. So for now, I simply play along by entering example2, and I will change it later. If you know a better way, please let me know in a comment.

    The Admin Email can be that of an existing user, say the super-admin, or a new user.

  6. Again, click My Sites and select Network Admin and then Sites.
  7. Hover over example2.example1.com and click Edit.
  8. Enter the correct Site Address, https://example2.com/, and click Save Changes.

Now, the new website is created and ready for you to edit. Browse to https://example2.com/wp-login.php and login as the super-admin user.


Tuesday, May 26, 2020

Gromit-MPX: a nifty videoconference screen annotation tool

The rise of the COVID-19 pandemic propelled videoconferencing into the stratosphere of user adoption. Almost overnight, the previously unknown app Zoom became a household technology name. Technology behemoths like Google and Microsoft scrambled to beef up their own videoconferencing products to match Zoom's success.

Zoom allows the meeting presenter to share their desktop with other participants. Google Meet and Skype also have that screen sharing feature. What Zoom offers, as of today, but not Google Meet or Skype, is the ability to annotate the shared screen in real time.

Undoubtedly, Google and Microsoft will eventually incorporate screen annotation in their respective products, but for the time being, gromit-mpx is a viable stopgap solution.

With gromit-mpx, presenters can annotate their desktop using free-hand drawing. It is true that Zoom, as well as several third-party open-source annotation apps such as ardesia and pylote, gives presenters more bells and whistles, for instance, drawing geometrical shapes such as solid or dashed lines and entering text. Yet the no-frills gromit-mpx is tailor-made for videoconferencing because of its unobtrusive, hotkey-based mode of operation.

In contrast to Zoom and pylote, gromit-mpx does not have a toolbar, thus saving valuable screen space. In lieu of a toolbar, gromit-mpx functions are activated using hotkeys (see the table below). The inconspicuous use of hotkeys is generally less obtrusive to the presentation than clicking the mouse on a protruding toolbar.

Hotkey combo          Corresponding action
F9                    Toggle drawing
Shift-F9              Clear screen
Ctrl-F9               Toggle visibility
Alt-F9                Quit app
Click                 Draw with red pen (default)
Shift-Click           Blue pen
Ctrl-Click            Yellow pen
Wheel-button click    Green pen with arrow
Right click           Eraser

Installation

To install gromit-mpx on Debian or Ubuntu, enter:


# apt install gromit-mpx
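
gromit-mpx has no window of its own. Start it in the background before the presentation, then drive it entirely with the hotkeys in the table above:

$ gromit-mpx &

Pressing F9 puts the mouse into drawing mode; pressing F9 again returns it to normal pointer duty.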

Conclusion

If a videoconference presenter has only the most basic requirements for an annotation tool, for instance, drawing meeting participants' attention to an area of the screen, gromit-mpx fits the bill well. Its handy hotkeys make annotation more seamless and speedy than the clunky toolbar used by more feature-complete apps, even Zoom.

Thursday, May 14, 2020

Joplin vs Orgzly as note-taking to-do apps

This post evaluates 2 note-taking, to-do list managers: Joplin and org-mode/Orgzly. Both are free, open-source, cross-platform software.

As an avid emacs user, I had always used org-mode on my Linux desktop to take notes and compile my to-do lists. So much so that I held out as long as possible before switching to another tool that could actually run on Android. Org-mode, as an emacs tool, did not support Android at the time. Painful as it was, I replaced org-mode with the cross-platform tool Joplin.

Joplin served me very well on both Linux and Android platforms…until I came across an Android app named Orgzly. Org-mode and Orgzly share the same plain text file format, and according to Orgzly documentation, 'files generated by Orgzly might differ in the amount of white space … Any other difference would be considered a serious bug.' File compatibility means that you can edit your tasks and notes using org-mode on Linux and Orgzly on Android.

One key difference between org-mode and Orgzly is how you edit the shared underlying files. Using org-mode, you edit the text files directly inside emacs the text editor. In contrast, you use Orgzly's GUI for editing.

Below, I compare Joplin and org-mode/Orgzly.

Portability

You can run Joplin in 3 ways:
  1. as a desktop app on Linux, Windows or macOS,
  2. as a mobile app on Android or iOS, and
  3. as a terminal program on Linux, FreeBSD, macOS or Windows.
I have used both its Linux desktop version and its Android mobile version, and have no problem vouching for Joplin.

Orgzly runs on Android only (no iOS version yet). For non-mobile platforms, you will need to run org-mode within the emacs editor. Although it has been done before, converting to emacs just to use org-mode may be overkill for most people.

On portability, Joplin has a definite advantage over org-mode/Orgzly.

Installability

Mobile versions of Joplin can be installed via the respective Google and Apple app stores. Installing it on a desktop (Linux, Windows, macOS) is just as convenient. Joplin is available to download from the standard repository of many Linux distributions including Debian. You can also download the AppImage version from Joplin's website.

Orgzly can be installed on Android using either Google Play or F-Droid, the repository for free and open-source Android apps. Org-mode, the desktop counterpart, is now part of emacs: installing emacs automatically installs org-mode.

It is a tie.

Data import/export

Unless you are using a note-taking to-do app for the first time, you will want to easily import any data from your existing app into the new app. Conversely, to prevent vendor lock-in to any 1 app, you want to be able to export data in a format that other apps can easily import. One such format is ENEX, the file format for Evernote, the app with arguably the largest installed user base.

Joplin can import ENEX files, but cannot export to that format. To compensate, you can import and export data in Markdown, PDF and JSON formats.

Orgzly currently only supports the import of org-mode files, and does not support any import or export of third-party file formats.

The clear winner is Joplin.

User-friendliness

Both Joplin and Orgzly are minimalistic (even spartan in the case of Orgzly) but highly functional in their user interface design.

To their credit, both offer dark mode, i.e., the ability to set the background to dark.

Joplin has a slight edge over Orgzly in aesthetics.

Winner: Joplin.

Cloud storage

Both Joplin and Orgzly support data storage on popular cloud platforms. Cloud storage enables you to access your tasks and notes from multiple devices and platforms.

Some cloud platforms provide custom API that client apps such as Joplin and Orgzly can use to make connections. There is a cost to using API: the client app needs to write custom code for each cloud service.

The advent of the WebDAV protocol has created a level playing field for apps that need to exchange data over the Internet. Client apps only need to write the WebDAV interface once to support access to all WebDAV-compliant cloud services.

Joplin supports Dropbox and OneDrive via native API, and Nextcloud via the WebDAV protocol. Note that with OneDrive, uploaded files are restricted to a maximum size of 4 MB, which is relatively low if you are attaching large multimedia files.

Orgzly supports Dropbox through native API and Nextcloud and other cloud services through WebDAV.

A tie.

Encryption

As discussed above, both apps can store data in the cloud. Data is regularly transmitted to the cloud to keep client apps synchronized.

Joplin data transmission is encrypted end-to-end; Orgzly, unencrypted.

With Joplin, your privacy is protected: not even Joplin developers or your cloud host such as Dropbox can access your encrypted data.

This may well be the killer feature that swings the pendulum all the way in favor of Joplin for most people.

Project viability

The 2 projects are very similar on this point: both are stable and actively maintained, each by essentially one main developer. So, don't expect major new features every few months.

For mature applications like note-taking to-do list managers, that may actually be a benefit.

According to Google Play, both apps have been downloaded roughly the same number of times (50,000+). Their numbers pale in comparison to the behemoth Evernote (100,000,000+). However, both projects have reached a critical mass in their respective user communities.

A tie.

Conclusion

Both Joplin and org-mode/Orgzly are more than capable of doing the basic job. But Joplin is the eminently obvious overall winner … unless you are a hardcore emacs fan.


Saturday, April 25, 2020

How to remove passwords from PDF files

Recently, a financial institution emailed me a password-protected PDF file. Handling such PDFs was a nuisance. First, I had to call them to obtain the password. Second, because that was a file I'd like to access in the future, I needed to record the password, unless … I could somehow remove the password from the PDF file.

This post outlines several ways to remove a password from a PDF.

pdftk

pdftk is my go-to tool for manipulating PDFs. To save a password-protected PDF into a new file, without the password, simply execute a command like this:

$ pdftk MyInput.pdf input_pw PASSWORD output MyOutput.pdf
WARNING: The creator of the input PDF:
   MyInput.pdf
   has set an owner password (which is not required to handle this PDF).
   You did not supply this password. Please respect any copyright.

You can safely ignore the warning message.

An encrypted PDF file can have up to 2 passwords, the user password and the owner password, with the latter having more privileges. Either password will let you perform the operation, although pdftk prefers the owner password if the PDF has one. If you did not create the original PDF, most likely, the password given to you was the user password. Hence the warning.
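
To find out which passwords a given PDF actually carries, qpdf can report its encryption parameters (add --password=PASSWORD if the file has a user password):

$ qpdf --show-encryption MyInput.pdf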

Security-conscious users would balk at specifying the plain-text password on the command line. Specifying the do_ask parameter allows you to enter the password via standard input.

$ pdftk MyInput.pdf output MyOutput.pdf do_ask

qpdf

An alternative solution is to use qpdf. For instance,


$ qpdf --decrypt --password=PASSWORD -- MyInput.pdf MyOutput.pdf

Note that the marker -- is used to separate the options from the input and output filenames.

To hide the password from the command line, specify the @- argument, which enables you to enter arguments via standard input. When prompted, enter --password=PASSWORD.


$ qpdf --decrypt @- -- MyInput.pdf MyOutput.pdf

Alternatively, you can specify the password inside a file, for instance, @/home/peter/arguments.txt. Note that the filepath is appended to the single character @. The file contains the line --password=PASSWORD.


$ qpdf --decrypt @/home/peter/arguments.txt -- MyInput.pdf MyOutput.pdf

pdftops/ps2pdf

This solution is more involved than the first 2: first convert the PDF to Postscript, and then back to PDF. I include it here to show an alternative approach, and it is probably not something you will actually do.

  1. To convert it to Postscript:
    
    $ pdftops -upw PASSWORD MyInput.pdf MyInput.ps
    
    
    Note that -upw refers to the user password. If you have the owner password instead, replace -upw with -opw.
  2. To save it back to PDF:
    
    $ ps2pdf MyInput.ps MyOutput.pdf
    
    

Thursday, April 9, 2020

inxi: the Swiss Army knife for displaying Linux sysinfo

Do one thing and do it well - the Unix philosophy

inxi is the antithesis of the above venerable Unix philosophy. Many excellent tools exist for providing aspects of system and hardware information — lsb_release, uname, lshw, lscpu, lspci, lsusb, dmidecode, uptime, free, ip, parted, acpi, etc. Some of those tools may even report more details than inxi, but there is a definite advantage to using inxi — with just 1 command, you can see at a glance a machine's overall hardware and system configuration and real-time status.

System administrators and technical support engineers work with many machines, often at the same time. inxi enables them to quickly get a broad system configuration overview and assess the current machine status before doing maintenance or troubleshooting.

The tool organizes the machine data into the following categories:
  • System (hostname, kernel, 64-/32-bit, desktop…)
  • Machine (model, serial #, BIOS…)
  • CPU (model, speed…)
  • Graphics
  • Audio devices
  • Network devices
  • Drives
  • Partitions
  • USB devices
  • Sensors (temperatures, fan speed…)
  • Repositories
  • Real-time status (# processes, uptime…)

Installation


To install inxi on Debian buster,

# apt update && apt install inxi


Dependency checking


inxi calls numerous helper programs to do the actual work; not all of them may be pre-installed. Run the following inxi command (as a non-root user) to test what is potentially missing on your system:


$ inxi --recommends
-------------------------------------------------------
Test: recommended system programs:
...
ipmitool: -s IPMI sensors (servers)........... Missing
ipmi-sensors: -s IPMI sensors (servers)....... Missing
...
The following recommended system programs are missing:
Program: ipmitool ~ Install package: ipmitool
Program: ipmi-sensors ~ Install package: freeipmi-tools
...
-------------------------------------------------------

Note that the checking does not take into consideration whether your system actually supports the use of the helper programs. For instance, the 2 missing programs above (ipmitool and ipmi-sensors) only apply to servers. In this example, the target machine is not a server and does not support IPMI, so I did not install the recommended programs.

You must judge the merits of installing each helper program reported missing.

Usage


Root or non-root


You can run inxi as either root or a regular user. Certain output is restricted to root only, e.g., the motherboard serial number and detailed RAM data.
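
For example, the -M argument displays the Machine section; run as a regular user, the serial number is masked as requiring superuser, whereas run as root it is shown:

# inxi -M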

Basics


Although you can run inxi without any argument to get basic CPU and memory information, I'd recommend running it with -F.



-F is for Full, and is a shorthand for specifying all uppercase letter arguments (with some exceptions).

For instance, specifying -F automatically includes -P, but not -p. The uppercase argument -P shows partition information for the basic partitions: /, /boot, /home, etc. The lowercase letter argument -p includes snap partitions created when installing software using snap.


# inxi -Fxxxz


inxi output may contain IP addresses, MAC addresses, serial numbers—data that can uniquely identify the target system. If privacy is an issue, specify the -z flag to filter out private data from the report. Note that the default is to display the aforementioned data.

You can dial up the level of details in inxi output using the -x flag. Optionally specify up to 3 increasing levels of details: -x, -xx, and -xxx.

Advanced


If you want more details than what the -F option gives you, you can specify additional arguments to focus on specific aspects of your system.

# inxi -Fxmip -t --usb


The following is my favourite subset of the available arguments.

-d


inxi -F only displays data about hard disk drives. To include optical/DVD drives, specify -d.

-i


Default -F output hides the IP addresses for your network interfaces. To display IP, add -i.

-m


The -m argument reports data about individual memory slots.

-p


-F only reports standard partitions (/, /home, /opt…) and swap partitions. -p displays all mounted partitions, including partitions created by snap.

-r


This argument reports software package repository information.

-t


By default, -t displays the top 5 memory- and CPU-using processes. You can restrict to CPU or memory only, and adjust the number of processes reported. For instance, to display the top 10 memory-using processes, specify -tm10; top 10 CPU-using processes, -tc10.

Separate -t from other arguments (or add it to the end of an argument chain); otherwise inxi may return a syntax error.

# inxi -Fxmip -tc10


--usb


--usb displays USB device information.

Make it pretty


You can choose a color theme for inxi output. The argument is -c followed by a value between 0 and 42 inclusive, corresponding to the color theme.



Using the -c95 argument, you can preview the list of available color themes and then select one for the current inxi command.

# inxi -Fx -c95