Breaking through firewalls with a ping tunnel

When traveling, you may come across wireless hotspots where you have to pay before you can send TCP packets to arbitrary destinations on the internet. However, it is frequently the case that you can still send ping (ICMP echo) packets to any host on the internet. This is like locking the front door and leaving the window open, because ICMP echo packets and their replies can carry payloads. You can therefore use ICMP as a substrate for another channel of communication. ptunnel is a tool that takes a TCP connection and tunnels it over ICMP.
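
A quick way to check whether a given hotspot actually lets ICMP through is to ping a well-known host before bothering with the tunnel (the destination here is just an example):

$ ping -c 3 example.com

If the replies come back, ICMP is not being blocked and the tunnel should work.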

In this post I assume that you want to tunnel an SSH connection over ICMP. Not only is SSH a common application, but you can also tunnel other channels over it (for example, an HTTP proxy, so that you can browse the web).

You will need to install ptunnel on two hosts: the proxy (a host you control somewhere on the open internet) and your client (typically, the laptop you are taking with you). On Debian/Ubuntu, this can be done with apt-get install ptunnel.

On the proxy, do the following:

PROXY$ sudo ptunnel -x PASSWORD

replacing PASSWORD with a password of your choice.

On the client, do the following:

CLIENT$ sudo ptunnel -p nameofproxy.domainname.com -lp 6789 -da localhost -dp 22 -c wlan0 -x PASSWORD

Replace the options with (respectively) the address of the proxy, a port number of your choice, the name and port of the server you wish to connect to (as seen by the proxy; in this case we assume that the SSH server is on the proxy itself), the network interface you are using, and the password you selected.

Then, connect via SSH using the port you specified in the previous part:

CLIENT$ ssh -p 6789 localhost
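
If you plan to do this often, an entry in ~/.ssh/config saves retyping the port (the host alias here is made up):

Host icmp-tunnel
    HostName localhost
    Port 6789

CLIENT$ ssh icmp-tunnel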

Using the web over your tunnel

SSH can easily be configured to act as a SOCKS proxy and forward your web traffic over the tunnel. To do this, replace the above ssh invocation with the following:

CLIENT$ ssh -p 6789 -D 8080 localhost

Then, configure your web browser to use the proxy you've just created. In Firefox, for example: Preferences/Options; Advanced tab; Network tab; Settings; Manual proxy configuration; SOCKS host: localhost; port: 8080.
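
To confirm that web traffic is really flowing through the tunnel, you can also point curl at the SOCKS proxy before (or instead of) reconfiguring your browser; the URL is only an example:

CLIENT$ curl --socks5-hostname localhost:8080 http://example.com/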

From dabbrev-expand to hippie-expand

I've "graduated" from using dabbrev-expand and switched to hippie-expand. hippie-expand does much the same thing as dabbrev-expand (completes words you are typing), but it supports adding new completion heuristics rather than only looking at text in other buffers for potential completions. I switched when I found myself pressing M-/ and hoping to get completions corresponding to the names of other files I had open. hippie-expand does this out of the box.

To set it up, all you need to do is bind M-/ to hippie-expand (which comes with Emacs):

(global-set-key "\M-/" 'hippie-expand)

By default, hippie-expand uses the following set of completion techniques (customizable in hippie-expand-try-functions-list):

  '(try-complete-file-name-partially
    try-complete-file-name
    try-expand-all-abbrevs
    try-expand-list
    try-expand-line
    try-expand-dabbrev
    try-expand-dabbrev-all-buffers
    try-expand-dabbrev-from-kill
    try-complete-lisp-symbol-partially
    try-complete-lisp-symbol)

Because hippie-expand includes the try-expand-dabbrev-* functions among its completion techniques, its completions are a strict superset of the completions that dabbrev would have suggested, so it is a good drop-in replacement for dabbrev. In addition to looking for words in other buffers, it will also complete filenames, entire lines, Lisp symbols, and words in the kill ring.

Improving rename order in wdired

After using Emacs' wdired for some heavy-duty work, I noticed a flaw in how it does renaming.

Some background for those who do not know what wdired is (I find it indispensable!): wdired gives you a view of a directory that looks like the output of ls -l. However, you are allowed to edit the filenames. When you "save" the buffer, wdired renames all the files whose names you have changed. (More information: a blog post about wdired; Emacs manual node for wdired)

Here's the problem:

wdired always performs renames in a fixed order, starting from the bottom of the buffer and going up. You can easily construct sets of renames where wdired unnecessarily thinks it has to clobber a file because it is doing the renames in the wrong order.
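
As a concrete (hypothetical) example, suppose a directory contains files a and b, and you edit the wdired buffer to rename a to a.bak and b to a. Expressed as plain mv commands, the two orders behave very differently:

# Fixed bottom-up order: b is renamed first, clobbering the original a
$ mv b a
$ mv a a.bak

# Dependency order: free up the name "a" first, then move b into place
$ mv a a.bak
$ mv b a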

I improved wdired-finish-edit so that it does renames in the "right" order. I've posted the improved version on my web site.

While this does complicate the implementation a bit, the apparent model that wdired presents is that all the renames you ask for happen simultaneously, so I believe there is no case in which the new behavior would be inappropriate, assuming the code is not buggy.

Installing Debian the hard way is still easy

I prefer Ubuntu in general, but one thing that Debian has really nailed is installation. Last week I installed Debian on an old machine using no removable media other than a corrupted Ubuntu installation CD.

Under Debian's hard disk booting installation method, you download two files (a kernel and an initrd image) to your disk, totaling under 6MB. Then you ask grub to boot that kernel with that initrd. There is enough magic in there to launch a Debian installer that downloads all the packages it needs from the internet.
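
Roughly, the grub side of this looks like the following (GRUB legacy syntax; the partition and file locations are only an illustration of where you might have put the two files):

grub> root (hd0,0)
grub> kernel /boot/debian-installer/vmlinuz
grub> initrd /boot/debian-installer/initrd.gz
grub> boot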

All you need to do is get those two files onto the disk. Easy ways to do this include: booting from a liveCD (or another functioning OS on the disk) and downloading them, or ripping out the disk and connecting it to another computer. Unfortunately, I did not have a good OS on the disk, nor a working liveCD, nor a PATA dongle.

The disk I was using already had grub installed. The Ubuntu installation CD got as far as formatting the drive, but couldn't install any packages because they were all corrupted. Fortunately, there is a recovery shell which includes, among other things, wget. That was enough to get the ball rolling for a successful Debian install.

Vulnerability in Debian's OpenSSL revealed

A weakness has been discovered in the implementation of OpenSSL that Debian and Ubuntu provide: its random number generator has been shown to be predictable in certain ways. Consequently, encryption keys generated by OpenSSL, including SSH host keys and SSH public/private keypairs, should be considered compromised. (Upgrading to the latest version of openssl in Debian and Ubuntu will offer to regenerate your host keys.)
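
If your updated openssh packages include the ssh-vulnkey tool, you can also check individual keys against the blacklist of known-weak keys by hand and regenerate anything it flags, roughly like this:

$ ssh-vulnkey ~/.ssh/id_rsa.pub   # reports whether the key is on the blacklist
$ ssh-keygen -t rsa               # generate a replacement keypair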

What is interesting is how this vulnerability was introduced in the first place. In order to create keys, OpenSSL gathers randomness from a number of sources and mixes it into a buffer that starts out as uninitialized memory.

Valgrind (a debugging/profiling tool) detects, among other things, situations where programs do computations based on the contents of uninitialized memory. These are almost certainly bugs, except when the express goal of your program is to produce something unpredictable.

A Debian developer added the following patch to OpenSSL,

+       /* Keep valgrind happy */
+       memset(tmpbuf, 0, sizeof tmpbuf);
+

thereby replacing perfectly good semi-random data with zeroes. As it turns out, this is enough to greatly reduce the key search space for attackers.

Diagnostics (and compiler warnings, and the like) can be dangerous when interpreted by amateurs.

Copying directory trees with rsync

You can use cp -a to copy directory trees, but rsync can do the same job with more flexibility: it supports filter rules that specify which files and directories should and should not be copied.

Examples

Copy only the directory structure without copying any files:

$ rsync -a -f"+ */" -f"- *" source/ destination/

The two -f arguments mean, respectively, "copy all directories" and then "do not copy anything else".
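
Filter rules can be fiddly to get right, so it is worth previewing the effect with a dry run (-n) before letting rsync touch anything:

$ rsync -a -n -v -f"+ */" -f"- *" source/ destination/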

Copy only directories and Python files:

$ rsync -a -f"+ */" -f"+ *.py" -f"- *" source/ destination/

This is really handy for replicating the general directory structure but only copying a subset of the files.

Copy everything but exclude .git directories:

$ rsync -a -f"- .git/" -f"+ *" source/ destination/

Conclusion

Of course, rsync also works great for copying files between machines, and it knows better than to re-transfer files that are already up to date on the destination. I use something similar to the above to do backups, copying my homedir but excluding things like caches that are not worth copying.
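
As a rough sketch (the destination host and the excluded directories are only examples), such a backup command might look like this:

$ rsync -a --delete -f"- .cache/" -f"- .thumbnails/" ~/ backuphost:backups/homedir/

Here --delete prunes files from the backup that have since been deleted locally, which you may or may not want.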