
Wednesday, August 22, 2007

10 Must-Have Linux Tools

Work smarter, not harder. Even the most divine Linux gods rely on a handful of tools and utilities to make troubleshooting, management and deployment easier and faster.

Having the right tools and utilities makes any job easier. Whether it's diagnosing a troublesome system, deploying a new device or managing a complicated environment, what's in a solution provider's Linux toolbox can make the difference for tasks that would otherwise be labor-intensive and mind-numbing.

In this TechBuilder Recipe, the Test Center uncovers 10 tools that every Linux solution provider or administrator should be familiar with. While some of these are readily available for download, many of the command-line utilities generally are bundled with the distribution.

1. -pie hits the spot: Linux administrators can spend the bulk of their time creating and modifying scripts and other files on the system. These files can be as straightforward as system configurations or as complex as database parameters. The most annoying thing about working with such files is realizing that a particular string of text needs to be replaced with another in a large file. And that the string occurs 1,382 times in the file. Even worse, it appears in 49 different files.

Instead of struggling to edit each file manually, give this command a shot:
perl -p -i -e 's/regexp/REGEXP/g' filename

The command requires Perl to be installed on the system, but most distributions come with it installed by default. And if Perl isn't already installed, installing it is quicker than slogging through 49 different files. The -pie combination is just one of many examples of Perl's versatility.

The command applies the substitution (s) enclosed within the quotes to all (g) instances in the file. Instead of just one file, the command can take a wildcard such as *.html, or even a list of file names.
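
For instance, to swap an old hostname for a new one across every HTML file in a directory (the names here are hypothetical), while keeping a backup of each original:
perl -p -i.bak -e 's/oldhost\.example\.com/newhost.example.com/g' *.html

The -i.bak variant tells Perl to save each original file with a .bak extension before editing it in place.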

2. Wireless, wireless, everywhere: With new wireless networks sprouting up every few days, piggybacking on a network for quick access has never been easier. When solution providers are at a client site troubleshooting a system, finding and accessing an available network can make it easier to diagnose and resolve the problem. The software tool of choice for discovering wireless networks is Kismet, available as a .tar.gz file from www.kismetwireless.net. Kismet also has a friendly GUI, Kismet-Qt, which provides all the information needed to sniff and connect to the wireless networks in the immediate vicinity.

Standard named networks, networks with hidden SSIDs and non-beaconing networks are all fair game for Kismet. Kismet locates available wireless networks by passively capturing packet data. This way, it can also discover and report the IP range used for a particular wireless network, identify the network's signal and noise levels and detect network intrusions. It can also capture management data packets for available networks and optimize signal strength for access points.

Kismet is an 802.11 Layer 2 wireless network detector that can sniff 802.11a/b/g traffic. It's important to note, however, that Kismet only works with wireless cards that support raw monitoring mode, such as the ones based on the PRISM 2, 2.5, 3 and GT chipsets. Some of the more popular supported wireless adapters include the ORiNOCO Gold, the original Apple Airport (not Extreme) card, and Intel Centrino.
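Getting started is typically a matter of putting the card into monitor mode and pointing Kismet at it. The interface name below is an assumption, and the exact capture-source syntax varies by Kismet version and driver, so check the documentation for the installed release:
ifconfig wlan0 down
iwconfig wlan0 mode monitor    # put the adapter into raw monitoring mode
ifconfig wlan0 up
kismet -c wlan0                # launch Kismet against that capture source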

3. Now where was it again? Perhaps a script is failing during the startup routine. Perhaps the logs are showing some odd sequence of characters. The perl -pie command helps globally search and replace, but what if you just want to find where that particular string is occurring?

The grep command comes to the rescue. It comes with a plethora of flags and options, but the most common usage is: grep string filename. The output lists every line in the file that contains that string. This is particularly handy if the logs indicate a script is referencing a non-existent file or directory. The grep command will show every line where that file or directory is called. The filename argument can also use wildcards or take a list of filenames.

Seeing just a single line can be confusing and uninformative. As such, the command has a context flag, -C, that allows the administrator to see a number of lines before and after the relevant line: grep -C # string filename.
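
For example, to see three lines of context around every reference to a particular file (the file names here are illustrative):
grep -C 3 'httpd.conf' /var/log/messages

Other handy variants include -r to search a directory tree recursively and -i to ignore case.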

4. Head to the dump: When managing a network, solution providers may need to debug the network setup to ensure all routing is occurring properly. Solution providers may also need to intercept and display TCP/IP and other packets being transmitted over the network. The unencrypted information can be viewed using tcpdump, a network debugging tool that runs under the command line. Built upon the libpcap packet capture library, the utility prints out the headers of packets that match a Boolean expression. Flags and options, such as -w and -a, refine the information captured by tcpdump: the -w flag saves the packet data to a file for later analysis, and the -a flag converts network and broadcast addresses to names. On networks with a high volume of traffic, tcpdump output can be hard to read; filtering with a Berkeley Packet Filter expression narrows the capture to just the relevant traffic.
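
A typical capture-and-review workflow, with the interface name and filter as illustrative choices:
tcpdump -i eth0 -w web.pcap 'tcp port 80'    # capture HTTP traffic and save it to a file
tcpdump -r web.pcap                          # read the saved capture back for analysis

The quoted expression is a Berkeley Packet Filter, so only packets matching it make it into the capture file.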

5. Chain gang: Sometimes a single command is just not enough. Solution providers often write short scripts to accomplish some tasks, such as finding a certain set of files and performing an aggregate action on the resulting list. The scripts often consist of two or three commands, and output from each command is piped into temporary files for processing. A short script could create a list of files that fit certain criteria, pipe it to a temporary file, and then process the generated list to perform, for example, a search or a word count.

The xargs command makes short scripts unnecessary. Much as "and" connects two clauses in grammar, xargs connects the find command with another command. The usage, find . -name 'filename' | xargs second-command, allows xargs to apply the second command to the found list. No piped list, no temporary files. For example, find . -name 'index.html' | xargs grep -l 'styles.css' looks for all index.html files in the current directory and all subdirectories beneath it, and then searches for the string "styles.css" in those files. The final output will show only the paths of those index.html files that are using that particular stylesheet. The find ... xargs construct can be used with other commands as well.
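
One caveat: file names containing spaces can break a plain pipe into xargs. GNU find and xargs handle this with null-delimited output (the *.log pattern is illustrative):
find . -name '*.log' -print0 | xargs -0 wc -l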

6. Use the windows: Solution providers who perform remote management are all too familiar with the following scenario: After logging into a remote server via an SSH connection, the characters suddenly stop appearing on the screen and a "Connection Closed" message appears. A half-completed task now has to be started over because the session was lost for whatever reason.

Chances are that the screen utility is already installed on the system, usually in /usr/bin/screen. Otherwise, it can easily be downloaded as a package for a given distribution. Screen is started from the command line, and it creates a window that functions just like a normal shell, except for a few special characters. Using CTRL-A sends commands to screen instead of the normal shell, and CTRL-A ? displays the help page with all the commands for screen.

This utility is a keeper -- it makes disconnects less disastrous, and it also allows multiple windows within one SSH session. Without screen, running three tasks in each of five shells would require 15 separate SSH sessions, logins and windows. For example, a solution provider can run top to see what is happening on the system and then open a new window with "CTRL-A c" to run ps -ef. The beauty of screen is that top stays running in the first window.

Screen also keeps the session open and the job running regardless of whether the user is logged in. If the solution provider starts a job or a download in screen at a client site, the process will continue even after logging out. The solution provider can log in, re-attach to the screen and get back to work. Logging is also a snap in screen, since "CTRL-A H" creates a running log of the session. Screen will append data to the file through multiple sessions, giving solution providers a log of changes made to remote servers.
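
A typical remote session with screen looks something like this:
screen          # start a new session on the remote host
top             # leave a monitor running in the first window
(CTRL-A c opens a second window; CTRL-A n cycles between windows)
(CTRL-A d detaches, leaving everything running)
screen -r       # re-attach later, even from a brand-new SSH connection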

7. A look back in time: Speaking of logging, solution providers look at several different logs to figure out what is happening on the system. Is a system not detecting a particular piece of hardware? Is an application not returning expected results? Is the Web server showing strange errors? Logs help solution providers collect information about what's happening and track down the cause.

However, log files can also get long, and it's tedious to reopen the file after each change to see what new log message was generated. The tail -f command helps with troubleshooting by letting solution providers watch the end of the file in real time. The file is not opened in an editor such as vi or emacs, so no edits can be made. It shows the last few lines of the file and keeps the buffer open, so when new messages are written to the file, they appear on the screen instantly. Users can test different configurations and commands in one window and instantly see the log messages without having to reopen the file and navigate to its end each time.
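
For example, to watch the system log as events arrive (the log path varies by distribution):
tail -f /var/log/messages

Pressing CTRL-C stops following the file.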

8. Open sesame: Knowing which files, network ports and sockets are open, and which applications are using them, is crucial information that makes management and debugging easier. And in Linux, just about everything, including a network socket, is a file.

Enter lsof, a utility that provides verbose output on current network connections and the files associated with them. Securing the network is easier once the solution provider learns what program is operating on an open port, which daemons have established connections, and what ports are open on that server. Ports that are open and accepting connections show up in lsof with the word LISTEN. The word ESTABLISHED indicates that a connection on the given port has been made.

Like screen, lsof is usually installed by default. The command lsof -v will indicate whether it exists on the system. If it's not there, lsof is available as a package for most distributions.

Running lsof by itself will output all open files corresponding to every active process on the box. This can be quite lengthy, so it's worth narrowing the output with flags; lsof -h prints a summary of the available options. In the output, lsof returns the process ID (PID), the user running the command, the file descriptor, the type of connection, the device number, the Internet protocol and the name of the file or Internet address. The information does more than list network connections. For example, in an lsof output, there may be multiple processes associated with the sshd daemon. Looking at the user field indicates who is actually logged into the box.

The -i flag lists all open files associated with Internet connections. This option is useful in identifying unnecessary security risks and shutting them down. The search can also specify a particular port, service or host name using techniques such as lsof -i :22, lsof -i :smtp or lsof -i @techbuilder.org.
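
Combining lsof with grep gives a quick audit of every listening port on the box:
lsof -i | grep LISTEN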

9. Who you gonna call? Sometimes, something is wrong because nothing is happening. There's no error message printed on the screen. Perhaps the program is hung trying to read a particular file, or it's spinning on disk access far more than it should. For those situations, tracing the system calls a program makes lets solution providers follow the path the program is taking and see where it's getting stuck.

Linux has strace, which lets the solution provider see what system calls a process is making. In many cases, a program may fail because it is unable to open a file or because of insufficient memory, and that will be evident while sifting through strace-generated data. Applications causing a segfault should be run with strace to see if there are memory issues.

The strace command is also useful for determining whether something is hanging during a DNS lookup. Every now and then, a system will hang for a minute while running a program (say, Telnet). Performing strace on the program shows that the problem is the program trying to do a reverse DNS lookup on an IP address. The solution provider can then take appropriate action.
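
A couple of common invocations, with the host name and PID as illustrative values:
strace -o telnet.trace telnet somehost    # log every system call to a file for review
strace -p 1234                            # attach to an already-running process by PID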

10. Know thy server: Perhaps a new name server has been deployed on the network. Or perhaps there are some problems with the DNS servers. Solution providers can try ping and other tests to check whether the servers are up and running. However, dig would be more useful to diagnose DNS problems, such as discovering if all the servers are responding. Since dig returns output that looks just like an actual BIND zone file, solution providers can make sure the name servers all have the same configuration.

There are several record types dig can query, including NS for name servers, MX for mail exchangers and SOA (Start of Authority); the IN that appears throughout dig's output is the Internet class. Looking at dig-generated output, solution providers can see information such as the primary name server and mail server priority. For solution providers wondering about mail server priority, dig will show which of the mail servers has higher priority in delivering mail. If the one with the higher priority fails or can't connect, the next mail server in line will deliver the mail.

Using the txt option with dig will also indicate whether the mail server has an SPF (Sender Policy Framework) record. Servers without SPF records may have trouble delivering mail to Hotmail and other mail systems that use SPF records in their spam filtering.
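
These lookups are one-liners (techbuilder.org stands in here for any domain of interest):
dig techbuilder.org mx     # mail exchangers and their priorities
dig techbuilder.org ns     # authoritative name servers
dig techbuilder.org txt    # TXT records, including any SPF entry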

Host is similar to dig, but not as comprehensive. However, the command host -a [domain] shows a complete resolution of everything -- name servers, MX servers and so on -- associated with the domain. With both dig and host, solution providers can run anything from simple lookups to very complex and detailed DNS queries.

Via ChannelWeb.
