Saturday, 25 May 2013

Text Processing Tools in Linux

The following commands are often used when writing scripts to automate tasks.

The diff command :- compares the contents of two files and reports the differences.

diff file1.conf file2.conf > output.txt
cat output.txt
Options are -c, -u and -r.

The patch command :- used to apply a simple patch file to a single file.

patch -b file1.conf < output.txt
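A typical round trip, shown here as a sketch: create a unified diff, apply it, then back it out with -R (file names reuse the examples above).

diff -u file1.conf file2.conf > changes.patch   #record the differences in unified format
patch file1.conf < changes.patch                #bring file1.conf in line with file2.conf
patch -R file1.conf < changes.patch             #undo the change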

The grep command :- displays the lines in a file that match a pattern.

grep root /etc/passwd
ps aux | grep sshd
Options are -i, -n, -r, -c, -v and -l.

The cut command :- used to cut fields or columns of text from a file and display them on standard output.

cut -f3 -d: /etc/passwd
/sbin/ip addr | grep 'inet' | cut -d ' ' -f6 | cut -d / -f1
Options are -d, -f and -c.

The head command :- displays first few lines of a file.

head /etc/passwd
head -n 3 /etc/passwd
Option is -n.

The tail command :- displays last few lines of a file.

tail -n 3 /etc/passwd
tail -f /etc/passwd
tail -f will continue to show updates until Ctrl+c is pressed.

The wc command :- counts the number of lines(l), words(w), bytes(c) and characters(m) in a file.

wc -l file1.conf
ls /tmp | wc -l
Options are -l, -w, -c and -m.

The sort command :- used to sort text data.

grep bash /etc/passwd | cut -d: -f1 | sort
Options are -n, -k and -t.
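For example, -t sets the field delimiter, -k picks the sort key and -n forces a numeric comparison:

sort -t: -k3 -n /etc/passwd    #sort accounts numerically by UID (third field)
du -s /var/* | sort -n         #sort directories by size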

The uniq command :- removes duplicate adjacent lines from a file.

cut -d: -f7 /etc/passwd | sort | uniq -c
Options are -u and -d

The echo command :- outputs strings.

echo This is a test.
echo This is a test. > output.txt
cat output.txt

The cat command :- outputs or concatenates files.

cat file1.conf > output.txt
cat output.txt
cat file1.conf file2.conf | less
Options are -b, -n, -s, -v, -t, -e and -A.

The paste command :- joins multiple files horizontally, line by line.

paste file1.conf file2.conf
Options are -d and -s.

The split command :- splits a file into pieces by line count or size.

split -l 500 myfile segment
split -b 40k myfile segment
Options are -l (number of lines) and -b (bytes).

The comm command :- compares two sorted files for common and distinct lines.

comm file1.conf file2.conf
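Note that comm expects both inputs to be sorted; unsorted files can be sorted on the fly with bash process substitution, as in this sketch:

comm <(sort file1.conf) <(sort file2.conf)       #columns: only in file1, only in file2, common
comm -12 <(sort file1.conf) <(sort file2.conf)   #show only the common lines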

The dirname command :- deletes any suffix beginning with the last slash ('/') character and prints the result.

dirname /etc/httpd/conf/httpd.conf

The fold command :- used for making a file with long lines more readable on a limited-width terminal.
fold -w 30 file.txt

The sed command :- reads text input, line by line, and allows modification.

sed 's/word/wordtoreplace/g' < file1.conf > output.txt
sed '/word/ d' filename > output.txt
sed 's/word/ /g' filename > output.txt
sed 's/firstword//g; s/secondword//g' yourfile > output.txt

The awk command :- used as a data extraction and reporting tool.

awk "/word/" filename > output.txt

The less command :- used to view the contents of a text file one screen at a time.

less /etc/passwd


Friday, 24 May 2013

How to scp, ssh and rsync without prompting for password


For example, log in to a Linux host (172.16.0.155) as root and run:

ssh-keygen -t rsa

Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
35:50:28:70:2b:b0:9f:01:9a:8b:cb:0e:17:89:1d:a2 root@InVImFTSrv
The key's randomart image is:
+--[ RSA 2048]----+
|  o ... .o.      |
| o + .....       |
|+ o o ..  o      |
|o= + +   . .     |
|E + o   S        |
|.. .             |
|o..              |
|o.               |
| .               |
+-----------------+


Now log in to the other Linux host (172.16.0.177) as root and copy the public key:

scp -r root@172.16.0.155:/root/.ssh/id_rsa.pub /root/.ssh/

cd /root/.ssh/
cat id_rsa.pub >> authorized_keys
chmod 700 /root/.ssh/authorized_keys

Now, from 172.16.0.155, try ssh, scp or rsync, for example:

scp -r root@172.16.0.177:/usr/local/src/April /usr/local/src/

It won't prompt for any password.
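If the ssh-copy-id helper shipped with OpenSSH is available on 172.16.0.155, the manual scp/cat steps above can be replaced with a single command (a sketch; it appends the key to the remote authorized_keys and fixes permissions):

ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.16.0.177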

Ref:-https://blogs.oracle.com/jkini/entry/how_to_scp_scp_and


Monday, 2 July 2012

Port Numbers

                          Internet socket port numbers are used by protocols of the Transport Layer of the Internet Protocol Suite for the establishment of host-to-host communications.

Well-known ports


         The port numbers in the range from 0 to 1023 are the well-known ports. They are used by system processes that provide widely-used types of network services.

Registered ports

         The range of port numbers from 1024 to 49151 are the registered ports. They are assigned by IANA for a specific service upon application by a requesting entity. On most systems registered ports can be used by ordinary users.

Dynamic, private or ephemeral ports

         The range 49152–65535 is used for custom or temporary purposes and for automatic allocation of ephemeral ports.

Port      TCP/UDP      Protocol

20        TCP          FTP - data transfer
21        TCP          FTP - control
22        TCP          Secure Shell (SSH)
23        TCP          Telnet protocol
25        TCP          Simple Mail Transfer Protocol (SMTP)
53        TCP/UDP      Domain Name System (DNS)
69        UDP          Trivial File Transfer Protocol (TFTP)
80        TCP          Hypertext Transfer Protocol (HTTP)
109       TCP          Post Office Protocol v2 (POP2)
110       TCP          Post Office Protocol v3 (POP3)
115       TCP          Simple File Transfer Protocol (SFTP)
123       UDP          Network Time Protocol (NTP)
143       TCP          Internet Message Access Protocol (IMAP)
161       UDP          Simple Network Management Protocol (SNMP)
389       TCP/UDP      Lightweight Directory Access Protocol (LDAP)
443       TCP          HTTPS (Hypertext Transfer Protocol over SSL/TLS)
2049      TCP/UDP      NFS (Network File System)
2483      TCP/UDP      Oracle database listener (unsecure client connections)
3306      TCP/UDP      MySQL database system
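To see which of these ports are actually open on a local machine, something like the following can be used (netstat on older systems, ss on newer ones):

netstat -tulnp              #list listening TCP/UDP ports and the owning processes
ss -tulnp                   #newer replacement for netstat
grep -w 443 /etc/services   #look up a port in the local services database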



Reference:-http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers


Thursday, 14 June 2012

Install Nagios on CentOS/Fedora

* Nagios and the plugins will be installed underneath /usr/local/nagios
* Nagios will be configured to monitor a few aspects of your local system (CPU load, disk usage, etc.)
* The Nagios web interface will be accessible at http://localhost/nagios/

Prerequisites

Make sure that the following packages are installed

    * Apache
    * PHP
    * GCC compiler
    * GD development libraries

yum install httpd php

yum install gcc glibc glibc-common

yum install gd gd-devel

Create a new nagios user account and give it a password

/usr/sbin/useradd -m nagios

passwd nagios

Create a new nagcmd group for allowing external commands to be submitted through the web interface. Add both the nagios user and the apache user to the group.

/usr/sbin/groupadd nagcmd

/usr/sbin/usermod -a -G nagcmd nagios

/usr/sbin/usermod -a -G nagcmd apache

Download Nagios and the Plugins

http://www.nagios.org/download/

Compile and Install Nagios

Extract the Nagios source code tarball.
tar xzf nagios-3.2.3.tar.gz

cd nagios-3.2.3

Run the Nagios configure script, passing the name of the group created earlier
./configure --with-command-group=nagcmd

Compile the Nagios source code.

make all

Install binaries, init script, sample config files and set permissions on the external command directory.

make install

make install-init

make install-config

make install-commandmode

Customize Configuration

vi /usr/local/nagios/etc/objects/contacts.cfg
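The usual change here is the e-mail address of the default contact so alerts reach you. As a rough sketch (check the exact fields your Nagios version ships), the nagiosadmin contact looks something like:

define contact{
        contact_name    nagiosadmin
        use             generic-contact
        alias           Nagios Admin
        email           nagios@localhost    ; change this to your own address
        }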

Configure the Web Interface

Install the Nagios web config file in the Apache conf.d directory.

make install-webconf


Create a nagiosadmin account for logging into the Nagios web interface. Remember the password you assign to this account – you’ll need it later.

htpasswd -c /usr/local/nagios/etc/htpasswd.users nagiosadmin


Restart Apache to make the new settings take effect.

service httpd restart

Compile and Install the Nagios Plugins

Extract the Nagios plugins source code tarball.

cd ~/downloads

tar xzf nagios-plugins-1.X.tar.gz

cd nagios-plugins-1.X

Compile and install the plugins.

./configure --with-nagios-user=nagios --with-nagios-group=nagios

make

make install

Start Nagios

Add Nagios to the list of system services and have it automatically start when the system boots.

chkconfig --add nagios

chkconfig nagios on

Verify the sample Nagios configuration files.

/usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg

If there are no errors, start Nagios.

service nagios start

Modify SELinux Settings

See if SELinux is in Enforcing mode.

getenforce

Put SELinux into Permissive mode.

setenforce 0

To make this change permanent, you’ll have to modify the settings in /etc/selinux/config and reboot.

Login to the Web Interface

Access the Nagios web interface at the URL below. You will be prompted for the username (nagiosadmin) and password you specified earlier.

http://localhost/nagios/







Reference:- http://cuongk6t.wordpress.com/2011/06/07/how-to-install-nagios-on-centos-6-or-fedora/



Wednesday, 16 May 2012

Cron in Linux

               Cron jobs are used to schedule commands to be executed periodically. You can set up commands or scripts which will repeatedly run at a set time.

#edit cron table
crontab -e

MIN HOUR DOM MON DOW CMD

*  *  *  *  *    command to be executed

    OR

*  *  *  *  *    USERNAME    command to be executed
|  |  |  |  |
|  |  |  |  +----- Day of week (0 - 6) (Sunday=0)
|  |  |  +-------- Month (1 - 12)
|  |  +----------- Day of month (1 - 31)
|  +-------------- Hour (0 - 23)
+----------------- Minute (0 - 59)

#List
crontab -l
crontab -u username -l

#Erase
crontab -r
crontab -r -u username

#run every 10 minutes
*/10 * * * * command

#stop receiving email output
0 3 * * * /root/backup.sh >/dev/null 2>&1

#Backup
crontab -u root -l > /home/cron.backup    #replace root with the required username
0 12 * * * /usr/bin/top -n 1 -b -S >> /home/cron.backup

#Version
rpm -qa | grep cron
rpm -qil packagename    #use the cron package name reported by the previous command

#Schedule
Keyword     Equivalent
@yearly      0 0 1 1 *
@daily        0 0 * * *
@hourly      0 * * * *
@reboot      Run at startup.

The asterisk (*) : specifies all possible values for a field. For example, an asterisk in the hour time field would be equivalent to every hour or an asterisk in the month field would be equivalent to every month.

The comma (,) : specifies a list of values, for example: "1,5,10".

The dash (-) : specifies a range of values, for example: "5-10" days , which is equivalent to typing "5,6,7,8,9,10" using the comma operator.
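A couple of crontab entries combining these operators, as an illustration (the script paths are hypothetical):

#run a backup at 23:30 on weekdays only (Mon-Fri)
30 23 * * 1-5 /root/backup.sh

#run a poller at minute 0 and 30 of every hour between 09:00 and 17:00
0,30 9-17 * * * /usr/local/bin/poll.sh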


NFS in Linux

                The Network File System (NFS) was developed to allow machines to mount a disk partition on a remote machine as if it were on a local hard drive. This allows for fast, seamless sharing of files across a network. It also gives the potential for unwanted people to access your hard drive over the network.

NFS Configuration in RHEL 6/CentOS 6

yum install nfs*

rpm packages
nfs-utils-lib-1.1.5-4.el6.x86_64
nfs4-acl-tools-0.3.3-5.el6.x86_64
nfs-utils-1.2.3-15.el6.x86_64

vim /etc/exports
/var/ftp/pub    *(ro,sync)    #for entire network
/data    192.168.1.5(rw,sync)    #for particular machine
/jp    192.168.1.0/24(ro,sync)    #for particular network

mkdir /data
mkdir /jp

service rpcbind status
rpcbind (pid 9905) is running...

service nfs restart
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS daemon:                                 [  OK  ]
Shutting down NFS quotas:                                   [  OK  ]
Shutting down NFS services:                                 [  OK  ]
Starting NFS services:                                           [  OK  ]
Starting NFS quotas:                                             [  OK  ]
Starting NFS daemon:                                           [  OK  ]
Starting NFS mountd:                                            [  OK  ]

chkconfig rpcbind on
chkconfig nfs on

exportfs
/data        192.168.1.5
/jp        192.168.1.0/24
/var/ftp/pub    <world>

#see what the exported file system parameters look like
/usr/sbin/exportfs -v
exportfs -av

exporting 192.168.1.0/24:/jp
exporting 192.168.1.5:/data
exporting *:/var/ftp/pub

showmount -e 192.168.1.4
/var/ftp/pub *
/jp        192.168.1.0/24
/data        192.168.1.5

mount 192.168.1.4:/data /mnt/
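To make the mount persistent across reboots, an entry can be added to the client's /etc/fstab (a sketch, reusing the server address from the example above):

192.168.1.4:/data    /mnt    nfs    defaults    0 0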

#verify nfs is running
rpcinfo -p localhost     #run on the server
rpcinfo -p servername    #run on the client

#to check allow and deny
/etc/hosts.allow
/etc/hosts.deny
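As an illustrative sketch of TCP-wrappers entries for NFS-related daemons (adjust the network to your own; daemon names may vary by release):

#/etc/hosts.allow
mountd:  192.168.1.0/255.255.255.0
rpcbind: 192.168.1.0/255.255.255.0

#/etc/hosts.deny
mountd:  ALL
rpcbind: ALL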

exportfs -ra    #force nfsd to re-read the /etc/exports file

netstat -rn should show:
Destination    Gateway    Genmask          Flags   MSS Window  irtt  Iface
192.168.1.0    0.0.0.0    255.255.255.0    U       0 0         0     eth0

nfsstat
rpm -qa | grep -i nfs
rpm -qi nfs-utils
tracepath server(IP)
# strings /sbin/portmap | grep hosts

Monday, 14 May 2012

Linux Command for System Information

uname -r
Display the kernel release.

free
Memory info (in kilobytes).

df -h
(disk free) Print disk info about all the filesystems in human-readable form.

du / -bh | more
(=disk usage) Print detailed disk usage for each subdirectory starting at the "/" (root) directory (in human-readable form).

cat /proc/cpuinfo
CPU info; it shows the content of the file /proc/cpuinfo.

cat /proc/interrupts
List the interrupts in use.

cat /proc/version
Linux version and other info

cat /proc/filesystems
Show the types of filesystems currently in use.

cat /etc/printcap
Show the setup of printers.

lsmod
Show the kernel modules currently loaded.(As root. Use /sbin/lsmod to execute this command when you are a non-root user.)

set|more
Show the current user environment.

echo $PATH
Show the content of the environment variable "PATH". This command can be used to show other environment variables as well. Use "set" to see the full environment.

dmesg | less
Print kernel messages (the content of the so-called kernel ring buffer). Use less /var/log/dmesg  to see what "dmesg" dumped into this file right after the last system bootup.


Sunday, 13 May 2012

Kernel Version in Linux

uname --version

o/p:
uname (GNU coreutils) 8.4
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Written by David MacKenzie.

uname -r       

o/p:
2.6.32-220.13.1.el6.x86_64
   
Explanation,
    2   : Kernel version
    6   : The major revision of the kernel
    32  : The minor revision of the kernel
    220 : Immediate fixing / bug fixing for critical error
    el6 : Distribution-specific string. For example, Red Hat appends a string such as el6 to indicate an RHEL 6 kernel.

uname -a

o/p:
Linux www.snk.com 2.6.32-220.13.1.el6.x86_64 #1 SMP Tue Apr 17 23:56:34 BST 2012 x86_64 x86_64 x86_64 GNU/Linux

uname -mrsn

o/p:   
Linux www.snk.com 2.6.32-220.13.1.el6.x86_64 x86_64

cat /proc/version

o/p:   
Linux version 2.6.32-220.13.1.el6.x86_64 (mockbuild@c6b6.bsys.dev.centos.org) (gcc version 4.4.6 20110731 (Red Hat 4.4.6-3) (GCC) ) #1 SMP Tue Apr 17 23:56:34 BST 2012

rpm -q kernel

o/p:   
kernel-2.6.32-220.el6.x86_64
kernel-2.6.32-220.13.1.el6.x86_64

dpkg --list | grep linux-image    #in Ubuntu/Debian

o/p:
ii  linux-image 2.6.22.14.21 Generic Linux kernel image.
rc  linux-image-2.6.20-15-generic 2.6.20-15.27 Linux kernel image for version 2.6.20 on x86/
ii  linux-image-2.6.20-16-generic 2.6.20-16.32 Linux kernel image for version 2.6.20 on x86/
ii  linux-image-2.6.22-14-generic  2.6.22-14.47 Linux kernel image for version 2.6.22 on x86/

cat /etc/grub.conf
cat /proc/sys/kernel/osrelease
cat /etc/issue
lsb_release -a
cat /etc/lsb-release
also see uname man page


Friday, 11 May 2012

CentOS LAMP Install via YUM

CentOS Apache Install

yum install httpd

/etc/init.d/httpd start

chkconfig httpd on


CentOS MySQL Server Install

yum install mysql-server mysql

/etc/init.d/mysqld start

chkconfig mysqld on


CentOS PHP & php-mysql Install

To install PHP and php-mysql (php-mysql is required so that PHP can talk to MySQL)

yum install php php-mysql


FYI here is the full CentOS LAMP command (this will install Apache, MySQL & PHP)

yum install httpd php php-mysql mysql mysql-server
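To confirm that Apache and PHP are working together, a quick test page can be dropped into the default document root (the path assumes the stock CentOS layout):

echo "<?php phpinfo(); ?>" > /var/www/html/info.php
service httpd restart
#then browse to http://localhost/info.php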

Useradd in Linux using script

vim adduser.sh

#!/bin/bash
# Script to add a user to Linux system

if [ $(id -u) -eq 0 ]; then

    read -p "Enter username : " username
    read -s -p "Enter password : " password
    echo

    # check whether the user already exists
    egrep -q "^${username}:" /etc/passwd

    if [ $? -eq 0 ]; then

        echo "$username exists!"
        exit 1

    else

        # generate a crypted password and create the user with it
        pass=$(perl -e 'print crypt($ARGV[0], "password")' "$password")
        useradd -m -p "$pass" "$username"
        [ $? -eq 0 ] && echo "User has been added to system!" || echo "Failed to add a user!"

    fi
else
    echo "Only root may add a user to the system"
    exit 2
fi

chmod 777 adduser.sh

[root@snk Desktop]# ./adduser.sh
Enter username : testuser
Enter password : **********
User has been added to system!
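As an alternative inside the script, the perl crypt call can be avoided entirely with chpasswd, which reads username:password pairs and hashes the password itself. A minimal sketch (using the same $username and $password variables):

useradd -m "$username"
echo "$username:$password" | chpasswd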


Tuesday, 8 May 2012

File Permissions in Linux


              In Linux each file belongs to a user and to a user group. For restricting file access, Linux defines three different types of rights:

Read(r)      - file can be read
Write(w)    - content of the file can be changed
Execute(x) - file can be executed

Each of these rights are defined for three sets of users,

user(u)     - the owner of the file
group(g)   - the users who are members of the group
others(o)  - neither members of the group nor the owner

Example

ls -l testfile
-rw-r--r--. 1 annie team1 0 May  4 16:03 testfile

name           : testfile
permissions : -rw-r--r--
owner          : annie
group          : team1
size            : 0 (bytes)

The first character of permissions indicates,

Character   Type of file

   -           regular file
   d          directory
   l           symbolic link
   s          socket
   p          named pipe
   c          character device file (unbuffered)
   b          block device file (buffered)

Letter Permission

  r          Read
  w         Write
  x          Execute
  -          No permission

Letter Type of users

  u         User
  g         Group
  o         Other
  a         All (everybody)

Permission Value

  -                 0
  x                 1
  w                 2
  r                  4

Permission Value

    ---            0
    --x            1
    -w-            2
    -wx           3
    r--             4
    r-x            5
    rw-            6
    rwx           7

Changing the access mode of a file

Example

chmod 752 testfile
or
chmod u=rwx,g=rx,o=w testfile

chmod u+x testfile   -user execute permission
chmod +x testfile    -everyone execute permission
chmod ugo+x testfile -everyone execute permission

Changing file owner or group

Example

chown glen testfile
chgrp admin testfile
or
chown glen:admin testfile

Special permissions for executables

Setting the sticky bit on a directory

If you have a look at the /tmp permissions,

drwxrwxrwt   10 root root  4096 2006-03-10 12:40 tmp

t is called the sticky bit and indicates that in this directory, files can only be deleted by their owners, the owner of the directory or the root superuser. It is not enough for a user to have write permission on /tmp; he also needs to be the owner of the file to be able to delete it.

In order to set or to remove the sticky bit,

chmod +t tmp
chmod -t tmp

SGID attribute on a directory

chmod g+s directory
chmod g-s directory


Setting SUID and SGID attributes on executable files 

chmod g+s myscript.sh
chmod g-s myscript.sh

chmod u+s myscript.sh
chmod u-s myscript.sh

Setting the default file creation permissions

The default umask value is usually 022.

umask 022

With a umask of 000, files would get mode 666 and directories mode 777. As a result, with the usual umask value of 022, newly created files get a default mode of 644 (666 - 022 = 644) and directories a default mode of 755 (777 - 022 = 755).
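For example, a more restrictive umask of 027 removes write permission for the group and all permissions for others:

umask 027
touch newfile    #created with mode 640 (rw-r-----)
mkdir newdir     #created with mode 750 (rwxr-x---)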

Friday, 4 May 2012

Simple Backup & Compression in Linux


Compression reduces a file to a fraction of its original size so the file can be stored or transmitted more efficiently.

gzip -c test.txt > test.txt.gz     #compress
gunzip test.txt.gz                 #decompress
zcat test.txt.gz                   #view
less test.txt.gz                   #view pagewise

bzip2 -c test.txt > test.txt.bz2   #compress
bunzip2 test.txt.bz2               #decompress
bzcat test.txt.bz2                 #view
less test.txt.bz2                  #view pagewise

zip test.txt.zip test.txt          #compress
unzip test.txt.zip                 #decompress

Backup 

#backup
tar -cvf filename.tar testfile

#backup with gzip compression
tar -czvf filename.tar.gz testfile

#backup with bzip2 compression
tar -cjvf filename.tar.bz2 testfile

#uncompress tarfile
tar -xvf filename.tar

#uncompress .bz tarfile
tar -xjvf filename.tar.bz2

#uncompress .gz tarfile
tar -xzvf filename.tar.gz

c -create
v -verbose
x -extract
f -filename
z -compress the tar file with gzip
j -compress the tar file with bzip2
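Two related operations worth knowing, shown as a sketch: listing an archive without extracting it, and extracting into a different directory.

tar -tvf filename.tar                        #list the archive contents
tar -xzvf filename.tar.gz -C /tmp/restore    #extract into /tmp/restore (directory must exist)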

#more
lzma
rar
7zip
lbzip2
xz
lrzip
PeaZip
arj

Thursday, 3 May 2012

LVM in RHEL 6


The Linux Logical Volume Manager (LVM) is a mechanism for virtualizing disks. It can create "virtual" disk partitions out of one or more physical hard drives, allowing you to grow, shrink, or move those partitions from drive to drive as your needs change. It also allows you to create larger partitions than you could achieve with a single drive.

LVM in RHEL 6

#Create Partitions
fdisk /dev/sdc
make 3 partitions
change the system ID to 8e (Linux LVM)

#Create physical volumes
pvcreate /dev/sdc1 /dev/sdc2

Physical volume "/dev/sdc1" successfully created
Physical volume "/dev/sdc2" successfully created

pvdisplay
pvs

#Create Volume Group
vgcreate vg0 /dev/sdc1 /dev/sdc2

Volume group "vg0" successfully created

vgdisplay
vgs

#Create Logical Volumes
lvcreate -L 2G vg0 -n lv0

Logical volume "lv0" created

#Create File system on logical volumes
mkfs.ext3 /dev/vg0/lv0

#Edit /etc/fstab

/dev/vg0/lv0 /lvm ext3 defaults 0 0

#Mount logical volumes
mkdir /lvm
mount -a
mount

lvdisplay
lvs
lv0     vg0      -wi-ao  2.00g

#Extend logical volume
pvcreate /dev/sdc3
Physical volume "/dev/sdc3" successfully created

vgextend vg0 /dev/sdc3
Volume group "vg0" successfully extended

lvextend -L +1G /dev/vg0/lv0
Extending logical volume lv0 to 3.00 GiB
Logical volume lv0 successfully resized

resize2fs -p /dev/vg0/lv0
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg0/lv0 is mounted on /lvm; on-line resizing required
old desc_blocks = 1, new_desc_blocks = 1
Performing an on-line resize of /dev/vg0/lv0 to 786432 (4k) blocks.
The filesystem on /dev/vg0/lv0 is now 786432 blocks long.

lvs
lv0     vg0      -wi-ao  3.00g

#Reduce logical volume
lvreduce -L -500M /dev/vg0/lv0
WARNING: Reducing active and open logical volume to 2.51 GiB
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce lv0? [y/n]: y
Reducing logical volume lv0 to 2.51 GiB
Logical volume lv0 successfully resized

resize2fs -p /dev/vg0/lv0
resize2fs 1.41.12 (17-May-2010)
Filesystem at /dev/vg0/lv0 is mounted on /lvm; on-line resizing required
On-line shrinking from 786432 to 658432 not supported.

lvs
lv0     vg0      -wi-ao  2.51g

#Remove logical volume
vim /etc/fstab    #edit fstab to delete the lvm entry line
mount -a
umount /lvm

lvremove /dev/vg0/lv0
Do you really want to remove active logical volume lv0? [y/n]: y
Logical volume "lv0" successfully removed

vgremove vg0
Volume group "vg0" successfully removed


Wednesday, 2 May 2012

RAID in RHEL 6


                  Redundant Array of Independent Disks (originally Redundant Array of Inexpensive Disks) is a storage technology that combines multiple disk drive components into a logical unit.

Disk arrays stripe data across multiple disks and access them in parallel to achieve:

Higher data transfer rates on large data accesses and
Higher I/O rates on small data accesses.

Level     Description

RAID 0 Block-level striping without parity or mirroring.

RAID 1 Mirroring without parity or striping.

RAID 2 Bit-level striping with dedicated Hamming-code parity.

RAID 3    Byte-level striping with dedicated parity.

RAID 4 Block-level striping with dedicated parity.

RAID 5 Block-level striping with distributed parity.

RAID 6 Block-level striping with double distributed parity.

RAID 10 Nested RAIDs or hybrid RAIDs are constructed by distributing data over mirrored sets of storage devices.


Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10)

            The combination of striping above mirroring is called a striped-mirror layout. Putting mirroring below striping mirrors each column of the stripe. If there are multiple subdisks per column, each subdisk can be mirrored individually instead of each column. A striped-mirror volume is an example of a layered volume.

Nesting


Level        Description                                                   Minimum # of disks

RAID 0+1     Top Level RAID 1, Bottom Level RAID 0.                        3
RAID 1+0     Top Level RAID 0, Bottom Level RAID 1.                        4
RAID 5+0     Top Level RAID 0, Bottom Level RAID 5.                        6
RAID 5+1     Top Level RAID 1, Bottom Level RAID 5.                        6
RAID 6+0     Top Level RAID 0, Bottom Level RAID 6.                        8
RAID 6+1     Top Level RAID 1, Bottom Level RAID 6.                        8
RAID 1+0+0   Top Level RAID 0, Middle Level RAID 0, Bottom Level RAID 1.   8


Software RAID - the RAID task runs on the CPU of your computer system.

Hardware RAID - solution has its own processor and memory to run the RAID application.


RAID in RHEL 6

The minimum number of software RAID partitions required for each RAID level:
        • RAID   0,1,10   – 2 partitions
        • RAID   4,5        – 3 partitions
        • RAID   6           – 4 partitions

#create partitions
fdisk /dev/sdb
Make 4 partitions of each 1GB
Change the system ID to fd (Linux raid autodetect)

#create raid
mdadm -C /dev/md0 -l 5 -n 3 /dev/sdb1 /dev/sdb2 /dev/sdb3
C-create
5-raid level
3-raid devices

#mount
mkfs.ext4 /dev/md0
mkdir /raid5

vim /etc/fstab
/dev/md0 /raid5 ext4 defaults 0 0

mount -a
cd /raid5/
ls

#raid status
mdadm --detail /dev/md0
cat /proc/mdstat

#set faulty partition
mdadm -f /dev/md0 /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md0

#add new raid partition
mdadm -a /dev/md0 /dev/sdb4
mdadm: added /dev/sdb4

#remove faulty raid partition
mdadm -r /dev/md0 /dev/sdb3
mdadm: hot removed /dev/sdb3 from /dev/md0
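To have the array re-assembled automatically at boot, its definition can be recorded in /etc/mdadm.conf (a common, though optional, step):

mdadm --detail --scan >> /etc/mdadm.conf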


Monday, 26 March 2012

Compile Linux Kernel 2.6


Let us see how to compile Linux kernel version 2.6.xx under Debian GNU Linux. However, the instructions remain the same for any other distribution except for the apt-get command. Compiling a custom kernel has its own advantages and disadvantages.

1. Get Latest Linux kernel code

Visit http://kernel.org/ and download the latest source code. File name would be linux-x.y.z.tar.bz2, where x.y.z is actual version number.

# cd /tmp
# wget http://www.kernel.org/pub/linux/kernel/v2.6/linux-x.y.z.tar.bz2

2. Extract tar (.tar.bz2) file

# tar -xjvf linux-2.6.xx.tar.bz2 -C /usr/src
# cd /usr/src/linux-2.6.xx

3. Configure kernel

Before you configure the kernel, make sure the development tools (gcc compiler and related tools) are installed on your system. If they are not installed, use the apt-get command under Debian Linux to install them.

# apt-get install gcc

Now you can start kernel configuration by typing any one of the command:

# make menuconfig - Text-based color menus, radiolists & dialogs. This option is also useful on a remote server if you want to compile the kernel remotely.

# make xconfig - X windows (Qt) based configuration tool, works best under KDE desktop.

# make gconfig - X windows (Gtk) based configuration tool, works best under the GNOME desktop.

4. Compile kernel

# make

Start compiling the kernel modules:
# make modules

Install kernel modules (become a root user, use su command):
# su -
# make modules_install

5. Install kernel

So far we have compiled kernel and installed kernel modules. It is time to install kernel itself.

# make install

6. Create an initrd image

# cd /boot
# mkinitrd -o initrd.img-2.6.xx

The initrd image contains the device drivers needed to load the rest of the operating system later on. Not every computer requires an initrd, but it is safe to create one.

7. Modify Grub configuration file - /boot/grub/menu.lst

# vim /boot/grub/menu.lst

or

# update-grub
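For GRUB legacy, a menu.lst entry for the new kernel typically looks like the following sketch (device names, paths and the version string are placeholders; the exact paths depend on whether /boot is a separate partition):

title   Linux 2.6.xx (custom)
root    (hd0,0)
kernel  /vmlinuz-2.6.xx root=/dev/sda1 ro
initrd  /initrd.img-2.6.xx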

8. Reboot computer and boot into your new kernel

# reboot


Thursday, 9 February 2012

Define

Multiblock allocation

When Ext3 needs to write new data to the disk, there's a block allocator that decides which free blocks will be used to write the data. But the Ext3 block allocator only allocates one block (4KB) at a time. That means that if the system needs to write the 100 MB of data mentioned in the previous point, it will need to call the block allocator 25600 times (and it was just 100 MB!). Not only is this inefficient, it doesn't allow the block allocator to optimize the allocation policy because it doesn't know how much total data is being allocated, it only knows about a single block. Ext4 uses a "multiblock allocator" (mballoc) which allocates many blocks in a single call, instead of a single block per call, avoiding a lot of overhead. This improves the performance, and it's particularly useful with delayed allocation and extents. This feature doesn't affect the disk format. Also, note that the Ext4 block/inode allocator has other improvements, described in detail in this paper.

Delayed allocation

Delayed allocation is a performance feature (it doesn't change the disk format) found in a few modern filesystems such as XFS, ZFS, btrfs or Reiser 4, and it consists in delaying the allocation of blocks as much as possible, contrary to what traditional filesystems (such as Ext3, reiser3, etc) do: allocate the blocks as soon as possible. For example, if a process write()s, the filesystem code will immediately allocate the blocks where the data will be placed - even if the data is not being written right now to the disk and it's going to be kept in the cache for some time. This approach has disadvantages. For example, when a process is writing continually to a file that grows, successive write()s allocate blocks for the data, but they don't know if the file will keep growing. Delayed allocation, on the other hand, does not allocate the blocks immediately when the process write()s; rather, it delays the allocation of the blocks while the file is kept in cache, until it is really going to be written to the disk. This gives the block allocator the opportunity to optimize the allocation in situations where the old system couldn't. Delayed allocation plays very nicely with the two previous features mentioned, extents and multiblock allocation, because in many workloads when the file is finally written to the disk it will be allocated in extents whose block allocation is done with the mballoc allocator. The performance is much better, and the fragmentation is much improved in some workloads.

Journal checksumming

The journal is the most used part of the disk, making the blocks that form part of it more prone to hardware failure. And recovering from a corrupted journal can lead to massive corruption. Ext4 checksums the journal data to know if the journal blocks are failing or corrupted. But journal checksumming has a bonus: it allows one to convert the two-phase commit system of Ext3's journaling to a single phase, speeding the filesystem operation up to 20% in some cases - so reliability and performance are improved at the same time.

Online defragmentation

While delayed allocation, extents and multiblock allocation help to reduce fragmentation, with usage filesystems can still fragment. For example: you write three files in a directory, contiguously on the disk. Some day you need to update the file in the middle, but the updated file has grown a bit, so there's not enough room for it. You have no option but to fragment the excess of data to another place of the disk, which will cause a seek, or to allocate the updated file contiguously in another place, far from the other two files, resulting in seeks if an application needs to read all the files in a directory (say, a file manager doing thumbnails on a directory full of images). Besides, the filesystem can only care about certain types of fragmentation; it can't know, for example, that it must keep all the boot-related files contiguous, because it doesn't know which files are boot-related. To solve this issue, Ext4 will support online defragmentation, and there's an e4defrag tool which can defragment individual files or the whole filesystem.

Persistent preallocation

This feature, available in Ext3 in the latest kernel versions, and emulated by glibc in the filesystems that don't support it, allows applications to preallocate disk space: applications tell the filesystem to preallocate the space, and the filesystem preallocates the necessary blocks and data structures, but there's no data on it until the application really needs to write the data in the future. This is what P2P applications do on their own when they "preallocate" the necessary space for a download that will last hours or days, but implemented much more efficiently by the filesystem and with a generic API. This has several uses: first, to avoid applications (like P2P apps) doing it themselves inefficiently by filling a file with zeros. Second, to improve fragmentation, since the blocks will be allocated at one time, as contiguously as possible. Third, to ensure that applications always have the space they know they will need, which is important for RT-ish applications, since without preallocation the filesystem could get full in the middle of an important operation. The feature is available via the libc posix_fallocate() interface.
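On recent systems the same interface is exposed on the command line through fallocate(1) from util-linux, which makes the behaviour easy to observe (the file name here is only illustrative):

fallocate -l 1G bigfile    #reserve 1 GB without writing any data
du -h bigfile              #the blocks are already accounted for
ls -lh bigfile             #the file size also shows 1 GB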

ext2 ext3 ext4 in Linux

ext2, ext3 and ext4 are all filesystems created for Linux.

ext2
  • Ext2 stands for second extended file system. It was introduced with the 1.0 kernel in 1993. Developed by Rémy Card.
  • This was developed to overcome the limitation of the original ext file system.
  • Ext2 does not have journaling feature.
  • On flash drives and USB drives, ext2 is recommended, as it doesn't need the overhead of journaling.
  • Maximum individual file size can be from 16 GB to 2 TB.
  • Overall ext2 file system size can be from 2 TB to 32 TB.
It has a sparse superblocks feature which increases file system performance. In case user processes fill up the file system, ext2 normally reserves about 5% of disk blocks for exclusive use by root so that root can easily recover from that situation.

Creating an ext2 file system:

mke2fs /dev/sda1

journaling:

A journaling filesystem is a filesystem that maintains a special file called a journal that is used to repair any inconsistencies that occur as the result of an improper shutdown of a computer.

Journaling filesystems that have been added to the Linux kernel include:

ReiserFS
JFS
XFS

ext3
  • Ext3 stands for third extended file system.
  • It was introduced in 2001. Developed by Stephen Tweedie.
  • Starting from Linux Kernel 2.4.15 ext3 was available.
  • The main benefit of ext3 is that it allows journaling.
  • Maximum individual file size can be from 16 GB to 2 TB.
  • Overall ext3 file system size can be from 2 TB to 32 TB.
There are three types of journaling available in ext3 file system.
  1.     Journal – Metadata and content are saved in the journal.
  2.     Ordered – Only metadata is saved in the journal. Metadata are journaled only after writing the content to disk. This is the default.
  3.     Writeback – Only metadata is saved in the journal. Metadata might be journaled either before or after the content is written to the disk.

You can convert an ext2 file system to an ext3 file system directly (without backup/restore).

Creating an ext3 file system:
mkfs.ext3 /dev/sda1

(or)

mke2fs -j /dev/sda1

How to  convert  ext2 to ext3 :-
# umount /dev/sda2
# tune2fs -j /dev/sda2
# mount /dev/sda2  /var


ext4
  •     Ext4 stands for fourth extended file system.
  •     It was introduced in 2008.
  •     Starting from Linux Kernel 2.6.19 ext4 was available.
  •     Supports huge individual file size and overall file system size.
  •     Maximum individual file size can be from 16 GB to 16 TB.
  •     Overall maximum ext4 file system size is 1 EB (exabyte). 1 EB = 1024 PB (petabyte). 1 PB =       1024 TB (terabyte).
  •     Directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3).
  •     You can also mount an existing ext3 fs as ext4 fs (without having to upgrade it).
  •     Several other new features are introduced in ext4: multiblock allocation, delayed allocation, journal checksumming, fast fsck, etc. All you need to know is that these new features have improved the performance and reliability of the filesystem when compared to ext3.
  •     In ext4, you also have the option of turning the journaling feature “off”.
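As an illustration of that last point, journaling can be disabled on an unmounted ext4 filesystem with tune2fs (do not try this on a mounted or production filesystem):

umount /dev/sda1
tune2fs -O ^has_journal /dev/sda1
e2fsck -f /dev/sda1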

Features of Ext4 file system :
1. Compatibility
2. Bigger filesystem/file sizes
3. Subdirectory scalability
4. Extents
5. Multiblock allocation
6. Delayed allocation
7. Fast fsck
8. Journal checksumming
9. Online defragmentation
10. Inode-related features
11. Persistent preallocation
12. Barriers on by default

Creating an ext4 file system:

mkfs.ext4 /dev/sda1

(or)

mke2fs -t ext4 /dev/sda1

Converting ext3 to ext4
( Warning :- Never try this live or production servers )
# umount /dev/sda2
# tune2fs -O extents,uninit_bg,dir_index  /dev/sda2
# e2fsck -pf /dev/sda2
# mount /dev/sda2 /var

Linux Boot Process

The six stages of linux boot process

1. BIOS          Basic Input/Output System executes MBR
2. MBR           Master Boot Record executes GRUB
3. GRUB        Grand Unified Bootloader executes Kernel
4. Kernel         Kernel executes /sbin/init
5. Init              Init executes runlevel programs
6. Runlevel      Runlevel programs are executed from  /etc/rc.d/rc*.d/

1. BIOS 

           When the PC is powered up, the BIOS, which is stored in flash memory on the motherboard, is the first program that runs. The BIOS contains the following parts:

POST (Power On Self Test): The principal duties of the main BIOS during POST are as follows:
Verify CPU registers; verify the integrity of the BIOS code itself; verify some basic components like DMA, timer and interrupt controller; find, size, and verify system main memory; initialize the BIOS; discover, initialize, and catalog all system buses and devices; pass control to other specialized BIOSes (if and when required); provide a user interface for the system's configuration; identify, organize, and select which devices are available for booting; and construct whatever system environment is required by the target operating system.

The Setup Menu: That lets you set some parameters and lets you adjust the real time clock. Most modern BIOS versions let you set the boot order, the devices that BIOS checks for booting. These can be A (the first floppy disk), C (the first hard disk), CD-ROM and possibly other disks as well. The first device in the list will be tried first.

The boot sector loader: This loads the first 512-byte sector from the boot disk into RAM and jumps   to it.

The BIOS interrupts: These are simple device drivers that programs can use to access the screen, the keyboard and disks. Boot loaders rely on them, most operating systems do not (the Linux kernel does not use BIOS interrupts once it has been started). MSDOS does use BIOS interrupts.

2. MBR

           When a computer boots, the BIOS transfers control to the first boot device, which is usually the hard disk. The first sector on a hard disk is called the Master Boot Record. The primary boot loader that resides in the MBR is a 512 byte image containing both program code and a small partition table. The first 446 bytes are the primary boot loader, which contains both executable code and error message text. The next 64 bytes are the partition table, which contains a record for each of four partitions (sixteen bytes each). The MBR ends with two bytes that are defined as the magic number (0xAA55). The magic number serves as a validation check of the MBR.
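Because the MBR is just the first 512-byte sector, it can be backed up and inspected with ordinary tools (the device name is an assumption; adjust it to your own disk):

dd if=/dev/sda of=/root/mbr.bin bs=512 count=1    #copy the MBR to a file
file /root/mbr.bin                                #identify the boot sector and partition table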

          The job of the primary boot loader is to find and load the secondary boot loader. It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed.

           By default, MBR code looks for the partition marked as active and once such a partition is found, it loads its boot sector into memory and passes control to it.

3. GRUB

           GNU GRUB is a bootloader capable of loading a variety of free and proprietary operating systems. GRUB will work well with Linux, DOS, Windows, or BSD.

           GRUB is dynamically configurable. This means that the user can make changes during the boot time, which include altering existing boot entries, adding new, custom entries, selecting different kernels, or modifying initrd. GRUB also supports Logical Block Address mode. This means that if your computer has a fairly modern BIOS that can access more than 8GB (first 1024 cylinders) of hard disk space, GRUB will automatically be able to access all of it.

           GRUB can be run from or be installed to any device (floppy disk, hard disk, CD-ROM, USB drive, network drive) and can load operating systems from just as many locations, including network drives. It can also decompress operating system images before booting them.

LILO (Linux bootloader)

LILO is another Linux bootloader.

Advantage of GRUB over LILO

   LILO supports only up to 16 different boot selections; GRUB supports an unlimited number of boot entries.  
   LILO cannot boot from network; GRUB can.
   LILO must be written again every time you change the configuration file; GRUB does not.
   LILO does not have an interactive command interface.
GRUB replaces the default MBR with its own code.

Furthermore, GRUB works in stages.

Stage 1 is located in the MBR and mainly points to Stage 2, since the MBR is too small to contain all of the needed data. Stage 1 can do little more than load the next stage of GRUB by loading a few disk sectors from a fixed location near the start of the disk (within 1024 cylinders).

Stage 2 points to its configuration file, which contains all of the complex user interface and options we are normally familiar with when talking about GRUB. Stage 2 can be located anywhere on the disk. If Stage 2 cannot find its configuration table, GRUB will cease the boot sequence and present the user with a command line for manual configuration.

Stage 1.5 also exists and might be used if the boot information is small enough to fit in the area immediately after the MBR. Stage 1.5 is located in the first 30 kilobytes of the hard disk immediately following the MBR and before the first partition. If this space is not available (unusual partition table, special disk drivers, GPT or LVM disk) the install of Stage 1.5 will fail. The Stage 1.5 image contains filesystem-specific drivers. This enables Stage 1.5 to directly load Stage 2 from a known location in the filesystem, for example from /boot/grub. Stage 2 will then load the default configuration file and any other modules needed.

             The Stage architecture allows GRUB to be large (~20-30K) and therefore fairly complex and highly configurable, compared to most bootloaders, which are sparse and simple to fit within the limitations of the Partition Table.

After loading GRUB, but before the operating system starts

            Once GRUB has loaded, it presents an interface where the user can select which operating system to boot. This normally takes the form of a graphical menu. If this is not available, or the user wishes direct control, GRUB has its own command prompt. The user can then manually specify the boot parameters. GRUB can be set to automatically load a specified kernel after a user defined timeout.

Perhaps the most important commands that GRUB accepts in the operating system selection (kernel selection) menu are the following two commands.

            By pressing 'e', it is possible to edit parameters for the selected operating system before the operating system is started. Typically, this is used for changing kernel parameters for a Linux system. The reason for doing this in GRUB (i.e. not editing the parameters in an already booted system) can be an emergency case: the system has failed to boot. Using the kernel parameters line it is possible, among other things, to specify a module to be disabled (blacklisted) for the kernel. This could be needed, if the specific kernel module is broken and thus prevents boot-up.

            By pressing 'c', the user enters the GRUB command line. This is not a regular Linux shell. It accepts certain GRUB-specific commands.

           Once boot options have been selected, GRUB loads the selected kernel into memory and passes control to the kernel. Alternatively, GRUB can pass control of the boot process to another loader, using chain loading. This is the method used to load operating systems such as Windows, that do not support the Multiboot standard or are not supported directly by GRUB. In this case, copies of the other system's boot programs have been saved by GRUB. Instead of a kernel, the other system is loaded as though it had been started from the MBR. This could be another boot manager, such as the Microsoft boot menu, allowing further selection of non-Multiboot operating systems.

            With the second-stage boot loader in memory, the file system is consulted, and the default kernel image and initrd image are loaded into memory. With the images ready, the stage 2 boot loader invokes the kernel image.

4. Kernel

           With the kernel image in memory and control given from the stage 2 boot loader, the kernel stage begins. The kernel image isn't so much an executable kernel, but a compressed kernel image. Typically this is a zImage (compressed image, less than 512KB) or a bzImage (big compressed image, greater than 512KB), that has been previously compressed with zlib. At the head of this kernel image is a routine that does some minimal amount of hardware setup and then decompresses the kernel contained within the kernel image and places it into high memory. If an initial RAM disk image is present, this routine moves it into memory and notes it for later use. The routine then calls the kernel and the kernel boot begins.

5. Init

           After the kernel is booted and initialized, the kernel starts the first user-space application. This is the first program invoked that is compiled with the standard C library. Prior to this point in the process, no standard C applications have been executed.

6. Runlevel

           The term runlevel refers to a mode of operation in one of the computer operating systems. Conventionally, seven runlevels exist, numbered from zero to six. The exact setup of these configurations will vary from OS to OS, and from one Linux distribution to another.

 Some important files and definitions.

1. /etc/rc.d/rc.sysinit - script to initialize the system(includes mounting local filesystems)

2. /etc/inittab - file containing default runlevel

3. /etc/init/start-ttys.conf - starts 5 or 6 text based virtual consoles

4. /etc/init/rc.conf - init job responsible for starting System V based services

5. /boot/grub/grub.conf - contains bootloader configuration instructions

6. /etc/init/ - directory containing init jobs

7. /etc/rc.d/rc5.d/ - contains links to System V services scripts to be started or stopped when entering runlevel5

8. /etc/init/prefdm.conf - starts graphical login prompt

9. /etc/init/rc.S.conf - init job to run system initialization script and spawn job to start System V services.
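The current and default runlevels can be checked, and the runlevel changed, with a few commands on SysV-style systems such as RHEL 6:

runlevel                         #prints previous and current runlevel, e.g. "N 5"
who -r                           #same information in a longer form
grep initdefault /etc/inittab    #show the default runlevel
init 3                           #switch to runlevel 3 (multi-user, text mode)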


Tuesday, 7 February 2012

The Linux w Command

The w command is used to find out who is logged on and what they are doing. It displays information about the users currently on the machine and their processes.

The header shows, in this order, the current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes.

The following entries are displayed for each user: login name, the tty name, the remote host, login time, idle time, JCPU, PCPU, and the command line of their current process.

The JCPU time is the time used by all processes attached to the tty. It does not include past background jobs, but does include currently running background jobs.

The PCPU time is the time used by the current process, named in the "what" field.

Command-line options

-h     Don't print the header.
-u     Ignores the username while figuring out the current process and cpu times.
-s     Use the short format. Don't print the login time, JCPU or PCPU times.
-f     Toggle printing the from (remote hostname) field.
-V     Display version information.
user  Show information about the specified user only.


The Linux vmstat Command

vmstat stands for virtual memory statistics. The command vmstat reports information about processes, memory, paging, block IO, traps, and CPU activity.

Syntax    : vmstat [-V] [-n] [delay [count]]

-V      Print version information.
-n       causes the headers not to be reprinted regularly.
-a       print inactive/active page stats.
-d       prints disk statistics
-D      prints disk table
-p       prints disk partition statistics
-s       prints vm table
-m      prints slabinfo
-S      unit size (k:1000, K:1024, m:1000000, M:1048576; default is K)
delay    delay between updates in seconds.
count     number of updates.
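For example, to print a report every 2 seconds, 5 times (the first line shows averages since boot):

vmstat 2 5
vmstat -a 2 5    #same, with active/inactive memory columns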

Field Description

Procs

r    The number of processes waiting for run time. 
b   The number of processes in uninterruptable sleep.
w  The number of processes swapped out but otherwise runnable. 

Memory

swpd    the amount of virtual memory used (kB).
free      the amount of idle memory (kB).
buff      the amount of memory used as buffers (kB).
cache   the amount of memory used as cache (kB).
inact     the amount of inactive memory. (-a option)
active   the amount of active memory. (-a option)

 Swap

si     Amount of memory swapped in from disk (kB/s).
so    Amount of memory swapped to disk (kB/s).

 IO

bi    Blocks received from a block device (blocks/s).
bo   Blocks sent to a block device (blocks/s).

 System

in    The number of interrupts per second, including the clock.
cs   The number of context switches per second.

 CPU

These are percentages of total CPU time.

us   user time
sy   system time
id    idle time
wa  time spent waiting for IO, included in idle time.
st    time stolen from a virtual machine.