Saturday 8 October 2016

Monitoring Memory with Swap


A low amount of free memory is an indication that the system has many things to do, and it may lead to disk swapping. Heavy use of swap space makes a system slow because disk access is many times slower than memory access.

cat /usr/local/src/memory.sh
#!/bin/bash
# Free memory (MB) from the Mem: line, used swap (MB) from the Swap: line
C=$(/usr/bin/free -m | awk '/^Mem:/ {print $4}')
D=$(/usr/bin/free -m | awk '/^Swap:/ {print $3}')
E=$(date)
printf "%s %s %s %s %s %s\n" "Mem:" "$C" "Swap:" "$D" "At" "$E"

crontab -l
*/5 * * * * /usr/local/src/memory.sh >> /var/log/memory.data

tail -5 /var/log/memory.data
Mem: 76080 Swap: 5274188 At Sat Oct  8 12:44:48 PDT 2016
Mem: 72128 Swap: 5274264 At Sat Oct  8 12:45:01 PDT 2016
Mem: 67780 Swap: 5274428 At Sat Oct  8 12:50:01 PDT 2016
Mem: 68720 Swap: 5274492 At Sat Oct  8 12:55:01 PDT 2016
Mem: 68472 Swap: 5274944 At Sat Oct  8 13:00:01 PDT 2016
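
A one-line awk report gives a quick summary of the collected data; a minimal sketch, assuming the log format written by memory.sh above (free memory in field 2, used swap in field 4):

awk '{ if (min == "" || $2 < min) min = $2; if ($4 > max) max = $4 }
     END { print "Lowest free mem:", min, "Highest swap used:", max }' /var/log/memory.data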

Monitoring the TCP ESTABLISHED Connections


The following script, tcpConnect.sh, uses netstat to record the number of ESTABLISHED connections along with the date and time of each measurement.

cat /usr/local/src/tcpConnect.sh
#!/bin/bash
# Count TCP connections in the ESTABLISHED state
C=$(/bin/netstat -nt | grep -c ESTABLISHED)
D=$(date)
printf "%s %s %s\n" "$C" "||" "$D"

crontab -l
*/5 * * * * /usr/local/src/tcpConnect.sh >> /var/log/tcpConnect.data

tail -5 /var/log/tcpConnect.data
25 || Sat Oct  8 11:10:01 PDT 2016
26 || Sat Oct  8 11:15:01 PDT 2016
23 || Sat Oct  8 11:20:01 PDT 2016
23 || Sat Oct  8 11:25:01 PDT 2016
23 || Sat Oct  8 11:30:01 PDT 2016
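
On newer systems netstat is deprecated in favour of ss; assuming iproute2 is installed, the same count can be taken with:

ss -tn state established | tail -n +2 | wc -l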

Monitoring Load Average


The easiest way to monitor the load average is to take the output of the uptime command and use a small awk script, run as a cron job, to store the data in a text file.

The script is as follows:

cat /usr/local/src/uptime.sh
#!/bin/bash
# Splitting on "load average: " is more robust than fixed field positions
uptime | awk -F'load average: ' '{print $2}' | tr -d ','

sh /usr/local/src/uptime.sh
0.68 0.86 1.48

crontab -l
*/5 * * * * /usr/local/src/uptime.sh >> /var/log/uptime.data

head -5 /var/log/uptime.data
0.61 1.40 1.96
0.47 1.33 1.92
0.49 0.94 1.68
0.64 0.80 1.41
0.45 0.61 1.16

       
The Linux system used here has only one CPU, so a sustained load average greater than 1.00 indicates a performance problem.

The first number shows the load average over the last minute, the second over the last five minutes and the third over the last fifteen minutes. Together the three values show whether the load is increasing, decreasing or steady.
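
The same idea extends to alerting: a minimal sketch of a cron-friendly check, assuming standard uptime output and using nproc to find the CPU count:

#!/bin/bash
# Warn when the 5-minute load average exceeds the number of CPUs
CPUS=$(nproc)
LOAD5=$(uptime | awk -F'load average: ' '{print $2}' | cut -d, -f2 | tr -d ' ')
# awk handles the floating-point comparison that [ ] cannot
if awk -v l="$LOAD5" -v c="$CPUS" 'BEGIN { exit !(l > c) }'; then
    echo "WARNING: 5-minute load $LOAD5 exceeds CPU count $CPUS"
fi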

Wednesday 20 July 2016

Shell script for disk usage


A small script that reports disk usage for ext4 filesystems:

cat diskUsage.sh
#!/bin/bash
# Report Used/Available/Use% for ext4 filesystems, in MB
/bin/df -t ext4 -m | awk '{print $3 " " $4 " " $5}'

sh diskUsage.sh
Used Available Use%
1248 16786 7%
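
The same df output can feed a simple capacity alert; a minimal sketch assuming the column order above, with 90% as an arbitrary example threshold:

#!/bin/bash
THRESHOLD=90
/bin/df -t ext4 -m | tail -n +2 | while read fs blocks used avail pcent mount; do
    usage=${pcent%\%}                     # strip the trailing % sign
    if [ "$usage" -gt "$THRESHOLD" ]; then
        echo "WARNING: $mount is at $pcent capacity"
    fi
done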

Thursday 5 May 2016

Install Magento on Ubuntu 14.04


Prerequisites

apt-get install libcurl3 php5-curl php5-gd php5-mcrypt

Apache Virtual Host Settings

vi /etc/apache2/sites-enabled/magento.conf

<VirtualHost *:80>

        DocumentRoot /var/www/html/magento/

        <Directory /var/www/html/magento/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
        </Directory>

</VirtualHost>

sudo a2ensite magento.conf
service apache2 restart

PHP Settings

vi /etc/php5/apache2/php.ini

memory_limit = 512M
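
Changes to php.ini take effect only after Apache is restarted:

service apache2 restart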

Create a MySQL Database and User

mysql -u root -p
CREATE DATABASE magento;
CREATE USER magento_user@localhost IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON magento.* TO magento_user@localhost IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
exit
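
As an optional sanity check (using the example credentials above), confirm the new user can connect:

mysql -u magento_user -p'password' -e "SHOW DATABASES;"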

Download and Set Up Magento Files

https://www.magentocommerce.com/download

cd /usr/local/src
tar xzvf magento-1.9.0.1.tar.gz
cp -r magento /var/www/html
chown -R www-data:www-data /var/www/html/magento

To access the web interface, point your browser at:
http://server_domain_name_or_IP/

Ref :- Magento-Ubuntu-14

Upgrade Java 1.7 to 1.8 on Ubuntu 14.04


java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
echo $JAVA_HOME

Set the JAVA_HOME path:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
echo $JAVA_HOME
/usr/lib/jvm/java-8-openjdk-amd64
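
The export above lasts only for the current shell; to persist it across logins (same path as above):

echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> ~/.bashrc
source ~/.bashrc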

Java 8 removed the permanent generation, so don't pass -XX:MaxPermSize to Java 8 or later.

java -version
openjdk version "1.8.0_91"
OpenJDK Runtime Environment (build 1.8.0_91-8u91-b14-0ubuntu4~14.04-b14)
OpenJDK 64-Bit Server VM (build 25.91-b14, mixed mode)

Tuesday 3 May 2016

Disable USB with regedit in Windows



To disable USB

Open Run, type 'regedit' to launch the Registry Editor, and navigate to the following key:

   --> HKEY_LOCAL_MACHINE
        --> SYSTEM
            --> CurrentControlSet
                --> Services
                   --> USBSTOR
       
        Set the 'Start' value to 4 and save.
    
To enable USB

        Set the 'Start' value back to 3 (the default) and save.



Tuesday 19 April 2016

Wapiti 2.3.0 on Ubuntu 14.04


Requirement: Python 2.6 or later

Get wapiti-2.3.0.tar.gz from https://sourceforge.net/projects/wapiti/files/wapiti/wapiti-2.3.0/:

cd /usr/local/src
wget https://sourceforge.net/projects/wapiti/files/wapiti/wapiti-2.3.0/wapiti-2.3.0.tar.gz
tar -xzvf wapiti-2.3.0.tar.gz
cd wapiti-2.3.0

python setup.py install

Traceback (most recent call last):
  File "setup.py", line 2, in <module>
    from setuptools import setup, find_packages
ImportError: No module named setuptools

apt-get install python-setuptools

python setup.py install


To get the basic usage, type:

wapiti -h


python wapiti.py http://server.com/base/url/ [options]

Example: wapiti http://domainname.com/

Ref :- https://sourceforge.net/projects/wapiti/files/wapiti/wapiti-2.3.0/

Thursday 14 April 2016

Install Apache Spark on Ubuntu 14.04


Install Scala

First install Java on the Ubuntu machine:
apt-add-repository ppa:webupd8team/java
apt-get update
apt-get install oracle-java7-installer
java -version

java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)

cd /usr/local/src/
wget http://www.scala-lang.org/files/archive/scala-2.11.8.tgz
sudo mkdir /usr/local/src/scala


tar xvf scala-2.11.8.tgz -C /usr/local/src/scala/

vi .bashrc
export SCALA_HOME=/usr/local/src/scala/scala-2.11.8
export PATH=$SCALA_HOME/bin:$PATH

Reload .bashrc:
. ~/.bashrc

scala -version
Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL

scala
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_80).
Type in expressions for evaluation. Or try :help.
scala>

Install Spark

cd /usr/local/src/
wget http://d3kbcqa49mib13.cloudfront.net/spark-1.6.1.tgz
tar xvf spark-1.6.1.tgz
cd spark-1.6.1/


Spark is built with SBT (Simple Build Tool), which is bundled with the distribution. To compile the code:
 
sbt/sbt assembly
NOTE: The sbt/sbt script has been relocated to build/sbt.
      Please update references to point to the new location.

      Invoking 'build/sbt assembly' now ...

Attempting to fetch sbt
Launching sbt from build/sbt-launch-0.13.7.jar
Getting org.scala-sbt sbt 0.13.7 ...
.....................................
[info] Packaging /usr/local/src/spark-1.6.1/assembly/target/scala-2.10/spark-assembly-1.6.1-hadoop2.2.0.jar ...
[info] Done packaging.
[success] Total time: 2597 s, completed Apr 14, 2016 5:01:34 PM

Test a sample program:

./bin/run-example SparkPi 10
Pi is roughly 3.139988

Run Spark interactively through the Scala shell:
./bin/spark-shell

scala> val textFile = sc.textFile("README.md")
textFile: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[1] at textFile at <console>:27

scala> textFile.count()
res0: Long = 95

scala> exit

To use a particular Spark component from the shell, for example MQTT streaming: the MQTT support is defined under external/, so build it and add it to the spark-shell classpath as follows:

sbt/sbt "streaming-mqtt/package"
bin/spark-shell --driver-class-path external/mqtt/target/scala-2.10/spark-streaming-mqtt_2.10-1.1.0.jar
scala> import org.apache.spark.streaming.mqtt._

Ref : http://blog.prabeeshk.com/blog/2014/10/31/install-apache-spark-on-ubuntu-14-dot-04/

Tuesday 12 April 2016

Wapiti on Ubuntu 14.04


Wapiti is an open source web application vulnerability scanner. It can detect the following vulnerabilities:

backup: Searches for backup copies of scripts on the server.
blindsql: Time-based blind SQL injection scanner.
crlf: Searches for CR/LF injection in HTTP headers.
exec: Detects command execution vulnerabilities.
file: Searches for include()/fread() and other file handling vulnerabilities.
htaccess: Tries to bypass weak htaccess configurations.
nikto: Uses the Nikto database to search for potentially dangerous files.
permanentxss: Looks for permanent XSS.
sql: Standard error-based SQL injection scanner.
xss: Module for XSS detection.
buster: File and directory buster, checking for "bad" files.
shellshock: Detects the Shellshock bug.

sudo apt-get install wapiti

wapiti http://example.org/cool-things -u -n 5 -b domain -v 2 -o /tmp/outfilename

-u, --color
    use colours

-b, --scope
    set the scope of the scan:
    page: only analyse the page given in the url
    folder: analyse all urls in the root url given (default option)
    domain: analyse all links to pages in the same domain

-n, --nice
    use this to prevent infinite loops, I usually go with 5

-f, --format
    change the output format: json, html, openvas, txt, vulneranet, xml

-v, --verbose
    0: none
    1: print each url
    2: print each attack

# if you don't specify a -v flag, then you get a blank screen for ages
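
Putting the options above together, a sketch of a full scan that writes an HTML report (target and output path are examples):

wapiti http://example.org/ -u -n 5 -b folder -f html -o /tmp/report -v 1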

Ref:- https://jonathansblog.co.uk/wapiti-tutorial

Friday 1 April 2016

Creating a FEDERATED Table on MySQL


The FEDERATED Storage Engine

The FEDERATED storage engine lets you access data from a remote MySQL database without using replication or cluster technology. Querying a local FEDERATED table automatically pulls the data from the remote (federated) tables. No data is stored on the local tables.

mysql>  show engines;
| FEDERATED          | NO      | Federated MySQL storage engine

vi /etc/mysql/my.cnf
under [mysqld]
federated

/etc/init.d/mysql restart

mysql>  show engines;
| FEDERATED          | YES     | Federated MySQL storage engine


SAMPLE 1

CREATE TABLE destination_databasename.`table_name_1_dtl` (
  Provider_ID VARCHAR(100) NOT NULL,
  Provider_Name VARCHAR(250) NOT NULL,
  Provider_Type VARCHAR(100) NOT NULL,
  Address VARCHAR(255) DEFAULT NULL,
  District_Name VARCHAR(100) NOT NULL,
  Provider_Phone_Number VARCHAR(50) NOT NULL,
  Specialty_Name VARCHAR(100) NOT NULL,
  Availability_of_Pharmacy_or_ChemistShop_within_Hospital TINYINT(1) DEFAULT NULL,
  Laboratory TINYINT(1) DEFAULT NULL,
  Empanelment_Status VARCHAR(50) DEFAULT NULL,
  Empanelment_Valid_Upto DATE DEFAULT NULL,
  De_Empaneled_Reason VARCHAR(250) DEFAULT NULL,
  Contact_Person VARCHAR(200) DEFAULT NULL,
  PRIMARY KEY (Provider_ID)
) ENGINE=FEDERATED DEFAULT CHARSET=utf8
CONNECTION='mysql://root:ussgnovbl5r@localhost:3306/source_databasename/table_1_name_dtl';

SAMPLE 2

CREATE TABLE destination_databasename.`table_name_2_dtl` (
  Package_ID VARCHAR(100) CHARACTER SET latin1 NOT NULL,
  Dept_ID VARCHAR(100) CHARACTER SET latin1 DEFAULT NULL,
  DeptSI_No VARCHAR(200) CHARACTER SET latin1 DEFAULT NULL,
  Package_Type VARCHAR(200) CHARACTER SET latin1 NOT NULL,
  Category VARCHAR(200) DEFAULT NULL,
  Specialty VARCHAR(200) CHARACTER SET latin1 DEFAULT NULL,
  Sub_Specialty VARCHAR(200) CHARACTER SET latin1 DEFAULT NULL,
  Package_Name VARCHAR(255) CHARACTER SET latin1 NOT NULL,
  Length_of_Stay VARCHAR(50) DEFAULT NULL,
  Rate_Card_A DECIMAL(65,2) NOT NULL,
  Rate_Card_B DECIMAL(65,2) DEFAULT NULL,
  Rate_Card_C DECIMAL(65,2) DEFAULT NULL,
  PRIMARY KEY (Package_ID),
  KEY Package_Type (Package_Type),
  KEY Package_Name (Package_Name),
  KEY Rate_Card_A (Rate_Card_A,Rate_Card_B,Rate_Card_C)
) ENGINE=FEDERATED DEFAULT CHARSET=utf8
CONNECTION='mysql://root:ussgnovbl5r@localhost:3306/source_databasename/table_2_name_dtl';

SAMPLE 3

CREATE TABLE destination_databasename.`table_name_3_dtl` (
  Preauth_Status VARCHAR(100) DEFAULT NULL,
  Total BIGINT(21) NOT NULL DEFAULT '0',
  Approved_Amount DECIMAL(65,2) DEFAULT NULL
) ENGINE=FEDERATED DEFAULT CHARSET=utf8
CONNECTION='mysql://root:ussgnovbl5r@localhost:3306/source_databasename/table_2_name_dtl';
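
Once created, querying a local FEDERATED table transparently pulls rows from the remote server; a quick check, using the SAMPLE 1 table name as an example:

mysql -u root -p -e "SELECT COUNT(*) FROM destination_databasename.table_name_1_dtl;"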

Ref:- http://dev.mysql.com/doc/refman/5.7/en/federated-storage-engine.html

Tuesday 15 March 2016

'ping' is not recognized in Windows 10


'ping' is not recognized as an internal or external command

1. Click Start, then type the three letters cmd into the Search box and press Enter.
2. Type these commands and press Enter after each:
    dir %SystemRoot%\System32\ping.exe
    path %path%;%SystemRoot%\System32
    ping www.google.com
3. Check the result.

Ref: google

Monday 14 March 2016

Install Hadoop on Ubuntu

Install Hadoop 2.6 on Ubuntu 14.04 as a Single-Node Cluster

apt-get update

cat /proc/sys/net/ipv6/conf/all/disable_ipv6
0

Installing Java

The Hadoop framework is written in Java, so a working JDK is required.

apt-get install openjdk-7-jdk

 update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java
Nothing to configure.

Adding a dedicated Hadoop user

addgroup hadoop

Adding group `hadoop' (GID 1000) ...
Done.

 adduser --ingroup hadoop hduser

Adding user `hduser' ...
Adding new user `hduser' (1000) with group `hadoop' ...
Creating home directory `/home/hduser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for hduser
Enter the new value, or press ENTER for the default
        Full Name []:
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n] y

Create and Set Up SSH Keys

apt-get install ssh
 
su hduser

 ssh-keygen -t rsa -P ""

Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
39:ac:ed:aa:cd:b0:34:7f:97:31:c2:18:e2:1c:cf:ea hduser@server1
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|                 |
|    o .. .       |
|   o = +S        |
|    o +oo.o      |
|    +.. .. +     |
|   ..B .. o      |
|   .E.=o..       |
+-----------------+


cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
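
sshd can silently ignore keys with loose permissions; as a precaution (not part of the original steps), tighten them:

chmod 0600 $HOME/.ssh/authorized_keys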

ssh localhost

The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is 7f:f0:56:ee:e4:f1:f6:1e:04:30:2d:f8:e8:e7:f4:8e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Password:
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.13.0-74-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Install Hadoop

cd /usr/local/src

wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz

tar -xzvf hadoop-2.6.0.tar.gz

sudo adduser hduser sudo

Adding user `hduser' to group `sudo' ...
Adding user hduser to group sudo
Done.

sudo su hduser
cd hadoop-2.6.0
sudo mkdir /usr/local/hadoop
sudo mv * /usr/local/hadoop
sudo chown -R hduser:hadoop /usr/local/hadoop

Setup Configuration Files

The following files will have to be modified to complete the Hadoop setup:

~/.bashrc
/usr/local/hadoop/etc/hadoop/hadoop-env.sh
/usr/local/hadoop/etc/hadoop/core-site.xml
/usr/local/hadoop/etc/hadoop/mapred-site.xml.template
/usr/local/hadoop/etc/hadoop/hdfs-site.xml

vi ~/.bashrc

#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

source ~/.bashrc

javac -version
javac 1.7.0_95

 which javac
/usr/bin/javac

 readlink -f /usr/bin/javac
/usr/lib/jvm/java-7-openjdk-amd64/bin/javac
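
With the environment loaded, a quick check that the PATH additions took effect (hadoop is picked up from $HADOOP_INSTALL/bin):

hadoop version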

vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

sudo mkdir -p /app/hadoop/tmp
sudo chown hduser:hadoop /app/hadoop/tmp

vi /usr/local/hadoop/etc/hadoop/core-site.xml

<configuration>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
 </property>

 <property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
 </property>
</configuration>

cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml

vi /usr/local/hadoop/etc/hadoop/mapred-site.xml

<configuration>
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
 </property>
</configuration>

sudo mkdir -p /usr/local/hadoop_store/hdfs/namenode
sudo mkdir -p /usr/local/hadoop_store/hdfs/datanode
sudo chown -R hduser:hadoop /usr/local/hadoop_store

vi /usr/local/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is created.
  The default is used if replication is not specified in create time.
  </description>
 </property>
 <property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/namenode</value>
 </property>
 <property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop_store/hdfs/datanode</value>
 </property>
</configuration>

Format the New Hadoop Filesystem


Reboot the machine before formatting the Hadoop filesystem.

cd /usr/local/hadoop/bin
su hduser
 hadoop namenode -format

DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

16/03/14 22:52:05 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = server1.ussg.com/119.81.98.115
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:....
............................................................
...........................................................
16/03/14 22:52:06 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at server1.ussg.com/119.81.98.115
************************************************************/

Starting Hadoop

cd /usr/local/hadoop/sbin
sudo su hduser

 
hduser@ubuntu-hadoop:/usr/local/hadoop/sbin$ start-all.sh

This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
16/03/15 13:01:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-ubuntu-hadoop.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-ubuntu-hadoop.out
Starting secondary namenodes [0.0.0.0]
The authenticity of host '0.0.0.0 (0.0.0.0)' can't be established.
ECDSA key fingerprint is 42:68:e7:3d:d7:9d:ba:97:d3:9d:cf:1f:f3:c1:df:82.
Are you sure you want to continue connecting (yes/no)? yes
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-ubuntu-hadoop.out
16/03/15 13:01:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-ubuntu-hadoop.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-ubuntu-hadoop.out

jps

1665 DataNode
2040 ResourceManager
2175 NodeManager
1532 NameNode
2212 Jps
1895 SecondaryNameNode

netstat -plten | grep java

(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 127.0.0.1:38612         0.0.0.0:*               LISTEN      1001       16153       1665/java
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      1001       15012       1532/java
tcp        0      0 0.0.0.0:50010           0.0.0.0:*               LISTEN      1001       16150       1665/java
tcp        0      0 0.0.0.0:50075           0.0.0.0:*               LISTEN      1001       16558       1665/java
tcp        0      0 0.0.0.0:50020           0.0.0.0:*               LISTEN      1001       16563       1665/java
tcp        0      0 127.0.0.1:54310         0.0.0.0:*               LISTEN      1001       15484       1532/java
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      1001       17713       1895/java
tcp6       0      0 :::49138                :::*                    LISTEN      1001       21511       2175/java
tcp6       0      0 :::8088                 :::*                    LISTEN      1001       21523       2040/java
tcp6       0      0 :::8030                 :::*                    LISTEN      1001       18902       2040/java
tcp6       0      0 :::8031                 :::*                    LISTEN      1001       18895       2040/java
tcp6       0      0 :::8032                 :::*                    LISTEN      1001       21507       2040/java
tcp6       0      0 :::8033                 :::*                    LISTEN      1001       21536       2040/java
tcp6       0      0 :::8040                 :::*                    LISTEN      1001       21518       2175/java
tcp6       0      0 :::8042                 :::*                    LISTEN      1001       21522       2175/java


Stopping Hadoop

hduser@ubuntu-hadoop:/usr/local/hadoop/sbin$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
16/03/15 13:03:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
16/03/15 13:04:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop

Hadoop Web Interfaces


NameNode
http://localhost:50070

DataNode
http://localhost:50075

SecondaryNameNode
http://localhost:50090
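
These can also be checked headlessly from the server (assumes curl is installed; expect HTTP 200):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:50070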


Ref :-    http://www.bogotobogo.com/Hadoop/BigData_hadoop_Install_on_ubuntu_single_node_cluster.php
    https://www.youtube.com/watch?v=SaVFs_iDMPo

Thursday 3 March 2016

Set Java MaxPermSize and Min Heap Size

MaxPermSize

MaxPermSize sets the maximum size of the permanent generation heap, which holds the byte code of classes and is kept separate from the object heap containing the actual instances.

Min Heap Size

Growing the heap from the minimum (-Xms) toward the maximum (-Xmx) is a costly operation; especially in financial applications this could mean delays. With such real-time requirements, it can be a good idea to set -Xms and -Xmx to the same value.

Example of setting the heap and PermGen sizes for Tomcat on Linux.

Go to the bin dir,
cd /opt/apache-tomcat-7.0.61/bin
and create a new file (on Linux, catalina.sh sources setenv.sh; setenv.bat and the 'set' syntax are the Windows equivalents):
vi setenv.sh
export JAVA_OPTS="-Dfile.encoding=UTF-8 -Xms512m -Xmx4096m -XX:PermSize=128m -XX:MaxPermSize=512m"
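
After restarting Tomcat, a quick way to confirm the flags were picked up is to look for them on the java command line:

ps aux | grep -e '-XX:MaxPermSize'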

Ref: http://stackoverflow.com/questions/6902135/side-effect-for-increasing-maxpermsize-and-max-heap-size

Wednesday 20 January 2016

SecuGen Hamster Pro 20 with Ubuntu 14.04


SYSTEM INSTALLATION STEPS

1. Get the package by submitting the Request Free SDK Download form at
    URL:- http://www.secugen.com/download/sdkrequest.htm
     FDxSDK_Pro_for_Linux_v3.71c/FDx SDK Pro for Linux v3.71c/FDx_SDK_PRO_LINUX_X64_3_7_1_REV570

2. Install the following packages if not already installed on your system:
    libgtk1.2-dev (1.2.10-18.1build2)

3. Install the SecuGen USB Device Drivers
    cd <install_dir>/lib/linux
    make install

sudo cp libsgfdu05.so.1.0.1 /usr/local/lib
sudo cp libsgfdu04.so.1.0.4 /usr/local/lib
sudo cp libsgfdu03.so.2.0.6 /usr/local/lib
sudo cp libsgfplib.so.3.7.1 /usr/local/lib
sudo cp libsgfpamx.so.3.5.1 /usr/local/lib
sudo cp libjnisgfplib.so.3.7.1 /usr/local/lib
sudo /sbin/ldconfig /usr/local/lib

    If you need to uninstall, run: make uninstall

4. By default, only the root user can access the SecuGen USB device because the device requires
     write permissions. To allow non-root users to use the device, perform the following steps:
   
    4.1 Create a SecuGen Group
        # groupadd SecuGen

    4.2 Add fingerprint users to the SecuGen group.
        #gpasswd -a myUserID SecuGen
        (substitute user name for myUserID)
        Ex:- useradd sectestuser
                groupadd SecuGen
                gpasswd -a sectestuser SecuGen

    4.3 Create the file /etc/udev/rules.d/99SecuGenSDU03M.rules
        and add the following lines:

SYSFS{idVendor}=="1162", SYSFS{idProduct}=="0330", SYMLINK+="input/fdu04-%k", MODE="0660", GROUP="SecuGen"
SYSFS{idVendor}=="1162", SYSFS{idProduct}=="2000", SYMLINK+="input/sdu04p-%k", MODE="0660", GROUP="SecuGen"
SYSFS{idVendor}=="1162", SYSFS{idProduct}=="0322", SYMLINK+="input/sdu03m-%k", MODE="0660", GROUP="SecuGen"
SYSFS{idVendor}=="1162", SYSFS{idProduct}=="0320", SYMLINK+="input/fdu03-%k", MODE="0660", GROUP="SecuGen"
KERNEL=="uinput", MODE="0660", GROUP="SecuGen"

    4.4 Reboot (or reload the udev rules in place; see below)
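
        If a reboot is inconvenient, the udev rules can usually be reloaded in place instead (udevadm is standard on Ubuntu 14.04):

        sudo udevadm control --reload-rules
        sudo udevadm trigger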

5. Plug in the Hamster Plus or Hamster IV device

6. Now you are ready to run the demo programs in the
    <installdir>/bin/linux directory

7. Configuration for java applications
   libjnisgfplib.so supports only one class of SecuGen device at a time.
   The default configuration is for the SecuGen U20 device.

8. Configuration for Hamster PRO 20
   cd <install_dir>/lib/linux
   cp libjnisgfplib.so.3.7.0.fdu05_rename libjnisgfplib.so
   make uninstall install

9. cp libjnisgfplib.so /usr/lib

=================================================================
Running the Java Samples
=================================================================
-----------------------------------------------------------------
FPLIB TEST SAMPLE
    cd <installdir>/java
    sh run_jsgfplibtest.sh
-----------------------------------------------------------------
SGD SWING SAMPLE
    cd <installdir>/java
    make
    sh run_jsgd.sh
-----------------------------------------------------------------
MULTIPLE DEVICE SAMPLE
    cd <installdir>/java
    sh run_jsgmultidevicetest.sh
-----------------------------------------------------------------


Ref :- http://www.secugen.com/index.htm