Wednesday, November 28, 2012

What is Hibernate Caching?

In a typical application, you perform many operations such as instantiating objects, loading objects from the database, and so on. In a multi-user application you may also have to handle a large number of database calls for the same data.

Hibernate offers caching functionality designed to reduce the amount of necessary database access. This is a very powerful feature if used correctly. It sits between your application and the database and improves application performance by avoiding database hits wherever possible.



Hibernate Cache Types :

Hibernate uses different types of caches, each serving a different purpose. Let us first have a look at these cache types.
  • First level cache
  • Second level cache
  • Query level cache

1. First level cache :

First-level cache is the session cache and is always associated with the Session object. Hibernate uses this cache by default. The Session object keeps an object in its own cache before committing it to the database, and it processes one transaction after another; it never processes the same transaction more than once. Mainly, it reduces the number of SQL statements Hibernate needs to generate within a given transaction: instead of issuing an update after every modification made in the transaction, it flushes the changes only at the end of the transaction.
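A minimal sketch of the first-level cache in action (the Employee entity and sessionFactory are assumed to be set up elsewhere): the second get() for the same identifier is served from the Session cache and issues no SQL.

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();

    Employee e1 = (Employee) session.get(Employee.class, 1L); // hits the database
    Employee e2 = (Employee) session.get(Employee.class, 1L); // served from the first-level cache, no SQL

    tx.commit();
    session.close(); // the first-level cache is discarded along with the Session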

2. Second level cache :

Second-level cache is an optional cache and is always associated with the SessionFactory object. It can be configured on a per-class and per-collection basis and is mainly responsible for caching objects across sessions. Objects are loaded at the SessionFactory level, so they are available to the entire application rather than being bound to a single session or user. Because the objects are already held in the cache, a query that returns such an object does not need to go back to the database. This is how the second-level cache works.
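As a minimal sketch, assuming Hibernate annotations and a second-level cache provider (such as EHCache) configured in hibernate.cfg.xml, a class can be marked cacheable like this (the entity is illustrative):

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import org.hibernate.annotations.Cache;
    import org.hibernate.annotations.CacheConcurrencyStrategy;

    @Entity
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE) // cache instances of this class across sessions
    public class Employee {
        @Id
        private Long id;
        private String name;
        // getters and setters omitted
    }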

Hibernate supports four open-source cache implementations: EHCache (Easy Hibernate Cache), OSCache (Open Symphony Cache), SwarmCache, and JBoss TreeCache.

Each cache has different performance, memory use, and configuration possibilities.
  
  1. EHCache - Can cache in memory or on disk, supports clustered caching, and supports the optional Hibernate query result cache.
  2. OSCache - Supports caching to memory and disk in a single JVM, with a rich set of expiration policies and query cache support.
  3. SwarmCache - A cluster cache based on JGroups. It uses clustered invalidation but doesn't support the Hibernate query cache.
  4. JBoss Cache - A fully transactional replicated clustered cache also based on the JGroups multicast library. It supports replication or invalidation, synchronous or asynchronous communication, and optimistic and pessimistic locking. The Hibernate query cache is supported.




3. Query level cache :

Hibernate also implements a cache for query resultsets that integrates closely with the second-level cache. This is an optional feature and requires two additional physical cache regions that hold the cached query results and the timestamps when a table was last updated. This is only useful for queries that are run frequently with the same parameters.
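A minimal sketch of using the query cache, assuming hibernate.cache.use_query_cache is set to true and a second-level cache provider is configured (the entity and parameter are illustrative, and sessionFactory is assumed available):

    Session session = sessionFactory.openSession();
    List employees = session.createQuery("from Employee e where e.country = :country")
            .setParameter("country", "IN")
            .setCacheable(true) // store this result set in the query cache
            .list();
    session.close();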

Useful Java Keytool Commands

Generate a Java keystore and key pair :

keytool -genkey -alias mycert -keyalg RSA -keystore keystore.jks -keysize 1024
Generate a keystore and self-signed certificate :

 keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360 -keysize 2048
View certificate details from a keystore:

keytool -list -v -keystore keystore.jks

Check a particular keystore entry using an alias:
keytool -list -v -keystore keystore.jks -alias mydomain

Print details of a certificate stored in a .cer file using the -printcert option:
keytool -printcert -file test.cer

Export a certificate from a keystore:
keytool -export -alias mydomain -file mydomain.crt -keystore keystore.jks
 keytool -export -alias mydomain -keypass keypass -keystore keystore.jks -storepass jkspass -rfc -file keytool_crt.pem

Note: "keytool -export" command uses DER format by default. The "-rfc" option is to change it to PEM (RFC 1421) format.


Friday, November 2, 2012

How to install mod_jk.so on CentOS

The Basics - What is mod_jk?


The mod_jk connector is an Apache HTTPD module that allows HTTPD to communicate with Apache Tomcat instances over the AJP protocol.

Steps:


1. Download the latest apache connector from http://tomcat.apache.org/download-connectors.cgi.
2. Untar the download by
         tar zxvf <filename>
3. Go to the native directory of the connector
         cd <connector dir>/native/
4. Run the buildconf.sh script
       ./buildconf.sh
Note: If you get an error such as "autoconf" not installed, then install the following packages:
       yum install autoconf
       yum install libtool

5. You need the "httpd-devel" tools to build the module, so make sure they are already installed:
         yum list installed | grep httpd-devel
   If not, install them with "yum install httpd-devel".
6. From the native directory, run
      ./configure --with-apxs=/usr/sbin/apxs
      make

 

Congrats!! Now you can see the mod_jk.so module under your native directory.

Tuesday, October 30, 2012

Load Balancing on Web Application server clusters

Overview

A cluster is a group of servers running a Web application simultaneously, appearing to the world as if it were a single server. To balance server load, the system distributes requests to different nodes within the server cluster, with the goal of optimizing system performance. This results in higher availability and scalability -- necessities in an enterprise, Web-based application.
High availability can be defined as redundancy. If a single Web server fails, then another server takes over, as transparently as possible, to process the request.
Scalability is an application's ability to support a growing number of users. If it takes an application 10 milliseconds(ms) to respond to one request, then it should take 10 ms to respond to 10,000 concurrent requests.
Of the many methods available to balance a server load, the main two are:
  • DNS round robin and
  • Hardware load balancers.

DNS Round Robin

To balance server loads using DNS, the DNS server maintains several different IP addresses for a site name. The multiple IP addresses represent the machines in the cluster, all of which map to the same single logical site name. Using our example, www.loadbalancedsite.com could be hosted on three machines in a cluster with the following IP addresses:
203.34.23.3
203.34.23.4
203.34.23.5
In this case, the DNS server contains the following mappings:
www.loadbalancedsite.com  203.34.23.3
www.loadbalancedsite.com  203.34.23.4
www.loadbalancedsite.com  203.34.23.5
Diagram.
When the first request arrives at the DNS server, it returns the IP address 203.34.23.3, the first machine. On the second request, it returns the second IP address: 203.34.23.4. And so on. On the fourth request, the first IP address is returned again.
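A minimal Java sketch of the same cycling behaviour, seen from the client side (in practice the DNS server itself rotates the answers; the domain is the example used above):

    import java.net.InetAddress;

    public class RoundRobinLookup {
        public static void main(String[] args) throws Exception {
            // All A records registered for the site name
            InetAddress[] addresses = InetAddress.getAllByName("www.loadbalancedsite.com");
            // Hand out the addresses in turn; the fourth request wraps back to the first one
            for (int request = 0; request < 4; request++) {
                InetAddress chosen = addresses[request % addresses.length];
                System.out.println("Request " + (request + 1) + " -> " + chosen.getHostAddress());
            }
        }
    }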

Advantages of DNS Round Robin

The main advantages of DNS round robin are that it's cheap and easy:

Inexpensive and easy to set up

  • The system administrator only needs to make a few changes in the DNS server to support round robin, and many of the newer DNS servers already include support.
  • It doesn't require any code change to the Web application.

Simplicity

  • It does not require any networking experts to set up or debug the system in case a problem arises.


Disadvantages of DNS Round Robin

Two main disadvantages of this software-based method of load balancing are :

No support for server affinity

  • Server affinity is a load-balancing system's ability to manage a user's requests, either to a specific server or any server, depending on whether session information is maintained on the server or at an underlying, database level.
    Without server affinity, DNS round robin relies on one of three methods devised to maintain session control or user identity to requests coming in over HTTP, which is a stateless protocol.
    • cookies
    • hidden fields
    • URL rewriting
    When a user makes a first request, the Web server returns a text-based token uniquely identifying that user. Subsequent requests include this token using either cookies, URL rewriting, or hidden fields, allowing the server to appear to maintain a session between client and server (a minimal servlet sketch of this mechanism follows below). Once a user establishes a session with one server, all subsequent requests usually go to the same server.
    The problem is that the browser caches that server's IP address. Once the cache expires, the browser makes another request to the DNS server for the IP address associated with the domain name. If the DNS server returns a different IP address, that of another server in the cluster, the session information is lost.
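    A minimal servlet sketch of the token mechanism (the class name and attribute are illustrative; the servlet container issues the JSESSIONID cookie):

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    public class CounterServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            // The first request creates a session and sends a JSESSIONID cookie to the browser;
            // later requests carry the cookie back, so the same server-side session is found.
            HttpSession session = req.getSession(true);
            Integer hits = (Integer) session.getAttribute("hits");
            hits = (hits == null) ? 1 : hits + 1;
            session.setAttribute("hits", hits);
            resp.getWriter().println("Requests in this session: " + hits);
        }
    }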

No support for high Availability

  • Consider a cluster of n nodes. If a node goes down, then every nth request to the DNS server directs you to the dead node.
  • Changes to the cluster take time to propagate through the rest of the Internet. One reason is that many large organizations -- ISPs, corporations, agencies -- cache their DNS requests to reduce network traffic and request time. When a user within these organizations makes a DNS request, it's checked against the cache's list of DNS names mapped to IP addresses. If it finds an entry, it returns the IP address to the user. If an entry is not found in its local cache, the ISP sends this DNS request to the DNS server and caches the response.
    When a cached entry expires, the ISP updates its local database by contacting other DNS servers. When your list of servers changes, it can take a while for the cached entries on other organizations' networks to expire and look for the updated list of servers. During that period, a client can still attempt to hit the downed server node, if that client's ISP still has an entry pointing to it. In such a case, some users of that ISP couldn't access your site on their first attempt, even if your cluster has redundant servers up and running.
  • This is a bigger problem when removing a node than when adding one. When you drop a node, a user may be trying to hit a non-existing server. When you add one, that server may just be under-utilized until its IP address propagates to all the DNS servers. Although this method tries to balance the number of users on each server, it doesn't necessarily balance the server load. Some users could demand a higher load of activity during their session than users on another server, and this methodology cannot guard against that inequity.

Hardware Load Balancers

The problems above can be solved by using virtual IP addresses. The load balancer presents a single (virtual) IP address to the outside world, which maps to the addresses of each machine in the cluster. So, in a way, the load balancer exposes the IP address of the entire cluster to the world.



Diagram.



When a request comes to the load balancer, it rewrites the request's header to point to other machines in the cluster. If a machine is removed from the cluster, the request doesn't run the risk of hitting a dead server, since all of the machines in the cluster appear to have the same IP address. This address remains the same even if a node in the cluster is down. Moreover, cached DNS entries around the Internet aren't a problem. When a response is returned, the client sees it coming from the hardware load balancer machine. In other words, the client is dealing with a single machine, the hardware load balancer.

Advantages of Hardware Load Balancers

Support Server affinity

  • The hardware load balancer reads the cookies or URL readings on each request made by the client. Based on this information, it can rewrite the header information and send the request to the appropriate node in the cluster, where its session is maintained.
  • Hardware load balancers can provide server affinity in HTTP communication, but not through a secure channel, such as HTTPS. In a secure channel, the messages are SSL-encrypted, and this prevents the load balancer from reading the session information.

High Availability Through Failover

    Failover happens when one node in a cluster cannot process a request and redirects it to another. There are two types of failover:
  • Request Level Failover. When one node in a cluster cannot process a request (often because it's down), it passes it along to another node.
  • Transparent Session Failover. When an invocation fails, it's transparently routed to another node in the cluster to complete the execution.
    Hardware load balancers provide request-level failover: when the load balancer detects that a particular node has gone down, it redirects all subsequent requests bound for that dead node to another active node in the cluster. However, any session information on the dead node is lost when requests are redirected to a new node.
    Transparent session failover requires knowledge of the execution state within a node, and a hardware load balancer can only detect network-level problems, not application-level errors. So, on their own, hardware load balancers do not provide transparent session failover.
  • To achieve transparent session failover, the nodes in the cluster must collaborate among each other and have something like a shared memory area or a common database where all the session data is stored. Therefore, if a node in the cluster has a problem, a session can continue in another node.
  • Metrics. Since all requests to a Web application must pass through the load-balancing system, the system can determine the number of active sessions, the number of active sessions connected in any instance, response times, peak load times, the number of sessions during peak load, the number of sessions during minimum load, and more. All this audit information is used to fine tune the entire system for optimal performance.

Disadvantages of Hardware Load Balancers

The drawbacks to the hardware route are the costs, the complexity of setting up, and the vulnerability to a single point of failure. Since all requests pass through a single hardware load balancer, the failure of that piece of hardware sinks the entire site.

Load Balancing HTTPS Requests

As mentioned above, it's difficult to load balance and maintain session information of requests that come in over HTTPS, as they're encrypted. The hardware load balancer cannot redirect requests based on the information in the header, cookies, or URL readings. There are two options to solve this problem:
  • Web server proxies
  • Hardware SSL decoders.

Implementing Web Server Proxies

A Web server proxy that sits in front of a cluster of Web servers takes all requests and decrypts them. Then it redirects them to the appropriate node, based on the header information, cookies, and URL readings.
Diagram.
The advantages of Web server proxies are that they offer a way to get server affinity for SSL-encrypted messages, without any extra hardware. But extensive SSL processing puts an extra load on the proxy.
Apache and Tomcat. In many serving systems, Apache and Tomcat servers work together to handle all HTTP requests. Apache handles the request for static pages (including HTML, JPEG, and GIF files), while Tomcat handles requests for dynamic pages (JSPs or servlets). Tomcat servers can also handle static pages, but in combined systems, they're usually set up to handle dynamic requests.
Diagram.
You can also configure Apache and Tomcat to handle HTTPS requests and to balance loads. To achieve this, you run multiple instances of Tomcat servers on one or more machines. If all of the Tomcat servers are running on one machine, they should be configured to listen on different ports. To implement load balancing, you create a special type of Tomcat instance, called a Tomcat Worker.
Diagram.
As shown in the illustration, the Apache Web server receives HTTP and HTTPS requests from clients. If the request is HTTPS, the Apache Web server decrypts the request and sends it to a Web server adapter, which in turn sends the request to the Tomcat Worker, which contains a load-balancing algorithm. Similar to the Web server proxy, this algorithm balances the load among Tomcat instances.

Hardware SSL Decoder

There are hardware devices capable of decoding SSL requests that sit in front of the hardware load balancer, allowing it to read the information in cookies, headers, and URLs.
Diagram.
These hardware SSL decoders are faster than Web server proxies and are highly scalable. But as with most hardware solutions, they cost more and are complicated to set up and configure.




Conclusion

Based on the above information, I would prefer hardware load balancing for Web server load balancing. It supports server affinity and high availability, and its audit information (metrics) can be used to fine-tune the entire system for optimal performance.



Thursday, October 25, 2012

How to create self signed certificates programmatically ?

The most common approach to generating a self-signed certificate is to use the Java keytool.

There may be situations when you want to create a self-signed certificate programmatically. One approach to generating these self-signed certificates programmatically is the Bouncy Castle API.

To start, you need the Bouncy Castle jar on your classpath. (You can download it from here.)


Steps to generate a self-signed certificate:


1. Create a public/private key pair for the new certificate

 
        KeyPairGenerator keyPairGenerator = KeyPairGenerator.getInstance("RSA");
        keyPairGenerator.initialize(1024, new SecureRandom());
        KeyPair keyPair = keyPairGenerator.generateKeyPair();

 

2. Create the new certificate structure

        // GENERATE THE X509 CERTIFICATE
        X509V3CertificateGenerator v3CertGen = new X509V3CertificateGenerator();
        v3CertGen.setSerialNumber(BigInteger.valueOf(System.currentTimeMillis()));
        v3CertGen.setIssuerDN(new X509Principal("CN=cn, O=o, L=L, ST=il, C=c"));
        v3CertGen.setNotBefore(new Date(System.currentTimeMillis() - 1000L * 60 * 60 * 24));
        v3CertGen.setNotAfter(new Date(System.currentTimeMillis() + (1000L * 60 * 60 * 24 * 365 * 10)));
        v3CertGen.setSubjectDN(new X509Principal("CN=cn, O=o, L=L, ST=il, C=c"));
        v3CertGen.setPublicKey(keyPair.getPublic());
        v3CertGen.setSignatureAlgorithm("SHA256WithRSAEncryption");
        X509Certificate cert = v3CertGen.generateX509Certificate(keyPair.getPrivate());

3. Store the Certificate with the private key

        KeyStore keyStore = KeyStore.getInstance("JKS");
        keyStore.load(null, null);
        keyStore.setKeyEntry("YOUR_CERTIFICATE_NAME", keyPair.getPrivate(), "YOUR_PASSWORD".toCharArray(), new java.security.cert.Certificate[]{cert});
        File file = new File(".", "keystore.test");
        keyStore.store(new FileOutputStream(file), "YOUR_PASSWORD".toCharArray());


I have uploaded the tutorial over here.

How to generate Self-Signed Certificate Using keytool

The example uses the keytool utility to create a new self signed certificate.

  1. Open the command console (run as Administrator on Windows) on whatever operating system you are using and navigate to the directory where the keytool executable is located.
  2. Run the following command (where -validity is the number of days before the certificate will expire):
    keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -validity 360 -keysize 1024
  3. Fill in the prompts for your organization information. 

This will create a keystore.jks file containing a private key and a self-signed certificate.

Wednesday, September 26, 2012

MySql: Give Root User Logon Permission From Any Host

I have already written about this in a previous post, but that was done from a UI. In this tutorial you can do the same task from the MySQL client itself.


To configure this feature, you’ll need to update the mysql user table to allow access from any remote host, using the % wildcard.

Open the command-line mysql client on the server using the root account.
Then you will want to run the following two commands, to see what the root user host is set to already:
use mysql;
select host, user from user;
Here’s an example of the output on my database.

mysql> use mysql;
Database changed

mysql> select host,user from user;
+-----------+------+
| host      | user |
+-----------+------+
| localhost | root |
| 127.0.0.1 | root |
| ::1       | root |
| localhost |      |
+-----------+------+
4 rows in set (0.07 sec)

Now I’ll update the localhost host to use the wildcard, and then issue the command to reload the privilege tables. If you are running this command, substitute the hostname of your box for localhost.

update user set host='%' where user='root' and host='localhost';
flush privileges;

Now you can connect to MySQL from any other machine using the root account.
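To verify the change from another machine, here is a minimal JDBC sketch (the host and password are assumptions; it requires the MySQL Connector/J driver on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class RemoteRootCheck {
        public static void main(String[] args) throws Exception {
            Class.forName("com.mysql.jdbc.Driver");
            // Replace the host and password with your own values
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://your-db-host:3306/mysql", "root", "YOUR_PASSWORD");
            System.out.println("Connected as root from a remote machine: " + !conn.isClosed());
            conn.close();
        }
    }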

Friday, September 21, 2012

Liquibase Tutorial



What is Liquibase ?

  1. LiquiBase, available since 2006, is an open source, database-independent library for tracking, managing, and applying database changes.
  2. A handful of other open source database-migration tools are on the scene as well, including openDBcopy and dbdeploy. LiquiBase supports 10 database types, including DB2, Apache Derby, MySQL, PostgreSQL, Oracle, Microsoft SQL Server, Sybase, and HSQL.
  3. All changes to the database are stored in XML files and identified by a combination of an "id" and "author" tag as well as the name of the file itself.
  4. A list of all applied changes is stored in each database and is consulted on every database update to determine which new changes need to be applied.
  5. LiquiBase executes changes based on this XML file to handle different revisions of database structures and data.
  6. When you first run a changelog, LiquiBase manages those changelogs by adding two tables to your database:
    databasechangelog: maintains the database changes that were run.
    databasechangeloglock: ensures that two machines don't attempt to modify the database at the same time.

To install LiquiBase, download the compressed LiquiBase Core file, extract it, and place the included liquibase-version.jar file in your system's path.
Getting started with LiquiBase takes four steps:
  1. Create a database change log file.
  2. Create a change set inside the change log file.
  3. Run the change set against a database via the command line or a build script.
  4. Verify the change in the database.

Sample changeLog file: The following example creates a table named EMPLOYEE and adds columns to it.

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
      
    <changeSet author="waheed" id="123456789-1">

 <comment> You can add comments to changeSets.</comment>
        <createTable tableName="EMPLOYEE">
            <column autoIncrement="true" name="EMPLOYEE_ID" type="BIGINT">
                <constraints nullable="false" primaryKey="true" />
            </column>
            <column name="NAME" type="VARCHAR(255)" />
            <column name="GENDER" type="VARCHAR(2)" />
            <column name="COUNTRY" type="VARCHAR(255)" />
            <column name="ABOUT_YOU" type="VARCHAR(255)" />
        </createTable>
    </changeSet>
</databaseChangeLog> 



Running LiquiBase from the command line:
After defining the change set, I can run LiquiBase from the command line:
liquibase --driver=com.mysql.jdbc.Driver \
--classpath=mysql_connector.jar \
--changeLogFile=database.changelog.xml \
--url=jdbc:mysql://localhost:3306/Employees;create=true \ 
--username= --password= \
update



In this example, I run LiquiBase passing in:
  • The database driver
  • The classpath for the location of the database driver's JAR file
  • The name of the change log file "database.changelog.xml"
  • The URL for the database
  • A username and password
Running LiquiBase in an automated build
Instead of using the command-line option, I can make the database changes as part of the automated build by calling the Ant task provided by LiquiBase.

Ant script to execute the updateDatabase Ant task
<target name="update-database">
  <taskdef name="updateDatabase" classname="liquibase.ant.DatabaseUpdateTask" 
    classpathref="project.class.path" />
  <updateDatabase changeLogFile="database.changelog.xml"
    driver="com.mysql.jdbc.Driverr"
    url="jdbc:mysql://localhost:3306/Employees"
    username=""
    password=""
    dropFirst="true"
    classpathref="project.class.path"/>
</target>


I create a target called update-database. In it, I define the specific LiquiBase Ant task I wish to use, calling it updateDatabase. I pass the required values, including the changeLogFile and connection information for the database. The classpath defined in classpathref must contain liquibase-version.jar.



Applying refactorings to an existing database
As new features are added to an application, the need often arises to apply structural changes to a database or modify table constraints. LiquiBase provides support for more than 30 database refactorings.



Add Column
It's sometimes next to impossible to consider all of the possible columns in a database at the beginning of a project. And sometimes users request new features — such as collecting more data for information stored in the system — that can require new columns to be added.
Using the Add Column database refactoring in a LiquiBase change set
<changeSet author="waheed" id="123456789-2">
  <addColumn tableName="EMPLOYEE">
    <column name="PHONE_NUMBER" type="varchar(255)" defaultValue="SOME_DEFAULT_VALUE"/>
  </addColumn>
</changeSet>
The new PHONE_NUMBER column is defined as a varchar datatype.


Drop Column
Suppose you choose to remove the PHONE_NUMBER column you added above. This is as simple as calling the dropColumn refactoring:

Dropping a database column
<dropColumn tableName="EMPLOYEE" columnName="PHONE_NUMBER"/>



Create Table
Adding a new table to a database is also a common database refactoring. The following change set creates a new table called USER, defining its columns, constraints, and default values:

Creating a new database table in LiquiBase
<changeSet author="waheed" id="123456789-3">
  <createTable tableName="USER">
    <column name="ID" type="int">
      <constraints primaryKey="true" nullable="false"/>
    </column>
    <column name="NAME" type="varchar(255)">
      <constraints nullable="false"/>
    </column>
    <column name="ADDRESS" type="varchar(255)">
      <constraints nullable="true"/>
    </column>
    <column name="active" type="boolean" defaultValue="1"/>
  </createTable>
</changeSet>



This example uses the createTable database refactoring as part of a change set (createTable was also used in the first change set above).



Rename Column
<changeSet author="waheed" id="123456789-5">
        <comment>Add a username column so we can use "person" for authentication</comment>
        <addColumn tableName="EMPLOYEE">
            <column name="usernae" type="varchar(8)"/>
        </addColumn>
    </changeSet>
This change set adds a "usernae" column (the typo is on purpose) with a width of 8 characters.
Now we need to fix "usernae" so that it becomes "username":
    <changeSet author="waheed" id="123456789-6">
        <comment>Fix misspelled "username" column</comment>
        <renameColumn tableName="EMPLOYEE" oldColumnName="usernae" newColumnName="username" columnDataType="varchar(8)"/>
    </changeSet>



Manipulating data
After applying structural database refactorings (such as Add Column and Create Table), you often need to insert data into the tables affected by these refactorings. Furthermore, you might need to change the existing data in lookup tables or other types of tables. The example below shows how to insert data using a LiquiBase change set:

Inserting data in a LiquiBase change set
<changeSet author="waheed" id="123456789-4">
  <insert tableName="USER">
    <column name="ID" valueNumeric="3"/>
    <column name="NAME" value="ABDUL"/>
  </insert>
  <insert tableName="USER">
    <column name="ID" valueNumeric="4"/>
    <column name="NAME" value="WAHEED"/>
  </insert>
</changeSet>



You may have already written SQL scripts to manipulate data, or the LiquiBase XML change set may be too limiting, and sometimes it's simpler to use SQL scripts to apply mass changes to the database. LiquiBase can accommodate these situations too.

Running a custom SQL file from a LiquiBase change set
<changeSet author="Waheed" id="123456789-5">
  <sqlFile path="insert-distributor-data.sql"/>
</changeSet>

How to integrate with Spring:
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
destroy-method="close">
<property name="driverClass" value="com.mysql.jdbc.Driver" />

<property name="jdbcUrl" value="jdbc:mysql://localhost:3306/Employees" />
<property name="user" value="root" />
<property name="password" value="root123" />
<property name="minPoolSize" value="8" />
<property name="maxPoolSize" value="16" />
<property name="maxIdleTime" value="3600" />
</bean>

<!-- Updater is used to automatically update DB  upon startup if
        Application version has changed -->
    <bean id="LiquibaseUpdater" class="liquibase.integration.spring.SpringLiquibase">
        <property name="dataSource" ref="dataSource" />
        <property name="changeLog" value="classpath:db-changelog.xml" />
    </bean>

<!-- Needed here to make sure Liquibase updater runs prior to DAO's startup, Your DAO class -->
    <bean class="com.waheed.spring.hibernate.DaoImpl" id="dao"
        depends-on="LiquibaseUpdater">
    </bean>



Resources:
http://www.liquibase.org

Monday, September 17, 2012

How to get Current Thread Example

/*
 * This Java example shows how to get a reference to the current thread using
 * the currentThread() method of the Java Thread class.
 */
public class GetCurrentThread {

    public static void main(String[] args) {

        /*
         * To get the reference of the currently running thread, use the
         * Thread.currentThread() method of the Thread class.
         *
         * This is a static method.
         */
        Thread currentThread = Thread.currentThread();
        System.out.println(currentThread);
    }
}

/*
Output of the example would be
Thread[main,5,main]
*/

Thursday, September 13, 2012

Should Logger members of a class be declared as static?


Advantages of declaring loggers as static:
  1. common and well-established idiom
  2. less CPU overhead: loggers are retrieved and assigned only once, at hosting class initialization
  3. less memory overhead: logger declaration will consume one reference per class

Disadvantages of declaring loggers as static:
  1. For libraries shared between applications, not possible to take advantage of repository selectors. It should be noted that if the SLF4J binding and the underlying API ship with each application (not shared between applications), then each application will still have its own logging environment.
  2. not IOC-friendly

Advantages of declaring loggers as instance variables:
  1. Possible to take advantage of repository selectors even for libraries shared between applications. However, repository selectors only work if the underlying logging system is logback-classic. Repository selectors do not work for the SLF4J+log4j combination.
  2. IOC-friendly

Disadvantages of declaring loggers as instance variables:
  1. Less common idiom than declaring loggers as static variables
  2. higher CPU overhead: loggers are retrieved and assigned for each instance of the hosting class
  3. higher memory overhead: logger declaration will consume one reference per instance of the hosting class
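A minimal sketch of the two idioms using SLF4J (the class names are illustrative):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    class StaticLoggerHolder {
        // One logger reference shared by all instances of this class,
        // retrieved once when the class is initialized
        private static final Logger LOG = LoggerFactory.getLogger(StaticLoggerHolder.class);
    }

    class InstanceLoggerHolder {
        // One logger reference per instance, retrieved in the context
        // of whichever application created the instance
        private final Logger log = LoggerFactory.getLogger(getClass());
    }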

Explanation

Static logger members cost a single variable reference for all instances of the class whereas an instance logger member will cost a variable reference for every instance of the class. For simple classes instantiated thousands of times there might be a noticeable difference.
However, more recent logging systems, e.g log4j or logback, support a distinct logger context for each application running in the application server. Thus, even if a single copy of log4j.jar or logback-classic.jar is deployed in the server, the logging system will be able to differentiate between applications and offer a distinct logging environment for each application.
More specifically, each time a logger is retrieved by invoking LoggerFactory.getLogger() method, the underlying logging system will return an instance appropriate for the current application. Please note that within the same application retrieving a logger by a given name will always return the same logger. For a given name, a different logger will be returned only for different applications.
If the logger is static, then it will only be retrieved once, when the hosting class is loaded into memory. If the hosting class is used in only one application, there is not much to be concerned about. However, if the hosting class is shared between several applications, then all instances of the shared class will log into the context of the application which happened to first load the shared class into memory - hardly the behavior expected by the user.
Unfortunately, for non-native implementations of the SLF4J API, namely with slf4j-log4j12, log4j's repository selector will not be able to do its job properly because slf4j-log4j12, a non-native SLF4J binding, will store logger instances in a map, short-circuiting context-dependent logger retrieval. For native SLF4J implementations, such as logback-classic, repository selectors will work as expected.


Summary
In summary, declaring logger members as static variables requires less CPU time and has a slightly smaller memory footprint. On the other hand, declaring logger members as instance variables requires more CPU time and has a slightly higher memory overhead. However, instance variables make it possible to create a distinct logger environment for each application, even for loggers declared in shared libraries. Perhaps more important than the previously mentioned considerations, instance variables are IOC-friendly whereas static variables are not.

Check link for more info.

How to get directory size in Linux?

 du -sm <dir_name>

Wednesday, August 29, 2012

What is Liquibase ?

1. Liquibase is an open source database-independent library for tracking, managing and applying database changes.
2. All changes to the database are stored in XML files and identified by a combination of an "id" and "author" tag as well as the name of the file itself.
3. A list of all applied changes is stored in each database which is consulted on all database updates to determine what new changes need to be applied.
4. Liquibase executes changes based on this XML file to handle different revisions of database structures and data.
5. When you first run a changelog, LiquiBase manages those changelogs by adding two tables into your database.
databasechangelog: maintains the database changes that were run.
databasechangeloglock: ensures that two machines don't attempt to modify the database at one time.
6. Limitations do exist: it will not export triggers, stored procedures, functions, or packages.


Sample changeLog file:

The following is an example that creates a table EMPLOYEE and adds columns to it.


<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
      
    <changeSet author="waheed" id="123456789-1">
        <createTable tableName="EMPLOYEE">
            <column autoIncrement="true" name="EMPLOYEE_ID" type="BIGINT">
                <constraints nullable="false" primaryKey="true" />
            </column>
            <column name="NAME" type="VARCHAR(255)" />
            <column name="GENDER" type="VARCHAR(2)" />
            <column name="COUNTRY" type="VARCHAR(255)" />
            <column name="ABOUT_YOU" type="VARCHAR(255)" />
        </createTable>
    </changeSet>
</databaseChangeLog>

How to integrate Liquibase with Spring and Hibernate ?

A sample tutorial on how to integrate Liquibase with Spring and Hibernate.

While writing this tutorial, I have added Javadoc in the code for better understanding, and I assume you already have a good knowledge of Spring and Hibernate. The main goal of this tutorial is to give an idea of how you can integrate Liquibase with Spring and Hibernate.

If you are new to Liquibase : Click Here

To integrate Liquibase into your project, you need the Liquibase jars, so download them before starting the project.

I have created an application named "SHLIntegration". The Structure of the project is as follows :

The dependencies are also listed here:

Lets start with Employee class :

1. Create an Employee class with getters/setters and add the proper JPA annotations, as below (the @Entity and @Table annotations map the class to the EMPLOYEE table).

  @Entity
  @Table(name = "EMPLOYEE")
  public class Employee {

    @Id
    @GeneratedValue
    @Column(name="EMPLOYEE_ID")
    private long id;

    @Column(name="NAME")
    private String name;
   
    @Column(name="GENDER")
    private String gender;
   
    @Column(name="COUNTRY")
    private String country;
   
    @Column(name="ABOUT_YOU")
    private String aboutYou;

.....
....
.... // getter setter of each object

}

2. Create liquibase file i,e db-changelog.xml file

<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
       
    <changeSet author="waheed" id="123456789-1">
        <createTable tableName="EMPLOYEE">
            <column autoIncrement="true" name="EMPLOYEE_ID" type="BIGINT">
                <constraints nullable="false" primaryKey="true" />
            </column>
            <column name="NAME" type="VARCHAR(255)" />
            <column name="GENDER" type="VARCHAR(2)" />
            <column name="COUNTRY" type="VARCHAR(255)" />
            <column name="ABOUT_YOU" type="VARCHAR(255)" />
        </createTable>
    </changeSet>
</databaseChangeLog>




3. Add the Liquibase bean to your Spring bean definition file:

    <bean id="LiquibaseUpdater" class="liquibase.integration.spring.SpringLiquibase">
        <property name="dataSource" ref="dataSource" />
        <property name="changeLog" value="classpath:db-changelog.xml" />
    </bean>


 along with the other beans which are required for Spring/Hibernate. Check the bean file.


The complete tutorial : 
https://github.com/abdulwaheed18/SHLIntegration


Please feel free to comment or drop me a mail with any suggestions/feedback.
Email : waheedtechblog@gmail.com

Some ANT Tasks

1. How to build a project from another build file.


<target name="project2"
            description="Builds project2 project, required depedency">
        <!-- Build project2 first  -->

        <subant target="dist" verbose="yes" inheritall="false">
            <filelist dir="../com.waheed.project2"
                      files="build.xml" />
        </subant>
    </target>

 

2. How to read SVN revision and write into some file


<loadfile property="revision" srcFile="./.svn/entries">
        <filterchain>
            <headfilter skip="3" lines="1"/>
                  </filterchain>
        </loadfile>
            <tstamp>
                <format property="date" pattern="dd/MM/yyyy hh:mm:ss" />
            </tstamp>
            <echo append="true" file="<FILE_NAME>" >revision=${revision}${line.separator}</echo>


3. How to generate Keystore


    <target name="keystore">
        <delete file="workdir/keystore" failonerror="false"/>
        <genkey keystore="./keystore"
                alias="jetty"
                storepass="password"
                keypass="password"
                keyalg="RSA"
                validity="10">
            <dname>
                <param name="CN" value="NAME" />
                <param name="OU" value="NAME_OF_ORGANIZATION_UNIT" />
                <param name="O" value="ORGANIZATION_NAME" />
                <param name="C" value="COUNTRY_NAME" />
            </dname>
        </genkey>
    </target>


4. How to get current time

<target name="time">
        <tstamp>
            <format property="build-time" pattern="yyyy-MM-dd-HH-mm" />
        </tstamp>
        <echo>${build-time}</echo>
    </target>


5 . How to create jar with manifest


 <jar jarfile="${dist}/name_of_jar.jar"
             basedir="${build}">
            <manifest>
                <!-- Who is building this jar? -->
                <attribute name="Built-By" value="${user.name}" />
                <!-- Information about the program itself -->
                <attribute name="Implementation-Vendor"
                           value="Implementation-Vendor" />
                <attribute name="Implementation-Title"
                           value="Implementation-Title" />
                <attribute name="Implementation-Version" value="1.0" />
                <!-- details -->
                <section name="PATH_TO_MAIN_CLASS">
                    <attribute name="Sealed" value="false" />
                </section>
            </manifest>
        </jar>


6. How to compile source


 <!-- Compile the java code from ${src} into ${build} -->

        <javac destdir="bin" debug="true">
            <src path="src" />
            <classpath>
                <pathelement location="../dependency/bin" />
                <fileset dir="../lib">
                    <include name="*.jar" />
                </fileset>
            </classpath>
        </javac>


7. How to compile source with dependency class path


 <path id="class.path">
        <fileset dir="../lib/folder1">
            <include name="*.jar" />
        </fileset>
        <fileset dir="../lib/folder2">
             <include name="*.jar" />
        </fileset>
    </path>


 <!-- Compile the java code from ${src} into ${build} -->
        <javac destdir="bin" debug="true">
            <src path="src" />
            <classpath refid="class.path" />
        </javac>



8.  How to build Zip file


    <property name="product.name" value="product1" />
    <property name="product.version" value="1.0" />

<zip basedir="${dist}" destfile="${dist}/${product.name}-${product.version}.zip">
        </zip>


9. How to build tar file

    <property name="product.name" value="product1" />
    <property name="product.version" value="1.0" />

<exec executable="tar" dir="${dist}">
            <arg value="czf" />
            <arg value="${dist}/${product.name}-${product.version}.tgz" />
            <arg value="." />
        </exec>

10 . How to check OS 


<condition property="isWindows">
        <os family="windows" />
    </condition>

    <condition property="isUnix">
        <os family="unix" />
    </condition>


    <target name="dist.windows" if="isWindows" depends="a">
<!-- Task to be done -->
     </target>

    <target name="dist.unix" if="isUnix" depends="b">

<!-- Task to be done -->
    </target>


11.  How to set permission to file


 <chmod perm="500">
            <fileset dir="${dist}">
                <include name="**/*.sh" />
                <include name="jsvc" />
            </fileset>
        </chmod>


12. How to read a property file

Suppose I have the following data in my filename1.properties file and I want to read it and write it to another file, say filename2.properties.

filename1.properties
REVISION 777



     <property file="${dist}/filename1.properties" prefix="version"/>
       <echo file="${dist}/filename2.properties" append="true">revision ${version.REVISION}${line.separator}</echo>

Spring MVC tutorial

Before starting, I assume you have a basic idea about Java, Spring, and Spring MVC.

For Spring MVC : http://waheedtechblog.blogspot.in/2012/08/spring-mvc.html

In this tutorial, I will just tell you the basic things that you need to get started with Spring MVC.

Step 1 : Create a class 


@Controller
public class HelloWorld {
 
    @RequestMapping("/hello")
    public String helloWorld() {
         return = "Hello World, Spring 3.0!";
    }
}


1. The class HelloWorld has the annotations @Controller and @RequestMapping("/hello"). When Spring scans this class, it will recognize this bean as being a Controller bean for processing requests.
2. The @RequestMapping annotation tells Spring that this Controller should process all requests beginning with /hello in the URL path.

Step 2. Mapping Spring MVC in WEB.xml

The entry point of Spring 3.0 MVC is the DispatcherServlet. DispatcherServlet is a normal servlet class which extends the HttpServlet base class, so we need to configure it in web.xml.

<!-- ========================== -->
    <!-- Spring MVC: Core -->
    <!-- ========================== -->

    <servlet>
        <servlet-name>spring</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>spring</servlet-name>
        <url-pattern>/rest/*</url-pattern>
    </servlet-mapping>

<!-- This loads the root webapp Spring context -->
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:beans.xml</param-value>
    </context-param>
   
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>

    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>

Note that I have mapped the /rest/* URL pattern to the DispatcherServlet. Thus any URL with the /rest/* pattern will call the Spring MVC front controller.

                  The REST call would be http://ip:port/rest/hello

If your controller class has dependencies which you have defined in your Spring context file, then you have to load that file via the <context-param> entry shown above.
Once the DispatcherServlet is initialized, it will look for a file named [servlet-name]-servlet.xml in the WEB-INF folder of the web application. I have created the file named spring-servlet.xml.


3. Spring Configuration file

Create a file spring-servlet.xml in WEB-INF folder and copy following content into it.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:task="http://www.springframework.org/schema/task"
    xmlns:tx="http://www.springframework.org/schema/tx" xmlns:context="http://www.springframework.org/schema/context"
    xmlns:mvc="http://www.springframework.org/schema/mvc" xmlns:util="http://www.springframework.org/schema/util"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xsi:schemaLocation="http://www.springframework.org/schema/task
    http://www.springframework.org/schema/task/spring-task-3.0.xsd
        http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd
        http://www.springframework.org/schema/tx
        http://www.springframework.org/schema/tx/spring-tx-3.0.xsd
        http://www.springframework.org/schema/mvc
        http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
        http://www.springframework.org/schema/util
        http://www.springframework.org/schema/util/spring-util-2.0.xsd
        http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
        ">

    <!-- ========================== -->
    <!-- Spring MVC: Core -->
    <!-- ========================== -->

    <context:annotation-config />
    <mvc:annotation-driven />
    <mvc:default-servlet-handler />

    <!-- class name of the controller or If you have package use component-scan -->
    <bean class="com.waheed.spring.hibernate.HelloWorld" />

 </beans>

The bean declaration at the end allows Spring to load the controller component. This will load our HelloWorld class.

Congratulations!!! You are done here...

Download source code : https://github.com/abdulwaheed18/SpringMVC-Hibernate-Integration

Spring MVC

Spring MVC helps in building flexible and loosely coupled web applications. The Model-View-Controller design pattern helps in separating the business logic, presentation logic, and navigation logic. Models are responsible for encapsulating the application data. The Views render a response to the user with the help of the model object. Controllers are responsible for receiving the request from the user and calling the back-end services.


When a request is sent to the Spring MVC Framework the following sequence of events happen.
  • The DispatcherServlet first receives the request.
  • The DispatcherServlet consults the HandlerMapping and invokes the Controller associated with the request.
  • The Controller processes the request by calling the appropriate service methods and returns a ModelAndView object to the DispatcherServlet. The ModelAndView object contains the model data and the view name.
  • The DispatcherServlet sends the view name to a ViewResolver to find the actual View to invoke.
  • Now the DispatcherServlet will pass the model object to the View to render the result.
  • The View with the help of the model data will render the result back to the user.
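A minimal controller sketch that returns a ModelAndView (the class, view name, and model attribute are illustrative; the configured ViewResolver maps "welcome" to an actual view):

    import org.springframework.stereotype.Controller;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.servlet.ModelAndView;

    @Controller
    public class WelcomeController {
        @RequestMapping("/welcome")
        public ModelAndView welcome() {
            // "welcome" is the logical view name; "message" becomes model data for that view
            return new ModelAndView("welcome", "message", "Hello from Spring MVC");
        }
    }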

The new Servlet
•DispatcherServlet requests are mapped to @Controller methods
–@RequestMapping annotation used to define mapping rules
–Method parameters used to obtain request input
–Method return values used to generate responses
•Simplest possible @Controller

@Controller
public class HelloController {
    @RequestMapping("/")
    public @ResponseBody String hello() {
        return "Hello World";
    }
}

Mapping Requests
•By path
–@RequestMapping("path")
•By HTTP method
–@RequestMapping("path", method=RequestMethod.GET)
–POST, PUT, DELETE, OPTIONS and TRACE are also supported
•By presence of query parameter
–@RequestMapping("path", method=RequestMethod.GET, params="foo")
–Negation also supported: params={"foo","!bar"}
•By presence of request header
–@RequestMapping("path", headers="content-type=text/*")
–Negation also supported

For more details : Click Here

For tutorial: http://waheedtechblog.blogspot.in/2012/08/spring-mvc-tutorial.html
 



Sending Email Via JavaMail API Example

For the last month, my internet device has been missing from my terrace. So mostly I use my mobile to access my mail, but accessing mail via mobile is very irritating because of the slow bandwidth. To send one single mail I have to wait until the mailbox opens.

This is a very simple way to send mail without opening your account. :)

This example shows you how to use the JavaMail API to send an email via the Gmail SMTP server.

You need the mail.jar library to run this code.


import java.util.Properties;

import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.PasswordAuthentication;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

/**
 * @author abdul
 *
 */
public class SendEmail {

    public static void main(String[] args) {
       
        final String username="YOUR_USER_NAME"; //abdulwaheed18
        final String password="YOUR_PASSWORD";   //*******

        final String to = "RECIPIENT_EMAIL"; // abdulwaheed143@yahoo.co.in
        final String from = "YOUR_EMAIL"; // abdulwaheed18@gmail.com

       
        Properties props = new Properties();
        props.put("mail.smtp.host", "smtp.gmail.com");
        props.put("mail.smtp.socketFactory.port", "465");
        props.put("mail.smtp.socketFactory.class",
        "javax.net.ssl.SSLSocketFactory");
        props.put("mail.smtp.auth", "true");
        props.put("mail.smtp.port", "465");

        Session session = Session.getDefaultInstance(props,
                new javax.mail.Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(username,password);
            }
        });

        try {
            Message message = new MimeMessage(session);
            message.setFrom(new InternetAddress(from));
            message.setRecipients(Message.RecipientType.TO,
                    InternetAddress.parse(to));
            message.setSubject("Hi");
            message.setText("Hi User,\n Surprise Message.\n Regards,\nWaheed");
            Transport.send(message);
            System.out.println("Message sent Successfully");
        } catch (MessagingException e) {
            throw new RuntimeException(e);
        }
    }
}


Gmail SMTP Detail : http://support.google.com/mail/bin/answer.py?hl=en&answer=13287
