Thursday, December 24, 2015

Hibernate 5 Maven example

Hello!

In this post we will be looking into the basic setup of "Hibernate 5".
Hibernate 5 was released recently, and to explore its features we set up a workspace with the libraries needed for "Hibernate 5".

For this setup we will be using:
  1. Eclipse Mars.1 Release (4.5.1)
  2. Maven
  3. JDK 1.8
  4. Hibernate 5
  5. MySQL
You can also download or clone the complete workspace from GitHub: https://github.com/rahulwagh/hibernate5.git

If you are not able to clone the repository, then download the complete workspace and import it into Eclipse using the following steps:

File -> Import -> General -> Existing Projects into Workspace


Maven Dependencies for Hibernate 5 

  
  <dependency>
   <groupId>org.hibernate</groupId>
   <artifactId>hibernate-core</artifactId>
   <version>5.0.2.Final</version>
  </dependency>
  <dependency>
   <groupId>mysql</groupId>
   <artifactId>mysql-connector-java</artifactId>
   <version>5.1.37</version>
  </dependency>
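For reference, here is a minimal pom.xml sketch showing where these dependencies sit. The project coordinates below are illustrative placeholders, not necessarily the ones used in the GitHub repository:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <!-- Placeholder project coordinates; use your own -->
  <groupId>com.hibernate.tutorial</groupId>
  <artifactId>hibernate5-example</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-core</artifactId>
      <version>5.0.2.Final</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.37</version>
    </dependency>
  </dependencies>
</project>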
In this post we will insert an employee record into the "employee" table which we have created (the SQL script for the table is given at the end of the post).
The next step is to create the "Employee" @Entity:

Employee.java
package com.hibernate.tutorial.entity;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "employee")
public class Employee {

 @Id
 @Column(name = "id")
 Long id;

 @Column(name="employee_name")
 String employeeName;

 @Column(name="employee_address")
 String employeeAddress;

 public Employee(Long id, String employeeName, String employeeAddress) {
  this.id = id;
  this.employeeName = employeeName;
  this.employeeAddress = employeeAddress;
 }

 public Employee() {

 }

 public Long getId() {
  return id;
 }

 public void setId(Long id) {
  this.id = id;
 }

 public String getEmployeeName() {
  return employeeName;
 }

 public void setEmployeeName(String employeeName) {
  this.employeeName = employeeName;
 }

 public String getEmployeeAddress() {
  return employeeAddress;
 }

 public void setEmployeeAddress(String employeeAddress) {
  this.employeeAddress = employeeAddress;
 }

}
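Optionally, since the "employee" table below defines id as AUTO_INCREMENT, you could let MySQL generate the primary key instead of setting it manually. A minimal sketch of that variation (this annotation is not in the original entity):

import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;

 @Id
 @GeneratedValue(strategy = GenerationType.IDENTITY) // let MySQL's AUTO_INCREMENT assign the id
 @Column(name = "id")
 Long id;

With this mapping you would drop the emp.setId(...) call in the test class and let the database assign the id when the entity is saved.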

Next we need to write the code that inserts a record into the table. Refer to the following code, where we create the following instances:

  1. SessionFactory 
  2. Session
  3. Transaction
package com.hibernate.tutorial.mainclass;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

import com.hibernate.tutorial.entity.Employee;

public class Hibernate5InsertTest {

 public static void main(String[] args) {
  // Build the SessionFactory from hibernate.cfg.xml on the classpath
  SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();

  Session session = sessionFactory.openSession();
  Transaction tx = session.beginTransaction();

  // Create and persist a sample employee record
  Employee emp = new Employee();
  emp.setId(1L);
  emp.setEmployeeName("Rahul Wagh");
  emp.setEmployeeAddress("Indore, India");
  session.save(emp);

  tx.commit();
  session.close();
  sessionFactory.close(); // release the connection pool so the JVM can exit
 }
}
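The call to new Configuration().configure() reads hibernate.cfg.xml from the classpath (typically src/main/resources). A minimal sketch, assuming a local MySQL database named "test" and placeholder credentials (adjust the URL, username, and password for your environment):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
        "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
        "http://www.hibernate.org/dtd/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
  <session-factory>
    <!-- Database name, username and password below are placeholders -->
    <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
    <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/test</property>
    <property name="hibernate.connection.username">root</property>
    <property name="hibernate.connection.password">password</property>
    <property name="hibernate.dialect">org.hibernate.dialect.MySQL5Dialect</property>
    <property name="hibernate.show_sql">true</property>
    <!-- Register the annotated entity -->
    <mapping class="com.hibernate.tutorial.entity.Employee"/>
  </session-factory>
</hibernate-configuration>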


SQL Script for the "employee" table:

CREATE TABLE employee(
  id INT NOT NULL AUTO_INCREMENT,
  employee_name VARCHAR(100) NOT NULL,
  employee_address VARCHAR(40) NOT NULL,
  PRIMARY KEY (id)
);
Hope this article helps you set up your Hibernate 5 workspace.

For any issues, please post your comments and leave your feedback.

Monday, September 7, 2015

Hadoop File Already Exists Exception : org.apache.hadoop.mapred.FileAlreadyExistsException


Hello folks!
The aim behind writing this article is to make developers aware of an issue they might face while developing MapReduce applications. The error "org.apache.hadoop.mapred.FileAlreadyExistsException" is one of the most basic exceptions that almost every beginner faces while writing their first MapReduce program.

Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://localhost:9000/home/facebook/crawler-output already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:269)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:142)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at com.wagh.wordcountjob.WordCount.main(WordCount.java:68)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Let's start from scratch.

To run a MapReduce job you have to run a command similar to the one below:

 $hadoop jar {name_of_the_jar_file.jar} {package_name_of_jar} {hdfs_file_path_on_which_you_want_to_perform_map_reduce} {output_directory_path}

Example: hadoop jar facebookCrawler.jar com.wagh.wordcountjob.WordCount /home/facebook/facebook-cocacola-page.txt /home/facebook/crawler-output

 Just pay attention to the {output_directory_path}, i.e. /home/facebook/crawler-output. If this directory already exists in HDFS, then Hadoop will throw the exception "org.apache.hadoop.mapred.FileAlreadyExistsException".

 Solution: Always specify a new output directory name at run time (Hadoop will create the directory automatically for you; you do not need to worry about creating the output directory yourself).

 As mentioned in the above example, the same command can be run in the following manner: "hadoop jar facebookCrawler.jar com.wagh.wordcountjob.WordCount /home/facebook/facebook-cocacola-page.txt /home/facebook/crawler-output-1"

 So the output directory {crawler-output-1} will be created at runtime by Hadoop. Alternatively, you can delete the stale output directory before re-running the job, as shown in the sketch below.
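As a hedged sketch (the class and method names below are illustrative, not part of the original WordCount job), the driver can also remove a stale output directory programmatically with the HDFS FileSystem API before submitting the job:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OutputDirCleaner {

 // Deletes the given output directory (recursively) if it already exists,
 // so that FileOutputFormat.checkOutputSpecs() does not fail.
 public static void deleteIfExists(Configuration conf, String outputDir) throws IOException {
  FileSystem fs = FileSystem.get(conf);
  Path outputPath = new Path(outputDir);
  if (fs.exists(outputPath)) {
   fs.delete(outputPath, true); // 'true' = recursive delete
  }
 }
}

You would call OutputDirCleaner.deleteIfExists(conf, args[1]) in the driver before FileOutputFormat.setOutputPath(job, new Path(args[1])). Use this with care, since it silently removes the results of the previous run.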

Tuesday, July 28, 2015

How to extend a Facebook access token



To extend the Facebook token, go to the URL https://developers.facebook.com/tools/access_tokens and look for the application name which you have created. In our case the application name is “DummyTestApplication”.

 

Once you see the application name, just click on the link “need to grant permissions” under the User Token section. After clicking on the link you will get the following message; then click on Continue.



After you click on the Continue option you will be able to see the token in the User Token section of the page. Please refer to the screenshot below:


Now you have the user token, which is only valid for about an hour. You can test your token's validity at https://developers.facebook.com/tools/debug/.
Just copy the access token, paste it there, and click on the Debug button.


We need to extend this token for a longer duration. There are two ways to extend the access token:
- Click on the "Extend Access Token" button available just below the Access Token Debugger screen.
- Or use the exchange URL (a sketch is given below). You can get the App ID and App Secret from the application page which you have created. Once you submit the URL, you will get the new access_token in the browser, and you can use this access token.
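As a minimal sketch, assuming the standard Graph API long-lived token exchange (replace {app-id}, {app-secret}, and {short-lived-token} with your own values), the exchange URL generally looks like:

https://graph.facebook.com/oauth/access_token?grant_type=fb_exchange_token&client_id={app-id}&client_secret={app-secret}&fb_exchange_token={short-lived-token}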