Tag Archives: bigdata

Apache Storm

Architecture / Components: Nimbus and Supervisor daemons are designed to be fail-fast (the process self-destructs whenever any unexpected situation is encountered) and stateless (all state is kept in ZooKeeper or on disk). Nimbus and Supervisor daemons must be run under supervision using … Continue reading
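Because the daemons are fail-fast, the supervision tool's job is simply to restart them whenever they exit. As one minimal sketch, assuming supervisord is the chosen supervisor (the install path and user below are illustrative, not from the original post):

```ini
; Illustrative supervisord stanza for Nimbus -- paths and user are assumptions
[program:storm-nimbus]
command=/opt/storm/bin/storm nimbus
user=storm
autostart=true
autorestart=true   ; restart the fail-fast daemon whenever it self-destructs
```

A matching `[program:storm-supervisor]` stanza would run `storm supervisor` the same way; since all state lives in ZooKeeper or on disk, a restarted daemon picks up where it left off.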

Posted in Storm, Uncategorized

Spark -> Parquet, ORC

Create Java RDDs:
String filePath = "hdfs://<HDFSName>:8020/user…";
String outFile = "hdfs://<HDFSName>:8020/user…";
SparkConf conf = new SparkConf().setAppName("appname");
JavaSparkContext jsc = new JavaSparkContext(conf);
JavaRDD<String> inFileRDD = jsc.textFile(filePath);
Remove initial empty lines from a file:
import org.apache.spark.api.java.function.Function2;
Function2 removeSpace = new Function2<Integer, Iterator<String>, Iterator<String>>(){ … Continue reading
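The truncated `Function2<Integer, Iterator<String>, Iterator<String>>` above matches the signature that `JavaRDD.mapPartitionsWithIndex` expects: a partition index plus an iterator over that partition's lines. As a hedged sketch of what the cut-off body likely does (the class and method names here are mine, not from the original), the core logic can be written against plain Java iterators so it runs without a Spark cluster:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class RemoveLeadingEmpty {
    // Skip empty lines only at the very start of partition 0, since the file's
    // head lands in the first partition; Spark would invoke this per partition
    // via rdd.mapPartitionsWithIndex(fn, false).
    static Iterator<String> call(Integer partitionIndex, Iterator<String> lines) {
        List<String> out = new ArrayList<>();
        boolean skipping = (partitionIndex == 0); // other partitions pass through untouched
        while (lines.hasNext()) {
            String line = lines.next();
            if (skipping && line.trim().isEmpty()) {
                continue; // drop a leading empty line
            }
            skipping = false; // first non-empty line ends the skip phase
            out.add(line);
        }
        return out.iterator();
    }

    public static void main(String[] args) {
        List<String> collected = new ArrayList<>();
        call(0, Arrays.asList("", "  ", "a", "", "b").iterator())
                .forEachRemaining(collected::add);
        System.out.println(collected); // prints [a, , b] -- inner empty line survives
    }
}
```

Note that only empty lines before the first non-empty line are dropped; empty lines in the middle of the file are preserved.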

Posted in spark, Uncategorized

Tips: Spark

Configuration: Pass configuration values from a property file. spark-submit supports loading configuration values from a file; it reads whitespace-delimited key/value pairs from this file. Customize the exact location of the file using the --properties-file flag to spark-submit: $ bin/spark-submit \ --class … Continue reading
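To make the whitespace-delimited format concrete, here is a minimal sketch; the filename, class name, and jar are illustrative, not from the original post:

```
# my-spark.conf -- whitespace-delimited key/value pairs (filename is an assumption)
spark.master          yarn
spark.executor.memory 512m
spark.app.name        MyApp
```

```shell
# point spark-submit at the file explicitly (class/jar names are hypothetical)
$ bin/spark-submit --properties-file my-spark.conf \
    --class com.example.MyApp myapp.jar
```

Values passed directly on the spark-submit command line take precedence over those read from the properties file.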

Posted in spark, Tips

Developer’s template: MapReduce (Java)

The Developer’s template series is intended to ease the life of big data developers in their application development and leave behind the headache of starting from scratch. Here is a MapReduce Java program with its pom file. Prerequisites: Hadoop cluster, Eclipse, Maven, Java … Continue reading
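The pom file mentioned above would, at minimum, pull in the Hadoop client APIs. A hedged fragment (the version shown is illustrative and should match your cluster):

```xml
<!-- illustrative pom.xml dependency; pick the version shipped with your cluster -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.7.3</version>
  <scope>provided</scope>  <!-- the cluster supplies these jars at runtime -->
</dependency>
```

Marking the dependency `provided` keeps the Hadoop jars out of your application jar, since they are already on the cluster's classpath.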

Posted in Java-Maven-Hadoop

Developer’s template: Spark

The Developer’s template series is intended to ease the life of big data developers in their application development and leave behind the headache of starting from scratch. The following program helps you develop and execute an application using Apache Spark with Java. Prerequisites: Hadoop … Continue reading
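For the Maven side of a Java Spark application, the build would need the Spark core dependency. A hedged fragment, assuming the Spark 1.5.x / Scala 2.10 line that Hortonworks HDP 2.3.2 (mentioned elsewhere on this blog) ships with; adjust the artifact suffix and version to your distribution:

```xml
<!-- illustrative pom.xml dependency; version/Scala suffix must match your cluster -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.5.2</version>
  <scope>provided</scope>  <!-- spark-submit supplies Spark's jars at runtime -->
</dependency>
```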

Posted in Java-Maven-Hadoop, spark

Tips: Spark

Execute a Spark Pi: From the Spark directory (usually /usr/hdp/current/spark-client in the case of Hortonworks HDP 2.3.2), run:
./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 lib/spark-examples*.jar 10
Stay tuned…

Posted in spark, Tips

Tips: Sqoop

Override cluster properties. E.g., disable compression for Sqoop output when compression is turned on in the cluster:
sqoop import -Dmapred.job.queue.name=default \
-Dmapreduce.map.output.compress=false \
-Dmapreduce.output.fileoutputformat.compress=false \
--driver com.ibm.db2.jcc.DB2Driver --connect jdbc:db2://<host>/<db> \
--username <user> --password <pwd> \
--table <db2 table> --target-dir <hdfs path> \ … Continue reading

Posted in sqoop, Tips