Tag: apache spark

Quick introduction to Apache Livy
Apache Livy is a service that enables access to a Spark cluster over a REST interface. It enables easy submission of Spark jobs or snippets of Spark code, synchronous or asynchronous result retrieval, as well as Spark context management, all via a simple REST interface or an RPC client library. There is…
Read More →
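As a hedged sketch of what that REST interface looks like in practice (the host, port and session id below are assumptions; Livy listens on port 8998 by default): create an interactive session, then submit a snippet of code to it.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class LivyQuickStart {

    // Posts a JSON body to a Livy endpoint and returns the HTTP status code.
    static int postJson(String endpoint, String body) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }
        return conn.getResponseCode();
    }

    public static void main(String[] args) throws Exception {
        String livy = "http://localhost:8998"; // assumed Livy host and port
        // Start an interactive Scala session.
        postJson(livy + "/sessions", "{\"kind\": \"spark\"}");
        // Submit a statement; session id 0 is assumed here for brevity.
        postJson(livy + "/sessions/0/statements",
                "{\"code\": \"sc.parallelize(1 to 10).sum()\"}");
    }
}

A real client would parse the session id out of the first response and poll the statement URL for the result, as described in the Livy REST documentation.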
Apache Spark introduced the Dataset API, which unified the programming experience, improving performance and reducing the learning curve for Spark developers. This is a great link to get familiar with Dataset. If the link doesn’t work when you are reading this post, Google is your friend. I want to save time and get…
Read More →
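To make that concrete, a minimal sketch of the Dataset API in Java (the app name and sample values are illustrative, not from the post):

import java.util.Arrays;

import org.apache.spark.api.java.function.FilterFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class DatasetQuickLook {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("DatasetQuickLook").getOrCreate();

        // A Dataset is typed like an RDD but optimized by Catalyst like a DataFrame.
        Dataset<String> words = spark.createDataset(
                Arrays.asList("spark", "dataset", "api"), Encoders.STRING());

        // Functional and relational operations live on the same abstraction.
        words.filter((FilterFunction<String>) w -> w.length() > 3).show();

        spark.stop();
    }
}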
Spark 2.0 provides a more mature ecosystem and a unified data abstraction API, and sets some new benchmarks in performance, along with some non-backward-compatible changes. Here we try to see some important things to learn/remember before we migrate our existing Spark projects to Spark 2.0. The following is not a complete list of points but presents…
Read More →
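One such point, sketched here because it touches almost every program: in 2.0 SparkSession subsumes SQLContext and HiveContext, and the Java DataFrame type becomes Dataset<Row> (the file name below is an assumption):

import org.apache.spark.SparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class MigrationEntryPoint {

    public static void main(String[] args) {
        // Spark 1.x needed a SparkConf, a JavaSparkContext and an SQLContext;
        // Spark 2.0 replaces that trio with a single entry point.
        SparkSession spark = SparkSession.builder().appName("MigrationEntryPoint").getOrCreate();

        // The underlying SparkContext is still reachable for legacy code.
        SparkContext sc = spark.sparkContext();

        // DataFrame no longer exists as a Java class; it is now Dataset<Row>.
        Dataset<Row> df = spark.read().json("input.json");
        df.show();

        spark.stop();
    }
}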
Apache Spark is great for processing JSON files: you can right away create DataFrames and start issuing SQL queries against them by registering them as temporary tables. This works very well when the JSON records are one per line, where typically each line represents a JSON object. In such a happy path JSON can be…
Read More →
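A minimal sketch of that happy path against the Spark 1.x API these posts use (the file name and the name/age fields are assumptions):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class JsonSql {

    public static void main(String[] args) {
        JavaSparkContext ctx = new JavaSparkContext(new SparkConf().setAppName("JsonSql"));
        SQLContext sqlContext = new SQLContext(ctx);

        // Each line of the input file is expected to be a complete JSON object.
        DataFrame people = sqlContext.read().json("people.json");

        // Register the DataFrame as a temporary table and query it with SQL.
        people.registerTempTable("people");
        sqlContext.sql("SELECT name FROM people WHERE age >= 18").show();

        ctx.stop();
    }
}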
Relation/Set Theory transformations
We will be playing with the following program to understand the three important set theory based transformations.

package com.mishudi.learn.spark.dataframe;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class RelationalOrSetTheoryTransformations {

    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("RelationalOrSetTheoryTransformations");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        // …

Read More →
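The teaser cuts off before the transformations themselves. Assuming the three are union, intersection and difference (the usual set operations on Spark 1.x DataFrames), a hedged sketch of how main() might continue, with illustrative data:

// Continues inside main() above; additionally needs imports for Row, RowFactory,
// DataTypes and StructType from org.apache.spark.sql and org.apache.spark.sql.types.
SQLContext sqlContext = new SQLContext(ctx);

StructType schema = DataTypes.createStructType(Arrays.asList(
        DataTypes.createStructField("id", DataTypes.IntegerType, false),
        DataTypes.createStructField("value", DataTypes.StringType, false)));

JavaRDD<Row> batch1 = ctx.parallelize(Arrays.asList(
        RowFactory.create(1, "a"), RowFactory.create(2, "b")));
JavaRDD<Row> batch2 = ctx.parallelize(Arrays.asList(
        RowFactory.create(2, "b"), RowFactory.create(3, "c")));

DataFrame df1 = sqlContext.createDataFrame(batch1, schema);
DataFrame df2 = sqlContext.createDataFrame(batch2, schema);

df1.unionAll(df2).show();  // union: all rows from both sides (1.x keeps duplicates)
df1.intersect(df2).show(); // intersection: rows present in both DataFrames
df1.except(df2).show();    // difference: rows in df1 that are not in df2

These mirror UNION ALL, INTERSECT and EXCEPT in SQL; in Spark 2.x, unionAll was deprecated in favor of union.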
Apache Spark DataFrame
So, let's recall RDD (Resilient Distributed Dataset). It is an immutable distributed collection of objects, exposed as an interface. OK! We have also seen how to apply transformations in the previous post. They are amazing, as they give us all the flexibility to deal with almost any kind of data: unstructured, semi-structured and structured…
Read More →
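Where a DataFrame differs is that it pairs that distributed collection with a schema, enabling column-level operations; a minimal sketch (the file and column names are assumptions):

// Sketch only: a DataFrame adds a schema on top of the distributed collection.
SQLContext sqlContext = new SQLContext(new JavaSparkContext(new SparkConf().setAppName("DataFrameIntro")));

DataFrame people = sqlContext.read().json("people.json"); // schema inferred from the data
people.printSchema();                                     // structure a raw RDD never had
people.select("name").show();                             // column-level projection
people.groupBy("age").count().show();                     // declarative aggregation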
The Spark API allows you to write programs in Scala, Python, Java and R. Throughout, we will be working with Java 8. The following code snippet is a WordCount program written in Java. Open the Maven project created in the Setting up eclipse spark project post. In the package that you added while creating the project, create a new Java…
Read More →
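The teaser truncates before the snippet itself; a minimal Java 8 WordCount in the spirit the post describes (the package name, class name and paths are assumptions) might look like:

package com.mishudi.learn.spark;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class WordCount {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("WordCount");
        JavaSparkContext ctx = new JavaSparkContext(conf);

        JavaRDD<String> lines = ctx.textFile(args[0]);           // input path as first argument
        JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split(" "))) // split lines into words (Spark 1.x Iterable signature)
                .mapToPair(word -> new Tuple2<>(word, 1))        // pair each word with a count of 1
                .reduceByKey((a, b) -> a + b);                   // sum the counts per word

        counts.saveAsTextFile(args[1]);                          // output directory as second argument
        ctx.stop();
    }
}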
This is a simple exercise, and the following are the steps for setting up a Maven project in Eclipse. Create a new Maven project in Eclipse as shown below: from the Package Explorer view, go to New -> Other -> Maven -> select Maven Project -> fill in group id, artifact id and package name, and click Finish. You should…
Read More →
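The piece of configuration the project ultimately needs is the Spark dependency in pom.xml; a sketch, assuming the Spark 1.x / Scala 2.10 build these posts are written against (the version number is an assumption):

<!-- Hypothetical pom.xml fragment; adjust the artifact and version to your cluster. -->
<dependencies>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.6.0</version>
    </dependency>
</dependencies>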
If you haven’t read the previous article about MapReduce, I’d highly recommend reading it, because that will set a good foundation to appreciate Spark’s existence.
Apache Spark – Introduction
I want to get to the practical exercises quickly, and I think there are enough resources on the internet to explain the theoretical view of the framework…
Read More →