Author: Sumit Vyas

Introduction to Dataset API
Date: March 12, 2018 Categories: Apache Spark

Apache Spark introduced the Dataset API, which unified the programming experience, improved performance, and reduced the learning curve for Spark developers. This is a great link to get familiar with Datasets. If the link doesn’t work when you are reading this post, Google is your friend. I want to save time and get…

Read More →
Migrating to Spark 2.0
Date: October 4, 2017 Categories: Apache Spark

Spark 2.0 provides a more mature ecosystem and a unified data abstraction API, and sets some new benchmarks in performance, along with some non-backward-compatible changes. Here, we look at some important things to learn/remember before migrating existing Spark projects to Spark 2.0. The following is not a complete list of points but presents…

Read More →
Processing multiline JSON file – Apache Spark
Date: August 31, 2017 Categories: Apache Spark

Apache Spark is great for processing JSON files: you can create DataFrames right away and start issuing SQL queries against them by registering them as temporary tables. This works very well when the JSON strings are one per line, where each line typically represents a JSON object. In such a happy path, JSON can be…
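As a quick illustration of that "happy path" shape: a line-delimited JSON reader expects one complete object per line, so a pretty-printed (multiline) document has to be collapsed onto a single line first. A minimal sketch in plain Java (the class and method names here are illustrative, not from the post):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class MultilineJsonDemo {
    // Naively collapse a pretty-printed JSON document onto one line by
    // trimming each line and joining them. This sketch assumes no string
    // values contain embedded newlines; a real tool would use a JSON parser.
    public static String collapse(String prettyJson) {
        return Arrays.stream(prettyJson.split("\n"))
                     .map(String::trim)
                     .collect(Collectors.joining(""));
    }
}
```

After collapsing, each record sits on its own line, which is the form a line-delimited JSON reader handles without any extra handling.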

Read More →
Relational Set Theory Transformations
Date: August 12, 2017 Categories: Apache Spark

Relation/Set Theory transformations: we will be playing with the following program to understand the three important set-theory-based transformations.

package com.mishudi.learn.spark.dataframe;

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class RelationalOrSetTheoryTransformations {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("RelationalOrSetTheoryTransformations");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        // …
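For intuition about what the three set-style transformations compute, here is a sketch using plain java.util collections rather than the Spark API; the method names mirror Spark's union, intersection, and subtract, but this is an analogy, not the Spark implementation:

```java
import java.util.HashSet;
import java.util.Set;

public class SetTheoryDemo {
    // All elements appearing in either set (cf. Spark's union).
    public static Set<Integer> union(Set<Integer> a, Set<Integer> b) {
        Set<Integer> out = new HashSet<>(a);
        out.addAll(b);
        return out;
    }

    // Only elements appearing in both sets (cf. Spark's intersection).
    public static Set<Integer> intersection(Set<Integer> a, Set<Integer> b) {
        Set<Integer> out = new HashSet<>(a);
        out.retainAll(b);
        return out;
    }

    // Elements of a that do not appear in b (cf. Spark's subtract).
    public static Set<Integer> subtract(Set<Integer> a, Set<Integer> b) {
        Set<Integer> out = new HashSet<>(a);
        out.removeAll(b);
        return out;
    }
}
```

One difference worth remembering: Spark's union does not deduplicate the way a Set does, so duplicates survive unless you call distinct afterwards.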

Read More →
Apache Spark Transformation – DataFrame
Date: August 7, 2017 Categories: Apache Spark

Apache Spark Transformation – DataFrame. A DataFrame can be created from any structured dataset like JSON, a relational table, Parquet, or an existing RDD with a defined schema. The following program creates a DataFrame and queries it using SQL. Here is the JSON we will use to play with; copy the following lines into a file and save it in <SPARK_HOME>/bin…
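The actual JSON from the post is cut off above, but a line-delimited file of the following shape (one complete object per line, with entirely hypothetical records) is the kind of input Spark's JSON loading expects:

```json
{"name": "alice", "age": 30}
{"name": "bob", "age": 25}
```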

Read More →
Spark DataFrame
Date: August 7, 2017 Categories: Apache Spark

Apache Spark DataFrame. So, let's recall RDD (Resilient Distributed Dataset): it is an immutable distributed collection of objects, and it is an interface. OK! We have also seen how to apply transformations in a previous post. They are amazing, as they give us all the flexibility to deal with almost any kind of data: unstructured, semi-structured, and structured…

Read More →
General Spark Transformations
Date: July 31, 2017 Categories: Apache Spark

Apache Spark Transformations. In this post we will focus on general Apache Spark transformations against RDDs. We will keep it simple but try to go as deep as we can. A download link is provided at the bottom for you to run the programs and try them with your own input. The goal is to get familiar…
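The three transformations most programs lean on are map (one output per input), filter (keep matching inputs), and flatMap (zero or more outputs per input). As a self-contained sketch, here are plain Java stream analogues; the corresponding JavaRDD calls share the same names and shapes, but this is an analogy, not Spark code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class TransformDemo {
    // map: one output element per input element.
    public static List<Integer> lengths(List<String> lines) {
        return lines.stream().map(String::length).collect(Collectors.toList());
    }

    // filter: keep only the elements matching a predicate.
    public static List<String> nonEmpty(List<String> lines) {
        return lines.stream().filter(s -> !s.isEmpty()).collect(Collectors.toList());
    }

    // flatMap: zero or more output elements per input element.
    public static List<String> words(List<String> lines) {
        return lines.stream()
                    .flatMap(s -> Arrays.stream(s.split(" ")))
                    .collect(Collectors.toList());
    }
}
```

The key difference in Spark is laziness: an RDD transformation builds a lineage and nothing runs until an action is called, whereas these stream pipelines execute as soon as collect is invoked.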

Read More →
Anonymous inner class function implementation
Date: July 30, 2017 Categories: Apache Spark

The word count program in Java we saw here was written using lambda expressions, supported since Java 8. So, we passed functions as arguments to our transformation calls like mapToPair() and reduceByKey(). In this post we will write more detailed implementations of the lambda expressions that we used, as these are still fairly new…
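The pattern being expanded here is general to any single-method interface, not just Spark's: a lambda is shorthand for an anonymous inner class implementing that interface. A self-contained sketch using java.util.function (standing in for Spark's function interfaces, which have the same single-method shape):

```java
import java.util.function.BinaryOperator;
import java.util.function.Function;

public class AnonymousVsLambda {
    // Lambda form, as in the Java 8 word count.
    public static final Function<String, Integer> LEN_LAMBDA = s -> s.length();

    // Equivalent anonymous inner class form, as pre-Java-8 code had to write it.
    public static final Function<String, Integer> LEN_ANON = new Function<String, Integer>() {
        @Override
        public Integer apply(String s) {
            return s.length();
        }
    };

    // The reducer shape passed to a reduceByKey-style call, in both styles.
    public static final BinaryOperator<Integer> SUM_LAMBDA = (a, b) -> a + b;
    public static final BinaryOperator<Integer> SUM_ANON = new BinaryOperator<Integer>() {
        @Override
        public Integer apply(Integer a, Integer b) {
            return a + b;
        }
    };
}
```

Both forms of each pair behave identically; the anonymous inner class version just spells out the interface, type parameters, and overridden method that the compiler infers for the lambda.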

Read More →