Tag: dataframe

Apache Spark introduced the Dataset API to unify the programming experience, improving both performance and the developer experience while reducing the learning curve for Spark developers. This is a great link to get familiar with Datasets. If the link doesn't work when you are reading this post, Google is your friend. I want to save time and get…
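The excerpt is truncated here, but to make the idea concrete, the following is a minimal sketch of the typed Dataset API in Java, assuming Spark 2.x; the class name and the sample numbers are illustrative, not from the original post.

import java.util.Arrays;

import org.apache.spark.api.java.function.MapFunction;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.SparkSession;

public class DatasetQuickLook {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("DatasetQuickLook")
                .master("local[*]") // local master just for a quick test
                .getOrCreate();

        // A typed Dataset; the encoder describes how elements are serialized.
        Dataset<Integer> numbers = spark.createDataset(
                Arrays.asList(1, 2, 3, 4, 5), Encoders.INT());

        // Transformations stay strongly typed; the cast picks the
        // Java-friendly overload of map.
        Dataset<Integer> doubled = numbers.map(
                (MapFunction<Integer, Integer>) n -> n * 2, Encoders.INT());

        doubled.show();
        spark.stop();
    }
}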
Read More →
Spark 2.0 provides a more mature ecosystem and a unified data-abstraction API, and it sets new performance benchmarks, along with some non-backward-compatible changes. Here we look at some important things to learn and remember before migrating existing Spark projects to Spark 2.0. The following is not a complete list of points, but it presents…
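One well-known change worth remembering: SparkSession subsumes SQLContext and HiveContext, and in the Java API the DataFrame class is replaced by Dataset<Row>. The following is a minimal sketch of the 2.0 style, assuming Spark 2.x; the file people.json and the view name are illustrative.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class MigrationNotes {

    public static void main(String[] args) {
        // SparkSession subsumes SQLContext/HiveContext in 2.0.
        SparkSession spark = SparkSession.builder()
                .appName("MigrationNotes")
                .master("local[*]")
                .getOrCreate();

        // DataFrame is no longer a class in the Java API; use Dataset<Row>.
        Dataset<Row> df = spark.read().json("people.json");

        // registerTempTable is deprecated; use createOrReplaceTempView.
        df.createOrReplaceTempView("people");
        spark.sql("SELECT name FROM people").show();

        spark.stop();
    }
}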
Read More →

Apache Spark is great for processing JSON files: you can create DataFrames right away and start issuing SQL queries against them by registering them as temporary tables. This works very well when the JSON strings come one per line, where each line typically represents a JSON object. On such a happy path, JSON can be…
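To sketch that happy path, here is a minimal Spark 1.x Java example; the file people.json and its fields are illustrative assumptions, not taken from the original post.

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class JsonLinesHappyPath {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("JsonLinesHappyPath")
                .setMaster("local[*]");
        JavaSparkContext ctx = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(ctx);

        // Each line of people.json must be one complete JSON object,
        // e.g. {"name":"Ann","age":34}
        DataFrame people = sqlContext.read().json("people.json");

        // Register as a temporary table and query with plain SQL.
        people.registerTempTable("people");
        sqlContext.sql("SELECT name FROM people WHERE age >= 18").show();

        ctx.stop();
    }
}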
Read More →

Relation/Set Theory transformations

We will be playing with the following program to understand the three important set-theory-based transformations.

package com.mishudi.learn.spark.dataframe;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class RelationalOrSetTheoryTransformations {

    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("RelationalOrSetTheoryTransformations");
        JavaSparkContext ctx = new JavaSparkContext(sparkConf);
        //…
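The listing above is cut off, so the following is only a guess at the shape of such a program: a sketch of the three set-style DataFrame operations (unionAll, intersect, except) in the Spark 1.x API, with made-up data.

package com.mishudi.learn.spark.dataframe;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class SetTheorySketch {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("SetTheorySketch")
                .setMaster("local[*]");
        JavaSparkContext ctx = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(ctx);

        // Two small DataFrames built from in-memory JSON strings.
        DataFrame a = sqlContext.read().json(ctx.parallelize(
                Arrays.asList("{\"id\":1}", "{\"id\":2}", "{\"id\":3}")));
        DataFrame b = sqlContext.read().json(ctx.parallelize(
                Arrays.asList("{\"id\":2}", "{\"id\":3}", "{\"id\":4}")));

        a.unionAll(b).show();  // all rows from both sides, duplicates kept
        a.intersect(b).show(); // rows present in both: id 2 and 3
        a.except(b).show();    // rows in a but not in b: id 1

        ctx.stop();
    }
}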
Read More →

Apache Spark Transformation – DataFrame

A DataFrame can be created from any structured dataset, such as JSON, a relational table, Parquet, or an existing RDD with a defined schema. The following program creates a DataFrame and queries it using SQL. Here is the JSON we will use to play with; copy the following lines into a file and save it in <SPARK_HOME>/bin…
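Reading JSON was sketched above; for the other path this post mentions, building a DataFrame from an existing RDD with a defined schema, here is a minimal Spark 1.x sketch. The names, data, and schema are illustrative.

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

public class DataFrameFromRdd {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("DataFrameFromRdd")
                .setMaster("local[*]");
        JavaSparkContext ctx = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(ctx);

        // An existing RDD of rows plus an explicit schema.
        JavaRDD<Row> rows = ctx.parallelize(Arrays.asList(
                RowFactory.create("Ann", 34),
                RowFactory.create("Bob", 17)));
        StructType schema = DataTypes.createStructType(Arrays.asList(
                DataTypes.createStructField("name", DataTypes.StringType, false),
                DataTypes.createStructField("age", DataTypes.IntegerType, false)));

        DataFrame people = sqlContext.createDataFrame(rows, schema);
        people.registerTempTable("people");
        sqlContext.sql("SELECT name FROM people WHERE age >= 18").show();

        ctx.stop();
    }
}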
Read More →

Apache Spark DataFrame

So, let's recall RDDs (Resilient Distributed Datasets). An RDD is an immutable distributed collection of objects, exposed as an interface. OK! We have also seen how to apply transformations in a previous post. They are amazing, as they give us all the flexibility to deal with almost any kind of data: unstructured, semi-structured, and structured…
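As a quick refresher, here is a tiny sketch of RDD transformations in Java with illustrative data; note that transformations are lazy and only execute when an action such as collect() is invoked.

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RddRecap {

    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("RddRecap")
                .setMaster("local[*]");
        JavaSparkContext ctx = new JavaSparkContext(conf);

        // Each transformation returns a new, immutable RDD;
        // nothing executes until an action runs.
        JavaRDD<String> words = ctx.parallelize(
                Arrays.asList("spark", "rdd", "dataframe", "dataset"));
        JavaRDD<Integer> lengths = words.map(w -> w.length());
        JavaRDD<Integer> longOnes = lengths.filter(n -> n > 5);

        List<Integer> result = longOnes.collect(); // action: triggers the job
        System.out.println(result); // [9, 7]

        ctx.stop();
    }
}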
Read More →