Package org.apache.spark.sql
package org.apache.spark.sql
AnalysisException: Thrown when a query fails to analyze, usually because the query itself is invalid.
Column: A column that will be computed based on the data in a DataFrame.
ColumnName: A convenient class used for constructing schema.
CreateTableWriter<T>: Trait to restrict calls to create and replace operations.
DataFrameNaFunctions: Functionality for working with missing data in DataFrames.
DataFrameReader: Interface used to load a Dataset from external storage systems (e.g. file systems, key-value stores).
DataFrameStatFunctions: Statistic functions for DataFrames.
DataFrameWriter<T>: Interface used to write a Dataset to external storage systems (e.g. file systems, key-value stores).
DataFrameWriterV2<T>: Interface used to write a Dataset to external storage using the v2 API.
Dataset<T>: A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations.
DatasetHolder<T>: A container for a Dataset, used for implicit conversions in Scala.
DataSourceRegistration: Functions for registering user-defined data sources.
Encoder<T>: Used to convert a JVM object of type T to and from the internal Spark SQL representation.
Encoders: Methods for creating an Encoder.
ExperimentalMethods: :: Experimental :: Holder for experimental methods for the bravest.
ExtendedExplainGenerator: A trait for a session extension to implement that provides additional explain plan information.
ForeachWriter<T>: The abstract class for writing custom logic to process data generated by a query.
functions: Commonly used functions available for DataFrame operations.
KeyValueGroupedDataset<K,V>: A Dataset that has been logically grouped by a user specified grouping key.
LowPrioritySQLImplicits: Lower priority implicit methods for converting Scala objects into Datasets.
MergeIntoWriter<T>: Provides methods to define and execute merge actions based on specified conditions.
Observation: Helper class to simplify usage of Dataset.observe(String, Column, Column*).
RelationalGroupedDataset.CubeType$: To indicate it's the CUBE.
RelationalGroupedDataset.GroupByType$: To indicate it's the GroupBy.
RelationalGroupedDataset.GroupType: The Grouping Type.
RelationalGroupedDataset.RollupType$: To indicate it's the ROLLUP.
Row: Represents one row of output from a relational operator.
RowFactory: A factory class used to construct Row objects.
RuntimeConfig: Runtime configuration interface for Spark.
SaveMode: Used to specify the expected behavior of saving a DataFrame to a data source.
SparkSession: The entry point to programming Spark with the Dataset and DataFrame API.
SparkSession.Builder: Builder for SparkSession.
SparkSessionExtensions: :: Experimental :: Holder for injection points to the SparkSession.
SparkSessionExtensionsProvider: :: Unstable :: Base trait for implementations used by SparkSessionExtensions.
SQLContext: The entry point for working with structured data (rows and columns) in Spark 1.x.
SQLImplicits: A collection of implicit methods for converting common Scala objects into Datasets.
TypedColumn<T,U>: A Column where an Encoder has been given for the expected input and return type.
UDFRegistration: Functions for registering user-defined functions.
UDTFRegistration: Functions for registering user-defined table functions.
WhenMatched<T>: A class for defining actions to be taken when matching rows in a DataFrame during a merge operation.
WhenNotMatched<T>: A class for defining actions to be taken when no matching rows are found in a DataFrame during a merge operation.
WhenNotMatchedBySource<T>: A class for defining actions to be performed when there is no match by source during a merge operation in a MergeIntoWriter.
WriteConfigMethods<R>: Configuration methods common to create/replace operations and insert/overwrite operations.
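For orientation, the central entry points listed above (SparkSession, SparkSession.Builder, SQLImplicits, Dataset) compose as follows. This is a minimal sketch, assuming a local Spark runtime is on the classpath; the app name, master URL, and the Person case class are illustrative only:

```scala
import org.apache.spark.sql.SparkSession

object PackageOverviewSketch {
  // Case class defined at object level so Spark can derive an Encoder for it.
  case class Person(name: String, age: Long)

  def main(args: Array[String]): Unit = {
    // SparkSession.Builder constructs the session, the entry point to the
    // Dataset and DataFrame API.
    val spark = SparkSession.builder()
      .appName("package-overview-sketch")
      .master("local[*]")
      .getOrCreate()

    // SQLImplicits (brought in via spark.implicits._) supplies the implicit
    // conversions that turn common Scala objects into Datasets.
    import spark.implicits._

    // Dataset[Person]: a strongly typed collection of domain objects,
    // transformed here with a relational operation.
    val people = Seq(Person("Alice", 29), Person("Bob", 31)).toDS()
    people.filter($"age" > 30).show()

    spark.stop()
  }
}
```

The `$"age"` interpolator is another piece of SQLImplicits: it builds a Column from a name, which the filter then evaluates against the Dataset's rows.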
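The Observation helper listed above pairs with Dataset.observe to collect named aggregate metrics as a side effect of running an action. A minimal sketch, assuming an active SparkSession named `spark`; the metric names and column are illustrative:

```scala
import org.apache.spark.sql.Observation
import org.apache.spark.sql.functions._
import spark.implicits._

val df = Seq(10, 20, 30).toDF("price")

// Register named metrics to be computed while the query itself runs,
// avoiding a separate pass over the data.
val observation = Observation("price_stats")
val observed = df.observe(observation,
  count(lit(1)).as("rows"),
  max($"price").as("max_price"))

// An action must complete before the metrics become available.
observed.collect()

// Retrieve the collected metrics as a Map[String, Any].
val metrics = observation.get
```

The observe call is transparent to the query result; only the metrics registered on the Observation are captured on the side.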
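MergeIntoWriter, WhenMatched, WhenNotMatched, and WhenNotMatchedBySource fit together through Dataset.mergeInto. A sketch under the assumptions that `source` is an existing Dataset, that a catalog table named `target` exists, and that its underlying data source supports MERGE; the table and column names are hypothetical:

```scala
import org.apache.spark.sql.functions.col

// Each when* call returns one of the When* helper classes, whose action
// methods (updateAll, insertAll, delete, ...) return the MergeIntoWriter
// so the clauses can be chained.
source.mergeInto("target", col("target.id") === col("source.id"))
  .whenMatched()            // WhenMatched: rows satisfying the condition
  .updateAll()
  .whenNotMatched()         // WhenNotMatched: source rows absent from target
  .insertAll()
  .whenNotMatchedBySource() // WhenNotMatchedBySource: target rows with no source match
  .delete()
  .merge()                  // define-then-execute: nothing runs until merge()
```

Nothing is executed while the clauses are being defined; merge() is the terminal call that runs the whole statement against the target table.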