select {SparkR}    R Documentation
Description

Selects a set of columns with names or Column expressions.
Usage

select(x, col, ...)

## S4 method for signature 'SparkDataFrame'
x$name

## S4 replacement method for signature 'SparkDataFrame'
x$name <- value

## S4 method for signature 'SparkDataFrame,character'
select(x, col, ...)

## S4 method for signature 'SparkDataFrame,Column'
select(x, col, ...)

## S4 method for signature 'SparkDataFrame,list'
select(x, col)
Arguments

x
    a SparkDataFrame.

col
    a list of columns or single Column or name.

...
    additional column(s) if only one column is specified in col.

name
    name of a Column (without being wrapped by "").

value
    a Column or an atomic vector in the length of 1 as literal value, or NULL.
    If NULL, the specified Column is dropped.
Value

A new SparkDataFrame with selected columns.
Note

$ since 1.4.0
$<- since 1.4.0
select(SparkDataFrame, character) since 1.4.0
select(SparkDataFrame, Column) since 1.4.0
select(SparkDataFrame, list) since 1.4.0
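
The $ and $<- methods listed under Usage behave like their base-R counterparts: x$name extracts a single Column, while assignment adds or replaces a column, and assigning NULL drops it. A minimal sketch, assuming a SparkDataFrame df with columns name and age already exists:

  # Extract a single column as a Column object
  ageCol <- df$age

  # Add or replace a column using a Column expression
  df$agePlusOne <- df$age + 1

  # Replace a column with a length-1 atomic vector used as a literal value
  df$flag <- TRUE

  # Assigning NULL drops the column
  df$flag <- NULL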
See Also

Other SparkDataFrame functions: SparkDataFrame-class, agg, arrange, as.data.frame, attach, cache, coalesce, collect, colnames, coltypes, createOrReplaceTempView, crossJoin, dapplyCollect, dapply, describe, dim, distinct, dropDuplicates, dropna, drop, dtypes, except, explain, filter, first, gapplyCollect, gapply, getNumPartitions, group_by, head, histogram, insertInto, intersect, isLocal, join, limit, merge, mutate, ncol, nrow, persist, printSchema, randomSplit, rbind, registerTempTable, rename, repartition, sample, saveAsTable, schema, selectExpr, showDF, show, storageLevel, str, subset, take, union, unpersist, withColumn, with, write.df, write.jdbc, write.json, write.orc, write.parquet, write.text
Other subsetting functions: filter, subset
Examples

## Not run:
##D   select(df, "*")
##D   select(df, "col1", "col2")
##D   select(df, df$name, df$age + 1)
##D   select(df, c("col1", "col2"))
##D   select(df, list(df$name, df$age + 1))
##D   # Similar to R data frames, columns can also be selected using $
##D   df[,df$age]
## End(Not run)
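
The examples above assume an existing SparkDataFrame df. A self-contained sketch (assuming SparkR is installed and a local Spark session can be started) that builds one and exercises the main select forms:

  library(SparkR)
  sparkR.session()

  # Create a SparkDataFrame from a local R data.frame
  localDF <- data.frame(name = c("Alice", "Bob"), age = c(29L, 31L))
  df <- createDataFrame(localDF)

  # Select by column name, by Column expression, and by character vector
  head(select(df, "name"))
  head(select(df, df$name, df$age + 1))
  head(select(df, c("name", "age")))

  sparkR.session.stop()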