Description
I am writing a DataFrame to Redis as follows:
import org.apache.spark.sql.SaveMode
import spark.implicits._

case class Person(name: String, age: Int)

val personSeq = (1 to 1000).map(x => Person(s"John_$x", x + 5))
val df = spark.createDataFrame(personSeq)

df.write
  .format("org.apache.spark.sql.redis")
  .option("key.column", "name")
  .option("table", "person")
  .mode(SaveMode.Overwrite)
  .save()
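For context, reading the full table back as a DataFrame follows the same source options; this is a sketch based on the spark-redis README (the `table` and `key.column` values mirror those used in the write above):

```scala
// Read the entire "person" table back from Redis as a DataFrame.
// "key.column" restores the Redis key suffix into the "name" column.
val readDf = spark.read
  .format("org.apache.spark.sql.redis")
  .option("table", "person")
  .option("key.column", "name")
  .load()

readDf.show()
```

This loads every row of the table, which is exactly what I want to avoid when only a handful of keys are needed.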
How do I read a collection of arbitrary keys? For example:
val baseKeysToRead = List("John_1", "John_69", "John_669", "John_123")
I only want these rows. One option is to read all the data and then join against the key list to reduce it, but I do not want to do that. Is there another option that reads the required keys directly?
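One possible direction, sketched under two assumptions: that spark-redis stores each row as a hash keyed `table:keyvalue` (so `person:John_1` here), and that its RDD API's `fromRedisHash` accepts an array of explicit keys as well as a glob pattern. If both hold, the required keys can be fetched directly without scanning the table:

```scala
import com.redislabs.provider.redis._

val baseKeysToRead = List("John_1", "John_69", "John_669", "John_123")

// Prefix each key with the table name to match the keys spark-redis wrote.
val redisKeys = baseKeysToRead.map(k => s"person:$k").toArray

// Fetch only the requested hashes; yields an RDD of (field, value) pairs.
val hashRDD = sc.fromRedisHash(redisKeys)
hashRDD.collect().foreach(println)
```

The field/value pairs would then need to be regrouped per key to reconstruct `Person` rows, since the RDD API returns raw hash contents rather than a DataFrame.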