Spark

Add S3 support to Spark/Spark-Shell

Out of the box, Spark does not ship with S3 support. Running the following in the spark-shell:

    scala> spark.read.parquet("s3a://my-bucket/my-data.parquet").printSchema

will fail with something like this:

    2018-09-05 09:47:59 WARN FileStreamSink:66 - Error while looking for metadata directory.
    java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
      at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
      at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
      at org.apache.spark.sql.execution.datasources.DataSource$.org$apache$spark$sql$execution$datasources$DataSource$$checkAndGlobPathIfNecessary(DataSource.scala:705)
      at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$15.apply(DataSource.scala:389)
      at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$15.apply(DataSource.scala:389)
      at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
      at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
      at scala. …
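The missing class lives in the hadoop-aws module, so the fix is to put that connector (and, transitively, the AWS SDK) on the classpath. One way to do this is sketched below: start the spark-shell with the hadoop-aws package via --packages. The version shown (2.7.3) is an assumption and must match the Hadoop version your Spark distribution was built against; the bucket path is the placeholder from above, and reading credentials from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY is just one of several options.

    $ spark-shell --packages org.apache.hadoop:hadoop-aws:2.7.3

    scala> // Credentials can also come from the usual AWS provider chain
    scala> // (environment variables, instance profile, etc.); setting them
    scala> // explicitly on the Hadoop configuration is one option.
    scala> sc.hadoopConfiguration.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
    scala> sc.hadoopConfiguration.set("fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))

    scala> spark.read.parquet("s3a://my-bucket/my-data.parquet").printSchema

With --packages, Spark resolves hadoop-aws and its dependencies from Maven Central at startup, so no jars need to be copied by hand. If you want this permanently, the same Maven coordinates can go into spark.jars.packages in spark-defaults.conf instead.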