by 司马顿 | April 2, 2022, 11:20 am
We are all familiar with window functions in relational databases: they are used mainly for statistical analysis, and are also called analytic functions.
The SQL engines under the Apache umbrella all support window functions too, just with slightly different surface syntax.
Let's run a simple test. The test data is the same dataset as before, and the goal is simply to compute a rank over the salary column.
Loading data is easiest in Drill, because no schema needs to be defined: you can SELECT straight from a csv file. Usage is as follows.
apache drill (dfs.pyh)> select name,job,salary,rank() over(order by cast(salary as float) desc) as ranking from `people.csv` limit 10;
+---------------+--------------------------------------+---------+---------+
| name | job | salary | ranking |
+---------------+--------------------------------------+---------+---------+
| Peyton U.O. | Landscaper & Groundskeeper | 29998.0 | 1 |
| Veronica R.R. | Electrician | 29997.0 | 2 |
| Rylee L.A. | Brickmason & Blockmason | 29996.0 | 3 |
| Emerson E.G. | Pharmacist | 29995.0 | 4 |
| Zoe W.M. | Veterinary Technologist & Technician | 29993.0 | 5 |
| Amiyah F.S. | Clinical Laboratory Technician | 29993.0 | 5 |
| Averie T.U. | Cashier | 29993.0 | 5 |
| Gabriel V.V. | Massage Therapist | 29992.0 | 8 |
| Jennifer L.P. | Hairdresser | 29987.0 | 9 |
| Lennox M.C. | Pharmacist | 29983.0 | 10 |
+---------------+--------------------------------------+---------+---------+
10 rows selected (0.327 seconds)
As shown above, the query selects the name, job, and salary columns, ranks rows by salary in an added ranking column, and outputs the top 10 results. Note the rank() semantics in the output: the three rows tied at 29993.0 all get rank 5, and the next rank jumps to 8.
This is otherwise a basic SQL query; the only twist is the cast() function to convert the column's type, because Drill reads csv columns as untyped text by default, and text values would sort lexicographically rather than numerically.
Here is how Spark expresses the same query. First we load the data, setting the inferSchema option so that the column types are inferred automatically. Then we import Spark's Window functions. Finally, we apply the Window function to the dataframe. The whole process is as follows.
scala> val df = spark.read.format("csv").option("inferSchema", "true").option("header", "true").load("tmp/people.csv")
scala> import org.apache.spark.sql.functions._
scala> import org.apache.spark.sql.expressions.Window
scala> df.select("name","job","salary").withColumn("ranking",rank().over(Window.orderBy(desc("salary")) )).show(10,false);
+-------------+------------------------------------+-------+-------+
|name |job |salary |ranking|
+-------------+------------------------------------+-------+-------+
|Peyton U.O. |Landscaper & Groundskeeper |29998.0|1 |
|Veronica R.R.|Electrician |29997.0|2 |
|Rylee L.A. |Brickmason & Blockmason |29996.0|3 |
|Emerson E.G. |Pharmacist |29995.0|4 |
|Zoe W.M. |Veterinary Technologist & Technician|29993.0|5 |
|Amiyah F.S. |Clinical Laboratory Technician |29993.0|5 |
|Averie T.U. |Cashier |29993.0|5 |
|Gabriel V.V. |Massage Therapist |29992.0|8 |
|Jennifer L.P.|Hairdresser |29987.0|9 |
|Lennox M.C. |Pharmacist |29983.0|10 |
+-------------+------------------------------------+-------+-------+
only showing top 10 rows
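Incidentally, you can verify what inferSchema actually guessed before trusting the ordering; a minimal check, assuming the same df as above:

scala> df.printSchema()

For this file, salary should come back as a numeric type (double), which is why no cast() is needed on the Spark side, unlike in Drill.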
The data Spark loads is typed, and its SQL layer is built on dataframes, so the whole process is a bit more involved and not as direct as Drill's. Note that the above uses the dataframe API rather than going through SQL directly; a SQL version is sketched below.
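For reference, a minimal sketch of the SQL route, assuming the same df and spark session as above (the view name people is my own choice): register the dataframe as a temporary view, then query it with spark.sql.

scala> df.createOrReplaceTempView("people")
scala> spark.sql("select name, job, salary, rank() over (order by salary desc) as ranking from people order by ranking limit 10").show(false)

This should print the same top-10 table as the dataframe version.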
Now let's look at Hive, which requires strongly typed data. It is the most cumbersome of the three to use, and slow as well, though arguably a bit safer.
First we create a table with an explicit schema, then load the csv file from HDFS into it. After that, the Window-function query runs directly. The process is as follows.
> create table ppl (
> name string,
> sex string,
> born date,
> zip int,
> email string,
> job string,
> salary float
> )
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY ','
> STORED AS TEXTFILE;
No rows affected (0.609 seconds)
> LOAD DATA INPATH '/tmp/test/people.csv' OVERWRITE INTO TABLE ppl;
No rows affected (0.691 seconds)
> select name,job,salary,rank() over(order by salary desc) as ranking from ppl limit 10;
+----------------+---------------------------------------+----------+----------+
| name | job | salary | ranking |
+----------------+---------------------------------------+----------+----------+
| Peyton U.O. | Landscaper & Groundskeeper | 29998.0 | 1 |
| Veronica R.R. | Electrician | 29997.0 | 2 |
| Rylee L.A. | Brickmason & Blockmason | 29996.0 | 3 |
| Emerson E.G. | Pharmacist | 29995.0 | 4 |
| Averie T.U. | Cashier | 29993.0 | 5 |
| Amiyah F.S. | Clinical Laboratory Technician | 29993.0 | 5 |
| Zoe W.M. | Veterinary Technologist & Technician | 29993.0 | 5 |
| Gabriel V.V. | Massage Therapist | 29992.0 | 8 |
| Jennifer L.P. | Hairdresser | 29987.0 | 9 |
| Lennox M.C. | Pharmacist | 29983.0 | 10 |
+----------------+---------------------------------------+----------+----------+
10 rows selected (1.374 seconds)
A brief summary of the comparison above:
For stability and safety, Hive is undoubtedly the strongest. For ease of use and speed, Drill wins. For richness of functionality, Spark takes it, since beyond structured data it can also handle unstructured data, streaming, and machine learning.
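A closing footnote on the rank() semantics visible in all three outputs: rank() leaves a gap after ties (5, 5, 5, then 8), while dense_rank() does not, and partitioning the window gives per-group rankings. A hedged Spark sketch of both, assuming the same df and imports as above:

scala> df.select("name","job","salary").withColumn("dense", dense_rank().over(Window.orderBy(desc("salary")))).withColumn("rank_in_job", rank().over(Window.partitionBy("job").orderBy(desc("salary")))).show(10, false)

Here the dense column would read 1, 2, 3, 4, 5, 5, 5, 6, ... for the rows shown above, while rank_in_job ranks salaries within each job; the equivalent over(partition by ... order by ...) clause works in Drill and Hive SQL as well.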