
Xiamen Talent Recruitment Data Analysis (厦门人才招聘数据分析) #10

Open
feng-chen11 opened this issue Dec 5, 2024 · 3 comments

Comments

@feng-chen11

When I run an HQL query in Hive, I get the error shown below. What is the cause, and how can I fix it?
I am using an Ubuntu virtual-machine environment.

@kings186

kings186 commented Dec 5, 2024 via email

@feng-chen11
Author

When I run an HQL query in Hive, I get the following error:
hive> select industry, sum(num) as workers from job group by industry
    > order by workers desc
    > limit 10;
Query ID = hadoop_20241205232403_6df14bbf-3eb2-4286-957c-b511c31945d5
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2024-12-05 23:24:07,936 Stage-1 map = 0%, reduce = 0%
2024-12-05 23:24:09,946 Stage-1 map = 100%, reduce = 100%
Ended Job = job_local782125842_0001
Launching Job 2 out of 2
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2024-12-05 23:24:11,146 Stage-2 map = 0%, reduce = 0%
Ended Job = job_local2035050000_0002 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 249977108 HDFS Write: 0 SUCCESS
Stage-Stage-2: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
What is going on here, and how should I fix it?
I am using an Ubuntu virtual-machine environment.

@TurboWay
Owner

TurboWay commented Dec 6, 2024


The error occurs during the MapReduce phase; you can use YARN to look up the detailed error message.
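A hedged sketch of how one might follow this advice and dig out the underlying error. The paths below are common defaults, not guaranteed for every install, and the application id is whatever `yarn application -list` actually reports. Note that the log above says "Job running in-process (local Hadoop)", so the job may never reach YARN at all; in that case the stack trace usually lands in the Hive client log instead:

```shell
# Sketch for locating the real error behind "return code 2"
# (paths are common defaults; adjust to your install).

# 1) If the job went through YARN, list failed applications and pull their logs:
if command -v yarn >/dev/null 2>&1; then
  yarn application -list -appStates FAILED
  # then: yarn logs -applicationId <application_id shown by the list above>
fi

# 2) For local-mode jobs ("Job running in-process (local Hadoop)"), the stack
#    trace is typically in the Hive client log, by default /tmp/<user>/hive.log
#    (configurable via hive-log4j2.properties):
LOG="/tmp/$USER/hive.log"
if [ -f "$LOG" ]; then
  tail -n 50 "$LOG"
else
  echo "hive.log not found at $LOG"
fi
```

Once the actual exception is visible (a common culprit in a VM is the local JVM running out of memory during the second stage, or malformed rows in the `job` table), the generic `return code 2` message can be mapped to a concrete fix.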
