
After executing a rule with Qualitis 1.0 and Linkis 1.5, the job reports success but the returned task result is empty #189

Closed
hiparabbit opened this issue May 8, 2024 · 1 comment

Comments

@hiparabbit

Describe the bug
After executing a rule with Qualitis 1.0 and Linkis 1.5, the job reports success but the returned result is empty.

To Reproduce

[screenshot]

Expected behavior
The execution result should be returned.

Screenshots

[screenshot]

Additional context
scala> val rule011_dev_replacedSchemas = rule011_dev_schemas.map(s => s.replaceAll("[()]", "")).toList
scala> val statisticDFOfrule011_dev = NullVerificationOfrule011_dev.toDF(rule011_dev_replacedSchemas: _*)
scala> spark.sqlContext.setConf("hive.exec.dynamic.partition", "true")
scala> spark.sqlContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict")
scala> spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
2024-05-08 07:11:07.138 WARN [Linkis-Default-Scheduler-Thread-14] org.apache.spark.sql.execution.CacheManager 69 logWarning [JobId-17] - Asked to cache already cached data.
scala> if (spark.catalog.tableExists("hadoop_ind.dev_rule01")) {
val partition_list_hadoop_ind_dev_rule01 = spark.sql("select qualitis_partition_key from hadoop_ind.dev_rule01 where (qualitis_partition_key < 20240501)").map(f=>f.getString(0)).collect.toList
partition_list_hadoop_ind_dev_rule01.foreach(f => spark.sql("alter table hadoop_ind.dev_rule01 drop if exists partition (qualitis_partition_key=" + f + ")"))
statisticDFOfrule011_dev.withColumn("qualitis_partition_key", lit("20240508")).withColumn("qualitis_partition_key_env", lit("1_dev")).write.mode("overwrite").insertInto("hadoop_ind.dev_rule01")
} else {
statisticDFOfrule011_dev.withColumn("qualitis_partition_key", lit("20240508")).withColumn("qualitis_partition_key_env", lit("1_dev")).write.mode("append").partitionBy("qualitis_partition_key", "qualitis_partition_key_env").format("hive").saveAsTable("hadoop_ind.dev_rule01");
}
scala> statisticDFOfrule011_dev.selectExpr("count(*) as value", "'QUALITIS20240508151103708_454731' as application_id", "'Long' as result_type", "'12' as rule_id", "'' as version", "'-1' as rule_metric_id", "'-1' as run_date", "'1_dev' as env_name", "'2024-05-08 15:11:03' as create_time").write.mode(org.apache.spark.sql.SaveMode.Append).jdbc("jdbc:mysql://rm-uf66mtp56ee9w61g98o.mysql.rds.aliyuncs.com:3306/qualitis?createDatabaseIfNotExist=true&useUnicode=true&characterEncoding=utf-8", "qualitis_application_task_result", prop);
scala> spark.catalog.uncacheTable("common_table_1_dev")
scala> val linkisVar=123
2024-05-08 07:11:09.691 WARN [Linkis-Default-Scheduler-Thread-14] org.apache.hadoop.hive.conf.HiveConf 4122 initialize [JobId-17] - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
2024-05-08 07:11:09.691 WARN [Linkis-Default-Scheduler-Thread-14] org.apache.hadoop.hive.conf.HiveConf 4122 initialize [JobId-17] - HiveConf of name hive.stats.jdbc.timeout does not exist
2024-05-08 07:11:09.692 WARN [Linkis-Default-Scheduler-Thread-14] org.apache.hadoop.hive.conf.HiveConf 4122 initialize [JobId-17] - HiveConf of name hive.stats.retries.wait does not exist
2024-05-08 07:11:12.838 WARN [load-dynamic-partitions-0] org.apache.hadoop.hive.conf.HiveConf 4122 initialize [JobId-] - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
2024-05-08 07:11:12.838 WARN [load-dynamic-partitions-0] org.apache.hadoop.hive.conf.HiveConf 4122 initialize [JobId-] - HiveConf of name hive.stats.jdbc.timeout does not exist
2024-05-08 07:11:12.839 WARN [load-dynamic-partitions-0] org.apache.hadoop.hive.conf.HiveConf 4122 initialize [JobId-] - HiveConf of name hive.stats.retries.wait does not exist
2024-05-08 15:11:15.011 INFO Congratulations! Your job : Qualitis_hadoop_spark_9 executed with status succeed and 0 results.
2024-05-08 15:11:15.011 INFO Task creation time: 2024-05-08 15:11:04, Task scheduling time: 2024-05-08 15:11:04, Task start time: 2024-05-08 15:11:04, Task end time: 2024-05-08 15:11:15
2024-05-08 15:11:15.011 INFO Task submit to Orchestrator time:2024-05-08 15:11:04, Task request EngineConn time:2024-05-08 15:11:04, Task submit to EngineConn time:2024-05-08 15:11:05
2024-05-08 15:11:15.011 INFO Your job 17 total time spent: 11.3 s
2024-05-08 15:11:15.011 INFO Congratulations. Your job completed with status Success.
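
Since the job reports success but the returned task result is empty, one way to narrow this down is to read the result table back over the same JDBC connection and check whether the count row was actually written. Below is a minimal spark-shell sketch, assuming the same JDBC URL from the log above; the credentials in prop are hypothetical placeholders, not values from this issue:

scala> import java.util.Properties
scala> import org.apache.spark.sql.functions.col

scala> // Hypothetical credentials; substitute the ones Qualitis actually uses.
scala> val prop = new Properties()
scala> prop.setProperty("user", "<mysql-user>")
scala> prop.setProperty("password", "<mysql-password>")

scala> // Same URL the job used when writing the result row.
scala> val url = "jdbc:mysql://rm-uf66mtp56ee9w61g98o.mysql.rds.aliyuncs.com:3306/qualitis?useUnicode=true&characterEncoding=utf-8"

scala> // Read the task-result table back and look for the row this job should have written.
scala> spark.read.jdbc(url, "qualitis_application_task_result", prop)
     |   .filter(col("application_id") === "QUALITIS20240508151103708_454731")
     |   .show(false)

If the row is missing, the JDBC write itself never happened and the problem is on the Spark side; if it is present, the empty result is more likely on the Qualitis query/display side.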

@Tangjiafeng
Contributor

Usually this means the task never entered the running state on the engine side, so no logs are visible. Before a task enters the running state it may be queued or waiting for resource allocation, and no logs are produced during that phase. If you are sure a task that is actually running has no logs, shut down the engine and retry; an idle engine will also shut itself down automatically after a while.
