about subsql in XSQL #61

Open
RustRw opened this issue Oct 31, 2019 · 2 comments
Comments

RustRw commented Oct 31, 2019

For example, I have a table named sys.sys_config:

desc sys.sys_config

variable   string      NULL
value      string      NULL
set_time   timestamp   NULL
set_by     string      NULL

When I run this SQL:
select approx_count_distinct(variable) count from sys.sys_config

I get the expected result: 6.

But if I wrap the same aggregate in a subquery:

select * from (select approx_count_distinct(variable) count from sys.sys_config) as t

it fails with this error:

org.apache.spark.sql.AnalysisException: org.apache.spark.SparkException: Error when execute select t.count from (select approx_count_distinct(sys.sys_config.variable) AS count from sys.sys_config ) as t, details:
FUNCTION sys.approx_count_distinct does not exist;
at org.apache.spark.sql.xsql.XSQLExternalCatalog.liftedTree2$1(XSQLExternalCatalog.scala:658)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.withWorkingDSDB(XSQLExternalCatalog.scala:648)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.scanTables(XSQLExternalCatalog.scala:380)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$scanTables$1.apply(XSQLSessionCatalog.scala:625)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$scanTables$1.apply(XSQLSessionCatalog.scala:625)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.setWorkingDataSource(XSQLExternalCatalog.scala:611)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.setWorkingDataSource(XSQLSessionCatalog.scala:151)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.scanTables(XSQLSessionCatalog.scala:624)
at org.apache.spark.sql.xsql.execution.command.PushDownQueryCommand.run(tables.scala:729)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:71)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:69)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.unsafeResult$lzycompute(commands.scala:77)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.unsafeResult(commands.scala:74)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:88)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:194)
at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
at org.apache.spark.sql.Dataset.<init>(Dataset.scala:194)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:79)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:665)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1$1.apply(SparkXSQLShell.scala:252)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1$1.apply(SparkXSQLShell.scala:166)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1(SparkXSQLShell.scala:166)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.process$1(SparkXSQLShell.scala:310)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.org$apache$spark$sql$xsql$shell$SparkXSQLShell$$loop$1(SparkXSQLShell.scala:350)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$main$2.apply(SparkXSQLShell.scala:94)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$main$2.apply(SparkXSQLShell.scala:76)
at scala.Option.map(Option.scala:146)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.main(SparkXSQLShell.scala:76)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell.main(SparkXSQLShell.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: org.apache.spark.SparkException: Error when execute select t.count from (select approx_count_distinct(sys.sys_config.variable) AS count from sys.sys_config ) as t, details:
FUNCTION sys.approx_count_distinct does not exist
at org.apache.spark.sql.xsql.manager.MysqlManager.scanXSQLTables(MysqlManager.scala:521)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$scanTables$1.apply(XSQLExternalCatalog.scala:382)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$scanTables$1.apply(XSQLExternalCatalog.scala:380)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.liftedTree2$1(XSQLExternalCatalog.scala:649)
... 47 more

So I think XSQL pushes the subquery down to MySQL, and MySQL rejects it because approx_count_distinct is a Spark function that does not exist in MySQL.
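
If that is the cause, one workaround is to compute the aggregate through the Spark DataFrame API instead of a SQL subquery, so nothing is pushed down to MySQL. A minimal sketch, assuming XSQL resolves sys.sys_config through spark.table() (the actual routing in XSQL may differ):

// Hedged workaround sketch: aggregate on the Spark side so no subquery is
// pushed down to the MySQL datasource.
// Assumption: spark.table("sys.sys_config") resolves through the XSQL catalog.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.approx_count_distinct

val spark = SparkSession.builder().getOrCreate()
spark.table("sys.sys_config")
  .agg(approx_count_distinct("variable").as("count"))
  .show()  // expected: 6, matching the plain SQL above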

But I also tested a function that MySQL does provide natively, curtime:

select * from (select curtime()) as t;

and it fails with a different error (see the note after the stack trace):

19/10/31 20:54:54 INFO SparkXSQLShell: current SQL: select * from (select curtime()) as t silent: false
19/10/31 20:54:54 INFO SparkXSQLShell: excute to parsed
19/10/31 20:54:54 ERROR SparkXSQLShell: Failed: Error
org.apache.spark.sql.AnalysisException: java.lang.UnsupportedOperationException: Check MYSQL function exists not supported!;
at org.apache.spark.sql.xsql.XSQLExternalCatalog.liftedTree1$1(XSQLExternalCatalog.scala:635)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.withWorkingDS(XSQLExternalCatalog.scala:625)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.functionExists(XSQLExternalCatalog.scala:1074)
at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.functionExists(ExternalCatalogWithListener.scala:292)
at org.apache.spark.sql.catalyst.catalog.SessionCatalog.isPersistentFunction(SessionCatalog.scala:1227)
at org.apache.spark.sql.hive.HiveSessionCatalog.isPersistentFunction(HiveSessionCatalog.scala:179)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.org$apache$spark$sql$xsql$XSQLSessionCatalog$$super$isPersistentFunction(XSQLSessionCatalog.scala:793)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$isPersistentFunction$1.apply$mcZ$sp(XSQLSessionCatalog.scala:793)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$isPersistentFunction$1.apply(XSQLSessionCatalog.scala:793)
at org.apache.spark.sql.xsql.XSQLSessionCatalog$$anonfun$isPersistentFunction$1.apply(XSQLSessionCatalog.scala:793)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.setWorkingDataSource(XSQLExternalCatalog.scala:611)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.setWorkingDataSource(XSQLSessionCatalog.scala:151)
at org.apache.spark.sql.xsql.XSQLSessionCatalog.isPersistentFunction(XSQLSessionCatalog.scala:792)
at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15.applyOrElse(Analyzer.scala:1276)
at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$$anonfun$apply$15.applyOrElse(Analyzer.scala:1272)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:256)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$2.apply(TreeNode.scala:256)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:255)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:261)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$transformDown$1.apply(TreeNode.scala:261)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:261)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsDown$1.apply(QueryPlan.scala:83)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$transformExpressionsDown$1.apply(QueryPlan.scala:83)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:105)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$1.apply(QueryPlan.scala:105)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpression$1(QueryPlan.scala:104)
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:116)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1$2.apply(QueryPlan.scala:121)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.immutable.List.foreach(List.scala:392)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.immutable.List.map(List.scala:296)
at org.apache.spark.sql.catalyst.plans.QueryPlan.org$apache$spark$sql$catalyst$plans$QueryPlan$$recursiveTransform$1(QueryPlan.scala:121)
at org.apache.spark.sql.catalyst.plans.QueryPlan$$anonfun$2.apply(QueryPlan.scala:126)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.plans.QueryPlan.mapExpressions(QueryPlan.scala:126)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressionsDown(QueryPlan.scala:83)
at org.apache.spark.sql.catalyst.plans.QueryPlan.transformExpressions(QueryPlan.scala:74)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:129)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveExpressions$1.applyOrElse(AnalysisHelper.scala:128)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$2.apply(AnalysisHelper.scala:108)
at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:70)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:107)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1$$anonfun$apply$6.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.trees.TreeNode$$anonfun$4.apply(TreeNode.scala:326)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapProductIterator(TreeNode.scala:187)
at org.apache.spark.sql.catalyst.trees.TreeNode.mapChildren(TreeNode.scala:324)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:113)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsDown$1.apply(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsDown(AnalysisHelper.scala:106)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsDown(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperators(AnalysisHelper.scala:73)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperators(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveExpressions(AnalysisHelper.scala:128)
at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveExpressions(LogicalPlan.scala:29)
at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:1272)
at org.apache.spark.sql.catalyst.analysis.Analyzer$LookupFunctions$.apply(Analyzer.scala:1269)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:87)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1$$anonfun$apply$1.apply(RuleExecutor.scala:84)
at scala.collection.IndexedSeqOptimized$class.foldl(IndexedSeqOptimized.scala:57)
at scala.collection.IndexedSeqOptimized$class.foldLeft(IndexedSeqOptimized.scala:66)
at scala.collection.mutable.WrappedArray.foldLeft(WrappedArray.scala:35)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:84)
at org.apache.spark.sql.catalyst.rules.RuleExecutor$$anonfun$execute$1.apply(RuleExecutor.scala:76)
at scala.collection.immutable.List.foreach(List.scala:392)
at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:76)
at org.apache.spark.sql.catalyst.analysis.Analyzer.org$apache$spark$sql$catalyst$analysis$Analyzer$$executeSameContext(Analyzer.scala:127)
at org.apache.spark.sql.catalyst.analysis.Analyzer.execute(Analyzer.scala:121)
at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:106)
at org.apache.spark.sql.catalyst.analysis.Analyzer$$anonfun$executeAndCheck$1.apply(Analyzer.scala:105)
at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.markInAnalyzer(AnalysisHelper.scala:201)
at org.apache.spark.sql.catalyst.analysis.Analyzer.executeAndCheck(Analyzer.scala:105)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1$1.apply(SparkXSQLShell.scala:241)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1$1.apply(SparkXSQLShell.scala:166)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.org$apache$spark$sql$xsql$shell$SparkXSQLShell$$run$1(SparkXSQLShell.scala:166)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.process$1(SparkXSQLShell.scala:310)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.org$apache$spark$sql$xsql$shell$SparkXSQLShell$$loop$1(SparkXSQLShell.scala:350)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$main$2.apply(SparkXSQLShell.scala:94)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$$anonfun$main$2.apply(SparkXSQLShell.scala:76)
at scala.Option.map(Option.scala:146)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell$.main(SparkXSQLShell.scala:76)
at org.apache.spark.sql.xsql.shell.SparkXSQLShell.main(SparkXSQLShell.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:849)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:167)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:195)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.UnsupportedOperationException: Check MYSQL function exists not supported!
at org.apache.spark.sql.xsql.DataSourceManager$class.functionExists(DataSourceManager.scala:920)
at org.apache.spark.sql.xsql.manager.MysqlManager.functionExists(MysqlManager.scala:51)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$functionExists$1.apply(XSQLExternalCatalog.scala:1076)
at org.apache.spark.sql.xsql.XSQLExternalCatalog$$anonfun$functionExists$1.apply(XSQLExternalCatalog.scala:1074)
at org.apache.spark.sql.xsql.XSQLExternalCatalog.liftedTree1$1(XSQLExternalCatalog.scala:626)
... 118 more
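
The Caused by lines above suggest the second failure is independent of which function is used: DataSourceManager provides a default functionExists that throws unconditionally, and MysqlManager does not override it, so the analyzer's LookupFunctions rule fails for any function once the working datasource is MySQL. A sketch of that shape, reconstructed from the trace rather than the actual XSQL source:

// Reconstructed from the stack trace; names and signatures are a sketch, not
// the actual XSQL code. The default functionExists throws, and MysqlManager
// inherits it, so even a MySQL-native function like curtime cannot be verified.
trait DataSourceManager {
  def shortName(): String

  def functionExists(db: String, funcName: String): Boolean =
    throw new UnsupportedOperationException(
      s"Check ${shortName().toUpperCase} function exists not supported!")
}

class MysqlManager extends DataSourceManager {
  override def shortName(): String = "mysql"
  // No functionExists override here.
}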

beliefer (Collaborator) commented Nov 1, 2019

Thanks for your feedback. We will plan a fix for this.

beliefer pushed a commit that referenced this issue Nov 12, 2019
… some spark-provided functions in subquery #64

This PR solves the first question in issue #61: do not push the query down to the MySQL datasource when the subquery contains Spark-provided functions.
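
Presumably the fix amounts to scanning the subquery plan for Spark-only functions before choosing pushdown. Below is a minimal sketch of such a guard against Spark 2.4 Catalyst APIs; the names and the registry set are hypothetical, not the actual #64 implementation:

// Hypothetical guard, NOT the actual XSQL change in #64: detect functions that
// only Spark provides anywhere in the subquery plan and skip pushdown if found.
import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan

val sparkOnlyFunctions = Set("approx_count_distinct")  // illustrative registry

def containsSparkOnlyFunction(plan: LogicalPlan): Boolean =
  plan.find { node =>
    node.expressions.exists { expr =>
      expr.find(e => sparkOnlyFunctions.contains(e.prettyName)).isDefined
    }
  }.isDefined

// Decision point: if containsSparkOnlyFunction(subquery) is true, run the
// subquery in Spark; otherwise push it down to MySQL.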
beliefer (Collaborator) commented:
@RustRw We have resolved the first issue. #64
