pyspark mysql jdbc load An error occurred while calling o23.load No suitable driver - html5模板网

pyspark mysql jdbc load: An error occurred while calling o23.load, No suitable driver
This article explains how to resolve the "pyspark mysql jdbc load An error occurred while calling o23.load: No suitable driver" problem. It should be a useful reference for anyone facing the same issue; read on to learn how.

Problem description

I use the docker image sequenceiq/spark on my Mac to study these Spark examples. During the study process I upgraded the Spark inside that image to 1.6.1 according to this answer, and the error occurred when I started the Simple Data Operations example. Here is what happened:

When I run df = sqlContext.read.format("jdbc").option("url", url).option("dbtable", "people").load(), it raises an error, and the full stack trace from the pyspark console is as follows:

                  Python 2.6.6 (r266:84292, Jul 23 2015, 15:22:56)
                  [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2
                  Type "help", "copyright", "credits" or "license" for more information.
                  16/04/12 22:45:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
                  Welcome to
                        ____              __
                       / __/__  ___ _____/ /__
                      _\ \/ _ \/ _ `/ __/  '_/
                     /__ / .__/\_,_/_/ /_/\_\   version 1.6.1
                        /_/
                  
                  Using Python version 2.6.6 (r266:84292, Jul 23 2015 15:22:56)
                  SparkContext available as sc, HiveContext available as sqlContext.
                  >>> url = "jdbc:mysql://localhost:3306/test?user=root;password=myPassWord"
                  >>> df = sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()
                  16/04/12 22:46:05 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:06 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:11 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
                  16/04/12 22:46:11 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
                  16/04/12 22:46:16 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  16/04/12 22:46:17 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
                  Traceback (most recent call last):
                    File "<stdin>", line 1, in <module>
                    File "/usr/local/spark/python/pyspark/sql/readwriter.py", line 139, in load
                      return self._df(self._jreader.load())
                    File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
                    File "/usr/local/spark/python/pyspark/sql/utils.py", line 45, in deco
                      return f(*a, **kw)
                    File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
                  py4j.protocol.Py4JJavaError: An error occurred while calling o23.load.
                  : java.sql.SQLException: No suitable driver
                      at java.sql.DriverManager.getDriver(DriverManager.java:278)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$$anonfun$2.apply(JdbcUtils.scala:50)
                      at scala.Option.getOrElse(Option.scala:120)
                      at org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils$.createConnectionFactory(JdbcUtils.scala:49)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:120)
                      at org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
                      at org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
                      at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
                      at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
                      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
                      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
                      at java.lang.reflect.Method.invoke(Method.java:606)
                      at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
                      at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
                      at py4j.Gateway.invoke(Gateway.java:259)
                      at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
                      at py4j.commands.CallCommand.execute(CallCommand.java:79)
                      at py4j.GatewayConnection.run(GatewayConnection.java:209)
                      at java.lang.Thread.run(Thread.java:744)
                  
                  >>>
                  

Here is what I have tried so far:

1. Download mysql-connector-java-5.0.8-bin.jar and put it into /usr/local/spark/lib/. It is still the same error.

2. Create t.py like this:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="PythonSQL")
sqlContext = SQLContext(sc)

# the connection URL is the same one used in the console session above
url = "jdbc:mysql://localhost:3306/test?user=root;password=myPassWord"
df = sqlContext.read.format("jdbc").option("url", url).option("dbtable", "people").load()

df.printSchema()
countsByAge = df.groupBy("age").count()
countsByAge.show()
countsByAge.write.format("json").save("file:///usr/local/mysql/mysql-connector-java-5.0.8/db.json")
                  

Then, I tried spark-submit --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4] t.py. The result is still the same.

3. Then I tried pyspark --conf spark.executor.extraClassPath=mysql-connector-java-5.0.8-bin.jar --driver-class-path mysql-connector-java-5.0.8-bin.jar --jars mysql-connector-java-5.0.8-bin.jar --master local[4] t.py, both with and without the above t.py, still the same.

During all of this, MySQL is running. And here is my OS info:

                  # rpm --query centos-release  
                  centos-release-6-5.el6.centos.11.2.x86_64
                  

And the Hadoop version is 2.6.

Now I don't know where to go next, so I hope someone can give some advice. Thanks!

Recommended answer

I ran into "java.sql.SQLException: No suitable driver" when I tried to have my script write to MySQL.

Here's what I did to fix that.

In script.py:

                  df.write.jdbc(url="jdbc:mysql://localhost:3333/my_database"
                                    "?user=my_user&password=my_password",
                                table="my_table",
                                mode="append",
                                properties={"driver": 'com.mysql.jdbc.Driver'})
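For context, a minimal, self-contained version of such a script might look like the sketch below; the database name, table, and credentials are placeholders, and the toy DataFrame exists only so the example runs end to end:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="WriteToMySQL")
sqlContext = SQLContext(sc)

# toy DataFrame so the example is runnable; replace with your real data
df = sqlContext.createDataFrame([(1, "Alice"), (2, "Bob")], ["id", "name"])

# my_database / my_table / my_user / my_password are placeholders
df.write.jdbc(url="jdbc:mysql://localhost:3333/my_database"
                  "?user=my_user&password=my_password",
              table="my_table",
              mode="append",
              properties={"driver": "com.mysql.jdbc.Driver"})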
                  

Then I ran spark-submit this way:

                  SPARK_HOME=/usr/local/Cellar/apache-spark/1.6.1/libexec spark-submit --packages mysql:mysql-connector-java:5.1.39 ./script.py
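The --packages mysql:mysql-connector-java:5.1.39 flag makes spark-submit resolve the MySQL connector from Maven and put it on both the driver and executor classpaths, so the jar does not have to be copied into the Spark installation by hand.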
                  

Note that SPARK_HOME is specific to where Spark is installed. For your environment, https://github.com/sequenceiq/docker-spark/blob/master/README.md might help.

In case all of the above is confusing, try this: in t.py, replace

                  sqlContext.read.format("jdbc").option("url",url).option("dbtable","people").load()
                  

with

sqlContext.read.format("jdbc").option("url", url).option("dbtable", "people").option("driver", "com.mysql.jdbc.Driver").load()
                  

Then run:

                  spark-submit --packages mysql:mysql-connector-java:5.1.39 --master local[4] t.py
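Putting it all together, a corrected t.py might look like the following sketch; it simply assumes the same test database and people table from the question, and is an illustration rather than the answerer's exact code:

from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext(appName="PythonSQL")
sqlContext = SQLContext(sc)

# MySQL JDBC URLs separate parameters with &, not ;
url = "jdbc:mysql://localhost:3306/test?user=root&password=myPassWord"

# naming the driver class explicitly is what avoids "No suitable driver"
df = (sqlContext.read.format("jdbc")
      .option("url", url)
      .option("dbtable", "people")
      .option("driver", "com.mysql.jdbc.Driver")
      .load())

df.printSchema()

Run it with the same spark-submit --packages command shown above so the connector jar is on the classpath.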
                  

This concludes the article on "pyspark mysql jdbc load: An error occurred while calling o23.load, No suitable driver". We hope the recommended answer helps, and thank you for supporting html5模板网!


Related articles

How to use windowing functions efficiently to decide next N number of rows based on N number of previous values
Reuse the result of a select expression in the "GROUP BY" clause?
Does ignore option of Pyspark DataFrameWriter jdbc function ignore entire transaction or just offending rows?
Error while using INSERT INTO table ON DUPLICATE KEY, using a for loop array
pyspark mysql jdbc load An error occurred while calling o23.load No suitable driver
How to integrate Apache Spark with MySQL for reading database tables as a spark dataframe?