Kafka connect setup to send record from Aurora using AWS MSK

                  I have to send records from Aurora/MySQL to MSK, and from there to the Elasticsearch service.

                  Aurora --> Kafka Connect ---> AWS MSK ---> Kafka Connect ---> Elasticsearch

                  The record in the Aurora table is structured something like this, and I think the record will go to AWS MSK in this format:

                  "o36347-5d17-136a-9749-Oe46464",0,"NEW_CASE","WRLDCHK","o36347-5d17-136a-9749-Oe46464","<?xml version=""1.0"" encoding=""UTF-8"" standalone=""yes""?><caseCreatedPayload><batchDetails/>","CASE",08-JUL-17 10.02.32.217000000 PM,"TIME","UTC","ON","0a348753-5d1e-17a2-9749-3345,MN4,","","0a348753-5d1e-17af-9749-FGFDGDFV","EOUHEORHOE","2454-5d17-138e-9749-setwr23424","","","",,"","",""
                  

                  So in order for the records to be consumed by Elasticsearch I need to use a proper schema, so I have to use a schema registry.

                  My question

                  Question 1

                  How should I use the schema registry for the above type of message? Is a schema registry required? Do I have to create a JSON structure for this, and if yes, where do I keep it? More help is needed here to understand this.

                  I have edited

                  vim /usr/local/confluent/etc/schema-registry/schema-registry.properties
                  

                  It mentions the ZooKeeper address, but I do not know what kafkastore.topic=_schema is or how to link it to a custom schema.

                  When I started it anyway, I got this error:

                  Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Topic _schemas not present in metadata after 60000 ms.
                  

                  Which I was expecting, because I had not done anything about the schema.

                  I do have the JDBC connector installed, and when I start it I get the below error:

                  Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123 for configuration Couldn't open connection to jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
                  Invalid value java.sql.SQLException: No suitable driver found for jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123 for configuration Couldn't open connection to jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
                  You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
                  

                  Question 2: Can I create two connectors on one EC2 instance (the JDBC one and the Elasticsearch one)? If yes, do I have to start both in separate CLIs?

                  Question 3: When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties I only see property values like the below:

                  name=test-source-sqlite-jdbc-autoincrement
                  connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
                  tasks.max=1
                  connection.url=jdbc:mysql://123871289-eruyre.cluster-ceyey.us-east-1.rds.amazonaws.com:3306/trf?user=admin&password=Welcome123
                  mode=incrementing
                  incrementing.column.name=id
                  topic.prefix=trf-aurora-fspaudit-
                  

                  In the above properties file, where can I mention the schema name and table name?

                  Based on the answer, I am updating my Kafka Connect JDBC configuration.

                  ---------------start JDBC connect elastic search -----------------------------

                  wget http://packages.confluent.io/archive/5.2/confluent-5.2.0-2.11.tar.gz -P ~/Downloads/
                  tar -zxvf ~/Downloads/confluent-5.2.0-2.11.tar.gz -C ~/Downloads/
                  sudo mv ~/Downloads/confluent-5.2.0 /usr/local/confluent
                  
                  wget https://cdn.mysql.com//Downloads/Connector-J/mysql-connector-java-5.1.48.tar.gz
                  tar -xzf mysql-connector-java-5.1.48.tar.gz
                  sudo mv mysql-connector-java-5.1.48 /usr/local/confluent/share/java/kafka-connect-jdbc
                  

                  And then

                  vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties
                  

                  Then I modified the below properties:

                  connection.url=jdbc:mysql://fdgfgdfgrter.us-east-1.rds.amazonaws.com:3306/trf
                  mode=incrementing
                  connection.user=admin
                  connection.password=Welcome123
                  table.whitelist=PANStatementInstanceLog
                  schema.pattern=dbo
                  

                  Last, I modified:

                  vim /usr/local/confluent/etc/kafka/connect-standalone.properties
                  

                  and here I modified the below properties:

                  bootstrap.servers=b-3.205147-ertrtr.erer.c5.ertert.us-east-1.amazonaws.com:9092,b-6.ertert-riskaudit.ertet.c5.kafka.us-east-1.amazonaws.com:9092,b-1.ertert-riskaudit.ertert.c5.kafka.us-east-1.amazonaws.com:9092
                  key.converter.schemas.enable=true
                  value.converter.schemas.enable=true
                  offset.storage.file.filename=/tmp/connect.offsets
                  offset.flush.interval.ms=10000
                  plugin.path=/usr/local/confluent/share/java
                  

                  When I list the topics, I do not see any topic listed for the table name.

                  Stack trace for the error message

                  [2020-01-03 07:40:57,169] ERROR Failed to create job for /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties (org.apache.kafka.connect.cli.ConnectStandalone:108)
                  [2020-01-03 07:40:57,169] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:119)
                  java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 2 error(s):
                  Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
                  
                  The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
                  Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
                  
                  The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
                  You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
                          at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
                          at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
                          at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:116)
                  Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 2 error(s):
                  Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
                  
                  The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
                  Invalid value com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
                  
                  The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server. for configuration Couldn't open connection to jdbc:mysql://****.us-east-1.rds.amazonaws.com:3306/trf
                  You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
                          at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:423)
                          at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:188)
                          at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:113)
                  
                          curl -X POST -H "Accept:application/json" -H "Content-Type:application/json" IPaddressOfKCnode:8083/connectors/ -d '{"name": "emp-connector", "config": { "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector", "tasks.max": "1", "connection.url": "jdbc:mysql://IPaddressOfLocalMachine:3306/test_db?user=root&password=pwd","table.whitelist": "emp","mode": "timestamp","topic.prefix": "mysql-" } }'
                  

                  Solution

                  Is a schema registry required?

                  No. You can enable schemas in JSON records. The JDBC source can create them for you based on the table information:

                  value.converter=org.apache.kafka...JsonConverter 
                  value.converter.schemas.enable=true
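With value.converter.schemas.enable=true, each JSON record the source writes carries an inline schema next to the payload. A sketch of the envelope shape (the field names below are illustrative, not taken from the question's table):

```shell
# Illustrative Kafka Connect JSON envelope when schemas are enabled.
# The "schema" half describes the column types; "payload" holds the row values.
envelope='{
  "schema": {
    "type": "struct",
    "fields": [
      {"field": "id", "type": "int64", "optional": false},
      {"field": "case_type", "type": "string", "optional": true}
    ],
    "optional": false
  },
  "payload": {"id": 1, "case_type": "NEW_CASE"}
}'
printf '%s\n' "$envelope"
```

The Elasticsearch sink can use this inline schema to build a mapping, which is why schemas.enable is set on both converters.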
                  

                  It mentions ZooKeeper, but I do not know what kafkastore.topic=_schema is

                  If you want to use Schema Registry, you should be using kafkastore.bootstrap.servers with the Kafka address, not ZooKeeper. So remove kafkastore.connection.url.
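A minimal sketch of schema-registry.properties along those lines (the broker address is a placeholder; kafkastore.topic can stay at its _schemas default):

```properties
listeners=http://0.0.0.0:8081
# Kafka broker addresses, not ZooKeeper:
kafkastore.bootstrap.servers=PLAINTEXT://b-1.example.kafka.us-east-1.amazonaws.com:9092
kafkastore.topic=_schemas
```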

                  Please read the docs for explanations of all properties

                  I did not do anything about the schema.

                  Doesn't matter. The schemas topic gets created when the Registry first starts

                  Can I create two connectors on one EC2 instance?

                  Yes (ignoring available JVM heap space). Again, this is detailed in the Kafka Connect documentation.

                  Using standalone mode, you first pass the connect worker configuration, then up to N connector properties in one command

                  Using distributed mode, you use the Kafka Connect REST API

                  https://docs.confluent.io/current/connect/managing/configuring.html
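For example, the two modes look roughly like the sketch below (the worker file name, connector file names, and the worker host are placeholders):

```shell
# Standalone: one worker config followed by up to N connector property files
/usr/local/confluent/bin/connect-standalone \
    connect-standalone.properties jdbc-source.properties elastic-sink.properties

# Distributed: POST each connector's JSON config to the running worker's REST API
curl -X POST -H "Content-Type: application/json" \
     --data @jdbc-source.json http://localhost:8083/connectors
```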

                  When I open vim /usr/local/confluent/etc/kafka-connect-jdbc/source-quickstart-sqlite.properties

                  First of all, that's for SQLite, not MySQL/Postgres. You don't need to use the quickstart files; they are only there for reference.

                  Again, all properties are well documented

                  https://docs.confluent.io/current/connect/kafka-connect-jdbc/index.html#connect-jdbc
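Adapted for MySQL instead of the SQLite quickstart, a minimal source config could look like the sketch below (endpoint, credentials, and table are placeholders). table.whitelist is where the table name goes, and the resulting topic is topic.prefix plus the table name; note that schema.pattern=dbo from the question is a SQL Server convention and generally does not apply to MySQL:

```properties
name=aurora-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:mysql://<aurora-endpoint>:3306/trf
connection.user=admin
connection.password=<password>
mode=incrementing
incrementing.column.name=id
table.whitelist=PANStatementInstanceLog
topic.prefix=trf-aurora-fspaudit-
```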

                  I do have the JDBC connector installed, and when I start it I get the below error

                  Here's more information about how you can debug that

                  https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector/
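One thing worth checking for the "No suitable driver" error: the commands in the question move the whole extracted Connector/J directory under the plugin path, but Connect only picks up jar files sitting directly in the connector's folder. A sketch of placing the jar itself (paths follow the question's layout; the exact jar name inside the archive may differ slightly):

```shell
# Copy the driver jar itself -- not the extracted directory -- into the
# kafka-connect-jdbc plugin folder, then restart the Connect worker.
cp mysql-connector-java-5.1.48/mysql-connector-java-5.1.48-bin.jar \
   /usr/local/confluent/share/java/kafka-connect-jdbc/
```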


                  As stated before, I would personally suggest using Debezium/CDC where possible

                  Debezium Connector for RDS Aurora


