object sqldb is not a member of com.microsoft.azure

'Object sqldb is not a member of com.microsoft.azure' is an error you can hit in Azure Databricks while accessing external servers, i.e. Azure SQL DB or SQL Server, for read/write operations. This post walks through the Databricks-side solution.

Databricks provides an in-built JDBC connector to connect to external SQL servers; it inserts rows in a row-by-row process. Another supported option is a Spark connector that enables bulk insertion into external DB servers.
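A minimal sketch of the row-by-row path, assuming df is a DataFrame already in your notebook; every server, database, table, and credential value below is a placeholder. The bulk-insert equivalent from the connector is sketched at the end of this post.

// Built-in Spark JDBC write: works without any extra library,
// but inserts rows one by one (placeholder values throughout).
df.write
  .format("jdbc")
  .option("url", "jdbc:sqlserver://<yourserver>.database.windows.net:1433;database=<yourdb>")
  .option("dbtable", "dbo.targettable")
  .option("user", "<username>")
  .option("password", "<password>")
  .mode("append")
  .save()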
 

Error context

Before going through the actual solution, here is some more information on the error context.

By default, Azure Databricks does not ship a bulkcopy support library for moving data between a Databricks notebook and external SQL servers. The notebook has to import the ‘com.microsoft.azure’ packages, and those imports only resolve once the backing library is installed on the cluster.

If you are getting the ‘sqldb’ error, it means all the other support libraries are already available to your notebook and only the JAR library below is missing.
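Concretely, the error surfaces on the notebook's import lines. These two imports come from the library in question and fail with the ‘sqldb’ error until the JAR is installed on the cluster:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._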

The JAR library supports the basic read/write operations along with ‘bulkcopy’ insertions into Azure SQL DB and SQL Server.
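As a rough sketch of those basic operations once the JAR is installed (all connection values are placeholders, and spark is the usual notebook session handle):

import org.apache.spark.sql.SaveMode
import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Placeholder connection details -- replace with your own.
val config = Config(Map(
  "url"          -> "<yourserver>.database.windows.net",
  "databaseName" -> "<yourdb>",
  "dbTable"      -> "dbo.sometable",
  "user"         -> "<username>",
  "password"     -> "<password>"
))

// Basic read into a DataFrame.
val df = spark.read.sqlDB(config)

// Basic (row-based) write back; appending to the same table only to
// keep the sketch short.
df.write.mode(SaveMode.Append).sqlDB(config)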

To solve this error, follow the steps below:

1. Open portal.azure.com in your browser
2. Select Azure resources from the Home page
3. Select Databricks workspaces from the Type drop-down
4. Navigate to your Databricks workspace
5. Navigate to the Databricks clusters
6. Select the cluster attached to your notebook (make sure the cluster is up and running in the Azure Databricks workspace)
7. From the cluster page, select the Libraries tab in the top menu
8. Select the JAR tab
9. Drag in the JAR file you have downloaded from GitHub (no download link here; search for the library by its exact name, as it is available in a few open GitHub repositories)
JAR file name:
azure-sqldb-spark-1.0.2-jar-with-dependencies.jar

10. Upload the JAR library to the cluster and install it
11. Once the library is installed, restart the Databricks cluster
12. Once restarted, run the notebook cell with the proper Azure SQL DB or SQL Server name, database name, table name, and login credentials (see the sketch after these steps)
13. Now, log in to the Azure SQL DB or SQL Server and verify the table data
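For step 12, a notebook cell along these lines exercises the bulkcopy path. Everything in the Config map is a placeholder for your own server, database, table, and credentials, and the batch-size/timeout values are only illustrative:

import com.microsoft.azure.sqldb.spark.config.Config
import com.microsoft.azure.sqldb.spark.connect._

// Placeholder connection and tuning values -- replace with your own.
val bulkCopyConfig = Config(Map(
  "url"               -> "<yourserver>.database.windows.net",
  "databaseName"      -> "<yourdb>",
  "dbTable"           -> "dbo.targettable",
  "user"              -> "<username>",
  "password"          -> "<password>",
  "bulkCopyBatchSize" -> "2500",
  "bulkCopyTableLock" -> "true",
  "bulkCopyTimeout"   -> "600"
))

// df is the DataFrame to land in the target table; bulkCopyToSqlDB is
// the bulk-insert method the JAR adds to DataFrame.
df.bulkCopyToSqlDB(bulkCopyConfig)

If the JAR is installed correctly, this cell runs without the import error and the rows appear in the target table.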

Conclusion

You can observe that, without any ‘sqldb’ error, the data transfer with bulkcopy is successful.

This SQL bulkcopy functionality is currently supported in Scala only; there is no Python support yet.


