Read External Location Using Trino + Iceberg + Postgres

Hi,
I am trying to read data from Azure ADLS Gen2 using Trino. The data was written via Spark + Iceberg using the Iceberg JDBC catalog. Can anyone guide me: is there a way to configure Azure Storage access when using the Iceberg JDBC catalog with the warehouse path on external ADLS Gen2? In other words, does Trino support the Iceberg JDBC catalog with an external file storage path such as ADLS Gen2? If yes, what needs to be taken care of, e.g. which extra properties, or perhaps adding extra jars inside the plugin directory?

My Trino catalog is:
etc/catalog/catalog.properties
        connector.name=iceberg
        iceberg.file-format=PARQUET
        iceberg.catalog.type=jdbc
        iceberg.jdbc-catalog.catalog-name=spark_iceberg_tutorial
        iceberg.jdbc-catalog.driver-class=org.postgresql.Driver
        iceberg.jdbc-catalog.connection-url=jdbc:postgresql://<ps-url>:5432/iceberg_tutorials
        iceberg.jdbc-catalog.connection-user=<uid>
        iceberg.jdbc-catalog.connection-password=<pwd>
        iceberg.jdbc-catalog.default-warehouse-dir=https://<Az-Storage-Ac-Name>.blob.core.windows.net/icebergdata?<Az-Details>

Exception while running Trino:

Caused by: Configuration property <Az-Storage-Ac>.dfs.core.windows.net not found.
at org.apache.hadoop.fs.azurebfs.AbfsConfiguration.getStorageAccountKey(AbfsConfiguration.java:342)
at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.initializeClient(AzureBlobFileSystemStore.java:814)
at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.<init>(AzureBlobFileSystemStore.java:151)
at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:108)
at io.trino.hdfs.TrinoFileSystemCache.createFileSystem(TrinoFileSystemCache.java:155)
at io.trino.hdfs.TrinoFileSystemCache$FileSystemHolder.createFileSystemOnce(TrinoFileSystemCache.java:293)
at io.trino.hdfs.TrinoFileSystemCache.getInternal(TrinoFileSystemCache.java:135)
at io.trino.hdfs.TrinoFileSystemCache.get(TrinoFileSystemCache.java:91)
at org.apache.hadoop.fs.ForwardingFileSystemCache.get(ForwardingFileSystemCache.java:38)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:479)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
at io.trino.hdfs.HdfsEnvironment.lambda$getFileSystem$0(HdfsEnvironment.java:103)
at io.trino.hdfs.authentication.NoHdfsAuthentication.doAs(NoHdfsAuthentication.java:25)
at io.trino.hdfs.HdfsEnvironment.getFileSystem(HdfsEnvironment.java:102)
at io.trino.hdfs.HdfsEnvironment.getFileSystem(HdfsEnvironment.java:96)
at io.trino.filesystem.hdfs.HdfsInputFile.openFile(HdfsInputFile.java:113)
at io.trino.filesystem.hdfs.HdfsInputFile.newStream(HdfsInputFile.java:69)
at io.trino.filesystem.tracing.Tracing.withTracing(Tracing.java:47)
at io.trino.filesystem.tracing.TracingInputFile.newStream(TracingInputFile.java:64)
at io.trino.plugin.iceberg.fileio.ForwardingInputFile.newStream(ForwardingInputFile.java:52)

It looks like there is a configuration problem in your catalog file. The exception is thrown from AbfsConfiguration.getStorageAccountKey, which suggests Trino cannot find a storage account key for <Az-Storage-Ac>.dfs.core.windows.net, i.e. no Azure credentials are configured in the catalog properties.
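Since the stack trace goes through the Hadoop-based io.trino.hdfs classes, the legacy Azure access-key properties should apply on this Trino version. A minimal sketch of the catalog file with those properties added; the abfss:// warehouse URI and the property values are my assumptions, since the ABFS driver in the trace resolves the dfs.core.windows.net endpoint rather than the blob.core.windows.net one in the original config:

```properties
connector.name=iceberg
iceberg.file-format=PARQUET
iceberg.catalog.type=jdbc
iceberg.jdbc-catalog.catalog-name=spark_iceberg_tutorial
iceberg.jdbc-catalog.driver-class=org.postgresql.Driver
iceberg.jdbc-catalog.connection-url=jdbc:postgresql://<ps-url>:5432/iceberg_tutorials
iceberg.jdbc-catalog.connection-user=<uid>
iceberg.jdbc-catalog.connection-password=<pwd>
# Assumed: an abfss:// URI instead of the https blob endpoint, since the
# ABFS driver expects <account>.dfs.core.windows.net
iceberg.jdbc-catalog.default-warehouse-dir=abfss://icebergdata@<Az-Storage-Ac-Name>.dfs.core.windows.net/warehouse
# Legacy (Hadoop-based) Azure access-key authentication, matching the
# io.trino.hdfs classes in the stack trace:
hive.azure.abfs-storage-account=<Az-Storage-Ac-Name>
hive.azure.abfs-access-key=<storage-account-access-key>
```

On newer Trino releases that use the native Azure file system, the equivalent would be `fs.native-azure.enabled=true` with `azure.auth-type=ACCESS_KEY` and `azure.access-key=...` instead of the `hive.azure.*` properties.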

The other possibility is a network issue: the Trino coordinator and workers cannot actually resolve the DNS entry for the storage account.
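One quick way to rule that out is a small DNS-resolution probe run on the coordinator and on each worker; a sketch in Python, with the storage-account host left as the placeholder from your config:

```python
import socket

def can_resolve(host: str) -> bool:
    """Return True if the host name resolves to at least one IP address."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# Replace the placeholder with the real storage account name, then run this
# on the Trino coordinator and on every worker node.
print(can_resolve("<Az-Storage-Ac>.dfs.core.windows.net"))
```

If any node prints False, fix name resolution (DNS, /etc/hosts, private endpoints) before looking further at the catalog configuration.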

Lastly, I am not sure whether the Iceberg JDBC catalog supports Azure Storage; the documentation for it on the Iceberg side is sparse.