Something I’ve always found challenging in PaaS Spark platforms, such as Databricks and Microsoft Fabric, is efficiently leveraging compute resources to maximize parallel job execution while minimizing platform costs. It’s straightforward to spin up a cluster and run a single job, but what’s the optimal approach when you need to run hundreds of jobs simultaneously? Should you use one large high-concurrency cluster, or a separate job cluster for each task?
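To make that trade-off concrete, here’s a minimal sketch of the shared-cluster side of the question: fanning many notebook runs out across a single high-concurrency cluster from a driver notebook. The notebook paths and worker count below are hypothetical, and `dbutils` is the helper object Databricks provides inside notebooks.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical notebook paths; substitute your own workspace paths.
notebook_paths = [f"/Jobs/transform_table_{i}" for i in range(100)]

def run_notebook(path: str) -> str:
    # Runs a child notebook on the same cluster and returns its exit value.
    # dbutils is available implicitly inside a Databricks notebook.
    return dbutils.notebook.run(path, 3600)

# A bounded thread pool shares one high-concurrency cluster across all jobs,
# instead of paying the startup cost of a separate job cluster per task.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(run_notebook, notebook_paths))
```

The opposite approach, one job cluster per task, buys isolation at the price of repeated cluster start-up time and cost, which is exactly the tension behind the question above.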
Unity Catalog introduces many new concepts in Databricks, particularly around security and governance. One security feature that Unity Catalog significantly improves is Row Level Security (hereafter referred to as RLS).
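To give a flavour of what UC-native RLS looks like, here is a minimal sketch using a Unity Catalog row filter: a SQL UDF that returns a boolean per row, attached to the table with `ALTER TABLE ... SET ROW FILTER`. All catalog, schema, table, and group names below are hypothetical.

```python
# Minimal sketch of Unity Catalog row-level security, run from a Databricks
# notebook where `spark` is available. All object names are hypothetical.

# A row filter is a SQL UDF that returns a BOOLEAN for each row.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.sales.us_only(region STRING)
    RETURN IF(is_account_group_member('regional_admins'), true, region = 'US')
""")

# Attach the filter: members of 'regional_admins' see every row,
# everyone else sees only rows where region = 'US'.
spark.sql("""
    ALTER TABLE main.sales.orders
    SET ROW FILTER main.sales.us_only ON (region)
""")
```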
Apache Spark offers tremendous capability, regardless of the implementation, be it Microsoft Fabric or Databricks. However, with such a vast toolkit comes the risk of using the wrong “tool in the shed” and running into unnecessary performance issues.
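One classic example of reaching for the wrong tool, sketched below under illustrative names: a row-at-a-time Python UDF where a built-in column expression would do.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()
df = spark.range(1_000_000).selectExpr("concat('user_', id) AS name")

# The wrong tool: a row-at-a-time Python UDF, which serializes every row
# out of the JVM to Python workers and back.
slow_upper = F.udf(lambda s: s.upper(), StringType())
df_slow = df.withColumn("name_upper", slow_upper("name"))

# The right tool: a built-in expression that stays in the JVM and is
# optimized by Catalyst with whole-stage code generation.
df_fast = df.withColumn("name_upper", F.upper("name"))
```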
TL;DR: For developers, Chocolatey is an essential tool for installing and managing software on Windows.
Ever wished you could add dynamic content, parameters, or a Key Vault secret reference to Linked Service properties that only accept static inputs in the Azure Data Factory or Azure Synapse UI? In this post, I’ll introduce you to a feature that’s often overlooked but incredibly handy for exactly these situations.
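For context, every Linked Service in ADF and Synapse is JSON under the hood, and that JSON accepts parameters, expressions, and Key Vault references even where the UI exposes only a static textbox. A minimal sketch for an Azure SQL linked service, with every name below (server, parameter, Key Vault linked service, secret) hypothetical:

```json
{
  "name": "LS_AzureSqlDb",
  "properties": {
    "type": "AzureSqlDatabase",
    "parameters": {
      "DatabaseName": { "type": "String" }
    },
    "typeProperties": {
      "connectionString": "Server=tcp:myserver.database.windows.net,1433;Initial Catalog=@{linkedService().DatabaseName};",
      "password": {
        "type": "AzureKeyVaultSecret",
        "store": {
          "referenceName": "LS_KeyVault",
          "type": "LinkedServiceReference"
        },
        "secretName": "SqlDbPassword"
      }
    }
  }
}
```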