If you’re gearing up to take the Databricks Machine Learning Associate exam, a structured study plan will help you prepare. PassQuestion offers a comprehensive set of Databricks Machine Learning Associate Certification Exam Questions that cover every topic you can expect to see on the actual exam. By practicing with these Databricks Machine Learning Associate Certification Exam Questions, you can assess your knowledge and identify the areas that need additional study. This methodical approach gives you the knowledge and confidence to pass the Databricks Machine Learning Associate exam on your first attempt.

Databricks Certified Machine Learning Associate Certification

The Databricks Certified Machine Learning Associate certification exam assesses an individual’s ability to use Databricks to perform basic machine learning tasks. This includes an ability to understand and use Databricks Machine Learning and its capabilities like AutoML, Feature Store, and select capabilities of MLflow. It also assesses the ability to make correct decisions in machine learning workflows and implement those workflows using Spark ML. Finally, the ability to understand advanced characteristics of scaling machine learning models is assessed. Individuals who pass this certification exam can be expected to complete basic machine learning tasks using Databricks and its associated tools.

About the Databricks Machine Learning Associate Exam

● Number of items: 45 multiple-choice questions
● Time limit: 90 minutes
● Registration fee: $200
● Languages: English
● Delivery method: Online Proctored
● Type: Proctored certification
● Test aids: None allowed.
● Prerequisite: None required; course attendance and six months of hands-on experience in Databricks are highly recommended
● Validity: 2 years
● Recommended experience: 6+ months of hands-on experience performing the machine learning tasks outlined in the exam guide

Databricks Certified Machine Learning Associate Exam Topics

Section 1: Databricks Machine Learning         29%

Databricks ML

  • Identify when a standard cluster is preferred over a single-node cluster and vice versa.
  • Connect a repo from an external Git provider to Databricks repos.
  • Commit changes from a Databricks Repo to an external Git provider.
  • Create a new branch and commit changes to an external Git provider.
  • Pull changes from an external Git provider back to a Databricks workspace.
  • Orchestrate multi-task ML workflows using Databricks jobs.

Databricks Runtime for Machine Learning

  • Create a cluster with the Databricks Runtime for Machine Learning.
  • Install a Python library to be available to all notebooks that run on a cluster.

AutoML

  • Identify the steps of the machine learning workflow completed by AutoML.
  • Identify how to locate the source code for the best model produced by AutoML.
  • Identify which evaluation metrics AutoML can use for regression problems.
  • Identify the key attributes of the data set using the AutoML data exploration notebook.
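
The following is a minimal sketch of driving AutoML for a regression problem from a notebook; the DataFrame train_df and the "price" label column are illustrative assumptions, not part of the exam guide.

    # Hedged sketch: run AutoML for a regression problem (train_df and "price" are made up).
    from databricks import automl

    summary = automl.regress(
        dataset=train_df,         # Spark DataFrame with features and the label column
        target_col="price",       # label to predict
        primary_metric="rmse",    # regression metrics include r2, mae, rmse, and mse
        timeout_minutes=30,
    )

    # AutoML generates a notebook per trial plus a data exploration notebook;
    # the best trial's notebook contains the source code for the best model.
    print(summary.best_trial.notebook_path)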

Feature Store

  • Describe the benefits of using Feature Store to store and access features for machine learning pipelines.
  • Create a feature store table.
  • Write data to a feature store table.
  • Train a model with features from a feature store table.
  • Score a model using features from a feature store table.
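
As a rough illustration of this workflow, the sketch below creates a feature table, writes to it, and builds a training set; the table name, keys, and DataFrames (features_df, new_features_df, labels_df) are hypothetical.

    # Hedged Feature Store sketch (table name, keys, and DataFrames are hypothetical).
    from databricks.feature_store import FeatureStoreClient, FeatureLookup

    fs = FeatureStoreClient()

    # Create a feature table keyed on customer_id and write features into it.
    fs.create_table(
        name="ml.customer_features",
        primary_keys=["customer_id"],
        df=features_df,
        description="Customer-level features",
    )
    fs.write_table(name="ml.customer_features", df=new_features_df, mode="merge")

    # Join label data with stored features to build a training set, then train on it.
    training_set = fs.create_training_set(
        df=labels_df,
        feature_lookups=[FeatureLookup(table_name="ml.customer_features",
                                       lookup_key="customer_id")],
        label="churn",
    )
    training_df = training_set.load_df()

Batch scoring against stored features is typically done with fs.score_batch(model_uri, df), which looks the features up automatically.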

Managed MLflow

  • Identify the best run using the MLflow Client API.
  • Manually log metrics, artifacts, and models in an MLflow Run.
  • Create a nested Run for deeper Tracking organization.
  • Locate the time a run was executed in the MLflow UI.
  • Locate the code that was executed with a run in the MLflow UI.
  • Register a model using the MLflow Client API.
  • Transition a model’s stage using the Model Registry UI page.
  • Transition a model’s stage using the MLflow Client API.
  • Request to transition a model’s stage using the Model Registry UI page.
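
The sketch below strings these MLflow tracking and registry calls together; the model, metric values, and registered-model name are placeholders.

    # Hedged MLflow tracking/registry sketch (model, values, and names are placeholders).
    import mlflow
    import mlflow.sklearn
    import numpy as np
    from mlflow.tracking import MlflowClient
    from sklearn.linear_model import LinearRegression

    model = LinearRegression().fit(np.array([[1.0], [2.0], [3.0]]), [1.0, 2.0, 3.0])

    with mlflow.start_run(run_name="parent") as run:
        mlflow.log_param("max_depth", 5)                       # manually log a parameter
        mlflow.log_metric("rmse", 0.87)                        # and a metric
        mlflow.log_text("run notes", "notes.txt")              # and an artifact
        mlflow.sklearn.log_model(model, artifact_path="model") # and the model itself
        with mlflow.start_run(run_name="child", nested=True):  # nested run for organization
            mlflow.log_metric("rmse", 0.85)

    client = MlflowClient()
    # Identify the best run in the experiment by a logged metric.
    best_run = client.search_runs(experiment_ids=[run.info.experiment_id],
                                  order_by=["metrics.rmse ASC"], max_results=1)[0]

    # Register the best run's model and transition its stage programmatically.
    mv = mlflow.register_model(f"runs:/{best_run.info.run_id}/model", "demo_model")
    client.transition_model_version_stage(name="demo_model", version=mv.version,
                                          stage="Staging")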

Section 2: ML Workflows            29%

Exploratory Data Analysis

  • Compute summary statistics on a Spark DataFrame using .summary()
  • Compute summary statistics on a Spark DataFrame using dbutils data summaries.
  • Remove outliers from a Spark DataFrame that are beyond or less than a designated threshold.
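
A small sketch of these EDA steps, assuming a Spark DataFrame df with a numeric "price" column (both are assumptions for illustration):

    # Hedged EDA sketch (df and the "price" column are assumptions).
    from pyspark.sql import functions as F

    df.summary().show()          # count, mean, stddev, min, quartiles, max per column
    dbutils.data.summarize(df)   # interactive data profile inside a Databricks notebook

    # Remove outliers beyond a designated threshold, e.g. keep 0 < price <= 10000.
    clean_df = df.filter((F.col("price") > 0) & (F.col("price") <= 10000))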

Feature Engineering

  • Identify why it is important to add indicator variables for missing values that have been imputed or replaced.
  • Describe when replacing missing values with the mode value is an appropriate way to handle missing values.
  • Compare and contrast imputing missing values with the mean value or median value.
  • Impute missing values with the mean or median value.
  • Describe the process of one-hot encoding categorical features.
  • Describe why one-hot encoding categorical features can be inefficient for tree-based models.
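
To make these steps concrete, here is a hedged sketch; the "age" and "country" columns and the DataFrame df are invented for illustration.

    # Imputation with an indicator column, plus one-hot encoding (columns are made up).
    from pyspark.sql import functions as F
    from pyspark.ml.feature import Imputer, StringIndexer, OneHotEncoder

    # Flag rows whose value was missing before imputation so the model keeps that signal.
    df = df.withColumn("age_was_missing", F.col("age").isNull().cast("int"))

    # Median imputation is less sensitive to outliers than mean imputation.
    imputer = Imputer(strategy="median", inputCols=["age"], outputCols=["age_imputed"])

    # One-hot encode a categorical column; note this can be inefficient for tree-based models.
    indexer = StringIndexer(inputCol="country", outputCol="country_idx", handleInvalid="keep")
    encoder = OneHotEncoder(inputCols=["country_idx"], outputCols=["country_ohe"])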

Training

  • Perform random search as a method for tuning hyperparameters.
  • Describe the basics of Bayesian methods for tuning hyperparameters.
  • Describe why parallelizing sequential/iterative models can be difficult.
  • Understand the balance between compute resources and parallelization.
  • Parallelize the tuning of hyperparameters using Hyperopt and SparkTrials.
  • Identify the usage of SparkTrials as the tool that enables parallelization for tuning single-node models.
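
As a rough example of the last two bullets, the sketch below tunes a single-node scikit-learn model with Hyperopt and SparkTrials; the dataset and search ranges are illustrative.

    # Hedged sketch: parallel tuning of a single-node model with Hyperopt + SparkTrials.
    from hyperopt import fmin, tpe, hp, SparkTrials
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=500, n_features=10, random_state=42)

    def objective(params):
        model = RandomForestRegressor(n_estimators=int(params["n_estimators"]),
                                      max_depth=int(params["max_depth"]),
                                      random_state=42)
        # Hyperopt minimizes the objective, so return the negated CV score.
        return -cross_val_score(model, X, y, cv=3).mean()

    search_space = {
        "n_estimators": hp.quniform("n_estimators", 50, 300, 50),
        "max_depth": hp.quniform("max_depth", 3, 10, 1),
    }

    # SparkTrials distributes the trials across the cluster's worker nodes.
    best_params = fmin(fn=objective, space=search_space, algo=tpe.suggest,
                       max_evals=16, trials=SparkTrials(parallelism=4))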

Evaluation and Selection

  • Describe cross-validation and the benefits and downsides of using cross-validation over a train-validation split.
  • Perform cross-validation as a part of model fitting.
  • Identify the number of models being trained in conjunction with a grid-search and cross-validation process.
  • Describe Recall and F1 as evaluation metrics.
  • Identify the need to exponentiate the RMSE when the log of the label variable is used.
  • Identify that the RMSE has not been exponentiated when the log of the label variable is used.
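
A hedged sketch of grid search with cross-validation in Spark ML follows; assembled_df is an assumed DataFrame that already contains "features" and "label" columns.

    # Cross-validation with a parameter grid (assembled_df is a hypothetical DataFrame).
    from pyspark.ml.regression import LinearRegression
    from pyspark.ml.evaluation import RegressionEvaluator
    from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

    lr = LinearRegression(featuresCol="features", labelCol="label")
    grid = (ParamGridBuilder()
            .addGrid(lr.regParam, [0.01, 0.1])
            .addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])
            .build())
    evaluator = RegressionEvaluator(metricName="rmse")

    # 3 folds x 6 parameter combinations = 18 models trained during the search.
    cv = CrossValidator(estimator=lr, estimatorParamMaps=grid,
                        evaluator=evaluator, numFolds=3)
    cv_model = cv.fit(assembled_df)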

Section 3: Spark ML           33%

Distributed ML Concepts

  • Describe some of the difficulties associated with distributing machine learning models.
  • Identify Spark ML as a key library for distributing traditional machine learning work.
  • Identify scikit-learn as a single-node solution relative to Spark ML.

Spark ML Modeling APIs

  • Split data using Spark ML.
  • Identify key gotchas when splitting distributed data using Spark ML.
  • Train / evaluate a machine learning model using Spark ML.
  • Describe Spark ML estimator and Spark ML transformer.
  • Develop a Pipeline using Spark ML.
  • Identify key gotchas when developing a Spark ML Pipeline.
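
A minimal end-to-end sketch of these APIs, with an assumed DataFrame df and invented column names:

    # Split, pipeline, train, and evaluate with Spark ML (df and columns are made up).
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression
    from pyspark.ml.evaluation import RegressionEvaluator

    # Gotcha: splitting a distributed DataFrame is only reproducible if its partitioning
    # is stable, so cache (or otherwise fix) the data before calling randomSplit.
    train_df, test_df = df.cache().randomSplit([0.8, 0.2], seed=42)

    assembler = VectorAssembler(inputCols=["bedrooms", "bathrooms"], outputCol="features")
    lr = LinearRegression(featuresCol="features", labelCol="price")  # an estimator
    pipeline = Pipeline(stages=[assembler, lr])                      # assembler is a transformer

    pipeline_model = pipeline.fit(train_df)    # fit() returns a PipelineModel (a transformer)
    preds = pipeline_model.transform(test_df)
    rmse = RegressionEvaluator(labelCol="price", metricName="rmse").evaluate(preds)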

Hyperopt

  • Identify Hyperopt as a solution for parallelizing the tuning of single-node models.
  • Identify Hyperopt as a solution for Bayesian hyperparameter inference for distributed models.
  • Parallelize the tuning of hyperparameters for Spark ML models using Hyperopt and Trials.
  • Identify the relationship between the number of trials and model accuracy.
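
For a distributed Spark ML model, each trial already uses the whole cluster, so Hyperopt is typically run with its default Trials object rather than SparkTrials. Below is a hedged sketch; train_df, val_df, and the column names are hypothetical.

    # Hedged sketch: Bayesian tuning of a Spark ML model with Hyperopt's default Trials.
    from hyperopt import fmin, tpe, hp, Trials
    from pyspark.ml.regression import LinearRegression
    from pyspark.ml.evaluation import RegressionEvaluator

    def objective(params):
        lr = LinearRegression(featuresCol="features", labelCol="price",
                              regParam=params["regParam"])
        model = lr.fit(train_df)   # each trial's training is already distributed
        return RegressionEvaluator(labelCol="price", metricName="rmse") \
            .evaluate(model.transform(val_df))

    best = fmin(fn=objective,
                space={"regParam": hp.loguniform("regParam", -5, 0)},
                algo=tpe.suggest, max_evals=8, trials=Trials())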

Pandas API on Spark

  • Describe key differences between Spark DataFrames and Pandas on Spark DataFrames.
  • Identify the usage of an InternalFrame making Pandas API on Spark not quite as fast as native Spark.
  • Identify Pandas API on Spark as a solution for scaling data pipelines without much refactoring.
  • Convert data between a PySpark DataFrame and a Pandas on Spark DataFrame.
  • Identify how to import and use the Pandas on Spark APIs.
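
A small sketch of importing the pandas API on Spark and converting between the two DataFrame types:

    # Hedged sketch: pandas API on Spark conversions.
    import pyspark.pandas as ps

    psdf = ps.DataFrame({"a": [1, 2, 3], "b": [4.0, 5.0, 6.0]})  # pandas-like, runs on Spark
    psdf["c"] = psdf["a"] * psdf["b"]                            # familiar pandas syntax

    sdf = psdf.to_spark()           # pandas-on-Spark DataFrame -> PySpark DataFrame
    psdf_again = sdf.pandas_api()   # PySpark DataFrame -> pandas-on-Spark DataFrame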

Pandas UDFs/Function APIs

  • Identify Apache Arrow as the key to Pandas <-> Spark conversions.
  • Describe why iterator UDFs are preferred for large data.
  • Apply a model in parallel using a Pandas UDF.
  • Identify that pandas code can be used inside of a UDF function.
  • Train / apply group-specific models using the Pandas Function API.
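
The sketch below applies a single-node model in parallel with a pandas UDF and trains group-specific models with applyInPandas; df, model, and the column names are assumptions for illustration.

    # Hedged sketch of pandas UDFs and the pandas Function API (df, model, columns assumed).
    import pandas as pd
    from pyspark.sql.functions import pandas_udf

    @pandas_udf("double")
    def predict_udf(feature: pd.Series) -> pd.Series:
        # model is an already-trained single-node model; for very large data an iterator
        # UDF (Iterator[pd.Series] -> Iterator[pd.Series]) avoids reloading it per batch.
        return pd.Series(model.predict(feature.to_frame()))

    scored = df.withColumn("prediction", predict_udf("feature_col"))

    # Train one model per group with the pandas Function API.
    def train_per_group(pdf: pd.DataFrame) -> pd.DataFrame:
        from sklearn.linear_model import LinearRegression
        lr = LinearRegression().fit(pdf[["feature_col"]], pdf["label"])
        return pd.DataFrame({"store_id": [pdf["store_id"].iloc[0]],
                             "r2": [lr.score(pdf[["feature_col"]], pdf["label"])]})

    results = df.groupBy("store_id").applyInPandas(
        train_per_group, schema="store_id long, r2 double")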

Section 4: Scaling ML Models         9%

Model Distribution

  • Describe how Spark scales linear regression.
  • Describe how Spark scales decision trees.

Ensembling Distribution

  • Describe the basic concepts of ensemble learning.
  • Compare and contrast bagging, boosting, and stacking.

View Online Databricks Certified Machine Learning Associate Free Questions

1. A data scientist wants to efficiently tune the hyperparameters of a scikit-learn model in parallel. They elect to use the Hyperopt library to facilitate this process.
Which of the following Hyperopt tools provides the ability to optimize hyperparameters in parallel?
A.fmin
B.SparkTrials
C.quniform
D.search_space
E.objective_function
Answer: B

2. An organization is developing a feature repository and is electing to one-hot encode all categorical feature variables. A data scientist suggests that the categorical feature variables should not be one-hot encoded within the feature repository.
Which of the following explanations justifies this suggestion?
A.One-hot encoding is a potentially problematic categorical variable strategy for some machine learning algorithms.
B.One-hot encoding is dependent on the target variable’s values which differ for each application.
C.One-hot encoding is computationally intensive and should only be performed on small samples of training sets for individual machine learning problems.
D.One-hot encoding is not a common strategy for representing categorical feature variables numerically.
Answer: A

3. A data scientist has created a linear regression model that uses log(price) as a label variable. Using this model, they have performed inference, and the predictions and actual label values are in the Spark DataFrame preds_df.
They are using the following code block to evaluate the model:
regression_evaluator.setMetricName("rmse").evaluate(preds_df)
Which of the following changes should the data scientist make to evaluate the RMSE in a way that is comparable with price?
A.They should exponentiate the computed RMSE value
B.They should take the log of the predictions before computing the RMSE
C.They should evaluate the MSE of the log predictions to compute the RMSE
D.They should exponentiate the predictions before computing the RMSE
Answer: D
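
For illustration, a hedged sketch of the chosen answer, assuming preds_df contains a log-scale "prediction" column alongside the original "price" column:

    # Exponentiate the log-scale predictions so the RMSE is in price units.
    from pyspark.sql import functions as F
    from pyspark.ml.evaluation import RegressionEvaluator

    exp_preds_df = preds_df.withColumn("prediction", F.exp("prediction"))
    regression_evaluator = RegressionEvaluator(predictionCol="prediction",
                                               labelCol="price", metricName="rmse")
    rmse = regression_evaluator.evaluate(exp_preds_df)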

4. A data scientist has a Spark DataFrame spark_df. They want to create a new Spark DataFrame that contains only the rows from spark_df where the value in column discount is less than or equal to 0.
Which of the following code blocks will accomplish this task?
A.spark_df.loc[:, spark_df['discount'] <= 0]
B.spark_df[spark_df['discount'] <= 0]
C.spark_df.filter(col('discount') <= 0)
D.spark_df.loc[spark_df['discount'] <= 0, :]
Answer: C

5. A data scientist has written a feature engineering notebook that utilizes the pandas library. As the size of the data processed by the notebook increases, the notebook’s runtime increases drastically.
Which of the following tools can the data scientist use to spend the least amount of time refactoring their notebook to scale with big data?
A.PySpark DataFrame API
B.pandas API on Spark
C.Spark SQL
D.Feature Store
Answer: B