CPSC 330 Lecture 3: ML fundamentals

Varada Kolhatkar

Announcements

  • Homework 2 (hw2) has been released (Due: Sept 16, 11:59pm)
    • You are welcome to broadly discuss it with your classmates but final answers and submissions must be your own.
    • Group submissions are not allowed for this assignment.
  • Advice on keeping up with the material
    • Practice!
    • Start early on homework assignments.
  • If you are still on the waitlist, it’s your responsibility to keep up with the material and submit assignments.
  • Last day to drop without a W standing: Sept 15

iClicker 3.1

Clicker cloud join link: https://join.iclicker.com/FZMQ

Select all of the following statements which are TRUE.

    1. A decision tree model with no maximum depth (the default max_depth=None in sklearn) is likely to perform very well on the deployment data.
    2. Data splitting helps us assess how well our model would generalize.
    3. Deployment data is scored only once.
    4. Validation data could be used for hyperparameter optimization.
    5. It’s recommended that data be shuffled before splitting it into train and test sets.

iClicker 3.2

Clicker cloud join link: https://join.iclicker.com/FZMQ

Select all of the following statements which are TRUE.

    1. \(k\)-fold cross-validation calls fit \(k\) times.
    2. We use cross-validation to get a more robust estimate of model performance.
    3. If the mean train accuracy is much higher than the mean cross-validation accuracy, it’s likely to be a case of overfitting.
    4. The fundamental tradeoff of ML states that as training error goes down, validation error goes up.
    5. A decision stump on a complicated classification problem is likely to underfit.

Recap from videos

  • Why do we split the data? What are train/valid/test splits?
  • What are the benefits of cross-validation?
  • What is underfitting and overfitting?
  • What’s the fundamental trade-off in supervised machine learning?
  • What is the golden rule of machine learning?

Summary of train, validation, test, and deployment data

|            | fit | score | predict |
|------------|-----|-------|---------|
| Train      | ✔️  | ✔️    | ✔️      |
| Validation |     | ✔️    | ✔️      |
| Test       |     | once  | once    |
| Deployment |     |       | ✔️      |
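
To make the table concrete, here is a minimal sketch of how each split is used; the synthetic dataset, split sizes, and max_depth below are placeholder choices, not the course's actual data. We fit on the train split, score on train and validation as often as we like, and score the test split only once at the very end.

```python
# A minimal sketch of how each split is used; the synthetic dataset, split
# sizes, and max_depth are placeholder choices.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=123)

# Split off the test set first, then carve a validation set out of the rest.
X_trainvalid, X_test, y_trainvalid, y_test = train_test_split(
    X, y, test_size=0.2, random_state=123
)
X_train, X_valid, y_train, y_valid = train_test_split(
    X_trainvalid, y_trainvalid, test_size=0.25, random_state=123
)

model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)                            # fit: train split only
print("train score:", model.score(X_train, y_train))   # score train as often as needed
print("valid score:", model.score(X_valid, y_valid))   # score validation as often as needed
print("test score:", model.score(X_test, y_test))      # score test only once, at the end
```

Splitting off the test set before doing anything else keeps it out of all modelling decisions.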

Cross validation
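
In \(k\)-fold cross-validation we split the training data into \(k\) folds, call fit \(k\) times (each time validating on a different held-out fold), and average the scores. A minimal sketch with scikit-learn's cross_validate; the dataset and model below are placeholders:

```python
# A minimal sketch of 5-fold cross-validation; the dataset and model are placeholders.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=123)

# cv=5: fit is called 5 times, each time scoring on a different held-out fold.
scores = cross_validate(
    DecisionTreeClassifier(max_depth=3), X, y, cv=5, return_train_score=True
)
print(pd.DataFrame(scores).mean())  # mean fit_time, score_time, train_score, test_score
```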

Overfitting and underfitting

  • An overfit model matches the training set so closely that it fails to make correct predictions on new unseen data.
  • An underfit model is too simple and does not make good predictions even on the training data.
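
To make this concrete, here is a hedged sketch (the synthetic dataset and the two depth settings are placeholder choices): a decision stump typically underfits, while an unrestricted tree typically overfits, which shows up as a large gap between the mean train and cross-validation scores.

```python
# A sketch contrasting an underfit model (a decision stump, max_depth=1) with
# an overfit one (an unrestricted tree, max_depth=None); the dataset and
# depths are placeholder choices.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_informative=10, random_state=123)

for depth in [1, None]:  # 1 = stump, None = grow until the leaves are pure
    scores = cross_validate(
        DecisionTreeClassifier(max_depth=depth, random_state=123),
        X, y, cv=5, return_train_score=True,
    )
    print(
        f"max_depth={depth}: "
        f"mean train = {scores['train_score'].mean():.2f}, "
        f"mean cv = {scores['test_score'].mean():.2f}"
    )
```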

The fundamental tradeoff

As you increase the model complexity, training score tends to go up and the gap between train and validation scores tends to go up.

  • Underfitting: Both accuracies rise
  • Sweet spot: Validation accuracy peaks
  • Overfitting: Training \(\uparrow\), Validation \(\downarrow\)
  • Tradeoff: Balance complexity to avoid both
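
A minimal sketch of the tradeoff, using max_depth as a stand-in for model complexity (the synthetic dataset and the depth range are placeholders): the mean train score keeps climbing as depth grows, while the mean cross-validation score peaks somewhere in the middle and then falls off.

```python
# A sketch of the fundamental tradeoff: sweep max_depth and compare mean
# train vs. mean cross-validation scores; dataset and depth range are placeholders.
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_informative=10, random_state=123)

results = []
for depth in range(1, 16):
    scores = cross_validate(
        DecisionTreeClassifier(max_depth=depth, random_state=123),
        X, y, cv=5, return_train_score=True,
    )
    results.append({
        "max_depth": depth,
        "mean_train": scores["train_score"].mean(),
        "mean_cv": scores["test_score"].mean(),
    })

# The mean_train column keeps rising with depth; look for where mean_cv peaks.
print(pd.DataFrame(results))
```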

The golden rule

  • Although our primary concern is the model’s performance on the test data, this data should not influence the training process in any way.

[Image: illustration of the golden rule; generated by ChatGPT 5]

  • Test data = final exam
  • You can practice all you want with training/validation data
  • But never peek at the test set before evaluation
  • Otherwise, it’s like sneaking answers before the exam \(\rightarrow\) not a real assessment of your learning.
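
One way to respect the golden rule in code, sketched under the assumption that we are tuning max_depth (the dataset and the search range are placeholders): split off the test set first, do all model selection with cross-validation on the training portion only, and score the test set exactly once at the end.

```python
# A sketch of respecting the golden rule: the test set is split off first,
# all tuning uses cross-validation on the training portion, and the test set
# is scored exactly once at the very end. Dataset and depth range are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=123
)

# Hyperparameter selection uses only the training data.
depths = list(range(1, 11))
cv_means = [
    cross_val_score(
        DecisionTreeClassifier(max_depth=d, random_state=123), X_train, y_train, cv=5
    ).mean()
    for d in depths
]
best_depth = depths[int(np.argmax(cv_means))]

final_model = DecisionTreeClassifier(max_depth=best_depth, random_state=123)
final_model.fit(X_train, y_train)
print("test score (computed only once):", final_model.score(X_test, y_test))
```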

Class demo