Jan 31, 2016 - 7 MIN READ
Why I’m Learning Machine Learning


In 2016, I’m documenting my journey through Andrew Ng’s Machine Learning course—building intuition, writing Octave code, and learning how to think in data.

Axel Domingues


In 2016, “Machine Learning” is one of those terms that keeps popping up everywhere: recommendations, fraud detection, ad targeting, even self-driving cars. It’s exciting… and a little frustrating.

Exciting because it feels like a new superpower for software.

Frustrating because a lot of the discussion sounds like magic.

This blog series is my attempt to remove the magic. I’m going through Andrew Ng’s Machine Learning course and turning the lectures + exercises into something practical: notes, intuition, and small implementations that I can actually reason about.


What I Want From ML (as a software engineer)

I’m not learning ML to collect buzzwords. I’m learning it because I want to be able to:

  • Understand the fundamentals: what’s actually happening when a model “learns”
  • Build from first principles: implement core algorithms and know what can break
  • Make tradeoffs like an engineer: accuracy vs speed vs simplicity vs maintainability
  • Ship ML features with the same discipline we apply to software: testing, versioning, reproducibility

This series is written in a 2016 context: the course uses Octave/Matlab, the focus is on fundamentals, and “deep learning” is becoming a bigger deal, but I’m starting with the basics on purpose.

Why Andrew Ng’s Course

There are plenty of ML resources, but this one has a few things I really value:

  1. It’s math-friendly without being math-only
    You get the intuition, then you implement.
  2. It prioritizes core patterns
    Gradient descent. Regularization. Bias vs variance. Model selection.
  3. The exercises are “small but real”
    You don’t just watch—you code, debug, and see your model behave.

My rule for this journey: If I can’t implement it and explain it in plain words, I don’t really understand it yet.

How This Blog Series Will Work

Each article will follow a consistent structure so it stays useful later (and not just “random notes”):

  • Concept — what the idea is and why it matters
  • Intuition — how to think about it visually or mentally
  • Implementation — what I coded (Octave), with screenshots/plots when possible
  • Engineering Notes — what went wrong, what I’d do differently, what I’d automate

I’m keeping the code and outputs versioned so the learning is reproducible.


My Tooling (2016-style)

This course is built around Octave (the open-source MATLAB alternative). It’s not the same ecosystem as Python, but it’s perfect for learning because it forces you to focus on:

  • vectorization
  • matrix operations
  • cost functions and gradients
  • “shape debugging” (which is a real skill)
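
To make that concrete, here’s the kind of shape-conscious, vectorized code the course pushes you toward. This is a sketch with made-up toy data, not a course solution:

```octave
% Toy data: m examples, 1 feature plus an intercept column
m = 5;
X = [ones(m, 1), (1:m)'];        % m x 2 design matrix
y = 2 + 3 * (1:m)';              % m x 1 targets
theta = [0; 0];                  % 2 x 1 parameter vector

% Loop version: easy to write, slow, and hides the shapes
J_loop = 0;
for i = 1:m
  J_loop = J_loop + (X(i, :) * theta - y(i))^2;
end
J_loop = J_loop / (2 * m);

% Vectorized version: one line, but the shapes must line up
errors = X * theta - y;          % m x 1
J_vec = (errors' * errors) / (2 * m);

% If these disagree, something is wrong with the shapes
printf("loop: %f, vectorized: %f\n", J_loop, J_vec);
```

When the two versions disagree, that’s “shape debugging” in action: a transposed matrix or a row-vs-column mixup shows up immediately.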

  1. Install Octave and verify it runs
    At minimum, I want a working CLI and plots.
  2. Clone my exercises repository locally
    I keep each exercise in its own folder, along with notes and outputs.
  3. Commit after every “milestone”
    For example: after implementing the cost function, after gradient descent, after passing tests.
  4. Save plots/screenshots for each exercise
    The point is to show learning progress, not just final code.
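
Verifying the install doesn’t take much. Something like the following confirms the CLI works and that plotting can write a file headlessly (flags are standard Octave options, but check against your installed version):

```shell
# Confirm Octave is on the PATH and report its version
octave --version

# Run a one-liner headlessly and write a plot to disk
octave --eval "x = linspace(0, 2*pi, 100); plot(x, sin(x)); print('-dpng', 'sanity-plot.png');"

# If sanity-plot.png exists, both the CLI and plotting work
ls -l sanity-plot.png
```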

If you’re reading this from the future: yes, I know Python + NumPy is the obvious route for production. I’m starting with Octave because the course is structured that way and because it teaches linear algebra thinking fast.

What I Expect to Learn (and Prove)

This isn’t just a personal learning log—it’s also a portfolio artifact. By the end of the course, I want to demonstrate I can:

  • implement the major supervised learning algorithms (linear/logistic regression, neural nets, SVMs)
  • tune models using validation curves and learning curves
  • reason about overfitting and apply regularization properly
  • understand unsupervised learning (K-means, PCA)
  • build intuition for anomaly detection and recommenders

And importantly: explain these ideas clearly enough to teach them.


The “Engineer’s Checklist” I’m Keeping While Learning

I’m treating each model like a system:

  • Data sanity checks (ranges, scaling, weird outliers)
  • Baseline first (simple model before fancy model)
  • Training diagnostics (learning curves, validation curves)
  • Reproducibility (fixed random seeds when relevant)
  • Readable code (even in Octave)
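
As a concrete example of the first two checklist items, a quick data-sanity pass in Octave might look like this. The data is invented and the 3-sigma threshold is arbitrary; the point is the habit, not the numbers:

```octave
% Fake feature matrix: 100 examples, 3 features on wildly different scales
X = [randn(100, 1), 1000 * randn(100, 1), 0.001 * rand(100, 1)];

% Ranges: very different scales are a hint that normalization is needed
printf("min per feature: "); disp(min(X));
printf("max per feature: "); disp(max(X));

% Mean-normalize and scale by standard deviation
mu = mean(X);
sigma = std(X);
X_norm = (X - mu) ./ sigma;

% Crude outlier check: count values beyond 3 standard deviations
outliers = sum(abs(X_norm) > 3);
printf("outliers per feature: "); disp(outliers);
```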

This mindset is what I want to carry into real-world ML projects.


Resources I’m Using

Andrew Ng — Machine Learning (Coursera)

The main course driving this series: lectures + Octave exercises + practical intuition.

GNU Octave

My “learning environment” for implementing the algorithms with matrix-first thinking.

My Exercises Repository

I’ll link the exact repo path for the ML exercises here (and reference it from each post).


What’s Next

The next post is where things get real: Linear Regression from scratch—cost function, gradient descent, and plots that show the model converging.
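
As a preview of the shape of that post, here’s the vectorized gradient-descent update I’ll be building up to. This is a sketch of the general update rule, not the exercise solution:

```octave
% One gradient-descent step for linear regression, vectorized:
%   theta := theta - (alpha / m) * X' * (X * theta - y)
% Usage: theta = gradient_step(X, y, theta, 0.01);
function theta = gradient_step(X, y, theta, alpha)
  m = length(y);
  theta = theta - (alpha / m) * (X' * (X * theta - y));
end
```

Running this in a loop while logging the cost at each step is what produces those convergence plots.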

