Nov 26, 2017 - 13 MIN READ
LSTMs - Engineering Memory into the Network

After vanilla RNNs taught me why gradients collapse through time, LSTMs finally felt like an engineered solution - keep the memory path stable, and control it with gates.

Axel Domingues

Last month’s conclusion was painfully clear:

Vanilla RNNs don’t just “struggle” with long-term dependencies — they’re structurally bad at training them.

I could get an RNN to learn local patterns (next character, short context), but the moment I needed it to remember something far back in the sequence, learning simply didn’t reach that far.

This month was the first time in 2017 I felt the “deep learning revolution” in my hands:

LSTMs are not a small tweak. They’re an architectural response to an optimization problem.

What you’ll learn

Why LSTMs are an engineered response to RNN training failure: a stable memory path + gates that control access.

The 3 ideas to keep

  1. A memory lane that survives time
  2. Gates as control logic
  3. A cleaner path for gradients flowing backward from the loss

How to use this post

Read once for intuition, then reuse the “Engineering Notes” checklist when implementing/debugging.


The Shift: Stop Asking SGD to Perform Miracles

In 2016, when something didn’t learn, I had a pretty reliable playbook:

  • check the data
  • check the cost function
  • check the gradients
  • tune the learning rate
  • regularize
  • add features

With RNNs, I did all of that… and still hit a wall.

That’s when it clicked:

Sometimes the right solution isn’t “a better optimizer”.
It’s “a model design that makes optimization possible”.

Try the classic playbook

Data, loss, gradients, learning rate, regularization — the usual fixes.

Notice the structural wall

If the model cannot learn long-range dependencies even when everything else is "reasonable", the problem isn't just tuning.

Treat architecture as the fix

Sometimes the right move is: change the structure so optimization becomes possible.

LSTMs are exactly that.

This is why I’m calling it engineering memory.

It’s not “the network magically learns to remember.”
It’s “we give it a stable memory pathway and let training learn when to write/read/forget.”


The Intuition: A Memory Lane With Gates

When I first read about LSTMs, the gates sounded like ceremony:

  • input gate
  • forget gate
  • output gate

But after October, the gating idea became obvious:

If gradients die through time, build a path where they don’t.

Two flows are happening at once:
  • Forward: information moves through time (past → future).
  • Backward: gradients move from the loss back toward earlier time steps.
LSTMs matter because they create a path where that backward learning signal can survive longer.

In my head, I now picture LSTMs like this:

  • There’s a memory lane that’s designed to stay stable across time.
  • There are controlled doors (gates) that decide:
    • what to store
    • what to erase
    • what to expose as output

So instead of forcing the hidden state to handle both:

  • short-term processing
  • long-term storage

…the LSTM separates those roles.

That separation is the entire point.


What “Long-Term Dependency” Means (In My Own Debugging Terms)

A long-term dependency isn’t abstract to me anymore.

It’s when:

  • the correct output at time t
  • depends on something that happened at time t - 50 (or t - 200)

A vanilla RNN has to keep that information alive by repeatedly transforming it through the same recurrent weights.

That’s exactly where things go wrong:

  • the signal gets diluted or distorted
  • gradients stop flowing
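
To make "diluted" concrete, here's a throwaway NumPy sketch (arbitrary sizes and weight scale, nothing trained): push a vector through the same recurrent transformation 50 times and look at what's left of it.

```python
import numpy as np

rng = np.random.RandomState(0)
W = rng.randn(64, 64) * 0.1      # one shared recurrent weight matrix (arbitrary small scale)
h0 = rng.randn(64)               # the "thing that happened" at t - 50

h = h0
for _ in range(50):              # fifty steps of h_t = tanh(W @ h_{t-1})
    h = np.tanh(W @ h)

print(np.linalg.norm(h0), np.linalg.norm(h))
# With small weights the signal shrinks toward zero; scale W up and it
# saturates or explodes instead. Either way, "pass it through the same
# transform 50 times" is not a preservation strategy.
```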

LSTM says:

Don’t rely on repeated transformation to preserve memory. Preserve memory directly, and learn controlled updates.


The Three Gates (Explained Like I Would Explain a System)

I’m intentionally avoiding equations here — because what mattered to me wasn’t the derivation.
It was the control logic.

1) Forget gate: “Is this memory still useful?”

This is the gate that made me stop thinking of memory as passive.

The network can actively decide:

  • “keep this”
  • “fade this out”
  • “delete this”

That’s huge for sequences where context changes.

The forget gate is also the piece that made LSTMs feel robust. It's a built-in defense against "my memory cell is full of old junk."

2) Input gate: “Should I write this new information?”

This prevents the cell from being overwritten all the time.

In practice, it means:

  • the model can ignore noise
  • it can store something only when it’s confident it matters

3) Output gate: “Should I expose the memory right now?”

This gate separates two things I had never bothered to distinguish before:

  • Having memory
  • Using memory

The LSTM can store something for later without constantly outputting it.

That helps training too, because it avoids forcing every internal state to be immediately “useful” for prediction.
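
For reference, here is the whole cell written out as a plain NumPy step, with each line labeled by the gate it implements. This is the standard formulation (minor variants exist, e.g. peephole connections); the parameter names, sizes, and the toy usage at the end are just mine.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM time step (no batching). Each W is shaped (hidden, input + hidden)."""
    z = np.concatenate([x, h_prev])                 # gates look at new input + previous output

    f = sigmoid(params["Wf"] @ z + params["bf"])    # forget gate: is this memory still useful?
    i = sigmoid(params["Wi"] @ z + params["bi"])    # input gate: should I write this?
    g = np.tanh(params["Wg"] @ z + params["bg"])    # candidate content to (maybe) write
    o = sigmoid(params["Wo"] @ z + params["bo"])    # output gate: should I expose memory now?

    c = f * c_prev + i * g                          # the memory lane: gated, mostly additive
    h = o * np.tanh(c)                              # what the rest of the network actually sees
    return h, c

# Toy usage with random parameters, just to check shapes.
rng = np.random.RandomState(0)
n_in, n_hid = 10, 32
params = {k: rng.randn(n_hid, n_in + n_hid) * 0.1 for k in ["Wf", "Wi", "Wg", "Wo"]}
params.update({k: np.zeros(n_hid) for k in ["bf", "bi", "bg", "bo"]})
h, c = lstm_step(rng.randn(n_in), np.zeros(n_hid), np.zeros(n_hid), params)
```

The line to stare at is `c = f * c_prev + i * g`: the memory lane is updated by element-wise gating and addition, not by being forced through yet another full transformation of itself.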


Why This Solves the RNN Training Pain (Without Pretending It’s Magic)

October’s failure mode was:

  • gradients shrink or explode across time
  • training becomes unstable
  • long-term dependencies are basically unreachable

LSTMs don’t eliminate difficulty, but they change the game:

  • the memory lane is designed to remain stable
  • gates allow controlled interaction with that lane
  • training has an easier job because gradients can travel farther back through time

In other words:

LSTMs don’t make sequence learning “easy”.
They make it possible.
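
A back-of-the-envelope way to see it: along the memory lane, the direct contribution of the previous cell state to the next one is just the element-wise forget gate (look at the `c = f * c_prev + i * g` line above). So the backward signal through many steps is scaled by a product of gate values, not by repeated weight matrices and tanh derivatives. A purely illustrative sketch, not a real backward pass:

```python
import numpy as np

steps = 50
grad = np.ones(64)                         # pretend gradient arriving at the last cell state

# Along the direct cell-to-cell path, each step back just multiplies by that
# step's forget gate. Assume the network learned to keep them mostly open.
forget_gates = np.full((steps, 64), 0.95)
for f in forget_gates:
    grad = grad * f                        # no weight matrix, no tanh squashing on this path

print(grad[0])                             # ~0.95**50 ≈ 0.08: attenuated, but still alive
```

Compare that with the vanilla RNN sketch earlier: if the network learns to hold the forget gate near 1 for memories it actually needs, this path keeps carrying signal. Not magic, just a path that isn't actively destroying it.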


What I Built This Month (Practical, Not Perfect)

I did two things in parallel:

  1. A “from-scratch mindset” implementation sketch (to learn shapes and data flow)
  2. A minimal working model training on a toy sequence task (to test whether memory actually works)

My toy tasks (the point was long memory)

I used small synthetic problems where success requires remembering something far back:

  • “store a symbol early, reproduce it much later”
  • “detect whether something appeared in the first half of the sequence”

The goal was not accuracy.

The goal was to see:

  • does training improve reliably?
  • does performance remain stable as sequence length increases?

This is where LSTMs started to feel like a real tool.
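
For the "store a symbol early, reproduce it much later" task, the generator is tiny. This is a sketch of my own setup, not a standard benchmark; names and sizes are arbitrary.

```python
import numpy as np

def make_recall_example(seq_len=100, n_symbols=8, rng=np.random):
    """Emit a one-hot symbol at t = 0, blanks afterwards; the target is to
    reproduce that first symbol at the final step."""
    symbol = rng.randint(n_symbols)
    x = np.zeros((seq_len, n_symbols + 1))   # last channel marks "blank"
    x[0, symbol] = 1.0                        # the thing to remember
    x[1:, -1] = 1.0                           # everything after is filler
    return x, symbol                          # label only matters at the end

x, y = make_recall_example(seq_len=200)
```

Stretching seq_len is the whole experiment: the question is whether training still reaches back to that first symbol as the gap grows.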


My Biggest “Aha”: Architecture Is a Form of Regularization

In 2016, regularization was mostly:

  • add a penalty
  • tune lambda
  • reduce variance

In 2017, I’m seeing another form:

Architectural constraints.

CNNs encode locality and a bias toward translation invariance.
LSTMs encode stable memory and controlled flow.

Those are not just representational choices.

They’re choices that shape what learning is likely to succeed at.

This reframed how I think about models:
  • It’s not “choose a flexible model and regularize it later.”
  • It’s “choose a model that already respects the structure of the data.”

Engineering Notes: What Still Went Wrong (Even With LSTMs)

LSTMs aren’t automatic.

I still hit real issues:

Training instability didn’t vanish, it just became manageable

  • Learning rate still mattered a lot.
  • Gradient clipping still helped (a minimal version is sketched after this list).
  • Momentum sometimes accelerated progress, sometimes destabilized it.
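
Gradient clipping in particular is cheap enough that I now add it by default. A minimal global-norm version, assuming the gradients arrive as a list of NumPy arrays (the 5.0 threshold is typical but arbitrary):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    """Rescale all gradient arrays so their combined L2 norm stays below max_norm."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total > max_norm:
        grads = [g * (max_norm / total) for g in grads]
    return grads
```

The threshold isn't special; the point is to cap the occasional exploding step without touching normal updates.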

Shape bugs were everywhere

The LSTM has more moving pieces:

  • hidden state
  • cell state
  • multiple gate computations

One swapped dimension can silently break learning.

This month reinforced my “shape debugging” habits from Octave days.

The one habit that saved me repeatedly:

Write down the expected shapes for every tensor/state before coding.

It’s boring. It prevents hours of confusion.

Overfitting showed up quickly on small tasks

LSTMs can memorize tiny datasets fast.

This is where dropout started to become relevant again — but not in the abstract.

It was the difference between:

  • “model learned the task pattern”
  • “model memorized these exact examples”

Connection Back to 2016 ML: Same Fundamentals, New Constraint

Everything here still feels like the 2016 foundation — just under a new kind of stress.

  • Optimization: still gradient-based learning
  • Diagnostics: still about interpreting failure modes and fixing the right thing
  • Regularization: still essential, now including architectural constraints
  • Bias vs variance: still visible, but sequence models can swing wildly to either side

The thing that changed is what I consider “a valid fix”.

In 2016, fixes were mostly:

  • change features
  • change regularization
  • change the model class

In 2017, a fix can be:

  • change the architecture so gradients can survive

That’s a different level of thinking.


What Changed in My Thinking (November Takeaway)

I used to assume training difficulty was mostly about hyperparameters.

Now I’m convinced:

Training difficulty is often an architectural property.

LSTMs taught me that the right model design can turn an impossible optimization problem into a solvable one.


What’s Next

I’ve now hit the major “building blocks” of 2017:

  • MLPs and why depth is hard
  • ReLUs and why activations matter
  • initialization and optimization
  • CNN intuition and hierarchy
  • RNNs and why time breaks things
  • LSTMs as an engineered fix

December 2017 is the synthesis.

What actually changed between classical ML and deep learning?
What stayed the same?
And what does my “next step” look like after a year of building intuition the hard way?

