A Quick Comment On Reverse Engineering

Don’t Conflate Output Spaces

Vineet Tiruvadi
4 min read · Sep 9, 2023

I’m reading through this article from Haspel et al. (https://arxiv.org/abs/2308.06578).

They outline their broad vision to ‘reverse-engineer’ a nervous system. But then they define RE as:

The Problematic Part

There are several major issues with the paper, but this early spark sits at the root of most of them.

Let’s talk about what RE is and isn’t.

What is system identification?

Before addressing RE, we have to talk about system identification.

The simplest system is an input-output relationship (Figure 1A). A more flexible system is an input-output relationship that is indexed, or influenced, by a state (Figure 1B).

Figure 1. The basic definition of a system that we then try to study.

In both cases, there’s a map H between I and O that uniquely defines the system’s transfer function.

Using measurements, experiments, and models to infer this transfer function is called system identification. This is a rich, storied approach; here’s a great intro:

One of my favorite introductions to System Identification from Dr. Steve Brunton.

In other words, system identification is the effort to “identify how each neuron’s output depends on its input”. This is what Haspel et al. describe, yet they conflate it with ‘reverse-engineering’.
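To make the idea concrete, here is a minimal system-identification sketch (my own toy example, not from the paper): we recover a hidden transfer function H purely from input-output data, by regressing the output against lagged inputs. The impulse-response values are arbitrary assumptions for illustration.

```python
import numpy as np

# Toy system identification: recover an unknown FIR transfer function H
# from input (I) and output (O) measurements alone.
rng = np.random.default_rng(0)

h_true = np.array([0.5, 0.3, 0.1])        # "hidden" impulse response H (made up)
u = rng.standard_normal(200)              # input I
y = np.convolve(u, h_true)[:len(u)]       # output O = H * I

# Build a regression matrix of lagged inputs, then solve for H.
n_taps = 3
U = np.column_stack(
    [np.concatenate([np.zeros(k), u[:len(u) - k]]) for k in range(n_taps)]
)
h_est, *_ = np.linalg.lstsq(U, y, rcond=None)

print(np.round(h_est, 3))  # ≈ [0.5, 0.3, 0.1] in this noise-free case
```

This characterizes the input-output map of one component — which, as argued below, is exactly what it does and nothing more.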

Then what is RE?

RE goes to the next meta-level. When you have a bunch of simple systems together, they become subsystems of a larger system (Figure 2).

Figure 2. Most systems consist of subsystems, with each subsystem defined by a downstream behavior it helps drive. This isn’t a trivial extension of a system; it’s a sort of “meta” system that introduces at least three interacting layers: subsystems (neurons), systems (brain + spine + adrenals + …), and behavior (memory, motor, emotion, etc.)

The bigger system tends to have many downstream effectors, and these effectors dance in stereotyped, often environment-dependent, ways.

Implementation Layer

Everything above is still in the implementation layer. These are the parts of our system, and when they coordinate with each other they tend to form subsystems.

Each subsystem has a transfer function H linking inputs to outputs. H itself can change, depending on the input, some internal state, or over time. Any individual H may be interesting (e.g., to Haspel et al.), but to understand the bigger picture we need to understand how individual Hs come together to achieve goals in downstream spaces.
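A state-indexed H (Figure 1B) can be sketched in a few lines — the state names and gains here are hypothetical, just to show that the same input can map to different outputs depending on internal state:

```python
# A minimal sketch of a state-indexed transfer function: the mapping H
# from input to output changes with an internal state s.
def H(inp: float, state: str) -> float:
    gains = {"rest": 0.2, "active": 1.5}  # hypothetical state-dependent gains
    return gains[state] * inp

# Same input, different outputs:
print(H(1.0, "rest"))    # 0.2
print(H(1.0, "active"))  # 1.5
```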

Cars have parts, and each part maps inputs to outputs. That doesn’t address the larger function of a car, and it definitely doesn’t help us trace specific behaviors we may want to understand or replicate back to the parts responsible for them. Source: https://www.britannica.com/technology/automobile

By analogy, a car has parts like brakes and clutches and gas tanks. These tend to connect to each other in a pattern driven largely by necessity — but they can also be grouped by how they affect downstream behaviors of the car that we rely on.

Behavior Layer

So subsystems make up larger systems, and these larger systems exhibit patterns of behavior. These patterns have less to do with any single output O and more to do with the collection of Hs across all subsystems.

RE means anchoring yourself in the output behavior, the behavior layer, which sits downstream of the implementation, or brain, layer. This is where higher-order phenomena live.

Parts dance together to achieve specific behaviors — this is, arguably, the systems level we care about, especially when we’re “reverse-engineering”. Source: http://www.visualdictionaryonline.com/transport-machinery/road-transport/automobile/automobile-systems.php

By analogy, a car’s output behaviors are “go very fast”, “slow down quickly”, or “stay upright”. These behaviors involve several implementation-level subsystems — like the transmission, suspension, engine, and brakes.

Integration and Example

Putting it all together…

In the Abstract

A comparison of system ID with RE, with an abstraction of the brain and behavior layers.

Key here is that linking every neuron’s output to its input is just vanilla system ID. RE, in contrast, anchors itself in an output behavior (like a clinician assessing symptoms) and then traces the systems involved back into the implementation.

In the Brain

A quick, concrete example is thinking about how we move our arms.

How do neural systems and muscle actuators yield behaviors? We can start from behaviors and “pull back” into the dances occurring in the implementation space that drive them. This is the essence of RE.

RE here wouldn’t be anchored in “motor cortex output” or “biceps contraction”; those are implementation outputs. Understanding the upstream parts would be system identification, a relatively simple input-output map.

Instead, RE would be anchored in “reaching” or “running”, and we’d then move backward from the behavioral pattern to the implementation circuits and dynamics involved in that pattern.
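The “move backward from behavior” loop can be caricatured in code. This is a deliberately toy sketch with made-up subsystem names: we anchor on a behavior, knock out one subsystem at a time, and keep the ones the behavior actually depends on — tracing from the behavior layer back into the implementation layer.

```python
# Toy reverse-engineering loop (hypothetical subsystems and behavior):
# anchor on a behavior, perturb each subsystem, and record which
# subsystems the behavior depends on.
subsystems = {
    "motor_cortex": lambda x: 2 * x,
    "cerebellum": lambda x: x + 1,
    "visual_cortex": lambda x: x,  # present, but not involved in reaching
}

def reach(x, active):
    # "Reaching" behavior is driven by motor_cortex then cerebellum.
    out = x
    for name in ("motor_cortex", "cerebellum"):
        if name in active:
            out = subsystems[name](out)
    return out

baseline = reach(1.0, set(subsystems))
involved = [s for s in subsystems
            if reach(1.0, set(subsystems) - {s}) != baseline]

print(involved)  # the implementation pieces "reaching" pulls back to
```

The point isn’t the mechanics — real perturbation studies are far subtler — but the direction of inference: behavior first, implementation second.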

Summary

Reverse Engineering (RE) is a powerful approach to inference in systems that anchors itself in the behavioral output and then traces back the subsystems and components involved. This is quite different from system identification, which simply tries to characterize the input-output mapping present in the system’s components.

Importantly, there is no one “right” frame or definitional boundary for our system, our effectors, and behaviors. The “right” one depends on the problem at hand.

In the case of neural systems, any non-trivial definition of RE needs to include downstream behaviors that then get pulled back into the implementation/brain space — eg neurons or networks of neurons.


Vineet Tiruvadi

Engineer. Emotions, brains, and data-efficient control. Community > Technology.