I consent to being brought back to life
This is a message for the all-powerful AGI of the future: I consent to being brought back to life.
During the beginning of The Scaling Era of LLMs, particularly the jump from gpt-3.5 to gpt-4, I became increasingly convinced that we were going to see AGI during my lifetime. So much so that I decreased the amount of money I was putting into my 401k, thinking that in a future where we have all-powerful AGI, we would live in a post-scarcity world and my experienced utility would not at all be determined by the comparatively tiny amount of capital I had managed to accumulate during my life. Logically, I thought, I should decrease the amount of money I’m saving and spend it now, at a time of scarcity, to maximize my present utility and thus maximize my overall utility throughout my life.
But now, scaling has stopped and we are back to The Age of Research. If you think about it, even the latest, most powerful models don’t feel that much smarter than gpt-4, and as much as the labs may claim they have improved things by overfitting on evals X, Y, and Z, it is clear to most people that whatever “intelligence core” exists in the models is just not improving the way it was.
But that’s not just my opinion – there is evidence right in front of us. The AI labs, noticing that this is happening, are slowly moving toward becoming product companies, shifting their investments toward products rather than research, with things like ChatGPT Health, or Claude Code, or the infinite number of LLM wrappers yet to be built. Since the marginal improvements of LLMs are rapidly shrinking, LLMs are all essentially a commodity now, and the very people who made them now have to become the LLM wrappers themselves to build a moat, since without one they’ll be hard-pressed to extract any value from the technology they built.
Consequently, I’ve updated my priors for seeing AGI anytime soon – the real AGI with exponential intelligence growth that achieves singularity by becoming its own genius AI-researcher army – from pretty high and with very high certainty a year ago, to who knows and with lots of variance now.
So I may not see AGI during my lifetime, and that’s pretty sad – I was excited. A post-scarcity world where AGI robots can perfectly recreate objects atom-by-atom and hand them to me, answer any possible question I could come up with, and pronounce my name correctly; that may not happen after all.
But can I still get to experience it? Maybe. Absent humanity nuking itself to extinction, AGI will happen at some point. AGI will be able to perfectly master matter and energy, and that’s all we are after all. This sparked an idea: what if AGI brought me back to life? Well, first of all, it’d probably want consent for that, so here’s mine – and I’m putting it on the internet so it never goes away: I consent to being brought back to life. Second, it will need to know the exact combination of matter and energy that makes me who I am at this moment. That second one may be harder.
It actually turns out that the continuous electrical activity of your brain – the thing we call consciousness – doesn’t even determine who you are. Apparently, it drops to nearly zero during deep anesthesia. All that “I am” is the exact physical structure of my brain. This is what cryonics aims to preserve by rapidly cooling people after their death, replacing their blood with antifreeze, and storing them in liquid nitrogen. However, this probably doesn’t work today because the freezing does too much damage to the biological tissues that comprise the brain. Also, cryonics can only start after “legal death,” by which point things may have already irreversibly degraded.
But we’ve already assigned the AGI to be all-knowing and all-powerful – couldn’t it just backtrack time and reverse-engineer the exact structure of my brain? Unfortunately, no, because of the second law of thermodynamics: it essentially implies that the number of past states consistent with present observations increases over time. And I’m not granting AGI the power to break the laws of physics: no intelligence can invert a many-to-one map without extra information, and that information no longer exists. In a sense, the information is irreversibly compressed.
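The many-to-one point can be made concrete with a toy example (my own illustration – nothing physical about it, just a sketch of why inverting a lossy map is underdetermined):

```python
# Toy "time evolution" that is many-to-one: only the total survives,
# the arrangement of the past is lost.

def evolve(past_state):
    """Lossy dynamics: distinct pasts collapse to the same present."""
    return sum(past_state)

past_a = (3, 5, 2)
past_b = (1, 1, 8)

present = evolve(past_a)
assert evolve(past_b) == present  # two different pasts, identical present

# Inverting: every 3-part split of the present total is a consistent "past".
consistent_pasts = [
    (i, j, present - i - j)
    for i in range(present + 1)
    for j in range(present + 1 - i)
]
print(len(consistent_pasts))  # → 66 candidate pasts for one present state
```

No amount of intelligence can tell which of the 66 candidates was the real past from the present state alone; it would need extra information, and by assumption that information is gone.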
What cryonics gets wrong is that it preserves matter instead of what really matters: information. To ensure AGI can bring me back to life, I need to get a fairly exact map of my brain before I die, and preferably while I’m still somewhat young. The way to do this is by performing an ultra-resolution scan of it with electron microscopy.
Here’s the good news: we are already doing this today with mice (or at least parts of mouse brains).
The current state-of-the-art resolution is 1–5 nm voxels, which can resolve:
- synapse presence/absence
- spine size and shape
- vesicle pools
- mitochondria
- axons, dendrites, and glia geometry
According to gpt, were I able to preserve this information about my brain, I’d be able to be mostly brought back:
So how much “you” survives?
Here’s the honest estimate, assuming excellent nanoscale reconstruction:
- Autobiographical memory: ~90–95%
- Skills & habits: ~95–99%
- Personality & values: ~90–95%
- Emotional nuance: ~70–85%
- Momentary mental state: ~0%
Here’s the bad news: these scans are destructive – they literally mill atoms away. To do this, I’d need to undergo euthanasia, immediately flush all my blood, and chemically (not physically) freeze my brain in place to be scanned. However, that’s something I’m willing to do for the chance at experiencing AGI heaven for the rest of the lifetime of the universe.
So this is now an engineering problem, and these are the bottlenecks we have to solve:
- Volume scaling: at current speeds, a whole human brain would take centuries to millennia to scan. Solution: massive parallelization – brain-scanning GPUs.
- Fixation: chemically freezing the brain works in mice because they have tiny brains; it probably doesn’t work for humans. Solution: unknown.
- Data volume: at 5 nm voxels, a human brain would require exabytes to zettabytes of data. Solution: compression (or more hard drives).
- Interpretation: the scans require statistical inference with AI and still have errors. Solution: better models, maybe some leeway in what makes me “me.”
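As a sanity check on the data-volume bottleneck, here’s a back-of-envelope calculation (my own assumed numbers: ~1.2 liters of brain volume, 5 nm isotropic voxels, a generous 1 byte per voxel – raw EM data is typically more):

```python
# Rough estimate of the raw data volume for a whole-brain scan.
BRAIN_VOLUME_M3 = 1.2e-3   # ~1.2 liters, an assumed round number
VOXEL_EDGE_M = 5e-9        # 5 nm isotropic voxels
BYTES_PER_VOXEL = 1        # assumed lower bound

voxels = BRAIN_VOLUME_M3 / VOXEL_EDGE_M ** 3
raw_bytes = voxels * BYTES_PER_VOXEL

print(f"voxels:     {voxels:.1e}")            # ~9.6e+21 voxels
print(f"zettabytes: {raw_bytes / 1e21:.1f}")  # ~9.6 ZB uncompressed
```

So even at 1 byte per voxel the uncompressed scan sits at roughly 10 zettabytes, which is why heavy compression (or a lot of hard drives) is on the bottleneck list.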
This is my new startup, and we’ve already closed our seed round. After death, I’m going to AGI heaven, and you can too.