Understanding AdS, Eukaryogenesis, and Continual Learning

Learning aspects of spacetime, life, and intelligence.

Here I spell out a few things I want to understand next. These knowledge quests will form the basis of some of my future blog posts. This sort of endeavor—merely researching the basics of a preexisting field—stands in contrast with blog posts where I advance a new thought of my own (or at least one new to me, given my ignorance of prior discourse).

Anti-de Sitter space

In general relativity, space and time form a unified object—a spacetime manifold—whose curvature due to energy and momentum gives rise to gravitational phenomena. Anti-de Sitter space is a maximally symmetric spacetime that is negatively curved. It is a vacuum solution to Einstein’s equations with a negative cosmological constant.
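Concretely (using standard conventions, with curvature radius L; the dimension label d is my notation), d-dimensional AdS can be pictured as a hyperboloid embedded in a flat ambient space with two timelike directions:

```latex
% AdS_d as a hyperboloid in R^{2,\,d-1}, whose ambient metric is
%   ds^2 = -dX_0^2 - dX_d^2 + \sum_{i=1}^{d-1} dX_i^2
-X_0^2 - X_d^2 + \sum_{i=1}^{d-1} X_i^2 = -L^2
```

The metric induced on this surface is the AdS metric, and the corresponding cosmological constant is Λ = −(d−1)(d−2)/(2L²), which is negative for d ≥ 3.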

Why am I learning this?

For my research. I have been thinking for some time about how to do a holographic experiment in which properties of a quantum system (like atoms in an optical cavity) can be mapped to those of a gravitational system using elements of the AdS/CFT dictionary.

As an experimental atomic physics graduate student, I need to learn more about AdS/CFT and, more generally, the holographic principle. These are theoretical subjects that can be hard to learn. After all, a deep understanding of the holographic correspondence might unlock the secret to uniting gravity with quantum theory. Since a complete theory of quantum gravity in our universe remains elusive, in a certain sense no one on Earth fully understands this—but people have learned a lot about holography in the AdS/CFT context. (It is an open question how useful AdS/CFT is for understanding our universe: we believe our universe has a very small positive cosmological constant and is therefore not AdS.)

My plan

I plan to walk through the Wikipedia page for anti-de Sitter space and make sure I understand literally everything. I want to understand the fundamentals deeply, so later on I can appreciate complicated aspects like the details of AdS/CFT.

Why Wikipedia first? I find Wikipedia great for quickly identifying holes in my knowledge. To fill those gaps, I can supplement with textbooks like Carroll’s Spacetime and Geometry or Wald’s General Relativity, but compared with a textbook, Wikipedia is better designed for jumping to a specific topic and learning it directly. Also, whether I know enough to contribute to the Wikipedia article is always a good benchmark.

I think the classical general relativity of the AdS manifold that appears in the Wikipedia article is manageable for a first pass. AdS/CFT, on the other hand, is much harder, especially if you approach it from a string theory perspective.

After understanding all the concepts, I will learn to solve some standard problems, like calculating the scalar curvature given the AdS metric written in some coordinates.
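As a sketch of the kind of calculation I mean, here is a symbolic check of the scalar curvature using sympy. The general-dimension formulas for the Christoffel symbols and Ricci tensor are standard; the choice of AdS₂ in Poincaré coordinates is just my example (for AdS_d the answer is R = −d(d−1)/L², so AdS₂ should give −2/L²):

```python
import sympy as sp

def scalar_curvature(g, coords):
    """Ricci scalar for a metric g (sympy Matrix) in the given coordinates."""
    n = len(coords)
    ginv = g.inv()
    # Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (d_c g_{db} + d_b g_{dc} - d_d g_{bc})
    Gamma = [[[sum(ginv[a, d] * (sp.diff(g[d, b], coords[c])
                                 + sp.diff(g[d, c], coords[b])
                                 - sp.diff(g[b, c], coords[d]))
                   for d in range(n)) / 2
               for c in range(n)] for b in range(n)] for a in range(n)]
    # Ricci tensor: R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
    #                        + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
    def ricci(b, c):
        return sum(sp.diff(Gamma[a][b][c], coords[a])
                   - sp.diff(Gamma[a][b][a], coords[c])
                   + sum(Gamma[a][a][d] * Gamma[d][b][c]
                         - Gamma[a][c][d] * Gamma[d][b][a] for d in range(n))
                   for a in range(n))
    # Contract with the inverse metric: R = g^{bc} R_{bc}
    return sp.simplify(sum(ginv[b, c] * ricci(b, c)
                           for b in range(n) for c in range(n)))

t, z, L = sp.symbols('t z L', positive=True)
# AdS_2 in Poincare coordinates: ds^2 = (L^2/z^2)(-dt^2 + dz^2)
g = sp.diag(-L**2 / z**2, L**2 / z**2)
R_ads2 = scalar_curvature(g, [t, z])
print(R_ads2)  # → -2/L**2
```

Doing this once by hand first, and only then checking against the symbolic result, is probably the right order for actually learning the material.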

This is all stuff that Robert Laughlin might call the “peas and potatoes” of the subject (as he used to say in the one class I took from him a couple of years back). It might be mushy to eat up in the moment, but you need those nutrients to grow into a strong physicist.

Eukaryogenesis

Eukaryogenesis concerns the origin of eukaryotes. (Apologies if this sounds like kindergarten to you folks, but I’m not a biologist, so I’m writing things in a way I can understand.) Humans, plants, and most multicellular life are eukaryotic. A eukaryotic cell contains different structures (organelles) than a prokaryotic cell like a bacterium or archaeon. Two of the most notable organelles are the nucleus, which houses the DNA of a eukaryotic cell, and the mitochondrion, which performs cellular respiration to convert energy into a form the cell can use. The prevailing theory of how the mitochondrion was incorporated into the eukaryotic cell is endosymbiosis, in which a bacterium entered an archaeon host cell and became the mitochondrion. It remains an open question exactly how this process happened and how probable or improbable it was.

Why am I learning this?

I feel this issue is more amenable to both self-study and actual solution than the problem of the origin of life. I’m also interested in whether there are experimental groups attempting to recreate eukaryogenesis in the lab. I have a vague suspicion that a transition similar to eukaryogenesis should be observable under the right laboratory conditions, and that is what is driving me to read into this.

My plan

I’ve downloaded a few articles I plan to read. I don’t have a fixed goal beyond at least reporting what I have learned in a blog post. If I am emboldened, I might try a mad scientist experiment or visit some (hopefully not so mad) scientists doing this work here at Stanford or nearby in the Bay Area. For example, the group of Ellen Yeh at Stanford is researching a more recent example of endosymbiosis, which resulted in the nitrogen-fixing organelles of Epithemia diatoms. (Diatoms are a type of single-celled algae.) Nitrogen fixation is another fun topic I might want to learn about; I’d love to know how the Haber-Bosch process compares with natural nitrogen fixation.

Continual learning

The world continues to change after a deep neural network has been trained and its weights fixed. It seems obvious to want an AI system to keep learning by updating its weights in response to new information. In practice, however, this is difficult: learning a new task can degrade performance on previous tasks, a phenomenon sometimes called catastrophic forgetting. Continual learning methods address this by providing ways for a neural network to learn from new data sequentially without significantly degrading performance on previously learned tasks.
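One of the simplest families of such methods is rehearsal (experience replay): keep a small buffer of past examples and mix them into the training batches for new tasks, so gradients keep pulling toward old solutions as well as new ones. A minimal sketch of the buffer side, using reservoir sampling so every example seen so far is retained with equal probability (the class name and interface are my own illustration, not any particular library’s API):

```python
import random

class ReservoirBuffer:
    """Fixed-size replay buffer. Reservoir sampling keeps each of the
    n examples seen so far with equal probability capacity/n."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.data = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a random slot with probability capacity / n_seen
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw up to k stored examples to mix into the current batch."""
        return self.rng.sample(self.data, min(k, len(self.data)))

# During training on a new task, each gradient step would use something like:
#   batch = new_task_batch + buffer.sample(len(new_task_batch))
```

Rehearsal is only one approach; regularization-based methods (e.g., elastic weight consolidation) instead penalize changes to weights deemed important for old tasks.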

Why am I learning this?

Continual learning is an open problem in AI research, at least according to people ranging from Jeff Dean (Chief Scientist at Google, who mentioned it as a big open problem in the Q&A after his talk last week at Stanford) to Dwarkesh Patel (a prominent podcaster who has interviewed many AI leaders, and who listed it as one of the two big bottlenecks to artificial general intelligence).

One reason AI fascinates me is that it helps us understand ourselves better. We get a grasp on the ingredients of intelligence, which tells us about the nature of intelligence itself. By learning why and how humans can continually learn, and why machines can’t (yet), I will better appreciate how learning works.

My plan

As with eukaryogenesis, I plan to read some articles. In this case, I will also try to implement a continual learning method on an AI system—and see how well I can get it to work!