In Kato's words, log structures are "magic powder" that makes mildly singular spaces appear smooth. Log schemes lie between ordinary schemes and tropical geometry, and are closely related to Berkovich spaces. Problems in Gromov--Witten theory demand intersection-theoretic machinery for slightly singular spaces, and log structures have solved similar problems in Hodge theory, D-modules, connections and Riemann--Hilbert correspondences, abelian varieties (especially elliptic curves), and more. How can they be used to define a reasonable intersection theory for curve counting on singular spaces? We'll compare different answers to this question and prove basic formulas for log Gromov--Witten invariants.
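For readers new to the term, Kato's definition is short: a log structure on a scheme $X$ is a sheaf of monoids $M_X$ together with a monoid homomorphism $\alpha \colon M_X \to (\mathcal{O}_X, \cdot)$ inducing an isomorphism
\[
\alpha^{-1}(\mathcal{O}_X^{\times}) \xrightarrow{\;\sim\;} \mathcal{O}_X^{\times}.
\]
The motivating example is a normal crossings divisor $D \subset X$, with $M_X$ the sheaf of functions invertible outside $D$.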
A commutative algebra $A$ equipped with a Lie bracket satisfying the Leibniz rule is called a Poisson algebra, named in honor of Siméon Denis Poisson. Poisson structures appear in many contexts, including string theory, classical and quantum mechanics, and differential geometry. In this talk, we will discuss Poisson structures on weighted polynomial rings and introduce Poisson valuations. Furthermore, we will see that Poisson valuations play an important role in characterizing the Poisson automorphism groups of certain Poisson algebras.
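Spelled out, a Poisson bracket on $A$ is a Lie bracket $\{-,-\}\colon A \times A \to A$ (bilinear, antisymmetric, satisfying the Jacobi identity) that is a derivation in each argument:
\[
\{ab, c\} = a\{b,c\} + \{a,c\}b \quad \text{for all } a, b, c \in A.
\]
The prototypical example is $\mathbb{C}[x,y]$ with $\{f,g\} = \frac{\partial f}{\partial x}\frac{\partial g}{\partial y} - \frac{\partial f}{\partial y}\frac{\partial g}{\partial x}$.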
The output of a language model like ChatGPT is a probability distribution over the next word (or fragment of a word, called a token). Mathematically, what is the model actually doing when it generates these probabilities? I'll narrate how the model translates its input text into a sequence of vectors, which pass through a succession of alternating linear and nonlinear layers. Then we'll examine this architecture with a view toward some big-picture open questions: Do language models "plan ahead" for future words, phrases, and sentences? If not, then how do they manage to generate coherent text? And if yes, then how does their training (they are trained myopically, to maximize the log probability they assign to the next word) incentivize planning? I'll illustrate the theory by fine-tuning GPT-2 (an open-source precursor of ChatGPT) to do some funny and unusual things. I'll also talk a bit about the experience of switching research fields, from pure math to AI safety.
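A minimal sketch of the operation described above, assuming the Hugging Face transformers library and the publicly released gpt2 weights: feed in text, read off the model's probability distribution over the next token.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the open-source GPT-2 model and its tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Translate the input text into a sequence of token ids.
text = "The capital of France is"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# One forward pass through the alternating linear/nonlinear layers.
with torch.no_grad():
    logits = model(input_ids).logits  # shape: (1, seq_len, vocab_size)

# The final position's logits score every token in the vocabulary;
# softmax turns them into the next-token probability distribution.
probs = torch.softmax(logits[0, -1], dim=-1)

# Print the five most likely next tokens.
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx):>12s}  {p.item():.3f}")
```

During generation, a model samples (or greedily picks) from this distribution, appends the chosen token, and repeats; the training objective only ever scores the single next token, which is what makes the "planning" question interesting.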