Let’s get specific
Bigger is better: that's the current trend in machine learning. Models with hundreds of billions of parameters, running on thousands of the best processing units we can manufacture.
What if models got smaller and more specific, instead of larger and more general?
Bespoke AI attempts to answer that question (and adjacent ones) by analyzing the motivations behind domain-specific machine learning and how it plays out in practice. This is a space for building the case for domain-specific machine learning and exploring its implications. And, tangentially, for reflections on how we got from the dawn of the internet to AI being a household name, from someone who grew up with typing-game CD-ROMs and vaguely remembers that dial-up sound.
If this sounds like something that mildly interests you (or annoys you, or provokes any kind of emotional response), smash that mf subscribe button.
I started thinking about domain-specific machine learning in my PhD program, researching novel unsupervised machine learning methods to identify particles produced at the Large Hadron Collider. While working with some of the most intelligent scientists in the world, I noticed how inefficiently we apply machine learning techniques.
Here and there, I saw methods built specifically for physics. Eventually I became convinced that this is the most efficient way to further our field: machine learning smart enough to know the physics principles that we do, guided by our best educated guesses about what to look for next. If this kind of Bespoke learning can work for the fundamental nature of our universe, what else can it do?
Margaret Lazarovits is a PhD candidate in Experimental Particle Physics at the University of Kansas. She works on the CMS Experiment and has contributed to R&D for a new precision timing detector, a search for dark matter, and, of course, machine learning for physics.