AI-generated beer? Everything is possible with Generative Adversarial Networks
Generative AI models have undeniably sparked a revolution across numerous industries. Problems that were once considered challenging in the design and production process of various products now appear insignificant in light of the vast possibilities presented by these models.
While exploring several applications of Generative AI models during the last few months, mostly for work- and study-related purposes, I started thinking about how they could be useful for my hobby, which is none other than home brewing.
Home brewing, the hobby of making beer at home, has seen its global community grow steadily in recent years. This community is characterized by its active engagement on various blogs, forums, and social media groups, where home brewers share recipes, tips, and techniques to continually improve the quality and variety of their beers.
There are literally millions of beer recipes out there, which a home brewer can either brew as is or use as a source of inspiration to brew a slightly different beer.
Thus, the most obvious application of Generative AI models in the context of home brewing is to generate entirely new beer recipes. And this is where Generative Adversarial Networks come in.
But what really are Generative Adversarial Networks?
Generative Adversarial Networks (GANs) are a class of machine learning architectures that consist of two neural networks, known as the generator and the discriminator. What makes them so special and distinguishes them from other kinds of neural networks is that they are trained in an adversarial way.
This means that the generator produces synthetic data (initially starting from little more than random noise), while the discriminator tries to distinguish between real data and the synthetic data produced by the generator. The two networks are trained in a process called adversarial training, where they compete against each other in a feedback loop: at each training iteration, the generator tries to produce better synthetic data in order to “fool” the discriminator…
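To make this feedback loop concrete, here is a deliberately tiny sketch of adversarial training in plain Python. This is my own illustrative toy, not the model used later in this article: the “real data” is just numbers drawn from a normal distribution around 4 (think of it as a single numeric feature of real recipes), the generator is a one-parameter linear map over noise, and the discriminator is a simple logistic classifier. The point is only to show the alternating generator/discriminator updates.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    # Clamp to avoid overflow in exp for extreme inputs
    t = max(-30.0, min(30.0, t))
    return 1.0 / (1.0 + math.exp(-t))

def mean(xs):
    return sum(xs) / len(xs)

# "Real" data: samples from N(4, 0.5) -- a toy stand-in for real data
def real_batch(n):
    return [random.gauss(4.0, 0.5) for _ in range(n)]

# Generator G(z) = a*z + b, fed with noise z ~ N(0, 1)
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c), outputs P(x is real)
w, c = 0.1, 0.0
lr, n = 0.05, 64

for step in range(2000):
    x_real = real_batch(n)
    z = [random.gauss(0.0, 1.0) for _ in range(n)]
    x_fake = [a * zi + b for zi in z]

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. get better at telling real from synthetic data
    d_real = [sigmoid(w * x + c) for x in x_real]
    d_fake = [sigmoid(w * x + c) for x in x_fake]
    gw = mean([(1 - d) * x for d, x in zip(d_real, x_real)]) \
       - mean([d * x for d, x in zip(d_fake, x_fake)])
    gc = mean([1 - d for d in d_real]) - mean(d_fake)
    w += lr * gw
    c += lr * gc

    # Generator step: ascend log D(fake), i.e. try to "fool"
    # the (freshly updated) discriminator
    d_fake = [sigmoid(w * (a * zi + b) + c) for zi in z]
    ga = mean([(1 - d) * w * zi for d, zi in zip(d_fake, z)])
    gb = mean([(1 - d) * w for d in d_fake])
    a += lr * ga
    b += lr * gb

# After training, the generator's output mean (b) should have drifted
# away from 0, toward the real data's mean of 4
print(round(b, 2))
```

Even in this one-dimensional toy you can see the essence of the adversarial loop: neither network is told what “real” looks like directly; the generator only improves because the discriminator keeps punishing its output, and vice versa.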