Large Language Models (LLMs) are massive neural networks trained on vast amounts of data to classify or generate text. By processing web-scraped documents word by word and sentence by sentence over enormous amounts of compute time, these models have acquired a deep understanding of both the intrinsic structure of language and the world it describes.