Large Language Models (LLMs) are massive neural network architectures trained on vast amounts of data to classify or generate text. By processing web-scraped documents word after word and sentence after sentence over long stretches of machine time, these models have acquired a deep understanding of the intrinsic structure of language as well as of the world itself. Although LLMs have existed for several years and have been popular in the scientific community since their advent in 2018 with the BERT model1, they have recently grown dramatically in scale and reached impressive levels of human-like text understanding and generation. They have also been released to the general public through web services such as OpenAI's ChatGPT and Google's Bard, generating considerable attention around Artificial Intelligence (AI), its capabilities, and the potential hazards that may arise from its use. Moreover, extended versions of these models, called multimodal LLMs, which can also process visual content and answer questions related to it, are now available.