Sorting through all the AI lingo? Here’s a glossary to help

What's theory of mind? NLP? Alignment? With new terminology coming out as fast as artificial intelligence technology evolves, this list can help you keep up.

AI terminology can be overwhelming. (Photo by Pexels user Tara Winstead via a Creative Commons license)
Editor's note: Yes, of course we used ChatGPT to help us understand some of these concepts in layperson's terms.
Hey, did you hear about LIMA? It’s built on the LLM LLaMA, not to be confused with LaMDA.

The language of AI is riddled with acronyms, platform names, tech slang and theories. If you’ve ever overheard a conversation about AI and thought, “What the heck is Stable Diffusion, and how is it different from ChatGPT?” but were too afraid to ask, we’ve put together an AI glossary to help you navigate the lingo and identify which companies are behind which technology.

AI has been moving so quickly in 2023 that this list could be obsolete before long. There will most definitely be new terms emerging over the summer, and who knows where AI will be by the fall? But for now, we hope this helps.

AI Glossary

Act as if: A prompt starter for AI chatbots that has them respond as if they are something specific (e.g., a job interviewer, therapist or fictional character)

Algorithm: Instructions that a computer program follows to operate on its own
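
To make that concrete, here is a minimal sketch (our own illustration, not taken from any particular AI system) of an algorithm in Python: a fixed set of steps the computer follows on its own to find the largest number in a list.

```python
# A minimal example of an algorithm: explicit step-by-step
# instructions the computer follows without human intervention.
def largest(numbers):
    """Return the largest value in a non-empty list."""
    best = numbers[0]          # step 1: assume the first number wins
    for n in numbers[1:]:      # step 2: check every remaining number
        if n > best:           # step 3: keep whichever is bigger
            best = n
    return best                # step 4: report the result
```

Calling `largest([3, 9, 4, 7])` walks through those steps and returns `9`; the same instructions work unchanged for any list of numbers.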

Artificial general intelligence (AGI): An artificial intelligence system that can learn and adapt, as opposed to one whose capabilities are limited to what is programmed

Alignment: A field of research that aims to make sure AI acts in accordance with human values; for example, AI models may be trained to refuse to tell a user how to build a bomb or steal data

Ameca: A humanoid robot designed by UK-based Engineered Arts as a platform for developing interactive AI

ArtStation: The largest online community of digital artists; “Trending on ArtStation” is a common prompt for creating AI art

Autonomous: A robot, vehicle or device that operates without human control

Bard: Google’s AI chatbot, powered by PaLM 2. Bard is not an acronym; the chatbot is named after William Shakespeare, the “Bard of Avon”

Bidirectional Encoder Representations from Transformers (BERT): A Google machine learning framework for natural language processing used since 2018 for tasks such as predicting text in search

Bias: When an AI algorithm produces systemically prejudiced results due to biases in the training data.

BingGPT: An informal name for Microsoft Bing’s built-in chatbot, which runs on OpenAI’s GPT technology

Black Box AI: A machine learning concept where developers cannot see or explain how the AI model arrives at its results. The opposite of “Explainable AI”

Blinding: A method where certain information is intentionally withheld from an AI to make it more challenging to exploit

Boxing: A method where an AI is isolated, for example, by not connecting it to the internet, to prevent it from potentially causing harm outside of its developers’ control

ChatGPT: A deep learning chatbot by OpenAI, first released to the public in November 2022. The current version is built on GPT-4

Chatbot: A computer program that uses AI and natural language processing to respond to human questions in real time

Clone: An AI clone uses voice and video data of a person to create an interactive digital version of that person

Convolutional neural network: An artificial neural network, commonly used in image recognition, that can be trained to recognize objects or patterns but is not predictive

Confabulate: When an AI model randomly answers with false information presented as fact, often a result of insufficient data or bias. Interchangeable with “hallucinate.”

Confinement: Also known as AI capability control, AI confinement is a field related to alignment that aims to keep human control over AI systems.

Corpus: A large set of texts used to train an AI that uses natural language processing; these could be anything from social media posts to news articles to movies

DALL-E: OpenAI’s deep learning model for creating images

Data Dignity: A movement that advocates for the AI economy giving people control over their data and compensating them when data about or created by them is used

Data poisoning: A type of cyber attack where inaccurate or otherwise bad data is incorporated into an AI model’s training data set, causing it to give inaccurate or harmful results

Data mining: The process of analyzing datasets to discover new patterns that might improve the model

Defense Advanced Research Projects Agency (DARPA): The military research and development agency of the United States Department of Defense, a major AI and XAI researcher

Deep learning: An AI function of neural networks where a model learns how to respond based on data it’s given rather than simply performing what is programmed

Deepfake: Using AI to create video, images or voices that appear to be real but are not

Diffusion model: A generative AI model that creates high-resolution images by gradually adding noise to its training data, then learning to reverse that process to generate new samples

DreamStudio: The official web app of Stable Diffusion, a major deep learning text-to-image AI engine

Explainable AI (XAI): A type of machine learning that designers can explain or interpret. The opposite of “Black Box AI”

Gemini: A Google AI model announced in 2023 as a successor to PaLM 2; unlike Bard, it has multimodal capability (text, image, sound and video)

Generative AI: AI that creates output, including text, images, music and video

Golden prompts: Prompts that have been engineered to give the user desirable results and can be used as a template for other prompts

Generative Pre-trained Transformer (GPT): OpenAI’s large language model on which the ChatGPT chatbot is built

Hallucinate: When an AI model randomly answers with false information presented as fact, often a result of insufficient data or bias. Interchangeable with “confabulate”

Humanoid AI: A physical robot designed to look like a human with AI neural networks allowing it to interact with humans. Sophia and Ameca are examples of humanoids in development.

Hypothetical intelligence agent: Potential artificial general AI that rewrites its own code to become independent of human programming

Imagen: A Google text-to-image diffusion model that outputs photorealistic images

Language Model for Dialogue Applications (LaMDA): A Google language model designed to engage in conversations that naturally evolve from one subject to another

LAION: A German nonprofit that releases open-source datasets and models, including the training data behind Stable Diffusion and Imagen; it has drawn controversy for scraping images from art sites like ArtStation and DeviantArt

Large Language Model Meta AI (LLaMA): Meta’s large language model, released in February 2023

Large Language Model (LLM): A deep-learning transformer model that is trained to understand natural language and respond in a human-like way

Lensa: A Stable Diffusion-based photo and video filter program by Prisma Labs that uses AI to transform images/selfies; many AI filters are built into TikTok, where they are popular and free

Less is More for Alignment (LIMA): Meta’s newest language model, considered competitive with Bard and ChatGPT, built on its LLaMA LLM.

Long Short-Term Memory (LSTM): First developed in 1997, a variety of recurrent neural networks (RNNs) that are capable of learning long-term dependencies, especially in sequence prediction problems

Low-rank adaptation (LoRA): A Microsoft training method that freezes part of an LLM to make fine-tuning it more efficient and cost-effective

Machine learning: The process or field of developing artificial intelligence by feeding a computer data and using the results to improve and evolve the technology.

Massively Multilingual Speech (MMS): A Meta text-to-speech/speech-to-text AI model that can process over 1,100 languages

Meta Megabyte: AI architecture by Meta AI that can process large volumes of data without breaking down the input into smaller units (tokenization)

Midjourney: A generative AI text-to-image platform by San Francisco research lab Midjourney, Inc. Users create AI images through its Discord.

Moat: Not exclusively an AI term, a moat is a competitive advantage an AI company holds when its proprietary technology creates a barrier that keeps competitors from entering the market

Multimodal: An AI model that combines multiple types of data, including video, text, audio and images

Narrow AI: AI that is designed to perform a single or narrow range of tasks, such as search engines, virtual assistants and facial recognition software

Natural Language Processing (NLP): A branch of computer science and linguistics that programs computers to analyze and process natural language data so that, for example, Alexa can “listen” and respond to a human voice

Neural Network: A method in AI where computers are trained to process data like a human brain rather than a programmed machine. Deep learning models are made up of neural networks

Oracle: A hypothetical controlled AI platform that can only answer simple questions and cannot grow its knowledge beyond its immediate environment

Output: What the AI creates when prompted; it could be text, image, music or video

PaLM 2: Google’s AI model, used for Bard, Gemini and other Google AI uses

Playground AI: A free (up to 1,000 images a day) AI art generator using Stable Diffusion

Prompt crafting: Creating text prompts to interact with AI in a way that produces the desired results; interchangeable with “prompt engineering,” sometimes preferred by people who use AI for creative uses

Prompt engineering: Creating text prompts to interact with AI in a way that produces the desired results; interchangeable with “prompt crafting,” sometimes preferred by people who use AI for technical uses

Prompt framework: An outline of a prompt that includes all of the steps and information to create a specific output
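
As a rough illustration (a hypothetical template of our own, not tied to any specific chatbot), a prompt framework can be sketched in Python as a reusable outline with slots for the role, task and output format:

```python
# Hypothetical prompt framework: a reusable outline with blanks
# that get filled in to produce a specific, complete prompt.
PROMPT_TEMPLATE = (
    "Act as if you are a {role}. "
    "{task} "
    "Respond as {output_format}."
)

# Fill in the slots to create one concrete prompt.
prompt = PROMPT_TEMPLATE.format(
    role="job interviewer",
    task="Ask me five questions about my resume.",
    output_format="a numbered list",
)
```

The filled-in `prompt` string is what would actually be sent to a chatbot; swapping the slot values reuses the same framework for a completely different output.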

Reactive AI: AI that provides output based on the input it receives, but does not learn or evolve. Examples include spam filters and recommendations based on your activity

Recurrent neural network (RNN): An artificial neural network that recognizes recurring patterns and uses the data to predict what comes next, often used in speech recognition and natural language processing

Seed AI: A type of hypothetical intelligence agent that eventually does not need human intervention to learn new things

Self-awareness: A level of AI, currently only existing in science fiction, in which AI has a level of consciousness similar to human beings, with emotions and needs

Sophia: An advanced, socially intelligent humanoid robot created by Hong Kong-based Hanson Robotics in 2016

Stable Diffusion: An open-source, deep learning, text-to-image model released in 2022 by Stability AI. In April 2023, a new version called SDXL was released in beta; its official web app is DreamStudio

Theory of mind (ToM): In AI, ToM, or “emotional intelligence,” is when a machine can recognize human emotions and adjust its behavior in response. Early ToM models include humanoid robots Ameca and Sophia

Tokenization: Splitting large volume input or output into smaller units in order to make them manageable by large language models
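
A toy sketch of the idea in Python (real large language models use subword schemes such as byte-pair encoding rather than simple whitespace splitting, so treat this as an illustration only):

```python
# Toy tokenizer: split text into word-level tokens, then map each
# distinct token to a numeric ID, as a language model would require.
def tokenize(text):
    return text.lower().split()

tokens = tokenize("AI turns long input into smaller units")
# Assign each distinct token an integer ID, in order of appearance.
token_ids = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
```

Here `tokens` is the list of smaller units and `token_ids` assigns each distinct token a number; an LLM operates on those numbers, not on raw text.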

Transformer: A neural network architecture invented and open-sourced by Google Research in 2017. Language models including GPT-3, LaMDA and BERT are built on the Transformer

Vicuna: An open-source chatbot fine-tuned by university researchers from Meta’s LLaMA-13B model, considered a competitor of Bard and ChatGPT

