Artificial Narrow Intelligence (ANI) refers to AI systems designed to perform specific tasks, excelling in one domain but lacking broader cognitive abilities. While highly effective within its domain, ANI cannot adapt to new contexts or learn beyond its programming.
Term Context:
AI, GenAI
Artificial General Intelligence (AGI) refers to AI capable of performing a wide range of tasks at a human level, adapting and learning without specific programming. While it holds the potential to revolutionize industries like healthcare and education, AGI also raises significant ethical concerns, such as risks related to autonomous decision-making and job displacement. Its development remains theoretical, with ongoing debates about its feasibility and societal impact.
Term Context:
AI, GenAI
Artificial Superintelligence (ASI) refers to AI that surpasses human intelligence in all areas, capable of solving complex problems and driving advancements in science and technology. While ASI could revolutionize fields like medicine and climate science, it poses significant risks if not aligned with human values, leading to potentially harmful outcomes. Ensuring its safe and ethical development is one of the biggest challenges in AI research.
Term Context:
AI, GenAI
Few-shot prompting is a technique in natural language processing where a model is given a task along with a small number of examples to demonstrate how the task should be performed. By providing several examples, the model can better grasp the nuances of the task, such as specific formats, styles, or patterns, and apply this understanding to generate an appropriate response. Few-shot prompting strikes a balance between flexibility and specificity, allowing the model to generalize from the examples while still using its existing knowledge base to adapt to the task. This approach is particularly effective in guiding the model to produce more accurate and contextually relevant outputs, as the multiple examples help refine its understanding of the task's requirements.
Also see "Zero-Shot Prompting" and "One-Shot Prompting".
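The structure described above can be sketched as a prompt builder. This is a minimal illustration in plain Python; the sentiment-classification task, the example reviews, and their labels are made up for demonstration, and the resulting string would be sent to whatever model API is in use.

```python
# Illustrative few-shot demonstrations: (input, label) pairs invented for this sketch.
examples = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked within a week.", "negative"),
    ("Shipping was fast, but the manual is confusing.", "mixed"),
]

def build_few_shot_prompt(examples, query):
    """Join several worked demonstrations, then append the new input
    the model must complete."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final block has no label: the model fills it in.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "Great sound quality for the price.")
print(prompt)
```

The demonstrations implicitly define both the label set and the output format, which is what lets the model generalize to the unlabeled query.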
Term Context:
GenAI
A foundational model (more commonly called a foundation model) is a type of large-scale machine learning model trained on a vast and diverse dataset, designed to perform a wide range of tasks with minimal task-specific training. These models are typically built using advanced architectures like transformers and are pre-trained on extensive data, allowing them to learn general-purpose features or representations. Once trained, a foundational model can be fine-tuned or adapted to specific tasks, often achieving high performance with relatively little additional data or effort. Such models serve as a "foundation" for various applications, such as natural language processing, computer vision, and other AI-driven tasks.
Term Context:
AI, GenAI
A model is a simplified representation of a system or phenomenon used to describe, predict, or explain its behavior. Models capture the essential features of the system they represent. By simplifying reality, models make it easier to analyze complex situations, test hypotheses, and simulate scenarios. While useful, models have limitations, as they cannot capture every detail of the actual system. Their effectiveness is measured by their accuracy, predictive power, and how well they align with real-world data, leading to continual refinement.
Term Context:
AI
One-shot prompting is a technique in natural language processing where a model is given a task and a single example of how that task should be performed. The model uses this one example, along with its existing knowledge base, to understand the task and generate an appropriate response. This approach is particularly useful when a specific format or style is required, as the single example provides guidance on how to structure the output. One-shot prompting offers a balance between flexibility and control, allowing the model to adapt to new tasks with minimal input while still leveraging its pre-trained knowledge. The effectiveness of one-shot prompting depends on the quality and relevance of the example provided, as it sets the standard for the model's output.
Also see "Zero-Shot Prompting" and "Few-Shot Prompting".
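A one-shot prompt can be sketched the same way with exactly one demonstration. The date-reformatting task and the example values below are invented for illustration; the point is that the single demonstration fixes the expected output format.

```python
def build_one_shot_prompt(example_input, example_output, query):
    """Prepend a single worked demonstration before the new input."""
    return (
        "Rewrite each date in ISO 8601 format.\n\n"
        f"Input: {example_input}\nOutput: {example_output}\n\n"
        f"Input: {query}\nOutput:"  # the model completes this line
    )

prompt = build_one_shot_prompt("March 5, 2021", "2021-03-05", "July 19, 1999")
print(prompt)
```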
Term Context:
GenAI
The "temperature" parameter in large language models (LLMs) controls the randomness of the model's output. It essentially influences how creative or deterministic the generated responses are. The temperature can be adjusted to fine-tune the behavior of the model, depending on the desired output.
When the temperature is set to a lower value, such as 0.1, the model's responses become more conservative and focused. In this mode, the model is more likely to choose the most probable words or phrases, resulting in predictable and precise outputs. This setting is useful for tasks where accuracy and consistency are critical, such as technical writing, code generation, or providing factual information.
On the other hand, when the temperature is set to a higher value, like 0.9 or 1.0, the model's responses become more varied and creative. The higher temperature increases the likelihood of selecting less probable words, which can lead to more diverse and imaginative outputs. This is ideal for creative writing, brainstorming, or generating content where novelty and innovation are prioritized.
Adjusting the temperature allows users to strike a balance between creativity and precision, making it a powerful tool for tailoring the behavior of the language model to suit specific needs.
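Mechanically, the temperature divides the model's raw scores (logits) before the softmax that turns them into probabilities, so low temperatures sharpen the distribution and high temperatures flatten it. A minimal sketch, using made-up logits rather than output from any real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities; dividing by the temperature
    sharpens (T < 1) or flattens (T > 1) the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # illustrative scores for three candidate tokens
low = softmax_with_temperature(logits, 0.1)   # near-deterministic
high = softmax_with_temperature(logits, 1.0)  # more evenly spread
```

At temperature 0.1 nearly all the probability mass lands on the top-scoring token, which is why low-temperature output is so predictable.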
Term Context:
GenAI
The "top-k" parameter is a mechanism used in natural language processing (NLP) models, particularly in text generation tasks. It is one of the methods used to control the randomness and diversity of the generated output, influencing how the model selects the next word in a sequence.
In a text generation process, after a model predicts the probabilities of potential next words, the top-k sampling method filters these predictions by limiting the number of words considered to a subset of the most likely options. Specifically, the model sorts the predicted words by their probability and retains only the top k words. For example, if k is set to 5, only the five words with the highest probability are considered as possible candidates for the next word. From this smaller pool, one word is sampled according to its renormalized probability.
The choice of k significantly impacts the generated text's creativity and coherence. A smaller k value makes the model more conservative, often leading to repetitive and deterministic outputs since the model sticks closely to the most probable words. This can be useful in scenarios where coherence is critical, such as generating technical documentation or formal writing. On the other hand, a larger k value introduces more randomness, allowing for more diverse and creative outputs, which can be beneficial in creative writing or storytelling applications.
In summary, the top-k parameter is a crucial tool for controlling the balance between predictability and diversity in text generation. By adjusting k, users can fine-tune the behavior of language models to better suit their specific needs, whether they require reliable, high-confidence predictions or more varied and inventive language outputs.
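The filtering step described above can be sketched in a few lines. This is a toy implementation over a list of probabilities standing in for a model's next-token distribution; the numbers are illustrative.

```python
import random

def top_k_sample(probs, k, rng=random):
    """Keep the k most probable token indices, renormalize their
    probabilities, and sample one index from that reduced pool."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    total = sum(probs[i] for i in ranked)
    weights = [probs[i] / total for i in ranked]
    return rng.choices(ranked, weights=weights, k=1)[0]

probs = [0.1, 0.5, 0.3, 0.1]  # made-up next-token distribution
token = top_k_sample(probs, k=2)  # only indices 1 and 2 can be chosen
```

With k=1 this degenerates to greedy decoding (always the argmax), which illustrates why small k yields more deterministic text.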
Term Context:
GenAI
The "top-p" parameter, also known as nucleus sampling, is a crucial hyperparameter in the process of generating text with large language models. It controls the diversity of the generated content by dynamically selecting from the model's output distribution. Instead of focusing solely on the top-ranked tokens (as in top-k sampling), top-p sampling considers the cumulative probability distribution over all potential next tokens.
When generating text, the model produces a probability distribution over the possible next words. The top-p parameter is set as a probability threshold (between 0 and 1) that determines the smallest set of tokens whose cumulative probability reaches or exceeds this threshold. The model then samples from this subset, which is often referred to as the "nucleus."
For example, with top-p = 0.9, the model will consider the smallest group of tokens that together account for 90% of the probability mass. If the distribution is skewed, this might include only a few tokens, but if it's more uniform, a larger number of tokens might be considered. This allows the model to adapt the diversity of its output dynamically based on the context, often leading to more coherent and contextually appropriate text.
Adjusting the top-p value influences the balance between creativity and coherence. A lower top-p value results in more focused and deterministic outputs, reducing randomness and potentially making the output more predictable. Conversely, a higher top-p value increases diversity and creativity but can lead to less coherent or more surprising results. Users can tweak this parameter based on their specific needs, whether they seek highly controlled text generation or more open-ended and creative outputs.
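The nucleus-construction step can be sketched alongside the top-k example above. Again this is a toy implementation over an illustrative probability list, not a real model's distribution:

```python
import random

def top_p_sample(probs, p, rng=random):
    """Nucleus sampling: take the smallest set of highest-probability
    tokens whose cumulative probability reaches p, renormalize, and
    sample one index from that set."""
    ranked = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    nucleus, cumulative = [], 0.0
    for i in ranked:
        nucleus.append(i)
        cumulative += probs[i]
        if cumulative >= p:  # nucleus is complete once the threshold is met
            break
    total = sum(probs[i] for i in nucleus)
    weights = [probs[i] / total for i in nucleus]
    return rng.choices(nucleus, weights=weights, k=1)[0]

probs = [0.6, 0.3, 0.08, 0.02]  # made-up, skewed next-token distribution
token = top_p_sample(probs, p=0.85)  # nucleus here is indices {0, 1}
```

Note how the nucleus size adapts: with this skewed distribution and p=0.5 the nucleus is a single token, whereas a flatter distribution would admit many more, which is exactly the dynamic behavior described above.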
Term Context:
GenAI
Zero-shot prompting is a technique in natural language processing where a model performs a task without having been explicitly trained on examples of that task. Instead, the model relies on its existing knowledge base, acquired through extensive pre-training on diverse text data, to generate a response or complete a task based solely on the prompt given. This method allows for remarkable flexibility, enabling the model to tackle a wide variety of tasks—such as translation, summarization, or answering questions—without the need for specific training data. The success of zero-shot prompting largely hinges on the clarity and construction of the prompt, as well as the model's ability to generalize from its vast, pre-existing knowledge.
Also see "One-Shot Prompting" and "Few-Shot Prompting".
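In contrast to the few-shot and one-shot builders, a zero-shot prompt contains only the instruction and the input. A minimal sketch, with an invented translation task for illustration:

```python
def build_zero_shot_prompt(instruction, text):
    """A zero-shot prompt supplies no worked examples: just the task
    description and the input the model should act on."""
    return f"{instruction}\n\n{text}"

prompt = build_zero_shot_prompt(
    "Translate the following sentence into French.",
    "The weather is nice today.",
)
print(prompt)
```

Because no examples constrain the output format, the instruction wording alone carries the burden of specifying the task, which is why prompt clarity matters most in the zero-shot setting.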
Term Context:
GenAI