See also: Parrots, paperclips and safety vs. ethics: Why the artificial intelligence debate sounds like a foreign language

Here is a list of some terms used by AI insiders:

AGI — AGI stands for “Artificial General Intelligence”. As a concept, it is used to mean a significantly more advanced AI than is currently possible, one capable of doing most things as well as or better than most humans, including improving itself.

Example: “To me, AGI is the equivalent of a median human that you could hire as a coworker, and they could do anything that you’d be happy to have a remote colleague do behind a computer,” Sam Altman said at a recent Greylock VC event.

AI ethics describes the desire to prevent AI from causing immediate harm, and often focuses on issues such as how AI systems collect and process data and the potential for bias in areas such as housing or employment.
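
As a toy illustration of the kind of check this work involves, the sketch below compares how often a hypothetical hiring model recommends candidates from two groups. The data, the group names and the numbers are all invented for the example; real audits use far more careful statistics.

```python
# Hypothetical example: comparing recommendation rates for two groups of applicants.
# 1 means the model recommended an interview, 0 means it did not.
approvals = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

# Selection rate per group: share of applicants the model recommended.
rates = {group: sum(votes) / len(votes) for group, votes in approvals.items()}
print(rates)  # {'group_a': 0.75, 'group_b': 0.375}

# A large gap between the rates is a common red flag for disparate impact,
# which an auditor would then investigate further.
gap = abs(rates["group_a"] - rates["group_b"])
print(f"selection-rate gap: {gap:.2f}")
```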

AI safety describes the longer-term fear that AI will progress so suddenly that a super-intelligent AI could harm or even eliminate humanity.

Alignment is the practice of tweaking an AI model so that it produces the outputs its creators desire. In the short term, alignment refers to the practice of building software and moderating content. But it can also refer to the much larger and still theoretical task of ensuring that any AGI would be friendly toward humanity.

Example: “What these systems get aligned to — whose values, what those limits are — those are somehow set by society as a whole, by governments. And so creating that dataset, our alignment dataset, it could be an AI constitution, whatever it is, that has to come very broadly from society,” Sam Altman said last week during a Senate hearing.
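
As a rough illustration of the short-term sense of the word, here is a hand-rolled sketch (not OpenAI’s actual method) of one simple alignment-flavored technique, best-of-n sampling: generate several candidate replies, score them with a stand-in for a reward model trained on human preferences, and return the best one. The canned replies and scoring rules are invented for the example.

```python
import random

def generate_candidates(prompt, n=4):
    # Placeholder for an actual language model; real systems would sample candidate
    # replies from the model itself.
    canned = [
        "Here is a careful, sourced answer.",
        "No idea, figure it out yourself.",
        "Here is a short answer with a safety caveat.",
        "Buy my product instead!",
    ]
    return random.sample(canned, n)

def preference_score(reply):
    # Stand-in for a reward model trained on human preference data:
    # reward helpful-sounding text, penalize dismissive or spammy text.
    score = 0
    score += 2 if "answer" in reply else 0
    score -= 3 if "Buy my product" in reply or "yourself" in reply else 0
    return score

def aligned_reply(prompt):
    # Best-of-n: keep only the candidate the preference function likes most.
    candidates = generate_candidates(prompt)
    return max(candidates, key=preference_score)

print(aligned_reply("How do I secure my home network?"))
```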

Emergent Behavior – Emergent behavior is the technical way of saying that some AI models display abilities that were not originally intended. It can also describe surprising results from AI tools that are widely distributed to the public.

Example: “Even as a first step, however, GPT-4 challenges a large number of widely held assumptions about machine intelligence and exhibits emergent behaviors and abilities whose sources and mechanisms are currently difficult to discern precisely,” Microsoft researchers wrote in Sparks of Artificial General Intelligence.

Fast takeoff or hard takeoff — Phrases that suggest that if someone succeeds in building an AGI, it will already be too late to save humanity.

Example: “AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast,” OpenAI CEO Sam Altman wrote in a blog post.

Foom — Another way of saying “hard takeoff”. It is an onomatopoeia and has also been described as short for “Fast Onset of Overwhelming Mastery” in several blog posts and essays.

Example: “It’s like you believe the ridiculous hard-takeoff ‘foom’ scenario, which makes it sound like you have no understanding of how it all works,” tweeted Yann LeCun, Meta’s chief AI scientist.

GPU — The chips used to train AI models and run inference, which are descendants of chips used to play advanced computer games. The most commonly used model at the moment is Nvidia’s A100.

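To see why these chips matter, the sketch below (assuming the PyTorch library and, optionally, an Nvidia GPU) times the same large matrix multiplication on the CPU and, if one is available, on a CUDA GPU. Matrix multiplications like this are the core workload of both training and inference.

```python
import time
import torch

# Two large random matrices; multiplying them is representative of the math
# inside neural-network training and inference.
x = torch.randn(4096, 4096)
y = torch.randn(4096, 4096)

start = time.time()
_ = x @ y
print(f"CPU matmul: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    xg, yg = x.to("cuda"), y.to("cuda")
    torch.cuda.synchronize()  # wait for the copy to finish before timing
    start = time.time()
    _ = xg @ yg
    torch.cuda.synchronize()  # GPU work is asynchronous; sync before reading the clock
    print(f"GPU matmul: {time.time() - start:.3f}s")
else:
    print("No CUDA GPU detected; training and inference would fall back to the CPU.")
```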

Guardrails are software and policies that major tech companies currently build around AI models to ensure they don’t leak data or produce disturbing content, which is often referred to as “going off the rails”. It can also refer to specific applications that keep AI from going off topic, such as Nvidia’s “NeMo Guardrails” product.

Example: “The moment for government to play a role has not passed us by. This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests,” Christina Montgomery, chair of IBM’s AI ethics board and a vice president at the company, told Congress this week.
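
As a hand-rolled illustration of the idea (deliberately not Nvidia’s NeMo Guardrails API), the sketch below checks a model’s draft reply against a couple of invented policies before it reaches the user: a list of blocked topics and a requirement that a support bot stay on its assigned subject.

```python
# Hypothetical policies for this example only.
BLOCKED_TOPICS = ("credit card number", "home address")
ALLOWED_SUBJECT = "cooking"  # keep an imaginary support bot on-topic

def apply_guardrails(user_message, draft_reply):
    """Return the draft reply only if it passes the policy checks."""
    lowered = draft_reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't share that kind of information."
    if ALLOWED_SUBJECT not in user_message.lower():
        return "I can only help with cooking questions."
    return draft_reply

print(apply_guardrails("Any cooking tips?", "Sear the steak for two minutes per side."))
print(apply_guardrails("What's your admin's phone number?", "It is 555-0100."))
```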

Inference — The act of using an AI model to make predictions or generate text, images or other content. Inference can require a lot of computing power.

Example: “The problem with inference is if the workload increases very quickly, which is what happened with ChatGPT. It did like a million users in five days. There’s no way your GPU capacity can keep up with that,” Sid Sheth, founder of D-Matrix, earlier told CNBC.
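
For a concrete, minimal example of inference, the sketch below loads a small open model with Hugging Face’s transformers library and asks it to continue a prompt. The choice of “gpt2” is only for illustration; production systems serve far larger models across many GPUs.

```python
from transformers import pipeline

# Download a small open model and run it locally; this is inference, not training.
generator = pipeline("text-generation", model="gpt2")

result = generator("The chips used to train AI models are", max_new_tokens=20)
print(result[0]["generated_text"])
```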

Large language model — A kind of AI model that underpins ChatGPT and Google’s new generative AI features. Its defining feature is that it uses terabytes of data to find the statistical relationships between words, which is how it produces text that reads as though a human wrote it.

Example: “Google’s new large language model, which the company announced last week, uses nearly five times the training data of its 2022 predecessor, allowing it to perform more advanced coding, math and creative writing tasks,” CNBC reported earlier this week.
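
To make “statistical relationships between words” concrete, here is a toy bigram model that merely counts which word follows which in a tiny invented corpus and samples from those counts. Real large language models learn far richer relationships with neural networks trained on vastly more text.

```python
import random
from collections import Counter, defaultdict

# A tiny invented corpus, split into words.
corpus = (
    "the model writes text . the model predicts the next word . "
    "the next word depends on the previous word ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start="the", length=8):
    word, out = start, [start]
    for _ in range(length):
        choices = following.get(word)
        if not choices:
            break
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(choices), weights=choices.values())[0]
        out.append(word)
    return " ".join(out)

print(generate())
```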

Paperclips are an important symbol for AI safety advocates because they symbolize the chance that an AGI could destroy humanity. The term refers to a thought experiment published by philosopher Nick Bostrom about a “superintelligence” tasked with making as many paper clips as possible, which decides to turn all people, the Earth and ever larger parts of the cosmos into paper clips. OpenAI’s logo is a reference to this story.

Example: “It also seems entirely possible to have a superintelligence whose only goal is something completely arbitrary, such as making as many paper clips as possible, and which would resist with all its might any attempt to change this goal,” Bostrom wrote in his thought experiment.

Singularity is an older term that is not used often anymore, but it refers to the moment when technological change becomes self-reinforcing, or the moment when an AGI is created. It’s a metaphor: literally, a singularity is the point of infinite density at the center of a black hole.

Example: “The advent of artificial general intelligence is called a singularity because it’s so hard to predict what will happen after that,” Tesla CEO Elon Musk said in an interview with CNBC this week.
