Stable Diffusion Maker Now Has Its Own Open Source AI Language Model

It seems like everyone and their mother has a large language model these days. Stability AI, one of the companies that made its mark early in the AI rat race, has been slower than its contemporaries to break into the LLM space, at least until now. On Wednesday, the company announced the launch of StableLM, a "suite" of language models designed to compete with alphabet-soup AIs like OpenAI's GPT-4, Meta's LLaMA, and Google's LaMDA.
Stability AI said it trained its new model on an 800GB open-source dataset called "The Pile." The company said it would release details of the new language model's training data "in due course," alongside a full technical write-up. The various "very alpha" versions of the LLM, as CEO Emad Mostaque put it, come in 3 billion and 7 billion parameter variants, and the company claimed it is working on 15 billion and 65 billion parameter versions. The 7B version of the chatbot is available to test on Hugging Face. This latest LLM is Stability AI's attempt to "go back to our open roots," according to Mostaque.
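For anyone who would rather run the model locally than fight the crowded demo, the usual route would look something like the sketch below, which uses Hugging Face's transformers library. Note that the model ID and the generation settings here are our assumptions, not official instructions from Stability AI, and a 7B model needs a beefy GPU.

```python
# A minimal sketch of prompting the 7B StableLM model locally via Hugging Face's
# transformers library. The model ID "stabilityai/stablelm-tuned-alpha-7b" and the
# sampling settings are assumptions, not Stability AI's official guidance.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-tuned-alpha-7b"  # assumed model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.to("cuda")  # assumes a CUDA GPU with enough memory for a 7B model in fp16

prompt = "What is the best way to peel a banana?"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```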
Gizmodo's initial testing of the model in chatbot form was a bit awkward, to say the least. The AI seemed to have an issue shifting gears after we asked it about problems with the training data of its competing AI models, then about the best way to peel a banana. The free demo space on Hugging Face is also inundated with requests, making it difficult to get a better feel for the AI. Some users have also reported that it fails at some of the most rudimentary tasks, like creating a recipe for a peanut butter and jelly sandwich (apparently, remember to scoop out the banana seeds when assembling).
Parameters are essentially a way for LLMs to generate predictions, and they offer a very rough sense of how sophisticated each model is. For comparison, GPT-3, the first model to power OpenAI's ChatGPT, had 175 billion parameters. The company has not revealed how many parameters GPT-4 has, but Semafor reported last month that the latest version of OpenAI's LLM has 1 trillion parameters. The number of parameters doesn't necessarily tell you about the quality of the results the AI generates, though, and more parameters usually mean it takes far more energy to actually generate content.
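In practice, a parameter is just one of the learned weights inside the network, and you can count them directly. Here's a minimal sketch using PyTorch and the small GPT-2 model as a stand-in (since it downloads quickly); the same one-liner works on any model in the transformers library.

```python
# A minimal sketch of counting a model's parameters with PyTorch.
# GPT-2 small is used as a stand-in; the same count works for any
# transformers causal language model, including StableLM.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # GPT-2 small comes in around 124 million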
Stability AI seems to recognize that it must assert itself to compete with its larger, Microsoft-backed competitors. The company said the tool was designed to "help everyday people and everyday businesses use AI to unleash creativity," and that it is "focused on efficient, specialized, and practical AI performance – not the quest for godlike intelligence." That last part seems like a particular dig at OpenAI, whose execs appear obsessed with the idea of superintelligent AI.
On Twitter, Mostaque said both the LLM and its training data will only improve over time, and that he eventually wants it to process 3 trillion tokens. Tokens are best described as units of text, whether they're letters or words.
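To see what a token actually looks like, you can run text through any tokenizer. Here's a minimal sketch using GPT-2's tokenizer as a stand-in; StableLM's own tokenizer would split text by its own rules.

```python
# A minimal sketch of tokenization: a tokenizer splits text into sub-word
# units, which are what language models actually count and process. GPT-2's
# tokenizer is a stand-in here; StableLM's would split text differently.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokens = tokenizer.tokenize("Stability AI announced StableLM on Wednesday.")
print(tokens)                 # the sub-word pieces, which rarely map one-to-one to words
print(len(tokens), "tokens")  # how many tokens that sentence costs
```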
Stability AI has long taken an evangelical tone when talking about AI, with Mostaque often beating the drum for widely adopted, open-source AI programs, come hell or high water. But the company has reportedly been strapped for cash lately, having spent so much developing its AI projects while richer companies draw the spotlight. The startup recently unveiled its enterprise-focused Stable Diffusion XL model, which is said to be even better than the company's previous AI image generators. Still, the company says it plans to open source this newer generative AI model... eventually.