Next Generation AI?

An artificial intelligence (AI) research lab recently released what is quickly becoming known as the most dangerous AI algorithm in history. Dubbed GPT-3 (for Generative Pre-trained Transformer), it's the latest in a series of automated text-generating neural networks.

Basically, it's a language model that has been trained on one of the largest datasets of text ever fed into an AI system, including digital books, articles, religious texts and nearly anything else you can imagine. The resulting model has a reported 175 billion parameters (the internal values it learns during training). To put the training data in perspective, the entirety of English Wikipedia (an estimated 6 million articles) makes up less than 1 percent of the text used to train GPT-3.
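To get a rough sense of what 175 billion parameters means in practice, here is a back-of-envelope estimate (my assumption, not a figure from the article): storing each parameter as a 16-bit (half-precision) number takes 2 bytes, so just holding the weights in memory needs hundreds of gigabytes.

```python
# Back-of-envelope estimate (assumptions: 175 billion parameters,
# 2 bytes per parameter for half-precision storage).
params = 175e9
bytes_per_param = 2
gigabytes = params * bytes_per_param / 1e9
print(f"{gigabytes:.0f} GB")  # prints "350 GB"
```

That is weights alone; actually running the model needs additional memory and many GPUs working together.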

The result is a system that can take a few sentences and expand them into a full-length story, a poem, computer code, or a meme, and do it fairly convincingly. The fear is that tools like this could be used to spread fake news faster than ever. But the downside of such a huge system is that it requires enormous computing power to actually use. So, while it's interesting, it may not be practical. We'll see…
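The core idea of expanding a prompt into more text can be illustrated with a toy example. The sketch below is a simple word-level Markov chain, not GPT-3's neural network, but it shows the same basic loop: learn from text which words tend to follow which, then sample a continuation from a starting word. (The corpus and function names here are hypothetical, purely for illustration.)

```python
# Toy text generator: learns word-to-next-word statistics from a
# training text, then "expands" a starting word into a passage.
# GPT-3 does this with a 175-billion-parameter neural network;
# this table-based version just illustrates the idea.
import random
from collections import defaultdict

def train(text):
    """Map each word to the list of words observed right after it."""
    words = text.split()
    table = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length=10, seed=0):
    """Sample a continuation of `start`, one word at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
model = train(corpus)
print(generate(model, "the"))
```

Scale that table up to hundreds of billions of learned values trained on a large slice of the internet, and you get a sense of why GPT-3's output can be hard to tell from human writing.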

For information: OpenAI, 3180 18th Street, San Francisco, CA 94110; Web site: https://openai.com/