Have you ever been in a situation where your short-term memory was so limited that you had an idea, started to say it, and by mid-sentence forgot it? Can you imagine how limiting that would be to your ability to reason?

After building a number of applications that use OpenAI’s GPT-4 chat API, I believe limited “short-term memory” (the context window, i.e., the number of tokens available per request) is the primary thing keeping GPT-4 from being a true artificial general intelligence (AGI; roughly equivalent to human intelligence minus self-awareness).
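
To make the “short-term memory” metaphor concrete, here is a minimal sketch of the bookkeeping an application has to do: before every request, older messages get dropped so the conversation still fits inside GPT-4’s 8,192-token window. It assumes the tiktoken tokenizer library; the reserved-reply budget and per-message overhead numbers are illustrative, not OpenAI’s exact accounting.

```python
# Minimal sketch: trim chat history to fit GPT-4's 8,192-token context window.
# Assumes the tiktoken library; constants and overhead are approximations.
import tiktoken

MAX_CONTEXT_TOKENS = 8192    # GPT-4's original context window
RESERVED_FOR_REPLY = 1024    # room left for the model's answer (assumption)

encoding = tiktoken.encoding_for_model("gpt-4")

def count_tokens(messages):
    """Rough token count for a list of {'role', 'content'} chat messages."""
    total = 0
    for m in messages:
        # ~4 extra tokens of message formatting overhead (approximation)
        total += 4 + len(encoding.encode(m["role"])) + len(encoding.encode(m["content"]))
    return total

def trim_history(messages):
    """Drop the oldest non-system messages until the request fits the window."""
    budget = MAX_CONTEXT_TOKENS - RESERVED_FOR_REPLY
    trimmed = list(messages)
    while count_tokens(trimmed) > budget and len(trimmed) > 1:
        trimmed.pop(1)  # index 0 is assumed to be the system prompt; drop the oldest turn after it
    return trimmed
```

Everything the model “remembers” has to squeeze into that budget, which is exactly the mid-sentence forgetting described above.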

So what if GPT-4’s token limit of 8,192 were increased to 1,000,000,000? And what if Microsoft said this is possible and that they will build it in the very near future?

2023-11-14 Edit: OpenAI has announced and made immediately available GPT-4 Turbo, increasing the context window from 8,192 tokens to 128,000 and reducing the price by a factor of 2 to 3.
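
For anyone who wants to try the larger window, here is a minimal sketch of what the switch looks like from the API side, assuming the openai Python package (v1+) and the “gpt-4-1106-preview” model name GPT-4 Turbo launched under (check OpenAI’s model list for the current name); the rest of the calling code should not need to change.

```python
# Minimal sketch of calling the GPT-4 Turbo preview model (assumed name).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo: 128,000-token context window
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this very long document..."},
    ],
)
print(response.choices[0].message.content)
```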

From everything I hear, I’m thinking the primary, if not only, limitation for OpenAI right now is processing power (graphics cards) and memory on the machines that run the neural network. I’ve heard Sam Altman, the CEO of OpenAI, speak of their need for more money to buy more graphics cards.

Microsoft, OpenAI’s partner, has that money. And as the company that incorporated GPT-4 into its search engine (Bing), with plans to add it to other products including Windows, it has the incentive to pay the bill.

Do you use any of the current AI tools for getting information, helping you plan, scrutinizing your code, etc.?

How many months do you think it will be before we have AGI? Many agree that once AGI is achieved, the path to ASI (artificial superintelligence) will be quick and easy. One development in that direction, GPTs that can improve upon their own reasoning, is already technically possible.

After that, do you think any AI will attain self-awareness?

Here’s a 19-minute video talking about this and making a few predictions: