Sure, it's useful, and even fair to call it narrow-domain AI. But peddling it as "AGI soon" and lobbying around AI safety (i.e., regulatory capture and shutting out open source) is one of the biggest scams OpenAI is trying to pull off. The leaked Google memo "We have no moat, and neither does OpenAI" is spot on.
Transformers won't get us to AGI. They opened up a new realm of research, but there's still another leap or five needed before we get there.
Why do you think that?
Prediction vs. understanding. To achieve AGI, the agent needs to understand why it arrived at the answer it did, not just mimic what others have said in similar situations.
Adjusted Gross Income