Yann LeCun, Yoshua Bengio, and Geoffrey Hinton were credited years ago for their work on neural networks. They won the Turing Award and are called the Godfathers of AI. Since then, Yann is the only one who is still a proponent of AI; the other two have switched to voicing concerns about it. What do you make of that? To me it sounds like there are two arguments. 1. AI as a tool is dangerous in the hands of bad guys. This is valid, but I don't think we can do anything about it. Even if Western countries agree to only do "safe" AI, China and Russia aren't going to comply. You can only hope that we stay a step ahead. This is also true of any new technology. 2. AI becoming Terminator on its own. I don't see how this is possible. LLMs are just a baby step, and I don't see a path by which AI becomes nefarious on its own, like in the movie HER. I am concluding that it is actually not a big concern. Curious if we have anyone here who can counter that, especially on #2 above. What do you think? @openai
It is an existential risk in the sense that it could create unwanted side effects for society, like what social media did, but much worse.
Can you expand on this? What are some examples?
On short timescales, the displacement of workers. On medium timescales, an unrivaled renaissance of discovery, invention, and efficiency. On long timescales, the very definition of what it means to be human. In the end, the real problem: we can't progress further without AI taking a larger role.
On #1: it is not true that we can't do anything. For example, both China and Russia have shown restraint with nuclear weapons, so similar restraint should be possible with AI.
I agree. We can do what we can, but I don't think this scenario is unique to AI. The people who are scared seem to be more in the #2 camp, yet no one has given a technical explanation of how AI could become dangerous on its own.
As skeptical as I am, it gives me pause when AI pioneers are this worried. I wonder what scenarios concern them.
I don't think the risk is on the AI-takeover side. It's more that business CEOs and PMs will overly trust the output of AI without first verifying it with a human. Explainability and audit-ready output are going to be the biggest roadblocks.
We still don't have self-driving cars, Alexa and Google Home are trash, and ChatGPT frequently gets answers wrong. This is all overblown.
That is an awesome perspective. Thanks for sharing. It gives me permission to lower the weight I assigned to their dire prognosis. To his credit, Bengio has committed to focusing on AI safety. Their obsession with existential AI risk made me very curious whether there is a scientific explanation for their concerns. Doesn't sound like there is.
Both can be true.
There's one more thing: too much job displacement too fast can destabilize society and get bad actors elected. It's not only about Terminator stories.
Oh that is interesting. Is there a historical precedent for this?
Is there a historical precedent for AI? Everything about the future is speculation, especially anything AI-related. There are examples throughout history of bad actors being elected or seizing power, though. It's not without precedent.