Thursday, November 02, 2023

What if Generative AI Turns out to be a Dud?

I follow posts on Twitter from different sides of the generative AI debates, including Yann LeCun (whom I've followed for decades) and Gary Marcus (whom I discovered just in the past few years). I'll write about my own views some other time, but I found this post by Marcus intriguing. I first published these comments on LinkedIn.


Key quotes from the end of the article:


"Everybody in industry would probably like you to believe that AGI is imminent. It stokes their narrative of inevitability, and it drives their stock prices and startup valuations. Dario Amodei, CEO of Anthropic, recently projected that we will have AGI in 2-3 years. Demis Hassabis, CEO of Google DeepMind has also made projections of near-term AGI.

I seriously doubt it. We have not one, but many, serious, unsolved problems at the core of generative AI — ranging from their tendency to confabulate (hallucinate) false information, to their inability to reliably interface with external tools like Wolfram Alpha, to the instability from month to month (which makes them poor candidates for engineering use in larger systems)."

This is exactly how it comes across to me, and it is consistent with my own experience and with what my closest colleagues who have used generative AI have reported.

