In a recent interview with Scott Pelley on 60 Minutes, the ‘Godfather’ of artificial intelligence (AI), Geoffrey Hinton, discussed the unknown challenges we face as AI technology advances.
We’re entering a period of great uncertainty, where we’re dealing with things we’ve never dealt with before. And normally, the first time you deal with something totally novel, you get it wrong. And we can’t afford to get it wrong with these things.
Highlighting the seriousness of the issue, Hinton also mentioned that if we handle these advancements poorly, “they might take over… yes. That’s a possibility.”
A Call for Caution: The AI Community’s Growing Concerns
His straightforward comments reflect a growing worry within the AI community that the technology is advancing faster than our preparations for it.
Backing Hinton’s concerns, a recent statement issued by the Center for AI Safety, a nonprofit advocating for the development of safe AI, expressed the collective apprehension of hundreds of AI executives and researchers about the rapid growth of AI. Geoffrey Hinton was one of the signatories, along with executives from companies leading in the technology, such as OpenAI, Microsoft, and Google.
The statement also emphasized the need for proper preparation and a cautious approach to avoid potential missteps that could have irreversible consequences.
Machine Versus Man: The Potential of AI Surpassing Human Intelligence
The discussion between Pelley and Hinton on 60 Minutes revealed many startling insights about the future of AI.
Hinton, known for his groundbreaking work in AI, shared his worries about machines possibly becoming smarter than humans – a concern made pressing by how quickly today’s AI systems are improving.
The interview also touched upon Hinton’s early career, when his ideas met skepticism from the academic community. Despite this, he kept exploring neural networks, a persistence that eventually led to major breakthroughs in AI.
It took much, much longer than I expected. It took, like, 50 years before it worked well, but in the end it did work well… I always thought I was right [about neural networks].
Hinton’s thoughts on what AI can do now and in the future were both intriguing and a bit alarming. He believes that AI systems, with their capacity to learn and reason, might one day outsmart humans.
I think they may be [better at learning than the human mind], yes. And at present, they’re quite a lot smaller. So even the biggest chatbots only have about a trillion connections in them. The human brain has about 100 trillion. And yet, in the trillion connections in a chatbot, it knows far more than you do in your hundred trillion connections, which suggests it’s got a much better way of getting knowledge into those connections – a much better way of getting knowledge that isn’t fully understood.
A Healing Touch: AI’s Promise in Revolutionizing Healthcare
The potential benefits of AI in healthcare, as highlighted by Hinton, show a promising side of what AI can achieve. From interpreting medical images to designing drugs, AI’s contribution to healthcare can be life-changing.
AI is already comparable with radiologists at understanding what’s going on in medical images. It’s gonna be very good at designing drugs. It already is designing drugs. So that’s an area where it’s almost entirely gonna do good.
In fact, there is already evidence that AI is revolutionizing medicine. By sifting through extensive data to identify potential compounds, AI drastically cuts down the drug discovery timeframe, as showcased by biotech startup Kantify. Additionally, collaborations between AI firms, like the UK startup Causaly, and major drugmakers are paving the way for cheaper and faster drug development.
The Two Edges of AI: Uncovering the Risks Alongside the Rewards
However, AI also carries equally concerning risks, including unemployment, fake news, and the misuse of AI in military operations.
Well, the risks are having a whole class of people who are unemployed and not valued much because what they– what they used to do is now done by machines.
Learning from History: Hinton’s Reflection on AI, Governance, and Global Responsibility
Hinton stated that he has no regrets about his work in AI, given its potential for good. But he says now is the time to study AI, for governments to set rules, and for a global agreement to ban military robots. He invoked Robert Oppenheimer, who, after developing the atomic bomb, campaigned against the hydrogen bomb – a man who changed the world, then found that world beyond his control.
It may be we look back and see this as a kind of turning point when humanity had to make the decision about whether to develop these things further and what to do to protect themselves if they did.