Elon Musk voiced a dire warning about AI. Should we be worried?
Opinions expressed by Entrepreneur contributors are their own.
Tech leader Elon Musk is known for sounding the alarm bells on the risks of artificial intelligence.
Musk has said that he believes that AI will soon manipulate social media if it hasn’t already — a concern that pales in comparison to his previous predictions of a future humanity governed by an intelligent machine dictator.
A year ago, he told Recode Decode that the relative intelligence ratio between such a dictator and the rest of humanity would resemble the ratio between a person and a cat.
Musk doesn’t stand alone in fearing the risks of AI gone wrong. Stephen Hawking and other researchers have warned that intelligent machines could become very dangerous. But there’s another, brighter possible future that Musk agrees could materialize as well.
In a conversation with Musk and journalist Maureen Dowd for Vanity Fair, the Tesla founder agreed with Y Combinator’s Sam Altman’s prediction: “In the next few decades we are either going to head toward self-destruction or toward human descendants eventually colonizing the universe.”
The first, Terminator-like option is terrifying. But the latter possibility sees artificial intelligence opening doors for humankind to become a race of space-exploring Han Solos and Princess Leias. So far, we’re headed in the right direction, as AI today is improving human lives in various applications, including healthcare, defense, and business.
Healthcare, perhaps, is where machine learning’s potential will become increasingly visible in the years to come.
If AI works as optimists hope, it could democratize healthcare by boosting access for underserved communities and lowering costs across the board, all while assisting in the early detection of life-threatening diseases. Already, AI models are changing the way cancer, the leading cause of death in wealthy countries, is diagnosed.
MIT’s Computer Science and Artificial Intelligence Laboratory has developed an AI model that can predict the development of breast cancer up to five years in advance. When it comes to cancer treatments, time is of the essence, and AI’s ability to help diagnose early on has the potential to save lives.
AI is disrupting the genetic-care space as well.
There are currently around 5,000 geneticists in the world, and though genetic sequencing has become easier and cheaper to perform, making sense of the data is still largely a human effort.
Each test is a tedious mini-research project that takes hours to perform. Genomics platform Emedgene, for example, uses AI to automate genetic interpretation, helping human geneticists turn sequencing data into findings that can inform doctors’ treatment of various illnesses.
No one can deny that AI is improving human life in various fields, and nowhere is its value more apparent than in medicine.
The alarmists’ primary concern today, though, lies with the prospect of machines intruding on people’s privacy. With the AI-powered, super-convincing doctored videos known as “deepfakes” and the data-privacy controversy surrounding Russian photo-editing app FaceApp dominating headlines this past summer, it’s quite reasonable that such fears persist.
But it’s important to keep in mind that there are AI companies out there actively working to ensure AI is trained in a secure manner, and that consumer data remains private.
“Artificial intelligence is set to shape the future of many industries, and public perception of the matter is substantial,” says Leif Lundbaek, CEO of XAIN, a company that provides GDPR compliance for AI applications. “The time of misusing personal data is over. Data privacy will become a key competitive factor for machine learning solutions because it will be demanded by both governments and ourselves as users.”
It’s only a matter of time before AI companies across the globe must comply with increasingly strict government-imposed data privacy regulations, with various U.S. states already racing to craft rules of their own to keep pace with GDPR. While that fact alone might not be enough to put the likes of Musk at ease, it does suggest governments are taking the right steps toward ensuring the AI of the future is an AI that’s good for humanity.
Let’s keep heading in that direction.