AI anxiety: What are the risks as Apple Intelligence comes to our iPhones?
BY MICHAEL SINCERE • JUNE 19, 2024 • 5 MIN READ
It’s been a long time coming, but Apple has finally unveiled its AI strategy. As part of it, Apple is partnering with OpenAI, the artificial intelligence research and deployment company, to incorporate AI into its iPhones. The question is: how can it do so as safely and as socially responsibly as possible?
During a two-hour presentation earlier this month at the company’s annual Worldwide Developers Conference in Cupertino, Calif., CEO Tim Cook and other Apple executives revealed what they are calling Apple Intelligence.
During the presentation, Apple showed how Apple Intelligence works with a number of its products, including the iPhone, Mac and iPad. Admittedly, it is an evolving technology.
“It’s not 100 percent,” Cook explained. “I think we have done everything that we know to do, including thinking very deeply about the readiness of the technology in the areas that we’re using it in.”
For starters, Apple Intelligence upgrades Apple’s virtual assistant, Siri, allowing her to perform many more tasks while also using a much more conversational tone.
During the demo, Apple showed how Siri could pull up flight information from an email and find details about a lunch reservation included in a text. Apple Intelligence can also write and proofread emails and create cartoon images of your friends.
Highest ethical standards
Although these enhancements are useful and sometimes fun, Apple is keenly aware of its role in making sure that AI is developed with the highest ethical standards. In an interview with YouTuber Marques Brownlee right after the keynote address, Cook emphasized security and privacy:
“We are not waiting for comprehensive privacy legislation or regulation to come into effect. We already view privacy as a fundamental human right. That’s the lens that we see it at. Given that we’re doing those things, personal context and privacy, we wanted to integrate it at a deep level.”
Cook made it clear that privacy and security were Apple’s biggest concerns, and that the company would do everything to protect both.
AI anxiety
Many people are nervous about artificial intelligence. Some are concerned that it will take away human jobs, which has already happened in some factories and retail stores. Others are concerned that AI will be developed without any ethical safeguards or without adhering to ISO standards (voluntary international standards, published by the International Organization for Standardization, that responsible businesses follow). In fact, several technology experts want AI developers to adopt those standards.
There is reason to be concerned. Although AI is still in its infancy, some of the things it “could” do in the future are the stuff of science fiction. In fact, some experts warn that AI could do more harm than good. For example, the late Prof. Stephen Hawking, in a 2014 interview with the BBC, warned: “The development of full artificial intelligence could spell the end of the human race.”
More recently, the Center for AI Safety (CAIS) wrote: “Ensuring that AI systems are safe is more than just a machine learning problem – it is a societal challenge that cuts across traditional disciplinary boundaries.”
When AI is misused
If left unmanaged, AI could cause tremendous damage in ways few can imagine. Recently, AI has been used to create seemingly real images and videos of people, including Taylor Swift. These “deepfake” images can do lasting damage to the reputations of celebrities, as well as of private individuals.
AI could also be used by criminals to imitate the voices of real people or to send realistic emails and texts to try to steal money or goods. If the government doesn’t step in to provide oversight, AI could cause even more problems for society.
Nevertheless, it’s a given that AI is going to be a bigger part of all of our lives moving forward. Now that Apple has officially entered the AI space, look for even more players to join in.
If AI can be used as a responsible tool to enhance our lives, it will be welcomed. However, we also know that, like any tool, AI can be abused by those pursuing their own selfish interests. That’s why many experts are calling for strict AI regulation.
Government regulations
Fortunately, governments are responding. The European Union passed the AI Act, which takes a risk-based approach to managing the dangers AI poses. In the United States, President Biden issued an executive order that not only promotes AI development but also gives federal agencies guidelines for designing or acquiring AI products, including a set of testing standards intended to minimize AI risks.
In addition to increased regulation, others suggest that scientists should think about how to use AI in the future as a force for good. For example, AI technology is already being used to help predict when natural disasters may occur and to reduce response times.
It is really up to AI scientists, technologists, and governments to come up with a set of common-sense guidelines that enhance the positive aspects of AI while trying to control the negative. With the help of AI, it’s possible for scientists to speed up technology that will improve the environment and make the world a safer place.
Now that Apple has joined this elite AI group, many are hoping the company will help turn AI into a positive tool rather than a technology to be feared.