At every new iPhone presentation, Apple engineers juggle opaque terms, one of which is "neural processor": it improves year over year, growing from several hundred billion operations per second to several trillion. Yet many users see no obvious benefit from the neural coprocessor, so not everyone knows what this module does or why it is needed. Meanwhile, in 2022 your smartphone would lack the lion's share of its features without it. Here is what the Neural Engine is and why it matters.
What is Neural Engine
Apple began developing its own processors in 2010, when the iPhone 4 launched with the A4 chip. But the era of neural coprocessors began only in 2017 with the release of the iPhone X, which is not considered one of the most iconic devices in Apple's history for nothing: the company introduced the new A11 Bionic chip with the first-generation Neural Engine, consisting of two cores and performing about 600 billion operations per second. Today's A16 Bionic has 16 cores, and performance has grown to 17 trillion operations per second.
Many did not understand why it was needed: after all, the iPhone had worked without it for years. Apple, however, explained that the Neural Engine runs neural networks and is used for Face ID, augmented-reality features like Animoji and Memoji, and other resource-intensive tasks. It turned out that some processes do not need the CPU or GPU at all.
Why you need a Neural Engine
The first feature the Neural Engine was used for was, of course, Face ID: the coprocessor let the system build a map of points on a person's face for the most accurate recognition when unlocking. Android manufacturers tried to create something similar, and the results came close, but they worked differently (relying only on the camera) and less securely, since such a system could be unlocked with nothing more than a photo.
Over time, the Neural Engine was adapted for other features as well: portrait photography, Siri, speech recognition, photo sorting, and Memories. Yes, all of these run on neural networks and machine learning.
As mentioned above, neither the CPU nor the GPU is well suited to running neural networks. Why? Because when AI works, it must perform a large number of fairly simple calculations simultaneously. The Neural Engine handles them in parallel with the CPU and GPU, so those components are not loaded with extra work and the iPhone's battery is used more efficiently.
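To make the point concrete, here is a minimal sketch (illustrative Python, not Apple's actual implementation) of why this workload parallelizes so well: each neuron in a neural-network layer is just a chain of simple multiply-accumulate operations, and every neuron is independent of the others, so all of them can run at once on dedicated hardware.

```python
# Illustrative sketch: a neural-network layer is many independent
# multiply-accumulate (MAC) operations. Each output neuron can be
# computed in parallel, which is the kind of workload a dedicated
# coprocessor like the Neural Engine is built to accelerate.
from concurrent.futures import ThreadPoolExecutor

def neuron_output(weights, inputs, bias):
    """One neuron: simple multiplications summed up, plus a bias."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

def layer_forward(weight_rows, inputs, biases):
    """Every neuron is independent, so the whole layer parallelizes trivially."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(
            lambda args: neuron_output(*args),
            [(w, inputs, b) for w, b in zip(weight_rows, biases)],
        ))

# A toy 2-neuron layer over a 3-element input.
weights = [[1.0, 0.0, -1.0],
           [0.5, 0.5, 0.5]]
biases = [0.0, 1.0]
inputs = [2.0, 3.0, 4.0]
print(layer_forward(weights, inputs, biases))  # [-2.0, 5.5]
```

A real network runs millions of these MACs per inference; a CPU executes them a few at a time, while a neural coprocessor performs thousands in a single clock cycle, which is why offloading them saves both time and battery.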
Another advantage of the Neural Engine is that it works autonomously, without sending data off to a server. That is why Face ID, Night mode, photo sorting, background blur, and more all work even when the iPhone is offline.
Naturally, Apple regularly updates iOS with neural networks in mind, so when choosing an iPhone it is important to look not only at the number of CPU and GPU cores but also at the Neural Engine: thanks to it, the lion's share of the everyday features you use will work much faster.
Artificial intelligence in iPhone
The neural coprocessor is another trend that Apple set. Since then, every mobile chip has gained its own engine for neural computing, so in 2022 it is not at all surprising that even an inexpensive Android smartphone has some kind of AI-assisted photo enhancement.
Incidentally, even the Neural Engine in the A16 Bionic is far from the fastest on the market: the Snapdragon 8+ Gen 1 has a built-in Hexagon neural coprocessor capable of up to 27 trillion operations per second, while the latest Neural Engine manages only 17 trillion.
One of the Neural Engine's main rivals is a Google development, the next-generation Tensor Processing Unit, built into the proprietary Tensor G2 chip. It is largely thanks to it that Pixel smartphones have cool features the iPhone will never have.
As mentioned above, the Neural Engine's feature set was rather limited at first, with AI used only by built-in applications. Now, however, third-party developers can use it as well: for voice identification, face and image recognition, and much more.
iOS 16 places a huge emphasis on neural networks: the system has a great many features that run on artificial intelligence. Recognizing text in photos and videos, cleanly cutting a subject out of a photo, adjusting the depth effect on the lock-screen wallpaper: all of it needs the Neural Engine.
Source: AppleInsider.ru, the largest Russian-language site about iPhone, iPad, and Mac.