When the new iPhone models (8, 8 Plus, and X) were announced at the September Event keynote, the new GPU Family 4 webpage and a series of new Metal videos labeled Fall 2017 were published. The new processor, called A11 Bionic, has the first Apple-designed GPU, which comes with three cores. Internal tests state that it is 30% faster than the GPU inside the A10. The A11 also features a new hardware addition for machine learning, the Neural Engine.
Below is a table I made from the Metal Feature Sets document. It includes only what Metal 2 introduces for the A11.
Note: I only included features that are new to the A11 and not available on the A10 or earlier. Some of these features are also available for other GPU families.
Let’s look briefly at some of these features:
- Imageblocks – not a new concept on iOS devices; however, the A11 lets us treat imageblocks (which are structured image data in tile memory) as data structures with full, granular control. They are integrated with both fragment and compute shaders.
- Tile Shading – is a rendering technique that allows fragment and compute shaders to access persistent tile memory between the two rendering phases. Tile memory is GPU on-chip memory that improves performance by storing intermediate results locally instead of using the device memory. Tile memory from one phase is available to any subsequent fragment phases.
- Raster Order Groups – provide ordered memory access from fragment shaders and facilitate features such as order-independent transparency, dual-layer G-buffers, and voxelization.
- Imageblock Sample Coverage Control – Metal 2 on A11 tracks the number of unique samples in each pixel, updating this information as new primitives are rendered. When the covered samples of a pixel share the same color, the GPU blends once per unique color rather than once per sample, doing one blend iteration less than the A10 or earlier GPUs.
- Threadgroup Sharing – allows threadgroups and the threads within a threadgroup to communicate with each other using atomic operations or a memory fence rather than expensive barriers.
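To make a few of these features concrete, here is a minimal, untested Metal Shading Language sketch of my own (all names are hypothetical): a fragment function uses a `[[raster_order_group(0)]]` argument for ordered memory access, its `[[color(0)]]` output lives in the imageblock, and a tile kernel then reads that persistent tile memory in a later phase.

```metal
#include <metal_stdlib>
using namespace metal;

// Per-pixel data living in tile memory. On A11, [[color(n)]] fields of a
// fragment output are backed by the imageblock, so a later tile phase can
// read them without a round trip to device memory.
struct PixelData {
    half4 color [[color(0)]];
};

// Fragment phase. The raster_order_group attribute serializes access to the
// counter between overlapping fragments of the same pixel, while fragments
// of unrelated pixels still run fully in parallel.
fragment PixelData drawFragment(device atomic_uint *overdrawCounter
                                    [[buffer(0), raster_order_group(0)]])
{
    atomic_fetch_add_explicit(overdrawCounter, 1, memory_order_relaxed);
    PixelData out;
    out.color = half4(1.0h, 0.0h, 0.0h, 1.0h); // placeholder color
    return out;
}

// Tile phase: a kernel dispatched per tile (via dispatchThreadsPerTile on
// the render command encoder) that reads the tile memory written by the
// fragment phase above.
kernel void resolveTile(imageblock<PixelData, imageblock_layout_implicit> iBlock,
                        ushort2 tid [[thread_position_in_threadgroup]])
{
    threadgroup_imageblock PixelData *pixel = iBlock.data(tid);
    // Consume pixel->color in place, e.g. a cheap darkening pass:
    pixel->color.rgb *= 0.5h;
}
```

The host side would create the tile stage with an `MTLTileRenderPipelineDescriptor` and encode it in the same render pass as the fragment stage, which is exactly what keeps the intermediate results in tile memory.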
Even though it is not necessarily Metal related, at the same event the Face Tracking with ARKit video and the Creating Face-Based AR Experiences webpage were also published. Face Tracking, however, is only possible on the iPhone X because it is the only device at the moment that has a TrueDepth front camera. The most immediate application of face tracking, which we all saw during the September Event keynote, was the amazing Animoji! The new Neural Engine hardware is responsible for Animoji, among other machine learning tasks. Since I recently purchased an iPhone X, it might give me some ideas for a new post in the Using ARKit with Metal series.
Until next time!