Apple’s VisionOS Makes a Bold Leap in Computer Interface

Like everyone else who was allowed to test Apple’s new Vision Pro after it was unveiled at the Worldwide Developers Conference in Cupertino, California, this week, I couldn’t wait to try it. But when an Apple tech at the ad hoc test facility used an optical device to check my lenses, I knew there might be a problem. The lenses in my glasses have prisms to combat a condition that otherwise gives me double vision. Apple has a set of pre-ground Zeiss lenses that will work for most of us eyeglass wearers, but none of them solved my problem. (With the Vision Pro still about a year away from launch, I didn’t expect Apple to handle every prescription in this beta; even after years of operation, Warby Parker still can’t grind my lenses.) Sure enough, my fears were justified: when I arrived in the demo room, the eye tracking – an essential function of the device – wasn’t working for me. I could experience only part of the demos.
What I saw convinced me that this is the world’s most advanced consumer AR/VR device. I was impressed by the fidelity of the virtual objects and icons that floated in the artificially rendered version of the room I was sitting in, as well as by the alternate realities conveyed in immersive mode, including sporting events that put me at the edge of the action, a 3D mindfulness dome that enveloped me in soothing petal shapes, and a trip to a mountaintop that set my stomach churning and equaled the best VR I’ve ever experienced. (You can read Lauren Goode’s description of the full demo.)
Unfortunately, my eye-tracking issue prevented me from trying out what might be the most important part of the Vision Pro: Apple’s latest advance in computer interfaces. With no mouse, keyboard, or touchscreen, you navigate the Vision Pro simply by looking at images projected onto two high-resolution micro-OLED displays and making finger gestures to select menu items, scroll, and manipulate virtual objects. (The only other controls are a knob called the Digital Crown and a power button.) Apple describes this as “spatial computing,” but you could also call it “naked computing.” Or maybe that designation will have to wait until the roughly 1-pound scuba-style face mask is replaced, in a future release, by supercharged goggles. Those who tested it said they mastered the controls almost immediately and could easily call up documents, browse in Safari, and take photos.
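For developers, the interaction model is simpler than it sounds: the system performs the gaze tracking and pinch detection itself, so an ordinary SwiftUI control responds to look-and-pinch with no eye-tracking code in the app. Here is a minimal sketch, assuming the SwiftUI tooling Apple previewed at WWDC; the view and state names are hypothetical, invented for illustration.

```swift
import SwiftUI

// Hypothetical visionOS view. On Vision Pro, the user targets the
// button by looking at it, and the system registers a finger pinch
// as a tap - the app itself never sees raw eye-tracking data.
struct GreetingView: View {
    @State private var pinchCount = 0  // hypothetical state, for illustration

    var body: some View {
        VStack(spacing: 20) {
            Text("Pinched \(pinchCount) times")
            // A standard Button: gaze highlights it, a pinch activates it.
            Button("Select with a pinch") {
                pinchCount += 1
            }
        }
        .padding()
    }
}
```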
VisionOS, as it’s called, is a significant step in a half-century journey away from the original prison of a computer interface: the cumbersome and inflexible command line, where nothing happened until you used the keyboard to type in a string of alphanumeric characters, and everything that happened afterward demanded equally restrictive keyboard input. Beginning in the 1960s, researchers launched an assault on the command line, starting with the Stanford Research Institute’s Doug Engelbart, whose networked “augmented computing” system introduced an external device called a mouse to move the cursor and select among menu options. Later, scientists at Xerox PARC adapted some of these ideas to create what became known as the graphical user interface (GUI). PARC’s most famous innovator, Alan Kay, drew up plans for an ideal computer he dubbed the Dynabook, a holy grail of portable, intuitive computing. After seeing PARC’s innovations on a lab visit in 1979, Apple engineers brought the GUI to the mass market, first with the Lisa computer and then with the Macintosh. More recently, Apple set the example again with the iPhone’s multitouch interface, whose pinches and swipes were intuitive ways to tap the digital powers of the small but mighty phones and watches we carried in our pockets and wore on our wrists.
The mission of each of these computing shifts was to lower the barrier to interacting with the powerful digital world, to make it less awkward to take advantage of computing. That came at a price. Besides being intuitive, the natural gestures we use when we’re not at a computer are free. But making the computer as easy to navigate, and as vivid, as nature is expensive. It took far more computation to move from the command line to bitmapped displays that could render alphanumeric characters in different fonts and let us drag documents into file folders. The more the computer mimicked the physical world and accepted the gestures we use to navigate actual reality, the more work and innovation was required.
Vision Pro takes this to the extreme. That’s why it costs $3,500, at least in this first version. (You could argue that the Vision Pro is a 2023 version of Apple’s 1983 Lisa, a $10,000-plus computer that first brought bitmapping and the graphical user interface to a consumer device – and paved the way for the Macintosh, which was 75 percent cheaper and also a lot cooler.) Apple packed into this face mask one of its most powerful microprocessors; another piece of bespoke silicon designed specifically for the device; a 4K-plus display for each eye; 12 cameras, including a lidar scanner; a suite of sensors for head and gaze tracking, 3D mapping, and hand-gesture detection; dual-driver audio pods; exotic textiles for the headband; and a special seal to keep the light of reality from leaking in.