Wednesday, 25 March 2020

Review: 100,000 miles and one week with an iPad Pro

For the past eighteen months, the iPad Pro has been my only machine away from home, and until recently, I was away from home a lot, traveling domestically and internationally to event locations around the world or our offices in San Francisco, New York and London. Every moment of every day that I wasn’t at my home desk, the iPad Pro was my main portable machine.

I made the switch on a trip to Brazil for our conference and Startup Battlefield competition (which was rad, by the way, a computer vision cattle scale won the top prize) on somewhat of a whim. I thought I’d take this one-week trip to make sure I got a good handle on how the iPad Pro would perform as a work device and then move back to my trusty 13” MacBook Pro.

The trip changed my mind completely about whether I could run TechCrunch wholly from a tablet. It turns out that it was lighter, smoother and more willing than my MacBook at nearly every turn. I never went back.

iPad Pro, 2018, Brazil

The early days were absolutely full of growing pains for both the iPad and myself. Rebuilding workflows by patching together the share sheet, automation tools and the newly introduced Shortcuts was a big part of making it a viable working machine at that point. And the changes that came with iPadOS that improved Slide Over, Split View and the home screen were welcome, in that they made the whole device feel more flexible.

The past year and a half has taught me a lot about what the absolute killer features of the iPad Pro are, while also forcing me to learn about the harsher trade-offs I would have to make for carrying a lighter, faster machine than a laptop.

All of which is to set the context for my past week with the new version of that machine.

For the greater part, this new 2020 iPad Pro still looks much the same as the one released in 2018. Aside from the square camera array, it’s a near twin. The good news on that front is that you can tell Apple nailed the ID the first time because it still feels super crisp and futuristic almost two years later. The idealized expression of a computer. Light, handheld, powerful and functional.

The 12.9” iPad Pro that I tested contains the new A12Z chip, which performs at a near identical level to the model I’ve been using. At over 5,000 single-core and 18,000 multi-core in Geekbench 4, it remains one of the more powerful portable computers you can own, regardless of class. The 1TB model appears to still have 6GB of RAM, though I don’t know whether the lower-capacity models are still stepped down to 4GB.

This version adds an additional GPU core and “enhanced thermal architecture” — presumably better heat distribution under load but that was not especially evident given that the iPad Pro has rarely run hot for me. I’m interested to see what teardowns turn up here. New venting, piping or component distribution perhaps. Or something on-die.

It’s interesting, of course, that this processor is so close in performance (at least at a CPU level) to the A12X Bionic chip. Even at a GPU level Apple says nothing more than that it is faster than the A12X with none of the normal multipliers it typically touts.

The clearest answer for this appears to be that this is a true “refresh” of the iPad Pro. There are new features, which I’ll talk about next, but on the whole this is “the new one” in a way that is rarely but sometimes true of Apple devices. Whatever Apple has learned and can currently execute in hardware, short of a massive overhaul of the design, is what we see here.

I suppose my one note on this is that the A12X still feels fast as hell and I’ve never wanted for power so, fine? I’ve been arguing against speed bumps at the cost of usability forever, so now is the time I make good on those arguments and don’t really find a reason to complain about something that works so well.

CamARa

The most evident physical difference on the new iPad Pro is, of course, the large camera array, which contains a 10MP ultra wide and a 12MP wide camera. These work to spec, but it’s the new lidar scanner that is the most intriguing addition.

It is inevitable that we will eventually experience the world on several layers at once. The physical layer we know will be augmented by additional rings of data like the growth rings of a redwood.

In fact, that future has already come for most of us, whether we realize it or not. Right now, we experience these layers mostly in an asynchronous fashion by requesting their presence. Need a data overlay to tell you where to go? Call up a map with turn-by-turn. Want to know the definition of a word or the weather? Ask a voice assistant.

The next era beyond this one, though, is passive, contextually driven info layers that are presented to us proactively, both visually and audibly.

We’ve been calling this either augmented reality or mixed reality, though I think that neither one of those is ultimately very descriptive of what will eventually come. The augmented human experience has started with the smartphone, but will slowly work its way closer to our cerebellum as we progress down the chain from screens to transparent displays to lenses to ocular implants to brain-stem integration.

If you’re rolling your un-enhanced eyes right now, I don’t blame you. But that doesn’t mean I’m not right. Bookmark this and let’s discuss in 2030.

In the near term, though, the advancement of AR technology is being driven primarily by smartphone experiences. And those are being advanced most quickly by Google and Apple with the frameworks they are offering to developers to integrate AR into their apps and the hardware that they’re willing to fit onboard their devices.

One of the biggest hurdles to AR experiences being incredibly realistic has been occlusion. This is the effect that allows one object to intersect with another realistically — to obscure or hide it in a way that tells our brain that “this is behind that.” Occlusion leads to a bunch of interesting things like shared experiences, interaction of physical and digital worlds and just general believability.
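For the curious, occlusion at its core is just a per-pixel depth test: whichever surface sits nearer to the camera wins. Here’s a deliberately toy Python sketch of that idea (the function name and the one-dimensional “image” are my own illustration, not anything from ARKit):

```python
# Toy illustration of occlusion as a per-pixel depth test: for each pixel,
# whichever layer (real-world surface vs. virtual object) is closer to the
# camera is the one you see. Depths are in meters; None means "nothing there".

def composite(real_depth, virtual_depth):
    """Return 'real' or 'virtual' per pixel, letting the nearer surface occlude."""
    result = []
    for r, v in zip(real_depth, virtual_depth):
        if v is None:
            result.append("real")
        elif r is None or v < r:
            result.append("virtual")
        else:
            result.append("real")  # real geometry hides the virtual object
    return result

# A virtual object at 2 m is hidden wherever a real wall sits at 1.5 m.
print(composite([1.5, 3.0, None], [2.0, 2.0, 2.0]))  # → ['real', 'virtual', 'virtual']
```

The hard part in practice isn’t this comparison; it’s getting an accurate real-world depth map in the first place, which is exactly what the lidar scanner provides.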

This is where the iPad Pro’s lidar scanner comes in. With lidar, two major steps forward are possible for AR applications.

  1. Initialization time is nearly instantaneous. Because lidar works at the speed of light, reading pulses of light it sends out and measuring their “flight” times to determine the shape of objects or environments, it is very fast. That typical “fire it up, wave it around and pray” AR awkwardness should theoretically be eliminated with lidar.
  2. Occlusion becomes automatic. It no longer requires calculations using the camera, small hand movements and computer vision to “guess” at the shape of objects and their relationships to one another. Developers essentially get all of this for “free” computationally and at blazing speed.
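To make the “speed of light” point in the first item concrete, the underlying time-of-flight math is a one-liner: distance is half the round trip multiplied by the speed of light. A quick illustrative sketch (my own toy example, not Apple’s implementation):

```python
# Time-of-flight ranging, the principle behind a lidar scanner: a pulse of
# light goes out, bounces back, and the round-trip time gives the distance.
C = 299_792_458.0  # speed of light in a vacuum, m/s

def distance_from_flight_time(round_trip_seconds: float) -> float:
    """Half the round trip (out and back), times the speed of light."""
    return C * round_trip_seconds / 2.0

# A pulse that returns in roughly 33 nanoseconds hit something about 5 m away.
print(round(distance_from_flight_time(33.356e-9), 2))  # → 5.0
```

The nanosecond scale of those round trips is why initialization feels instantaneous compared with waving the camera around while computer vision guesses at geometry.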

There’s a reason lidar is used in many autonomous free roaming vehicle systems and semi-autonomous driving systems. It’s fast, relatively reliable and a powerful mapping tool.

ARKit 3.5 now supplies the ability to create a full topological 3D mesh of an environment with plane and surface detection. It also comes with greater precision than was possible with a simple camera-first approach.

Unfortunately, I was unable to test this system; applications that take advantage of it are not yet available, though Apple says many are on their way, from games like Hot Lava to home furnishing apps like Ikea’s. I’m interested to see how effective this addition is on the iPad, as it is highly likely to come to the iPhone this year or next at the latest.

One thing I am surprised, but not totally shocked, by is that the iPad Pro’s rear-facing camera does not do Portrait photos. Only the front-facing TrueDepth camera does Portrait mode here.

My guess is that there is a far more accurate Portrait mode coming to iPad Pro that utilizes the lidar array as well as the camera, and it is just not ready yet. There is no reason Apple should not be able to execute a Portrait-style image with an even better understanding of the relationships of subjects to backgrounds.

Lidar is a technology with a ton of promise and a slew of potential applications. Having this much more accurate way to bring the outside world into your device is going to open a lot of doors for Apple and developers over time, but my guess is that we’ll see those doors open over the next couple of years rather than all at once.

One disappointment for me is that the TrueDepth camera placement remains unchanged. In a sea of fantastic choices Apple made about the iPad Pro’s design, placing the camera where it is most likely to be covered by your hand in landscape mode is a standout poor one.

Over the time I’ve been using iPad Pro as my portable machine I have turned it to portrait mode a small handful of times, and most of those were likely because an app just purely did not support landscape.

This is a device that was born to be landscape, and the camera should reflect that. My one consideration here is that the new “floating” design of the Magic Keyboard that ships in May will raise the camera up and away from your hands and may, in fact, work a hell of a lot better because of it.

Keyboard and trackpad support

At this point, enough people have seen the mouse and trackpad support to have formed some opinions on it. In general, the response has been extremely positive, and I agree with that assessment. There are minor quibbles about how much snap Apple is applying to the cursor as it attaches itself to buttons or actions, but overall the effect is incredibly pleasant and useful.

Re-imagining the cursor as a malleable object rather than a hard-edged arrow or hand icon makes a ton of sense in a touch environment. We’re used to our finger becoming whatever tool we need it to be — a pencil or a scrubber or a button pusher. It only makes sense that the cursor on iPad would also be contextually aware as well.

I have only been able to use the Magic Trackpad so far, of course, but I have high hopes that it should fall right into the normal flow of work when the Magic Keyboard drops.

And, given the design of the keyboard, I think that it will be nice to be able to keep your hands on the keyboard and away from poking at a screen that is now much higher than it was before.

Surface Comparisons

I think that with the addition of the trackpad to the iPad Pro there has been an instinct to say, “Hey, the Surface was the right thing after all.” I’ve been thinking about this at one point or another for a couple of years now as I’ve been daily driving the iPad.

I made an assessment back in 2018 about this whole philosophical argument, and I think it’s easiest to just quote it here:

One basic summary of the arena is that Microsoft has been working at making laptops into tablets, Apple has been working on making tablets into laptops and everyone else has been doing who knows what.

Microsoft still hasn’t been able (come at me) to ever get it through their heads that they needed to start by cutting the head off of their OS and building a tablet first, then walking backwards. I think now Microsoft is probably much more capable than then Microsoft, but that’s probably another whole discussion.

Apple went and cut the head off of OS X at the very beginning, and has been very slowly walking in the other direction ever since. But the fact remains that no Surface Pro has ever offered a tablet experience anywhere near as satisfying as an iPad’s.

Yes, it may offer more flexibility, but it comes at the cost of unity and reliable functionality. Just refrigerator toasters all the way down.

Still holds, in my opinion, even now.

Essentially, I find the argument that the iPad has arrived at the Surface’s doorstep because the iPad’s approach was wrong to be too narrow: it focuses on hardware, when the reality is that Windows has never been properly adjusted for touch. Apple is coming at this touch first, even as it adds cursor support.

To reiterate what I said above, I am not saying that “the Surface approach is bad” here so go ahead and take a leap on that one. I think the Surface team deserves a ton of credit for putting maximum effort into a convertible computer at the time that nearly the entire industry was headed in another direction. But I absolutely disagree that the iPad is “becoming the Surface” because the touch experience on the Surface is one of the worst of any tablet and the iPad is (for all of the interface’s foibles) indisputably the best.

It is one of the clearer examples of attempting to solve a similar problem from different ends in recent computing design.

That doesn’t mean, however, that several years of using the iPad Pro have been without flaws.

iPad Promise

Back in January, Apple writer and critic John Gruber laid out his case for why the iPad has yet to meet its full potential. The conclusions, basically, were that Apple had missed the mark on the multi-tasking portion of its software.

At the time, I thought John and others who followed on made a lot of really good points, and though I had thoughts, I wasn’t ready to crystallize them. I think I’m ready now. Here’s the nut of it:

The focus of the iPad Pro, its North Star, must be speed and capability, not ease of use.

Think about the last time that you, say, left your MacBook or laptop sitting for a day or two or ten. What happened when you opened it? Were you greeted with a flurry of alerts and notifications and updates and messages? Were you able to, no matter how long or short a time you had been away from it, open it and start working immediately?

With iPad Pro, no matter where I have been or what I have been doing, I was able to flip it open, swipe up and be issuing my first directive within seconds. As fast as my industry moves and as wild as our business gets, that kind of surety is literally priceless.

Never once, however, did I wish that it was easier to use.

Do you wish that a hammer were easier to use? No, you learn to hold it correctly and swing it accurately. The iPad could use a bit more of that.

Currently, iPadOS is still too closely tethered to the sacred cow of simplicity. In a strange bout of irony, the efforts of the iPad software team to keep things simple (same icons, same grid, same app-switching paradigms) and true to their original intent have instead caused a sort of complexity to creep into the arrangement.

I feel that many of the issues surrounding the iPad Pro’s multi-tasking system could be corrected by giving professional users a way to immutably pin apps or workspaces in place, letting them “break” the multitasking methodology that has served the iPad for years in service of making their workspaces feel like their own. Ditch the dock entirely and make it a list of pinned spaces that can be picked from with a tap. Lose the protected status of app icons and have them reflect live what is happening in those spaces.

The above may all be terrible ideas, but the core of my argument is sound. Touch interfaces first appeared in the 1970s and have been massively popular for at least a dozen years now.

The iPad Pro user of today is not new to a touch-based interface and is increasingly likely to have never known a computing life without touch interfaces.

If you doubt me, watch a kid bounce between six different apps putting together a simple meme or message to send to a friend. It’s a virtuoso performance that they give dozens of times a day. These users are touch native. They deserve to eat meat, not milk.

This device is still massively compelling, regardless, for all of the reasons I outlined in 2018 and still feel strongly about today. But I must note that there is little reason so far to upgrade to this from the 2018 iPad Pro. And given that the Magic Keyboard is backward compatible, it won’t change that.

If you don’t currently own an iPad Pro, however, and you’re wondering whether you can work on it or not: well, I can and I did and I do, talking to 30 employees across multiple continents and time zones while managing the editorial side of a complex, multi-faceted editorial, events and subscription business.

I put 100,000 (airline) miles on the iPad Pro and never once did it fail me. Battery always ample; speed always constant; keyboard not exactly incredible, but spill-sealed and bulletproof. I can’t say that of any laptop I’ve ever owned, Apple included.

I do think that the promise of the integrated trackpad and a leveling up of the iPad’s reason to be makes the Magic Keyboard and new iPad Pro one of the more compelling packages currently on the market.

I loved the MacBook Air and have used several models of it to death over the years. But there is no way, today, that I would choose to go back to a laptop given my style of work. The iPad is just too fast, too reliable and too powerful.

It’s insane to have a multi-modal machine that can take typing, swiping and sketching as inputs and has robust support for every major piece of business software on the planet — and that always works, is always fast and is built like an Italian racing car.

Who can argue with that?



from iPhone – TechCrunch https://ift.tt/33J7wGb

Review: 100,000 miles and one week with an iPad Pro

For the past eighteen months, the iPad Pro has been my only machine away from home, and until recently, I was away from home a lot, traveling domestically and internationally to event locations around the world or our offices in San Francisco, New York and London. Every moment of every day that I wasn’t at my home desk, the iPad Pro was my main portable machine.

I made the switch on a trip to Brazil for our conference and Startup Battlefield competition (which was rad, by the way, a computer vision cattle scale won the top prize) on somewhat of a whim. I thought I’d take this one-week trip to make sure I got a good handle on how the iPad Pro would perform as a work device and then move back to my trusty 13” MacBook Pro.

The trip changed my mind completely about whether I could run TechCrunch wholly from a tablet. It turns out that it was lighter, smoother and more willing than my MacBook at nearly every turn. I never went back.

iPad Pro, 2018, Brazil

The early days were absolutely full of growing pains for both the iPad and myself. Rebuilding workflows by patching together the share sheet and automation tools and the newly introduced Shortcuts was a big part of making it a viable working machine at that point. And the changes that came with iPadOS that boosted slipover, split and the home screen were welcome in that they made the whole device feel more flexible.

The past year and a half has taught me a lot about what the absolute killer features of the iPad Pro are, while also forcing me to learn about the harsher trade-offs I would have to make for carrying a lighter, faster machine than a laptop.

All of which is to set the context for my past week with the new version of that machine.

For the greater part, this new 2020 iPad Pro still looks much the same as the one released in 2019. Aside from the square camera array, it’s a near twin. The good news on that front is that you can tell Apple nailed the ID the first time because it still feels super crisp and futuristic almost two years later. The idealized expression of a computer. Light, handheld, powerful and functional.

The 12.9” iPad Pro that I tested contains the new A12Z chip which performs at a near identical level to the same model I’ve been using. At over 5015 single-core and over 18,000 multi-core scores in Geekbench 4, it remains one of the more powerful portable computers you can own, regardless of class. The 1TB model appears to still have 6GB of RAM, though I don’t know if that’s still stepped down for the lower models to 4GB.

This version adds an additional GPU core and “enhanced thermal architecture” — presumably better heat distribution under load but that was not especially evident given that the iPad Pro has rarely run hot for me. I’m interested to see what teardowns turn up here. New venting, piping or component distribution perhaps. Or something on-die.

It’s interesting, of course, that this processor is so close in performance (at least at a CPU level) to the A12X Bionic chip. Even at a GPU level Apple says nothing more than that it is faster than the A12X with none of the normal multipliers it typically touts.

The clearest answer for this appears to be that this is a true “refresh” of the iPad Pro. There are new features, which I’ll talk about next, but on the whole this is “the new one” in a way that is rarely but sometimes true of Apple devices. Whatever they’ve learned and are able to execute currently on hardware without a massive overhaul of the design or implementation of hardware is what we see here.

I suppose my one note on this is that the A12X still feels fast as hell and I’ve never wanted for power so, fine? I’ve been arguing against speed bumps at the cost of usability forever, so now is the time I make good on those arguments and don’t really find a reason to complain about something that works so well.

CamARa

The most evident physical difference on the new iPad Pro is, of course, the large camera array which contains a 10MP ultra wide and 12MP wide camera. These work to spec but it’s the addition of the new lidar scanner that is the most intriguing addition.

It is inevitable that we will eventually experience the world on several layers at once. The physical layer we know will be augmented by additional rings of data like the growth rings of a redwood.

In fact, that future has already come for most of us, whether we realize it or not. Right now, we experience these layers mostly in an asynchronous fashion by requesting their presence. Need a data overlay to tell you where to go? Call up a map with turn-by-turn. Want to know the definition of a word or the weather? Ask a voice assistant.

The next era beyond this one, though, is passive, contextually driven info layers that are presented to us proactively visually and audibly.

We’ve been calling this either augmented reality or mixed reality, though I think that neither one of those is ultimately very descriptive of what will eventually come. The augmented human experience has started with the smartphone, but will slowly work its way closer to our cerebellum as we progress down the chain from screens to transparent displays to lenses to ocular implants to brain-stem integration.

If you’re rolling your un-enhanced eyes right now, I don’t blame you. But that doesn’t mean I’m not right. Bookmark this and let’s discuss in 2030.

In the near term, though, the advancement of AR technology is being driven primarily by smartphone experiences. And those are being advanced most quickly by Google and Apple with the frameworks they are offering to developers to integrate AR into their apps and the hardware that they’re willing to fit onboard their devices.

One of the biggest hurdles to AR experiences being incredibly realistic has been occlusion. This is effect that allows one object to intersect with another realistically — to obscure or hide it in a way that tells our brain that “this is behind that.” Occlusion leads to a bunch of interesting things like shared experiences, interaction of physical and digital worlds and just general believability.

This is where the iPad Pro’s lidar scanner comes in. With lidar, two major steps forward are possible for AR applications.

  1. Initialization time is nearly instantaneous. Because lidar works at the speed of light, reading pulses of light it sends out and measuring their “flight” times to determine the shape of objects or environments, it is very fast. That typical “fire it up, wave it around and pray” AR awkwardness should theoretically be eliminated with lidar.
  2. Occlusion becomes an automatic. It no longer requires calculations be done using the camera, small hand movements and computer vision to “guess” at the shape of objects and their relationship to one another. Developers essentially get all of this for “free” computationally and at blazing speed.

There’s a reason lidar is used in many autonomous free roaming vehicle systems and semi-autonomous driving systems. It’s fast, relatively reliable and a powerful mapping tool.

ARKit 3.5 now supplies the ability to create a full topological 3D mesh of an environment with plane and surface detection. It also comes with greater precision than was possible with a simple camera-first approach.

Unfortunately, I was unable to test this system; applications that take advantage of it are not yet available, though Apple says many are on their way from games like Hot Lava to home furnishing apps like Ikea. I’m interested to see how effective this addition is to iPad as it is highly likely that it will also come to the iPhone this year or next at the latest.

One thing I am surprised but not totally shocked by is that the iPad Pro rear-facing camera does not do Portrait photos. Only the front-facing True Depth camera does Portrait mode here.

My guess is that there is a far more accurate Portrait mode coming to iPad Pro that utilizes the lidar array as well as the camera, and it is just not ready yet. There is no reason that Apple should not be able to execute a Portrait style image with an even better understanding of the relationships of subjects to backgrounds.

lidar is a technology with a ton of promise and a slew of potential applications. Having this much more accurate way to bring the outside world into your device is going to open a lot of doors for Apple and developers over time, but my guess is that we’ll see those doors open over the next couple of years rather than all at once.

One disappointment for me is that the True Depth camera placement remains unchanged. In a sea of fantastic choices that Apple made about the iPad Pro’s design, the placement of the camera in a location most likely to be covered by your hand when it is in landscape mode is a standout poor one.

Over the time I’ve been using iPad Pro as my portable machine I have turned it to portrait mode a small handful of times, and most of those were likely because an app just purely did not support landscape.

This is a device that was born to be landscape, and the camera should reflect that. My one consideration here is that the new “floating” design of the Magic Keyboard that ships in May will raise the camera up and away from your hands and may, in fact, work a hell of a lot better because of it.

Keyboard and trackpad support

At this point, enough people have seen the mouse and trackpad support to have formed some opinions on it. In general, the response has been extremely positive, and I agree with that assessment. There are minor quibbles about how much snap Apple is applying to the cursor as it attaches itself to buttons or actions, but overall the effect is incredibly pleasant and useful.

Re-imagining the cursor as a malleable object rather than a hard-edged arrow or hand icon makes a ton of sense in a touch environment. We’re used to our finger becoming whatever tool we need it to be — a pencil or a scrubber or a button pusher. It only makes sense that the cursor on iPad would also be contextually aware as well.

I was only able to use the Magic Trackpad so far, of course, but I have high hopes that it should fall right into the normal flow of work when the Magic Keyboard drops.

And, given the design of the keyboard, I think that it will be nice to be able to keep your hands on the keyboard and away from poking at a screen that is now much higher than it was before.

Surface Comparisons

I think that with the addition of the trackpad to the iPad Pro there has been an instinct to say, “Hey, the Surface was the right thing after all.” I’ve been thinking about this at one point or another for a couple of years now as I’ve been daily driving the iPad.

I made an assessment back in 2018 about this whole philosophical argument, and I think it’s easiest to just quote it here:

One basic summary of the arena is that Microsoft has been working at making laptops into tablets, Apple has been working on making tablets into laptops and everyone else has been doing who knows what.

Microsoft still hasn’t been able (come at me) to ever get it through their heads that they needed to start by cutting the head off of their OS and building a tablet first, then walking backwards. I think now Microsoft is probably much more capable than then Microsoft, but that’s probably another whole discussion.

Apple went and cut the head off of OS X at the very beginning, and has been very slowly walking in the other direction ever since. But the fact remains that no Surface Pro has ever offered a tablet experience anywhere near as satisfying as an iPad’s.

Yes, it may offer more flexibility, but it comes at the cost of unity and reliably functionality. Just refrigerator toasters all the way down.

Still holds, in my opinion, even now.

Essentially, I find the thinking that the iPad has arrived at the doorstep of the Surface because the iPad’s approach was not correct to be so narrow because it focuses on hardware, when the reality is Windows has never been properly adjusted for touch. Apple is coming at this touch first, even as it adds cursor support.

To reiterate what I said above, I am not saying that “the Surface approach is bad” here so go ahead and take a leap on that one. I think the Surface team deserves a ton of credit for putting maximum effort into a convertible computer at the time that nearly the entire industry was headed in another direction. But I absolutely disagree that the iPad is “becoming the Surface” because the touch experience on the Surface is one of the worst of any tablet and the iPad is (for all of the interface’s foibles) indisputably the best.

It is one of the clearer examples of attempting to solve a similar problem from different ends in recent computing design.

That doesn’t mean, however, that several years of using the iPad Pro is without a flaw.

iPad Promise

Back in January, Apple writer and critic John Gruber laid out his case for why the iPad has yet to meet its full potential. The conclusions, basically, were that Apple had missed the mark on the multi-tasking portion of its software.

At the time, I believed a lot of really good points had been made by John and others who followed on and though I had thoughts I wasn’t really ready to crystalize them. I think I’m ready now, though. Here’s the nut of it:

The focus of the iPad Pro, its North Star, must be speed and capability, not ease of use.

Think about the last time that you, say, left your MacBook or laptop sitting for a day or two or ten. What happened when you opened it? Were you greeted with a flurry of alerts and notifications and updates and messages? Were you able to, no matter how long or short a time you had been away from it, open it and start working immediately?

With iPad Pro, no matter where I have been or what I have been doing, I was able to flip it open, swipe up and be issuing my first directive within seconds. As fast as my industry moves and as wild as our business gets, that kind of surety is literally priceless.

Never once, however, did I wish that it was easier to use.

Do you wish that a hammer is easier? No, you learn to hold it correctly and swing it accurately. The iPad could use a bit more of that.

Currently, iPadOS is still too closely tethered to the sacred cow of simplicity. In a strange bout of irony, the efforts on behalf of the iPad software team to keep things simple (same icons, same grid, same app switching paradigms) and true to their original intent have instead caused a sort of complexity to creep into the arrangement.

I feel that much of the issues surrounding the iPad Pro’s multi-tasking system could be corrected by giving professional users a way to immutably pin apps or workspaces in place — offering themselves the ability to “break” the multitasking methodology that has served the iPad for years in service of making their workspaces feel like their own. Ditch the dock entirely and make that a list of pinned spaces that can be picked from at a tap. Lose the protected status of app icons and have them reflect what is happening in those spaces live.

The above may all be terrible ideas, but the core of my argument is sound. Touch interfaces first appeared in the 1970s and have been massively popular for at least a dozen years now.

The iPad Pro user of today is not new to a touch-based interface and is increasingly likely to have never known a computing life without touch interfaces.

If you doubt me, watch a kid bounce between six different apps putting together a simple meme or message to send to a friend. It’s a virtuoso performance that they give dozens of times a day. These users are touch native. They deserve to eat meat, not milk.

This device is still massively compelling, regardless, for all of the reasons I outlined in 2018 and still feel strongly about today. But I must note that there is so far little reason to upgrade to this from the 2018 iPad Pro, and given that the Magic Keyboard is backward compatible with that model, it won’t change the calculus.

If you don’t currently own an iPad Pro, however, and you’re wondering whether you can work on it or not, well, I can and I did and I do: talking to 30 employees across multiple continents and time zones while managing the editorial side of a complex, multi-faceted media, events and subscription business.

I put 100,000 (airline) miles on the iPad Pro and never once did it fail me. Battery always ample; speed always constant; keyboard not exactly incredible, but spill-sealed and bulletproof. I can’t say that of any laptop I’ve ever owned, Apple included.

I do think that the promise of the integrated trackpad and a leveling up of the iPad’s reason to be makes the Magic Keyboard and new iPad Pro one of the more compelling packages currently on the market.

I loved the MacBook Air and have used several models of it to death over the years. But there is no way, today, that I would choose to go back to a laptop given my style of work. The iPad Pro is just too fast, too reliable and too powerful.

It’s insane to have a multi-modal machine that can take typing, swiping and sketching as inputs and has robust support for every major piece of business software on the planet — and that always works, is always fast and is built like an Italian racing car.

Who can argue with that?



from Apple – TechCrunch https://ift.tt/33J7wGb

Apple will donate 10M face masks to healthcare workers

In a work-from-home Twitter message, Apple CEO Tim Cook announced that the company has sourced and will be donating 10 million face masks. The number is a sizable increase over the two million reported last week and a hefty bump over the nine million figure Vice President Mike Pence announced during last night’s White House press conference.

“Apple has sourced, procured and is donating 10 million masks to the medical community in the United States,” Cook says in the video. “These people deserve our debt of gratitude for all of the work they’re doing on the front lines.”

Apple is joining fellow tech companies in donating masks amid a national shortage as COVID-19 takes an increasing toll on the U.S. population. Many of the donated masks have been stockpiled, in order to adhere to California Occupational Safety and Health Standards put into action following last year’s devastating wildfires. 

Other companies, like Ford, have transformed production facilities to create additional masks.



from Apple – TechCrunch https://ift.tt/33NbYDS

Tuesday, 24 March 2020

Volvo’s Polestar begins production of the all-electric Polestar 2 in China

Polestar has started production of its all-electric Polestar 2 vehicle at a plant in China amid the COVID-19 pandemic that has upended the automotive industry and triggered a wave of factory closures throughout the world.

The start of Polestar 2 production is a milestone for Volvo Car Group’s standalone electric performance brand — and not just because it began in the midst of global upheaval caused by COVID-19, a disease that stems from the coronavirus. It’s also the first all-electric car under a brand that was relaunched just three years ago with a new mission.

Polestar was once a high-performance brand under Volvo Cars. In 2017, the company was recast as an electric performance brand aimed at producing exciting and fun-to-drive electric vehicles — a niche that Tesla was the first to fill and has dominated ever since. Polestar is jointly owned by Volvo Car Group and Zhejiang Geely Holding of China. Volvo was acquired by Geely in 2010.

COVID-19 has affected how Polestar and its parent company operate. Factory closures began in China, where the disease first swept through the population. Now Chinese factories are reopening as the epicenter of COVID-19 moves to Europe and North America. Most automakers have suspended production in Europe and North America.

Polestar CEO Thomas Ingenlath said the company started production under these challenging circumstances with a strong focus on health and safety. He added that the Luqiao, China factory is an example of how Polestar has leveraged the expertise of its parent companies.

Extra precautions have been taken because of the outbreak, including frequent disinfecting of workspaces and requiring workers to wear masks and undergo regular temperature screenings, according to the company. Polestar says that, as a result of these efforts, none of its workers in China has tested positive for COVID-19.

COVID-19 has also affected Polestar’s retail timeline. Polestar will sell its vehicles only online and will offer customers subscriptions to the vehicle. It previously revealed plans to open “Polestar Spaces,” showrooms where customers can interact with the product and schedule test drives. These spaces will be standalone facilities, not sections within existing Volvo retailer showrooms. Polestar had planned to have 60 of these spaces open by 2020 in cities including Oslo, Los Angeles and Shanghai.

COVID-19 has delayed the opening of the showrooms. The company will open some pop-up stores as soon as the situation improves, so people can see the cars and learn more while the permanent showrooms are still under construction, TechCrunch has learned.

It’s not clear just how many Polestar 2 vehicles will be produced; Polestar has told TechCrunch that it will be in the “tens of thousands” of cars per calendar year. Those numbers will also depend on demand for the Polestar 2 and the other models built in the same factory.

Polestar 2 EV

Image Credits: Screenshot/Polestar

Polestar also isn’t providing the exact number of reservations until it begins deliveries, which are supposed to start this summer in Europe, followed by China and North America. The company did confirm to TechCrunch that reservations are in the “five digits.”

The Polestar 2, which was first revealed in February 2019, has been positioned by the company to go up against the Tesla Model 3. (The company’s first vehicle, the Polestar 1, is a plug-in hybrid with two electric motors powered by three 34 kilowatt-hour battery packs and a turbocharged and supercharged gas inline-four up front.)

But it will likely face off against other competitors launching new EVs in 2020 and 2021, including Volkswagen, GM, Ford and startups Lucid Motors and even adventure-focused Rivian.

Polestar is hoping customers are attracted to the tech and the performance of the fastback, which produces 408 horsepower and 487 pound-feet of torque and carries a 78 kWh battery pack that delivers an estimated range of 292 miles under Europe’s WLTP cycle.

The Polestar 2’s infotainment system will be powered by Android OS and, as a result, will bring embedded Google services such as Google Assistant, Google Maps and the Google Play Store into the car. This shouldn’t be confused with Android Auto, which is a secondary interface that sits on top of a car’s existing operating system. The embedded Android OS is based on Google’s open-source mobile operating system, which runs on Linux; instead of powering smartphones and tablets, Google modified it to run in cars.



from Android – TechCrunch https://ift.tt/3dtIkI8
via IFTTT

MasterClass is launching free, live Q&A sessions with big shots in their respective industries

MasterClass is known for selling access to pre-recorded online classes by a long list of people who are among the best at what they do, from tennis great Serena Williams to writer David Sedaris to chef Thomas Keller.

More recently, however, the company added live Q&A sessions with these same stars as a member benefit, and now, for the foreseeable future, it’s opening these sessions to non-members, too. It’s the San Francisco startup’s way of making itself more accessible to a broader audience that perhaps can’t rationalize paying $90 per class or $180 for a yearly all-access pass, especially in this increasingly grim market.

The first free session streams live on Wednesday at noon PT from MasterClass’s site and will feature Chris Voss, who was once the lead international kidnapping negotiator for the FBI. Voss earlier created a module for MasterClass on the art of negotiation, and he’ll be talking to whoever wants to tune in, with the help of a moderator asking questions submitted in advance by students.

It’s just “one of a bunch” of such live Q&A sessions that will be made available, according to MasterClass CEO David Rogier, who we chatted with Friday afternoon and who half-kiddingly describes Voss’s mission as partly to help families that are stuck at home to better negotiate who is going to use the big-screen TV at any one time (though more broadly the idea is to teach empathy).

It’s a small step for MasterClass, which separately gives away 130,000 all-access passes each year to organizations in need and has committed to giving away an additional 200,000 of these passes this year. (It will soon open applications for these passes to organizations on its website, says a spokeswoman.)

Seemingly, MasterClass could lean in even further while much of America, and the rest of the globe, is trapped at home and looking for both entertainment and high-quality educational content.

In the meantime, Rogier is quick to note that MasterClass has a variety of kid-friendly content that’s instructive — if best consumed with parental supervision.

Among the 80 classes now available through the site — including new ones from interior designer Kelly Wearstler, a class on self-expression and identity by RuPaul, and Gabriela Cámara teaching Mexican cooking — is a class by Neil deGrasse Tyson, who walks viewers through his take on scientific thinking and communication. Another segment stars Doris Kearns Goodwin, whose class centers on U.S. presidential history.

Other courses recommended by Rogier himself include Penn and Teller’s class on the art of magic; a class on space exploration by retired astronaut and former Commander of the International Space Station, Chris Hadfield; and, for older kids who might be trying to make sense of the world right now, a class by New York Times columnist Paul Krugman on the economy.

As for how five-year-old MasterClass was doing before the world changed, Rogier declines to share specific growth stats, merely describing its numbers as “great.” He also notes that MasterClass is now available not only via its website and app but on the big screen through Apple TV and Amazon Fire TV.

It’s also rolling out Android TV and Roku soon.

Pictured above: Former FBI hostage negotiator Chris Voss.



from Android – TechCrunch https://ift.tt/2UtqOLl
via IFTTT

Apple releases iOS and iPadOS 13.4 with trackpad support

Apple has released software updates for the iPhone, the iPad, the Apple Watch, the Apple TV and the Mac. The biggest changes are on the iPad. Starting today, you can pair a mouse or trackpad with your iPad and use it to move a cursor on the display.

Apple unveiled trackpad support for iPadOS when it announced the new iPad Pro last week. While the company plans to sell a new Magic Keyboard with a built-in trackpad, you don’t need to buy a new iPad or accessory to access the feature.

When you pair a trackpad and start using it, Apple displays a rounded cursor on the screen. The cursor changes depending on what you’re hovering over. When you hover over a button, the cursor disappears and the button itself is highlighted. It looks a bit like moving from one icon to another on the Apple TV.

If you’re moving a text cursor for instance, it becomes a vertical bar. If you’re resizing a text zone in a Pages document, it becomes two arrows. If you’re using a trackpad, iPadOS supports gestures that let you switch between apps, open the app switcher and activate the Dock or Control Center.

In addition to trackpad support, iOS and iPadOS 13.4 add a handful of features. You can share an iCloud Drive folder with another iCloud user — it works pretty much like a shared Dropbox folder.

There are nine new Memoji stickers, such as smiling face with hearts, hands pressed together and party face. Apple has also tweaked the buttons to archive/delete, move, reply to and compose an email in the Mail app.

Additionally, Apple added the ability to release a single app binary across all App Stores, including the iOS and Mac App Stores. That means developers can release a paid app for the Mac and the iPhone — and you only have to buy it once.

macOS 10.15.4 adds Screen Time Communication Limits, a feature that already exists on iOS. It lets you set limits on Messages and FaceTime calls.

When it comes to watchOS, version 6.2 adds ECG support for users in Chile, New Zealand and Turkey. Apple now lets developers provide in-app purchases for Apple Watch apps as well.

All updates also include bug fixes and security patches. If you haven’t enabled automatic software updates, head over to the Settings app on your devices to download and install them.



from Apple – TechCrunch https://ift.tt/2UzFgS8

Apple Card gets updated privacy policy on new data sharing and more transaction detail

Apple is updating its privacy policy for Apple Card to enable sharing more anonymized data with Goldman Sachs, its financial partner. Apple’s reasoning is that this will enable it to do a better job of assigning credit to new customers.

The data is aggregate and anonymized, and there is an opt-out for new customers.

Three things are happening here:

  • Apple is changing the privacy policy for Apple Card with iOS 13.4 to share a richer, but still anonymized, credit assignment model with Goldman Sachs in order to expand the kind of user that might be able to secure credit.
  • There is also a beefed-up fallback method to share more personal data on an opt-in basis with Goldman Sachs if you are not at first approved: things like purchase history of Apple products, when you created your Apple ID and how much you spend with Apple. This option has always existed, and you may have seen it if the default modeling rejected your Apple Card application; it has a few more data points now, but it is still very clearly opt-in, with a large share button.
  • Apple is also finally adding detail to its internal transactions. You no longer have to wonder what that random charge labeled Apple Services is for; you’ll see the Hilary Duff box set or Gambino album you purchased right in the list inside Wallet.

As a side effect of the Apple Card policy evolving, it’s also being split off from the Apple Pay privacy policy. Much of the language is identical or nearly so, but this allows Apple to make changes like the ones above to Apple Card without having to interleave them with the Apple Pay policy — as not all Apple Pay customers are Apple Card customers.

The new policy appears in iOS 13.4 updates but the opt-in sharing of data points will not immediately roll out for new Apple Card users and will begin appearing later.

Here is the additional language that is appearing in the Apple Card privacy notice related to data sharing, with some sections highlighted by us:

“You may be eligible for certain Apple Card programs provided by Goldman Sachs based on the information provided as part of your application. Apple may know whether you receive the invitation to participate and whether you accept or decline the invitation, and may share that information with Goldman Sachs to effectuate the program. Apple will not know additional details about your participation in the program.

Apple may use information about your account with Apple, such as the fact that you have Apple Card, for internal research and analytics purposes, such as financial forecasting. Apple may also use information about your relationship with Apple, such as what Apple products you have purchased, how long you have had your Apple ID, and how often you transact with Apple, to improve Apple Card by helping to identify Apple metrics that may assist Goldman Sachs in improving credit decisioning. No personally identifiable information about your relationship with Apple will be shared with Goldman Sachs to identify the relevant Apple metrics. You can opt out of this use or your Apple relationship information by emailing our privacy team at dpo@apple.com with the subject line “Apple Relationship Data and Apple Card.” Applicants and cardholders may be able to choose to share the identified metrics with Goldman Sachs for re-evaluation of their offer of credit or to increase their credit line. Apple may share information about your relationship with Apple with our service providers, who are obligated to handle the information consistent with this notice and Apple instructions, are required to use reasonable security measures to protect any personal information received, and must delete the personal information as soon as they have completed the services.”

Some thoughts on all of this.

The fact that Apple is sharing a new anonymized customer model, free of personally identifiable information (PII), with Goldman likely engenders two valid responses.

First, there is more data being shared here than there was before, which is always something that should be examined closely, and all of us should be as cognizant as possible about how much information gets traded around about us. That said, your average co-branded card offer (say an airline card or retailer card) is controlled nearly entirely by the financial services side of that equation (basically the credit card companies decide what data they get and how).

Apple’s deal with Goldman Sachs is unique in a lot of ways, not the least of which is that Apple has controlled the flow of data from customers to Goldman very tightly from the beginning, as evidenced by affordances it continues to offer, like letting cardholders skip their March payment without incurring interest. The new arrangement outlined in the privacy policy does not share any PII unless there is an opt-in, and even allows an opt-out of the anonymized model share.

I cannot stress enough how rare that is in financial products, especially credit cards. Most cards take all of the above information and much more in their approval process, and they don’t do any work beyond what is required by regulatory law to inform you of that. Apple is doing more than most.

THAT SAID. I do wish that the opt-out of the anonymized data model were presented in the normal signup flow, rather than existing as an email address in the privacy policy. I know why it isn’t: the model is likely far more effective, and a lot more people will likely get approved for an Apple Card using it.

But in keeping with Apple’s stated goals of protecting user privacy and making the policy as transparent as possible, I would prefer that it find a long-term solution that communicates all of those factors to the user clearly and then offers them the ability to risk non-approval while limiting data sharing.

The idea behind the new model sharing and the secondary opt-in disclosure of nine key bits of genuinely personal information about your purchase history and other things is that Apple will be able to offer credit to people who might be automatically rejected under the old way of doing things. And, beyond that, it will be able to build tools that help customers manage debt and credit more accurately and transparently. Especially those new to credit.

Any time an agreement changes to enable more data to flow my eyebrows arch. But there is a pretty straight line to be drawn here between the way that Apple transparently and aggressively helps users to not pay interest on Apple Card and the potential for more useful financial product enhancements to Apple Card down the line.

If you’ve ever looked at a credit card statement you know that it can often be difficult to ascertain exactly how much you need to pay at any given time to avoid interest. In the Apple Card interface it’s insanely clear exactly how and when to pay so that you don’t get charged. Most of the industry follows practices that prey on behavioral norms — people will pay the minimum payment by default because that’s what seems logical, rather than paying what is most healthy for them to pay.

My hope here is that the additional modeling makes room for more of these kinds of product decisions for Apple Card down the line. But my eyes are up, and yours should be too. Check the policy, opt out if it makes sense to you and always be aware of the data you’re sharing, who you’re sharing it with and what they plan to use it for.



from Apple – TechCrunch https://ift.tt/2JaJB8X