Friday, 27 March 2020

News-reading app Flipboard expands local coverage, including coronavirus updates, to 12 more US metros

Earlier this year, personalized news aggregation app Flipboard expanded into local news. The feature brought local news, sports, real estate, weather, transportation news and more to 23 cities across the U.S. Today, Flipboard is bringing local news to 12 more U.S. metros and is adding critical coronavirus local coverage to all of the 35 supported locales.

The 12 new metros are: Baltimore, Charlotte, Cleveland, Detroit, Indianapolis, Nashville, Pittsburgh, Orlando, Raleigh, Salt Lake City, St. Louis and Tampa Bay.

They join the 23 cities that were already supported: Atlanta, Austin, Boston, Chicago, Dallas, Denver, Houston, Las Vegas, Los Angeles, Miami, Minneapolis-St. Paul, New Orleans, New York City, Philadelphia, Phoenix, Portland, Sacramento, San Diego, San Francisco Bay Area, Seattle, Toronto, Vancouver and Washington, D.C.

To offer local news in its app, Flipboard works with area partners, big and small, like The Plain Dealer’s Cleveland.com, the Detroit Free Press and the St. Louis Post-Dispatch. It has now added the local news service Patch and ProPublica to that list of partners, including ProPublica’s Local Reporting Network partners and its collaborative journalism project Electionland.

Patch alone is putting out more than 200 local coronavirus stories per day. Meanwhile, the ProPublica Local Reporting Network funds and jointly publishes year-long investigative projects with 23 local news organizations across the U.S. The Electionland initiative reports on problems that disenfranchise eligible voters, like misinformation, changing voting laws and rules, voter harassment, equipment failures and long lines at the polls.

To determine whether a user should be shown local news, the app relies on the user’s IP address, not a precise location, and may then recommend stories relevant to local audiences. It will also offer the Local sections inside the Explore tab in the Flipboard app. Once added, users can browse their local news alongside other content they’re interested in, across a variety of topics.
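
As a rough illustration of how that coarse, IP-derived gating might work, here is a minimal, hypothetical Swift sketch. Flipboard has not published its implementation, so the function name, the metro strings and the gating logic below are illustrative assumptions only.

```swift
import Foundation

// Hypothetical sketch only: Flipboard's real implementation is not public.
// The metro list and the gating rule are illustrative assumptions.
let supportedMetros: Set<String> = [
    "Baltimore", "Charlotte", "Cleveland", "Detroit", "Indianapolis", "Nashville",
    "Pittsburgh", "Orlando", "Raleigh", "Salt Lake City", "St. Louis", "Tampa Bay"
    // ...plus the 23 previously supported metros
]

/// Given a coarse metro name resolved from the user's IP address (not a precise location),
/// decide whether to surface a Local section in the Explore tab.
func shouldOfferLocalSection(metroFromIP metro: String?) -> Bool {
    guard let metro = metro else { return false }
    return supportedMetros.contains(metro)
}

print(shouldOfferLocalSection(metroFromIP: "Cleveland")) // true
print(shouldOfferLocalSection(metroFromIP: "Boise"))     // false
```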

At present, there are two main areas of interest to news readers — the COVID-19 outbreak and the 2020 Election, both of which are now offered in the local sections. In addition to understanding the current state of the pandemic on a global and national level, Flipboard readers in the supported areas will be able to track how the COVID-19 outbreak is impacting where they live. This could include coverage of things like local ordinances, school closings, shelter-in-place laws, number of cases and deaths, testing resources and more.

“Understanding the decisions state and local governments make and their impact on the community is not only important, but gives people a greater connection to their local leaders and the media,” said Marci McCue, VP of Content and Communications at Flipboard. “For instance, as a local resident you may want coverage from national newspapers about the coronavirus outbreak, but even more importantly is a local source that tells you where you can get tested and measures local leaders are taking that impact your daily life,” she noted.

The addition of coronavirus special coverage at a local level, aggregated from across publishers, means readers will be able to track stories without having to hop around different sites or apps from area newspapers or broadcasters.

For Flipboard’s business, adding local news lets advertisers target against user interests, which may now include a city’s metro region.

Flipboard’s mobile app today reaches 145 million users per month. Local news is available in the supported metros across both iOS and Android.



from Android – TechCrunch https://ift.tt/2xp9jnx
via IFTTT

Apple launches COVID-19 screening website

Today, Apple launched its own coronavirus screening site (apple.com/covid19) and iOS app developed alongside the White House, CDC and FEMA.

The site is pretty simple, offering information about best practices and safety tips alongside a basic screening tool that should give you a fairly solid idea of whether or not you need to be tested for COVID-19. The site, which is accessible on both mobile and desktop, also includes some quick tips on social distancing, isolation, hand-washing, surface disinfecting and symptom monitoring.

The app, which contains identical information to the site, is U.S.-only at the moment; the website is available worldwide.

Depending on your symptoms, the site will push you to get in contact with your health provider, contact emergency services or inform you that you likely do not need to be tested. It will not route you to a testing center directly.
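
To make that described flow concrete, here is a hypothetical Swift sketch of the three outcomes the screening tool is said to produce. Apple’s actual questionnaire and decision rules are not public, so the inputs and thresholds below are illustrative assumptions only.

```swift
// Hypothetical sketch only: the real screening logic is Apple's and is not public.
enum ScreeningOutcome {
    case contactEmergencyServices   // severe symptoms
    case contactHealthProvider      // possible exposure or mild symptoms
    case testingLikelyNotNeeded     // nothing concerning reported
}

struct ScreeningAnswers {
    let hasSevereSymptoms: Bool               // e.g. severe difficulty breathing
    let hasFeverOrCough: Bool
    let hadCloseContactWithConfirmedCase: Bool
}

func screen(_ answers: ScreeningAnswers) -> ScreeningOutcome {
    if answers.hasSevereSymptoms {
        return .contactEmergencyServices
    }
    if answers.hasFeverOrCough || answers.hadCloseContactWithConfirmedCase {
        return .contactHealthProvider
    }
    return .testingLikelyNotNeeded
}
```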

In a privacy note on the site, Apple explains what data is collected: “Apple is not collecting your answers from the screening tool. To help improve the site, Apple collects some information about how you use it. The information collected will not personally identify you.”

Big tech companies are looking to ensure that people have easy access to key information on COVID-19. Google’s Verily launched a limited version of its Project Baseline coronavirus screening site last week; in comparison, Apple’s site does not require users to log in, but it also won’t help users schedule a test directly.



from Apple – TechCrunch https://ift.tt/39kVi7x

Thursday, 26 March 2020

Apple extends free trials for its pro creative apps

Apple announced today that it is temporarily extending the free trials of Final Cut Pro X and Logic Pro X from 30 days to 90 days, giving potential customers stuck at home a longer window of time to try out the software.

An Apple spokesperson said the company hopes the promotion will “greatly benefit creative Mac users, and especially students, who find themselves creating and learning in new environments…”

Apple joins a number of other software companies extending the free trials of their products in the midst of the COVID-19 crisis. Notably, Apple does not charge subscription pricing, so if you want to continue using the video or audio editing software suites, you’ll have to pony up the full purchase price.

Logic Pro X retails for $199.99 and Final Cut Pro X goes for $299.99. Apple says that the extended trial is already live for Final Cut Pro X and will be honored for recent downloads, while Logic Pro X should gain the extended trial in the coming days.



from Apple – TechCrunch https://ift.tt/3apYq3m

Stocks shoot upward as ‘Phase Three’ stimulus passes Senate and unemployment skyrockets

Stocks soared on Thursday even as the U.S. reported its worst unemployment numbers in 50 years of tracking data.

The pain felt on Main Street was offset for investors by the federal government opening its wallet to Wall Street, businesses and (at some point) workers in the form of the $2 trillion stimulus package designed as a response to business closures caused by the COVID-19 pandemic.

Details of the plan and its implications for startup companies are still being assessed, but the spigot is now on for businesses large and small to avail themselves of low-interest stimulus loans and financing that should keep them afloat, even as prolonged shutdowns continue in the nation’s most populous cities.

Here’s the tale of the tape:

  • Dow Jones Industrial Average: jumped 6.38%, or 1,351.62 points, to close at 22,552.17
  • S&P 500: popped 6.24%, or 154.51 points, to close at 2,630.07
  • Nasdaq Composite: bounced 5.60%, or 413.24 points, to close at 7,797.54

Tech stocks followed the broader markets and posted gains on the day. Facebook was up nearly 4.5% and Alphabet (Google’s parent company) was up 5.5%. Shares of Apple were up over 5% as well and Amazon rose 3% on the day.


from Apple – TechCrunch https://ift.tt/3aqVO5q

India’s MX Player expands to US, UK and other markets in international push

MX Player, the on-demand video streaming service owned by the Indian conglomerate Times Internet, is expanding to more than half a dozen new international markets, including the U.S. and the UK, to supply more entertainment content to the millions of people trapped in their homes.

The Singapore-headquartered on-demand video streaming service, which raised $111 million in a round led by Tencent last year, said it has expanded to Canada, Australia, New Zealand, Bangladesh, and Nepal in addition to the U.S. and the UK.

Like in India, MX Player will offer its catalog at no charge to users in the international markets and monetize through ads, Karan Bedi, chief executive of the service, told TechCrunch in an interview.

The streaming service, which has amassed over 175 million monthly active users in India, is offering locally relevant titles in each market, he said. This is notably different from Disney’s Hotstar expansion into select international markets, where it has largely aimed to cater to the Indian diaspora.

MX Player is not currently offering any originally produced titles in any international market — instead offering movies and shows it has licensed from global and local studios — but the streamer plans to change that in the coming months, said Bedi.

Though the expansion comes at a time when the world is grappling with containing and fighting the coronavirus outbreak, Bedi said MX Player had already been testing the service in several markets for a few months.

“We believe in meeting this rapidly rising demand from discerning entertainment lovers with stories that strike a chord. To that end, we have collaborated with some of the best talent and content partners globally who will help bring us a step closer to becoming the go-to destination for entertainment across the world,” said Nakul Kapur, Business Head for International markets at MX Player, in a statement.

Times Internet acquired MX Player, an app popular for efficiently playing a plethora of locally stored media files on entry-level Android smartphones, in 2018 for about $140 million. In the years since, Times Internet has added a video streaming service to the app, and then live TV channels in India.

MX Player has also bundled free music streaming (through Gaana, another property owned by Times Internet) and has introduced in-app casual games for users in the country.

Bedi said the company is working on bringing these additional services to international markets, and also looking to enter additional regions including the Middle East and South Asia.



from Android – TechCrunch https://ift.tt/33OaJEi
via IFTTT

Wednesday, 25 March 2020

Review: 100,000 miles and one week with an iPad Pro

For the past eighteen months, the iPad Pro has been my only machine away from home, and until recently, I was away from home a lot, traveling domestically and internationally to event locations around the world or our offices in San Francisco, New York and London. Every moment of every day that I wasn’t at my home desk, the iPad Pro was my main portable machine.

I made the switch on a trip to Brazil for our conference and Startup Battlefield competition (which was rad, by the way, a computer vision cattle scale won the top prize) on somewhat of a whim. I thought I’d take this one-week trip to make sure I got a good handle on how the iPad Pro would perform as a work device and then move back to my trusty 13” MacBook Pro.

The trip changed my mind completely about whether I could run TechCrunch wholly from a tablet. It turns out that it was lighter, smoother and more willing than my MacBook at nearly every turn. I never went back.

iPad Pro, 2018, Brazil

The early days were absolutely full of growing pains for both the iPad and myself. Rebuilding workflows by patching together the share sheet, automation tools and the newly introduced Shortcuts was a big part of making it a viable working machine at that point. And the changes that came with iPadOS, which improved Slide Over, Split View and the home screen, were welcome in that they made the whole device feel more flexible.

The past year and a half has taught me a lot about what the absolute killer features of the iPad Pro are, while also forcing me to learn about the harsher trade-offs I would have to make for carrying a lighter, faster machine than a laptop.

All of which is to set the context for my past week with the new version of that machine.

For the most part, this new 2020 iPad Pro still looks much the same as the one released in 2018. Aside from the square camera array, it’s a near twin. The good news on that front is that you can tell Apple nailed the industrial design the first time, because it still feels super crisp and futuristic almost two years later. The idealized expression of a computer. Light, handheld, powerful and functional.

The 12.9” iPad Pro that I tested contains the new A12Z chip, which performs at a near identical level to the model I’ve been using. With Geekbench 4 scores of over 5,015 single-core and over 18,000 multi-core, it remains one of the more powerful portable computers you can own, regardless of class. The 1TB model appears to still have 6GB of RAM, though I don’t know if that’s still stepped down to 4GB for the lower-capacity models.

This version adds an additional GPU core and “enhanced thermal architecture” — presumably better heat distribution under load but that was not especially evident given that the iPad Pro has rarely run hot for me. I’m interested to see what teardowns turn up here. New venting, piping or component distribution perhaps. Or something on-die.

It’s interesting, of course, that this processor is so close in performance (at least at a CPU level) to the A12X Bionic chip. Even at a GPU level Apple says nothing more than that it is faster than the A12X with none of the normal multipliers it typically touts.

The clearest answer for this appears to be that this is a true “refresh” of the iPad Pro. There are new features, which I’ll talk about next, but on the whole this is “the new one” in a way that is rarely but sometimes true of Apple devices. Whatever Apple has learned and is able to execute right now, without a massive overhaul of the design or of the hardware itself, is what we see here.

I suppose my one note on this is that the A12X still feels fast as hell and I’ve never wanted for power so, fine? I’ve been arguing against speed bumps at the cost of usability forever, so now is the time I make good on those arguments and don’t really find a reason to complain about something that works so well.

CamARa

The most evident physical difference on the new iPad Pro is, of course, the large camera array, which contains a 10MP ultra-wide camera and a 12MP wide camera. These work to spec, but the new lidar scanner is the most intriguing addition.

It is inevitable that we will eventually experience the world on several layers at once. The physical layer we know will be augmented by additional rings of data like the growth rings of a redwood.

In fact, that future has already come for most of us, whether we realize it or not. Right now, we experience these layers mostly in an asynchronous fashion by requesting their presence. Need a data overlay to tell you where to go? Call up a map with turn-by-turn. Want to know the definition of a word or the weather? Ask a voice assistant.

The next era beyond this one, though, is passive, contextually driven info layers that are presented to us proactively visually and audibly.

We’ve been calling this either augmented reality or mixed reality, though I think that neither one of those is ultimately very descriptive of what will eventually come. The augmented human experience has started with the smartphone, but will slowly work its way closer to our cerebellum as we progress down the chain from screens to transparent displays to lenses to ocular implants to brain-stem integration.

If you’re rolling your un-enhanced eyes right now, I don’t blame you. But that doesn’t mean I’m not right. Bookmark this and let’s discuss in 2030.

In the near term, though, the advancement of AR technology is being driven primarily by smartphone experiences. And those are being advanced most quickly by Google and Apple with the frameworks they are offering to developers to integrate AR into their apps and the hardware that they’re willing to fit onboard their devices.

One of the biggest hurdles to AR experiences being incredibly realistic has been occlusion. This is the effect that allows one object to intersect with another realistically, obscuring or hiding it in a way that tells our brain that “this is behind that.” Occlusion leads to a bunch of interesting things like shared experiences, interaction of physical and digital worlds and just general believability.

This is where the iPad Pro’s lidar scanner comes in. With lidar, two major steps forward are possible for AR applications.

  1. Initialization time is nearly instantaneous. Because lidar works at the speed of light, sending out pulses of light and measuring their “flight” times to determine the shape of objects or environments, it is very fast (a back-of-the-envelope sketch of that math follows this list). That typical “fire it up, wave it around and pray” AR awkwardness should theoretically be eliminated with lidar.
  2. Occlusion becomes automatic. It no longer requires calculations using the camera, small hand movements and computer vision to “guess” at the shape of objects and their relationship to one another. Developers essentially get all of this for “free” computationally and at blazing speed.
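
For a sense of the scale involved, here is a back-of-the-envelope Swift sketch of the round-trip time-of-flight math that lidar ranging relies on. The numbers are illustrative only and say nothing about the actual precision or range of Apple’s scanner.

```swift
import Foundation

// Speed of light in a vacuum, in meters per second.
let speedOfLight = 299_792_458.0

/// Distance to a surface, given the measured round-trip time of a light pulse.
/// The pulse travels out and back, so the round trip is halved.
func distance(fromRoundTripTime seconds: Double) -> Double {
    return speedOfLight * seconds / 2.0
}

// A pulse that returns after 10 nanoseconds hit something roughly 1.5 meters away,
// which is why lidar can size up a room almost instantly.
print(distance(fromRoundTripTime: 10e-9)) // ≈ 1.5 m
```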

There’s a reason lidar is used in many autonomous, free-roaming vehicle systems and semi-autonomous driving systems. It’s fast, relatively reliable and a powerful mapping tool.

ARKit 3.5 now supplies the ability to create a full topological 3D mesh of an environment with plane and surface detection. It also comes with greater precision than was possible with a simple camera-first approach.
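
Here is a minimal Swift sketch of how a developer might opt into that scene mesh and use it for occlusion with the public ARKit 3.5 and RealityKit APIs. It assumes a lidar-equipped device and is a sketch of the setup, not a production-ready implementation.

```swift
import UIKit
import ARKit
import RealityKit

// Minimal AR setup: build a 3D mesh of the environment with the lidar scanner
// and let that mesh occlude virtual content.
final class LidarARViewController: UIViewController {
    private let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        let configuration = ARWorldTrackingConfiguration()
        // Scene reconstruction is only supported on devices with the lidar scanner.
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            configuration.sceneReconstruction = .mesh
        }
        // Plane detection still works alongside the reconstructed mesh.
        configuration.planeDetection = [.horizontal, .vertical]

        // Let real-world geometry hide virtual objects that sit behind it.
        arView.environment.sceneUnderstanding.options.insert(.occlusion)

        arView.session.run(configuration)
    }
}
```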

Unfortunately, I was unable to test this system; applications that take advantage of it are not yet available, though Apple says many are on their way from games like Hot Lava to home furnishing apps like Ikea. I’m interested to see how effective this addition is to iPad as it is highly likely that it will also come to the iPhone this year or next at the latest.

One thing I am surprised but not totally shocked by is that the iPad Pro rear-facing camera does not do Portrait photos. Only the front-facing True Depth camera does Portrait mode here.

My guess is that there is a far more accurate Portrait mode coming to iPad Pro that utilizes the lidar array as well as the camera, and it is just not ready yet. There is no reason that Apple should not be able to execute a Portrait style image with an even better understanding of the relationships of subjects to backgrounds.

Lidar is a technology with a ton of promise and a slew of potential applications. Having this much more accurate way to bring the outside world into your device is going to open a lot of doors for Apple and developers over time, but my guess is that we’ll see those doors open over the next couple of years rather than all at once.

One disappointment for me is that the True Depth camera placement remains unchanged. In a sea of fantastic choices that Apple made about the iPad Pro’s design, the placement of the camera in a location most likely to be covered by your hand when it is in landscape mode is a standout poor one.

Over the time I’ve been using iPad Pro as my portable machine I have turned it to portrait mode a small handful of times, and most of those were likely because an app just purely did not support landscape.

This is a device that was born to be landscape, and the camera should reflect that. My one consideration here is that the new “floating” design of the Magic Keyboard that ships in May will raise the camera up and away from your hands and may, in fact, work a hell of a lot better because of it.

Keyboard and trackpad support

At this point, enough people have seen the mouse and trackpad support to have formed some opinions on it. In general, the response has been extremely positive, and I agree with that assessment. There are minor quibbles about how much snap Apple is applying to the cursor as it attaches itself to buttons or actions, but overall the effect is incredibly pleasant and useful.

Re-imagining the cursor as a malleable object rather than a hard-edged arrow or hand icon makes a ton of sense in a touch environment. We’re used to our finger becoming whatever tool we need it to be: a pencil, a scrubber or a button pusher. It only makes sense that the cursor on iPad would be contextually aware as well.

I have only been able to use it with the Magic Trackpad so far, of course, but I have high hopes that it will fall right into the normal flow of work when the Magic Keyboard drops.

And, given the design of the keyboard, I think that it will be nice to be able to keep your hands on the keyboard and away from poking at a screen that is now much higher than it was before.

Surface Comparisons

I think that with the addition of the trackpad to the iPad Pro there has been an instinct to say, “Hey, the Surface was the right thing after all.” I’ve been thinking about this at one point or another for a couple of years now as I’ve been daily driving the iPad.

I made an assessment back in 2018 about this whole philosophical argument, and I think it’s easiest to just quote it here:

One basic summary of the arena is that Microsoft has been working at making laptops into tablets, Apple has been working on making tablets into laptops and everyone else has been doing who knows what.

Microsoft still hasn’t been able (come at me) to ever get it through their heads that they needed to start by cutting the head off of their OS and building a tablet first, then walking backwards. I think now-Microsoft is probably much more capable than then-Microsoft, but that’s probably another whole discussion.

Apple went and cut the head off of OS X at the very beginning, and has been very slowly walking in the other direction ever since. But the fact remains that no Surface Pro has ever offered a tablet experience anywhere near as satisfying as an iPad’s.

Yes, it may offer more flexibility, but it comes at the cost of unity and reliable functionality. Just refrigerator toasters all the way down.

Still holds, in my opinion, even now.

Essentially, I find the thinking that the iPad has arrived at the Surface’s doorstep because the iPad’s approach was wrong to be too narrow: it focuses on hardware, when the reality is that Windows has never been properly adjusted for touch. Apple is coming at this touch first, even as it adds cursor support.

To reiterate what I said above, I am not saying that “the Surface approach is bad” here so go ahead and take a leap on that one. I think the Surface team deserves a ton of credit for putting maximum effort into a convertible computer at the time that nearly the entire industry was headed in another direction. But I absolutely disagree that the iPad is “becoming the Surface” because the touch experience on the Surface is one of the worst of any tablet and the iPad is (for all of the interface’s foibles) indisputably the best.

It is one of the clearer examples of attempting to solve a similar problem from different ends in recent computing design.

That doesn’t mean, however, that several years of using the iPad Pro have been without flaws.

iPad Promise

Back in January, Apple writer and critic John Gruber laid out his case for why the iPad has yet to meet its full potential. The conclusions, basically, were that Apple had missed the mark on the multi-tasking portion of its software.

At the time, I believed a lot of really good points had been made by John and others who followed on, and though I had thoughts, I wasn’t really ready to crystallize them. I think I’m ready now, though. Here’s the nut of it:

The focus of the iPad Pro, its North Star, must be speed and capability, not ease of use.

Think about the last time that you, say, left your MacBook or laptop sitting for a day or two or ten. What happened when you opened it? Were you greeted with a flurry of alerts and notifications and updates and messages? Were you able to, no matter how long or short a time you had been away from it, open it and start working immediately?

With iPad Pro, no matter where I have been or what I have been doing, I was able to flip it open, swipe up and be issuing my first directive within seconds. As fast as my industry moves and as wild as our business gets, that kind of surety is literally priceless.

Never once, however, did I wish that it was easier to use.

Do you wish that a hammer were easier to use? No, you learn to hold it correctly and swing it accurately. The iPad could use a bit more of that.

Currently, iPadOS is still too closely tethered to the sacred cow of simplicity. In a strange bout of irony, the efforts on behalf of the iPad software team to keep things simple (same icons, same grid, same app switching paradigms) and true to their original intent have instead caused a sort of complexity to creep into the arrangement.

I feel that many of the issues surrounding the iPad Pro’s multi-tasking system could be corrected by giving professional users a way to immutably pin apps or workspaces in place, giving them the ability to “break” the multitasking methodology that has served the iPad for years in the service of making their workspaces feel like their own. Ditch the dock entirely and make it a list of pinned spaces that can be picked from at a tap. Lose the protected status of app icons and have them reflect what is happening in those spaces live.

The above may all be terrible ideas, but the core of my argument is sound. Touch interfaces first appeared in the 1970s and have been massively popular for at least a dozen years by now.

The iPad Pro user of today is not new to a touch-based interface and is increasingly likely to have never known a computing life without touch interfaces.

If you doubt me, watch a kid bounce between six different apps putting together a simple meme or message to send to a friend. It’s a virtuoso performance that they give dozens of times a day. These users are touch native. They deserve to eat meat, not milk.

This device is still massively compelling, regardless, for all of the reasons I outlined in 2018 and still feel strongly about today. But I must note that there is little reason so far to upgrade to this from the 2018 iPad Pro. And given that the Magic Keyboard is backward compatible, it won’t change that.

If you don’t currently own an iPad Pro, however, and you’re wondering whether you can work on it or not, well, I can and I did and I do. I talk to 30 employees across multiple continents and time zones while managing the editorial side of a complex, multi-faceted editorial, events and subscription business.

I put 100,000 (airline) miles on the iPad Pro and never once did it fail me. Battery always ample; speed always constant; keyboard not exactly incredible but also spill sealed and bulletproof. I can’t say that of any laptop I’ve ever owned, Apple included.

I do think that the promise of the integrated trackpad and a leveling up of the iPad’s reason to be makes the Magic Keyboard and new iPad Pro one of the more compelling packages currently on the market.

I loved the MacBook Air and have used several models of it to death over the years. There is no way, today, that I would choose to go back to a laptop given my style of work. The iPad Pro is just too fast, too reliable and too powerful.

It’s insane to have a multi-modal machine that can take typing, swiping and sketching as inputs and has robust support for every major piece of business software on the planet — and that always works, is always fast and is built like an Italian racing car.

Who can argue with that?



from iPhone – TechCrunch https://ift.tt/33J7wGb
