Tuesday, 1 October 2019

Defining micromobility and where it’s going with business and mobility analyst Horace Dediu

Micromobility has taken off over the last couple of years. Between electric bike-share and scooter-share, these vehicles have made their way all over the world. Meanwhile, some of these companies, like Bird and Lime, have already hit unicorn status thanks to massive funding rounds.

Horace Dediu, the well-known industry analyst who coined the term micromobility as it relates to this emerging form of transportation, took some time to chat with TechCrunch ahead of Micromobility Europe, a one-day event focused on all-things micromobility.

We chatted about the origin of the word micromobility, where big tech companies like Apple, Google and Amazon fit into the space, opportunities for developers to build tools and services on top of these vehicles, the opportunity for franchising business models, the potential for micromobility to be bigger than autonomous vehicles, and much more.

Here’s the Q&A I did with Dediu ahead of his micromobility conference, lightly edited for length and clarity.


Megan Rose Dickey: Hey, Horace. Thanks for taking the time to chat.

Horace Dediu: Hey, no problem. My pleasure.

Rose Dickey: I was hoping to chat with you a bit about micromobility because I know that you have the big conference coming up in Europe, so I figured this would be a good time to touch base with you. I know you’ve been credited with coining the term micromobility as it relates to the likes of shared e-bikes and scooters.

So, to kick things off, can you define micromobility?

Dediu: Yes, sure. So, the idea came to me because I actually remembered microcomputing.



from Apple – TechCrunch https://ift.tt/2mLlXZe

Apple launches Deep Fusion feature in beta on iPhone 11 and iPhone 11 Pro

Apple is launching an early look at its new Deep Fusion feature on iOS today with a software update for beta users. Deep Fusion is a technique that blends multiple exposures together at the pixel level to give users a higher level of detail than is possible using standard HDR imaging — especially in images with very complicated textures like skin, clothing or foliage.

The developer beta released today supports the iPhone 11, where Deep Fusion will improve photos taken on the wide camera, and the iPhone 11 Pro and Pro Max, where it will kick in on the telephoto and wide-angle lenses but not the ultra-wide.

According to Apple, Deep Fusion requires the A13 and will not be available on any older iPhones. 

As I spoke about extensively in my review of the iPhone 11 Pro, Apple’s ‘camera’ in the iPhone is really a collection of lenses and sensors whose output is processed aggressively by dedicated machine learning software running on specialized hardware. Effectively, a machine learning camera.

Deep Fusion is a fascinating technique that extends Apple’s philosophy of photography as a computational process out to its next logical frontier. Since the iPhone 7 Plus, Apple has been blending output from the wide and telephoto lenses to provide the best result, a process that happened without the user ever being aware of it.


Deep Fusion continues in this vein. It will automatically take effect on images that are taken in specific situations.

On wide-lens shots, it will start to be active just above the roughly 10 lux floor where Night Mode kicks in; the top of its active range varies depending on the light source. On the telephoto lens, it will be active in all but the brightest situations, where Smart HDR takes over and provides a better result thanks to the abundance of highlights.
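To make that lens and light-level behavior concrete, here is a rough sketch of the mode selection as described above. The mode names, the lux cutoffs and the function itself are illustrative placeholders drawn from this article, not Apple’s actual logic.

# Hypothetical sketch of the mode selection described above; the thresholds
# are approximations from this article, not Apple's implementation.
BRIGHT_SCENE_LUX = 2000  # placeholder: the real upper cutoff varies with the light source


def select_pipeline(lens: str, lux: float) -> str:
    """Pick which processing mode would handle a shot on a given lens."""
    if lens == "ultra_wide":
        return "smart_hdr"            # Deep Fusion never runs on the ultra-wide lens
    if lens == "wide":
        if lux < 10:                  # roughly the Night Mode floor
            return "night_mode"
        return "deep_fusion" if lux < BRIGHT_SCENE_LUX else "smart_hdr"
    if lens == "telephoto":
        # Deep Fusion in all but the brightest scenes, where Smart HDR takes over
        return "deep_fusion" if lux < BRIGHT_SCENE_LUX else "smart_hdr"
    raise ValueError(f"unknown lens: {lens}")


print(select_pipeline("wide", 5))         # -> night_mode
print(select_pipeline("telephoto", 300))  # -> deep_fusion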

Apple provided a couple of sample images showing Deep Fusion in action which I’ve embedded here. They have not provided any non-DF examples yet, but we’ll see those as soon as the beta gets out and people install it. 

Deep Fusion works this way:

The camera shoots a ‘short’ frame at a negative EV value, basically a slightly darker image than you’d normally want, and pulls sharpness from this frame. It then shoots three regular EV0 photos and a ‘long’ EV+ frame, registers their alignment and blends them together.

This produces two 12MP images, roughly 24 megapixels of data in total, which are combined into a single result photo. That combination is done using four separate neural networks that take into account the noise characteristics of Apple’s camera sensors as well as the subject matter in the image.

The combination happens on a pixel-by-pixel basis, pulling one pixel at a time to produce the best overall image. The machine learning models look at the context of each pixel to determine where it falls on the image frequency spectrum: sky and other broadly uniform areas sit at the low-frequency end, skin tones in the medium-frequency zone, and fine detail like clothing and foliage at the high-frequency end.

The system then pulls structure and tonality from one image or another based on ratios. 

The overall result, Apple says, is better skin transitions, better clothing detail and better crispness at the edges of moving subjects.
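For intuition, here is a minimal numpy sketch of that kind of frequency-weighted, pixel-by-pixel blend: pull structure from the sharp short frame where there is fine texture, and tonality from the merged long exposure where the scene is flat. This is not Apple’s pipeline, which uses four trained neural networks and sensor-specific noise models; the Gaussian high-pass weighting, function names and frame sizes here are stand-in assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter


def blend_detail_and_tone(short_frame: np.ndarray, merged_long: np.ndarray,
                          sigma: float = 2.0) -> np.ndarray:
    """Toy per-pixel blend in the spirit of the description above: take
    structure from the sharp 'short' frame and tonality from the merged
    long exposure, weighted by how much local detail each pixel carries.
    Apple uses trained neural networks for this weighting; the Gaussian
    high-pass below is only a stand-in."""
    # Local high-frequency energy of the short frame (a crude "how textured is this pixel")
    detail = np.abs(short_frame - gaussian_filter(short_frame, sigma))
    weight = detail / (detail.max() + 1e-8)   # ~0 for flat sky/walls, ~1 for hair/fabric/foliage
    # Flat, low-frequency regions lean on the long exposure's tonality;
    # high-frequency regions keep the short frame's sharpness.
    return weight * short_frame + (1.0 - weight) * merged_long


# Tiny synthetic example (real frames would be 12MP each)
rng = np.random.default_rng(0)
short = rng.random((480, 640)).astype(np.float32)
long_ = gaussian_filter(short, 3)             # stand-in for the blended EV0/EV+ frames
result = blend_detail_and_tone(short, long_)
print(result.shape)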

There is currently no way to turn off the Deep Fusion process, but there is a small ‘hack’ for seeing the difference between images: the ‘over crop’ feature of the new cameras uses the ultra-wide lens, so turning it on disables Deep Fusion, which does not use the ultra-wide.

The Deep Fusion process takes around one second. If you quickly shoot and then tap a preview of the image, it could take around half a second for the image to update to the new version. Most people won’t notice the process happening at all.

As to how it works in real life? We’ll test it and get back to you once Deep Fusion becomes available.



from iPhone – TechCrunch https://ift.tt/2nzPzsG


WhatsApp is testing a self-destructing messages feature

WhatsApp users may soon get the ability to have their messages self-destruct after a set period of time. That’s according to a highly reliable tipster who spotted the feature while combing through the code of a beta version of the app.

Twitter user WABetaInfo said on Tuesday that the recently released public beta of WhatsApp for Android — dubbed v2.19.275 — includes an optional feature that would allow users to set their messages to self-destruct.

The ability to have messages disappear forever after a fixed amount of time could come in handy to users who share sensitive information with friends and colleagues on the app. It’s one of the most popular features on instant messaging client Telegram, for instance.


Image: WABetaInfo

Telegram offers a “secret chat” feature wherein users can message each other and have those messages disappear from their devices after a set amount of time. The messaging platform says it does not store the text on its servers and restricts users from forwarding the messages or taking a screenshot of the conversation, to ensure there is “no trail” of the texts.

“All secret chats in Telegram are device-specific and are not part of the Telegram cloud. This means you can only access messages in a secret chat from their device of origin. They are safe for as long as your device is safe in your pocket,” it explains.

Facebook, which owns WhatsApp, also offers a “secret chat” feature on its Messenger app. But on Messenger, only those secret chats end-to-end encrypt the messages and media content shared between two users; on WhatsApp, messages between users are end-to-end encrypted by default.

Currently, WhatsApp is testing the feature in group chats. Messages can be set to self-destruct as soon as five seconds after they are sent or as long as an hour afterward. Additionally, an image shared by WABetaInfo shows that group administrators will have the ability to prevent other participants in the group from texting.
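Conceptually, a disappearing-message feature comes down to attaching a time-to-live to each message and purging it once the timer elapses. The sketch below is a generic illustration of that idea, not WhatsApp’s implementation; only the five-second and one-hour bounds come from the report above.

import time
from dataclasses import dataclass
from typing import List, Optional

MIN_TTL_SECONDS = 5          # shortest self-destruct option reported by WABetaInfo
MAX_TTL_SECONDS = 60 * 60    # longest option: one hour


@dataclass
class EphemeralMessage:
    text: str
    sent_at: float           # Unix timestamp recorded when the message is sent
    ttl_seconds: int         # per-chat setting chosen from the allowed range

    def __post_init__(self) -> None:
        if not MIN_TTL_SECONDS <= self.ttl_seconds <= MAX_TTL_SECONDS:
            raise ValueError("ttl_seconds outside the reported 5s-1h range")

    def is_expired(self, now: Optional[float] = None) -> bool:
        now = time.time() if now is None else now
        return now - self.sent_at >= self.ttl_seconds


def purge_expired(messages: List[EphemeralMessage]) -> List[EphemeralMessage]:
    """Drop messages whose self-destruct timer has elapsed."""
    return [m for m in messages if not m.is_expired()]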

Some third-party WhatsApp apps have offered self-destructing messages in the past. But in recent years, WhatsApp has started to crack down on third-party services to ensure the safety of its users.

It remains unclear how soon — if ever — WhatsApp plans to roll out this feature to all of its users. We have reached out to the company for comment.



from Android – TechCrunch https://ift.tt/2oeb2Ys
via IFTTT

Monday, 30 September 2019

Google brings its Jacquard wearables tech to Levi’s Trucker Jacket

Back in 2015, Google’s ATAP team demoed a new kind of wearable tech at Google I/O that used functional fabrics and conductive yarns to let you interact with your clothing and, by extension, the phone in your pocket. The company then released a jacket with Levi’s in 2017, but that was expensive, at $350, and never really caught on. Now, however, Jacquard is back. A few weeks ago, Saint Laurent launched a backpack with Jacquard support, but at $1,000, that was very much a luxury product. Today, Google and Levi’s are announcing their latest collaboration: Jacquard-enabled versions of Levi’s Trucker Jacket.

These jackets, which will come in different styles, including the Classic Trucker and the Sherpa Trucker, and in men’s and women’s versions, will retail for $198 for the Classic Trucker and $248 for the Sherpa Trucker. In addition to the U.S., they’ll be available in Australia, France, Germany, Italy, Japan and the U.K.

The idea here is simple and hasn’t changed since the original launch: a dongle in your jacket’s cuff connects to conductive yarns in your jacket. You can then swipe over your cuff, tap it or hold your hand over it to issue commands to your phone. You use the Jacquard phone app for iOS or Android to set up what each gesture does, with commands ranging from saving your location to bringing up the Google Assistant in your headphones, from skipping to the next song to controlling your camera for selfies or simply counting things during the day, like the coffees you drink on the go. If you have Bose noise-canceling headphones, the app also lets you set a gesture to turn your noise cancellation on or off. In total, there are currently 19 abilities available, and the dongle also includes a vibration motor for notifications.
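In effect, the Jacquard app maintains a user-editable mapping from cuff gestures to actions. Here is a minimal, hypothetical sketch of that idea; the gesture names and action functions are illustrative assumptions, not Google’s Jacquard SDK.

from enum import Enum, auto
from typing import Callable, Dict


class Gesture(Enum):
    BRUSH_IN = auto()     # swipe across the cuff toward you
    BRUSH_OUT = auto()    # swipe across the cuff away from you
    DOUBLE_TAP = auto()
    COVER = auto()        # hold your palm over the cuff


# Placeholder actions standing in for a few of the ~19 configurable abilities.
def next_track() -> None: print("Skipping to the next song")
def drop_pin() -> None: print("Saving current location")
def start_assistant() -> None: print("Bringing up the Assistant in your headphones")
def toggle_anc() -> None: print("Toggling Bose noise cancellation")


# The phone app effectively maintains a user-editable mapping like this one.
bindings: Dict[Gesture, Callable[[], None]] = {
    Gesture.BRUSH_OUT: next_track,
    Gesture.DOUBLE_TAP: drop_pin,
    Gesture.COVER: start_assistant,
    Gesture.BRUSH_IN: toggle_anc,
}


def on_gesture(gesture: Gesture) -> None:
    """Dispatch a gesture recognized by the cuff dongle to the user's chosen action."""
    action = bindings.get(gesture)
    if action is not None:
        action()


on_gesture(Gesture.BRUSH_OUT)   # -> Skipping to the next song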


What’s maybe most important, though, is that this (re-)launch sets up Jacquard as a more modular technology that Google and its partners hope will take it from a bit of a gimmick to something you’ll see in more places over the next few months and years.

“Since we launched the first product with Levi’s at the end of 2017, we were focused on trying to understand and working really hard on how we can take the technology from a single product […] to create a real technology platform that can be used by multiple brands and by multiple collaborators,” Ivan Poupyrev, the head of Jacquard by Google told me. He noted that the idea behind projects like Jacquard is to take things we use every day, like backpacks, jackets and shoes, and make them better with technology. He argued that, for the most part, technology hasn’t really been added to these things that we use every day. He wants to work with companies like Levi’s to “give people the opportunity to create new digital touchpoints to their digital life through things they already have and own and use every day.”

What’s also important about Jacquard 2.0 is that you can take the dongle from garment to garment. The original dongle only worked with that one specific jacket; now, you’ll be able to take it with you and use it in other wearables as well. The dongle, too, is significantly smaller and more powerful, and it now has more memory to support multiple products. Yet, in my own testing, its battery still lasts for a few days of occasional use, with plenty of standby time.

Image: the Jacquard dongle

Poupyrev also noted that the team focused on reducing cost, “in order to bring the technology into a price range where it’s more attractive to consumers.” The team also made lots of changes to the software that runs on the device and, more importantly, in the cloud to allow it to configure itself for every product it’s being used in and to make it easier for the team to add new functionality over time (when was the last time your jacket got a software upgrade?).

He actually hopes that over time, people will forget that Google was involved in this. He wants the technology to fade into the background. Levi’s, on the other hand, obviously hopes that this technology will enable it to reach a new market. The 2017 version only included the Levi’s Commuter Trucker Jacket. Now, the company is going broader with different styles.

“We had gone out with a really sharp focus on trying to adapt the technology to meet the needs of our commuter customer, which is a collection of Levi’s focused on urban cyclists,” Paul Dillinger, the VP of Global Product Innovation at Levi’s, told me when I asked him about the company’s original efforts around Jacquard. But there was a lot of interest beyond that community, he said; the built-in features were very much meant to serve the needs of this specific audience and not necessarily relevant to the lifestyles of other users. The jackets, of course, were also pretty expensive. “There was an appetite for the technology to do more and be more accessible,” he said — and the results of that work are these new jackets.


Dillinger also noted that this changes the relationship his company has with the consumer, because Levi’s can now upgrade the technology in your jacket after you bought it. “This is a really new experience,” he said. “And it’s a completely different approach to fashion. The normal fashion promise from other companies really is that we promise that in six months, we’re going to try to sell you something else. Levi’s prides itself on creating enduring, lasting value in style and we are able to actually improve the value of the garment that was already in the consumer’s closet.”

I spent about a week with the Sherpa jacket before today’s launch. It does exactly what it promises to do. Pairing my phone and jacket took less than a minute and the connection between the two has been perfectly stable. The gesture recognition worked very well — maybe better than I expected. What it can do, it does well, and I appreciate that the team kept the functionality pretty narrow.

Whether Jacquard is for you may depend on your lifestyle, though. I think the ideal user is somebody who is out and about a lot, wearing headphones, given that music controls are one of the main features here. But you don’t have to be wearing headphones to get value out of Jacquard. I almost never wear headphones in public, but I used it to quickly tag where I parked my car, for example, and when I used it with headphones, I found using my jacket’s cuffs easier to forward to the next song than doing the same on my headphones. Your mileage may vary, of course, and while I like the idea of using this kind of tech so you need to take out your phone less often, I wonder if that ship hasn’t sailed at this point — and whether the controls on your headphones can’t do most of the things Jacquard can. Google surely wants Jacquard to be more than a gimmick, but at this stage, it kind of still is.




from Android – TechCrunch https://ift.tt/2miIpIK
via IFTTT

SmartNews’ head of product on how the news discovery app wants to free readers from filter bubbles

Since launching in the United States five years ago, SmartNews, the news aggregation app that recently hit unicorn status, has quietly built a reputation for presenting reliable information from a wide range of publishers. The company straddles two very different markets: the U.S. and its home country of Japan, where it is one of the leading news apps.

SmartNews wants readers to see it as a way to break out of their filter bubbles, says Jeannie Yang, its senior vice president of product, especially as the American presidential election heats up. For example, it recently launched a feature, called “News From All Sides,” that lets people see how media outlets from across the political spectrum are covering a specific topic.

The app is driven by machine-learning algorithms, but it also has an editorial team led by Rich Jaroslovsky, the first managing editor of WSJ.com and founder of the Online News Association. One of SmartNews’ goals is to surface news that its users might not seek out on their own, but it must balance that with audience retention in a market that is crowded with many ways to consume content online, including competing news aggregation apps, Facebook and Google Search.

In a wide-ranging interview with Extra Crunch, Yang talked about SmartNews’ place in the media ecosystem, creating recommendation algorithms that don’t reinforce biases, the difference between its Japanese and American users and the challenges of presenting political news in a highly polarized environment.

Catherine Shu: One of the reasons SmartNews is interesting is that there are a lot of news aggregation apps in America, but there hasn’t been one huge breakout app the way SmartNews is in Japan or Toutiao is in China. At the same time, there are obviously a lot of issues in the publishing and news industry in the United States that a good, dominant news app might be able to help with, ranging from monetization to fake news.

Jeannie Yang: I think that’s definitely a challenge for everybody in the U.S. With SmartNews, we really want to see how we can help create a healthier media ecosystem and actually have publishers thrive as well. SmartNews has such respect for the publishers and the industry and we want to be good partners, but also really understand the challenges of the business model, as well as the challenges for users and thinking of how we can create a healthier ecosystem.



from Apple – TechCrunch https://ift.tt/2mZix5h

Saturday, 28 September 2019

This Week in Apps: AltStore, acquisitions and Google Play Pass

The app industry shows no signs of slowing down, with 194 billion downloads in 2018 and over $100 billion in consumer spending. People spend 90% of their mobile time in apps and more time using their mobile devices than watching TV. In other words, apps aren’t just a way to spend idle hours — they’re a big business. And one that often seems to change overnight. In this new Extra Crunch series, we’ll help you keep up with the latest news from the world of apps — including everything from the OSes to the apps that run on them, as well as the money that flows through it all.

This week, alternatives to the traditional app store are a big theme. Not only has a new, jailbreak-free iOS marketplace called AltStore just popped up, but Apple and Google are also ramping up their own subscription-based collections of premium apps and games.

Meanwhile, the way brands and publishers want to track their apps’ success is changing, too. And App Annie — the company that was the first to start selling pickaxes for the App Store gold rush — is responding with an acquisition that will help app publishers better understand the return on investment for their app businesses.

Headlines

AltStore is an alternative App Store that doesn’t need a jailbreak

An interesting alternative app marketplace has appeared on the scene, giving developers a way to distribute iOS apps outside the official App Store, reports Engadget — without jailbreaking, which can be difficult and has various security implications. Instead, the new store works by tricking your device into thinking you’re a developer sideloading apps, and it uses a companion app on your Mac or PC to re-sign the apps every seven days via the iTunes Wi-Fi syncing protocol. Already, it’s offering a Nintendo emulator and other games, says The Verge. And Apple is probably already working on a way to shut this down. For now, it’s live at Altstore.io.
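The seven-day cadence comes from the expiry of apps signed with a free developer account, so the companion app’s main job is noticing when an install is about to lapse and re-signing it in time. The check below is a generic, hypothetical sketch of that scheduling logic, not AltStore’s actual code.

import datetime
from typing import Optional

RESIGN_INTERVAL = datetime.timedelta(days=7)   # free developer provisioning lapses after 7 days


def needs_resign(last_signed: datetime.datetime,
                 now: Optional[datetime.datetime] = None,
                 margin: datetime.timedelta = datetime.timedelta(days=1)) -> bool:
    """Return True when an app signed at `last_signed` should be re-signed and
    re-synced before its seven-day window lapses (illustrative only)."""
    now = now or datetime.datetime.now()
    return now - last_signed >= RESIGN_INTERVAL - margin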

For the third time in a month, Google mass-deleted Android apps from a big Chinese developer.

Does Google Play have a malicious app problem? That appears to be the case, as Google has booted some 46 apps from major Chinese mobile developer iHandy out of its app store, BuzzFeed reported. And it isn’t saying why. The move follows Google’s ban of two other major Chinese app developers, DO Global and CooTek, which had 1 billion total downloads.

Google Firebase gets new tools



from Android – TechCrunch https://ift.tt/2msZrEp
via IFTTT