Wednesday, 2 October 2019

Google announces Action Blocks, a new accessibility tool for creating mobile shortcuts

Google today announced Action Blocks, a new accessibility tool that allows you to create shortcuts for common multi-step tasks with the help of the Google Assistant. In that respect, Action Blocks isn’t all that different from Shortcuts on iOS, for example, but Google is specifically looking at this as an accessibility feature for people with cognitive disabilities.

“If you’ve booked a rideshare using your phone recently, you’ve probably had to go through several steps: unlock your phone, find the right app, navigate through its screens, select appropriate options, and enter your address into the input box,” writes Google accessibility software engineer Ajit Narayanan. “At each step, the app assumes that you’re able to read and write, find things by trial-and-error, remember your selections, and focus for a sustained period of time.”

Google’s own research shows that 80 percent of people with severe cognitive disabilities like advanced dementia, autism or Down syndrome don’t use smartphones, in part because of these barriers.


An Action Block is essentially a sequence of commands for the Google Assistant, so everything the Assistant can do can be scripted using this new tool, whether that’s starting a call or playing a TV show. Once the Action Block is set up, you can create a shortcut with a custom image on your phone’s home screen.
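
Google hasn’t published a developer API for Action Blocks, but the description above boils down to a simple data structure: a home-screen shortcut that replays a fixed list of Assistant commands. Purely as an illustration, here is a minimal Python sketch; the `ActionBlock` class, the `assistant` object and its `send_text_query` method are all hypothetical names, not anything Google ships.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionBlock:
    """Hypothetical model of an Action Block: a home-screen shortcut
    that replays a fixed sequence of Google Assistant commands."""
    label: str                 # name shown under the home-screen icon
    icon_path: str             # custom image used for the shortcut
    commands: List[str] = field(default_factory=list)  # Assistant phrases, run in order

    def run(self, assistant) -> None:
        # Replay each scripted phrase through the Assistant, one step at a time.
        for phrase in self.commands:
            assistant.send_text_query(phrase)  # hypothetical Assistant interface

# Example: a one-tap block that starts a call, using a photo as its icon.
call_mom = ActionBlock(
    label="Call Mom",
    icon_path="/sdcard/Pictures/mom.jpg",
    commands=["Call Mom on speakerphone"],
)
```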

For now, the only way to get access to Action Blocks is to join Google’s trusted tester program. It’s unclear when this will roll out to a wider audience. When it does, though, I’m sure a wide variety of users will want to make use of this feature.




from Android – TechCrunch https://ift.tt/2ozUeva
via IFTTT

UK privacy ‘class action’ complaint against Google gets unblocked

The UK Court of Appeal has unanimously overturned a block on a class-action style lawsuit brought on behalf of four million iPhone users against Google — meaning the case can now proceed to be heard.

The High Court tossed the suit a year ago on legal grounds. However, the claimants sought permission to appeal — and today that’s been granted.

The case pertains to allegations Google used tracking cookies to override iPhone users’ privacy settings in Apple’s Safari browser between 2011 and 2012. Specifically that Google developed a workaround for browser settings that allowed it to set its DoubleClick Ad cookie without iPhone users’ knowledge or consent.

In 2012 the tech giant settled with the FTC over the same issue — agreeing to pay $22.5M to resolve the charge that it bypassed Safari’s privacy settings to serve targeted ads to consumers. Although Google’s settlement with the FTC did not include an admission of any legal wrongdoing.

Several class action lawsuits were also filed in the US and later consolidated. And in 2016 Google agreed to settle those by paying $5.5M to educational institutions or non-profits that campaign to raise public awareness of online security and privacy. Though terms of the settlement remain under legal challenge.

UK law does not have a direct equivalent to a US style class action. But in 2017 a veteran consumer rights campaigner, Richard Lloyd, filed a collective lawsuit over the Safari workaround, seeking to represent millions of UK iPhone users whose browser settings his complaint alleges were ignored by Google’s tracking technologies.

The decision by a High Court judge last year to block the action boiled down to the judge not being convinced the claimants could demonstrate a basis for bringing a compensation claim. Historically there’s been a high legal bar for that, as UK law has required that claimants be able to demonstrate they suffered damage as a result of a data protection violation.

The High Court judge was also not persuaded the complaint met the requirements for a representative action.

However, the Appeals Court has taken a different view.

The three legal questions it considered were whether a claimant could recover damages for loss of control of their data under section 13 of the UK’s Data Protection Act 1998 “without proving pecuniary loss or distress”; whether the members of the class had the same interest as one another and were identifiable; and whether the judge ought to have exercised discretion to allow the case to proceed.

The court rejected Google’s main argument that UK and EU law require “proof of causation and consequential damage”.

It also took the view that the claim can stand as a representative procedure.

In concluding the judgment, the Chancellor of the High Court writes:

… the judge ought to have held: (a) that a claimant can recover damages for loss of control of their data under section 13 of DPA, without proving pecuniary loss or distress, and (b) that the members of the class that Mr Lloyd seeks to represent did have the same interest under CPR Part 19.6(1) and were identifiable.

The judge exercised his discretion as to whether the action should proceed as a representative action on the wrong basis and this court can exercise it afresh. If the other members of the court agree, I would exercise our discretion so as to allow the action to proceed.

I would, therefore, allow the appeal, and make an order granting Mr Lloyd permission to serve the proceedings on Google outside the jurisdiction of the court.

Mishcon de Reya, the law firm representing Lloyd, has described the decision as “groundbreaking” — saying it could establish “a new procedural framework for the conduct of mass data breach claims” under UK civil procedure rules governing group litigations.

In a statement, partner and case lead James Oldnall said: “This decision is significant not only for the millions of consumers affected by Google’s activity but also for the collective action landscape more broadly. The Court of Appeal has confirmed our view that representative actions are essential for holding corporate giants to account. In doing so it has established an avenue to redress for consumers.”

Mishcon de Reya argues that the decision has confirmed a number of key legal principles around UK data protection law and representative actions, including that:

  • An individual’s personal data has an economic value and loss of control of that data is a violation of their right to privacy which can, in principle, constitute damage under s.13 of the DPA, without the need to demonstrate pecuniary loss or distress. The Court can, therefore, award a uniform per capita sum to members of the class in representative actions for the loss of control of their personal data
  • That individuals who have lost control of their personal data have suffered the same loss and therefore share the “same interest” under CPR 19.6
  • That representative actions are, in practice, the only way that claims such as this can be pursued

Responding to the judgement, a Google spokesperson told us: “Protecting the privacy and security of our users has always been our number one priority. This case relates to events that took place nearly a decade ago and that we addressed at the time. We believe it has no merit and should be dismissed.”



from iPhone – TechCrunch https://ift.tt/2n0e5TK

Duet adds Android tablet support for its second screen app

Sidecar is great. It’s my favorite software thing Apple has introduced in years. I’m using it right now, as I type this, in fact. For a handful of app developers, however, the feature’s arrival with macOS Catalina was an expected, but still potentially devastating, piece of news. We spoke to Duet Display and Astropad about the phenomenon of being “Sherlocked” and how it would profoundly impact their respective models.

“We actually have a couple of other big product launches that are not connected to the space this summer,” Duet Founder and CEO Rahul Dewan said at the time. “We should be fairly diverse.” It seems Android tablet compatibility was pretty high on that list. Today the company announced a release for Google’s operating system, after several months of beta testing.

“Our users have frequently asked us to bring our product to Android, and since early this year, we have been working on an Android release of Duet so that we can expand our technology to new platforms,” Duet writes. “We have privately been beta testing with hundreds of users, working hard to create a robust product that performs well across as many Android devices as possible.”

The app operates similarly to the iPad version, making it possible to use an Android tablet as a second display. That means, among other things, a much more affordable way to get a second screen for your laptop. The connection works both wired and wirelessly. Current users will have to update to the latest version of the Mac or Windows desktop app.

It’s tough not to feel bad for a small developer effectively getting sidelined by native support (and really solid implementation in the case of Sidecar), but it’s nice to see Duet continuing to fight.



from Android – TechCrunch https://ift.tt/2nCVPjH
via IFTTT

Tuesday, 1 October 2019

Defining micromobility and where it’s going with business and mobility analyst Horace Dediu

Micromobility has taken off over the last couple of years. Between electric bike-share and scooter-share, these vehicles have made their way all over the world. Meanwhile, some of these companies, like Bird and Lime, have already hit unicorn status thanks to massive funding rounds.

Horace Dediu, the well-known industry analyst who coined the term micromobility as it relates to this emerging form of transportation, took some time to chat with TechCrunch ahead of Micromobility Europe, a one-day event focused on all things micromobility.

We chatted about the origin of the word micromobility, where big tech companies like Apple, Google and Amazon fit into the space, opportunities for developers to build tools and services on top of these vehicles, the opportunity for franchising business models, the potential for micromobility to be bigger than autonomous, and much more.

Here’s the Q&A I did with Dediu ahead of his micromobility conference, lightly edited for length and clarity.


Megan Rose Dickey: Hey, Horace. Thanks for taking the time to chat.

Horace Dediu: Hey, no problem. My pleasure.

Rose Dickey: I was hoping to chat with you a bit about micromobility because I know that you have the big conference coming up in Europe, so I figured this would be a good time to touch base with you. I know you’ve been credited with coining the term micromobility as it relates to the likes of shared e-bikes and scooters.

So, to kick things off, can you define micromobility?

Dediu: Yes, sure. So, the idea came to me because I actually remembered microcomputing.



from Apple – TechCrunch https://ift.tt/2mLlXZe

Apple launches Deep Fusion feature in beta on iPhone 11 and iPhone 11 Pro

Apple is launching an early look at its new Deep Fusion feature on iOS today with a software update for beta users. Deep Fusion is a technique that blends multiple exposures together at the pixel level to give users a higher level of detail than is possible using standard HDR imaging — especially in images with very complicated textures like skin, clothing or foliage.

The developer beta released today supports the iPhone 11, where Deep Fusion will improve photos taken on the wide camera, and the iPhone 11 Pro and Pro Max, where it will kick in on the telephoto and wide-angle but not ultra wide lenses.

According to Apple, Deep Fusion requires the A13 and will not be available on any older iPhones. 

As I spoke about extensively in my review of the iPhone 11 Pro, Apple’s ‘camera’ in the iPhone is really a collection of lenses and sensors whose output is aggressively processed by dedicated machine learning software running on specialized hardware. Effectively, a machine learning camera.

Deep Fusion is a fascinating technique that extends Apple’s philosophy on photography as a computational process out to its next logical frontier. As of the iPhone 7, Apple has been blending output from the wide and telephoto lenses to provide the best result. This process happened without the user ever being aware of it. 


Deep Fusion continues in this vein. It will automatically take effect on images that are taken in specific situations.

On wide lens shots, it will start to be active just above the roughly 10 lux floor where Night Mode kicks in. The top of the range of scenes where it is active is variable depending on light source. On the telephoto lens, it will be active in all but the brightest situations where Smart HDR will take over, providing a better result due to the abundance of highlights.
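
Apple hasn’t published the exact cutoffs, so the following is only a rough sketch of the activation rules described above. The 10 lux Night Mode floor comes from the article; the bright-scene threshold and the function itself are illustrative assumptions.

```python
def choose_pipeline(lens: str, lux: float) -> str:
    """Rough sketch of which processing mode applies, per the description above.
    The 10 lux floor is from the article; the bright-scene cutoff is an assumed
    placeholder, since Apple has not published the real thresholds."""
    NIGHT_MODE_FLOOR_LUX = 10     # roughly where Night Mode takes over on the wide lens
    BRIGHT_SCENE_LUX = 2000       # assumed value for "brightest situations"

    if lens == "ultra_wide":
        return "standard"         # Deep Fusion does not run on the ultra wide lens
    if lens == "wide" and lux <= NIGHT_MODE_FLOOR_LUX:
        return "night_mode"
    if lux >= BRIGHT_SCENE_LUX:
        return "smart_hdr"        # bright scenes: Smart HDR takes over
    return "deep_fusion"

print(choose_pipeline("wide", 5))          # night_mode
print(choose_pipeline("wide", 300))        # deep_fusion
print(choose_pipeline("telephoto", 5000))  # smart_hdr
```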

Apple provided a couple of sample images showing Deep Fusion in action which I’ve embedded here. They have not provided any non-DF examples yet, but we’ll see those as soon as the beta gets out and people install it. 

Deep Fusion works this way:

The camera shoots a ‘short’ frame at a negative EV value, basically a slightly darker image than you’d like, and pulls sharpness from this frame. It then shoots three regular EV0 photos and a ‘long’ EV+ frame, registers alignment and blends them together.

This produces two 12MP photos which are combined into one 24MP photo. The combination of the two is done using 4 separate neural networks which take into account the noise characteristics of Apple’s camera sensors as well as the subject matter in the image. 

This combination is done on a pixel-by-pixel basis. One pixel is pulled at a time to result in the best combination for the overall image. The machine learning models look at the context of each pixel to determine where it belongs on the image frequency spectrum: sky and other broadly uniform areas at the low-frequency end, skin tones in the medium-frequency zone, and fine detail like clothing and foliage at the high-frequency end.

The system then pulls structure and tonality from one image or another based on ratios. 
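
Apple’s real pipeline runs four neural networks on dedicated hardware and none of it is public, so the following is only a toy sketch of the general idea: pull fine detail from the sharp ‘short’ frame and tonality from the longer exposures, blending per pixel. A plain box blur stands in for the frequency analysis, and the fixed detail weight replaces the per-region ratios Apple describes.

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Crude low-pass filter: average each pixel with its k x k neighbourhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    return np.mean(
        [padded[i:i + h, j:j + w] for i in range(k) for j in range(k)], axis=0)

def toy_deep_fusion(short_frame: np.ndarray,
                    ev0_frames: list,
                    ev_plus_frame: np.ndarray) -> np.ndarray:
    """Toy per-pixel blend: tonality from the averaged longer exposures,
    structure (high-frequency detail) from the sharp 'short' frame."""
    # Reference exposure: average the registered EV0 frames with the EV+ frame.
    reference = np.mean(ev0_frames + [ev_plus_frame], axis=0)

    ref_base = box_blur(reference)                       # low-frequency tonality
    short_detail = short_frame - box_blur(short_frame)   # high-frequency structure

    detail_weight = 1.0  # placeholder; Apple varies this by region (sky, skin, clothing)
    return np.clip(ref_base + detail_weight * short_detail, 0.0, 1.0)

# Usage with synthetic frames (pixel values in [0, 1]):
rng = np.random.default_rng(0)
short = rng.random((64, 64))
ev0 = [rng.random((64, 64)) for _ in range(3)]
ev_plus = rng.random((64, 64))
print(toy_deep_fusion(short, ev0, ev_plus).shape)  # (64, 64)
```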

The overall result, Apple says, is better skin transitions, better clothing detail and better crispness at the edges of moving subjects.

There is currently no way to turn off the Deep Fusion process, but because the ‘over crop’ feature of the new cameras uses the Ultra Wide lens, a small ‘hack’ to see the difference between images is to turn that feature on, which will disable Deep Fusion as it does not use the Ultra Wide lens.

Deep Fusion takes around one second to process an image. If you quickly shoot and then tap a preview of the image, it could take around half a second for the image to update to the new version. Most people won’t notice the process happening at all.

As for how it works IRL? We’ll test it and get back to you as Deep Fusion becomes available.



from iPhone – TechCrunch https://ift.tt/2nzPzsG

WhatsApp is testing a self-destructing messages feature

WhatsApp users may soon get the ability to have their messages self-destruct after a set period of time. That’s according to a highly reliable tipster who spotted the feature while combing through the code of a beta version of the app.

Twitter user WABetaInfo said on Tuesday that the recently released public beta of WhatsApp for Android — dubbed v2.19.275 — includes an optional feature that would allow users to set their messages to self-destruct.

The ability to have messages disappear forever after a fixed amount of time could come in handy to users who share sensitive information with friends and colleagues on the app. It’s one of the most popular features on instant messaging client Telegram, for instance.


Image: WABetaInfo

Telegram offers a “secret chat” feature wherein users can message each other and have those messages disappear from their devices after a set amount of time. The messaging platform says it does not store the text on its servers and restricts users from forwarding the messages or taking a screenshot of the conversation, to ensure there is “no trail” of the texts.

“All secret chats in Telegram are device-specific and are not part of the Telegram cloud. This means you can only access messages in a secret chat from their device of origin. They are safe for as long as your device is safe in your pocket,” it explains.

Facebook, which owns WhatsApp, also offers a “secret chat” feature on its Messenger app. But there, only the secret chat feature encrypts messages and media content shared between two users end-to-end. On WhatsApp, messages between users are end-to-end encrypted by default.

Currently, WhatsApp is testing the feature in a group setting that supports participation from multiple individuals. Messages can be set to self-destruct as soon as five seconds after they have been sent, or as late as an hour. Additionally, an image shared by WABetaInfo shows that group administrators will have the ability to prevent other participants in the group from sending messages.
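
WhatsApp hasn’t documented how the timer is implemented, so the sketch below only illustrates the reported behavior: a chat-level time-to-live somewhere between five seconds and an hour, after which a message is purged. Every name and detail here is hypothetical.

```python
import threading
import time

MIN_TTL_SECONDS = 5          # earliest self-destruct delay in the report
MAX_TTL_SECONDS = 60 * 60    # latest delay: one hour

class DisappearingChat:
    """Hypothetical sketch: keep messages in memory and delete each one
    once the chat's configured time-to-live has elapsed."""
    def __init__(self, ttl_seconds: int):
        if not MIN_TTL_SECONDS <= ttl_seconds <= MAX_TTL_SECONDS:
            raise ValueError("TTL must be between 5 seconds and 1 hour")
        self.ttl = ttl_seconds
        self.messages = {}       # message id -> text
        self._next_id = 0
        self._lock = threading.Lock()

    def send(self, text: str) -> int:
        with self._lock:
            msg_id = self._next_id
            self._next_id += 1
            self.messages[msg_id] = text
        # Schedule the deletion once the TTL expires.
        threading.Timer(self.ttl, self._expire, args=(msg_id,)).start()
        return msg_id

    def _expire(self, msg_id: int) -> None:
        with self._lock:
            self.messages.pop(msg_id, None)  # message disappears for good

# Usage: a five-second timer, the shortest delay mentioned in the report.
chat = DisappearingChat(ttl_seconds=5)
chat.send("this will vanish")
time.sleep(6)
print(chat.messages)  # {}
```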

Some third-party WhatsApp apps have offered a self-destructing messages feature in the past. But in recent years, WhatsApp has started to crack down on third-party services to ensure the safety of its users.

It remains unclear how soon — if ever — WhatsApp plans to roll out this feature to all its users. We have reached out to the company for comment.



from Android – TechCrunch https://ift.tt/2oeb2Ys
via IFTTT