Wednesday, 11 August 2021

Google launches Android 12 beta 4, hitting the platform stability milestone

Google has now taken another step toward the public release of the latest version of the Android operating system, Android 12. The company today released the fourth beta of Android 12, most notable for reaching the Platform Stability milestone — meaning the changes that affect Android app developers are now finalized, allowing them to test their apps without worrying about breaking changes in subsequent releases.

While the updated version of Android brings a number of new capabilities for developers to tap into, Google urges its developers to first focus on releasing an Android 12-compatible update. If users find their app doesn’t work properly when they upgrade to the new version of Android, they may stop using the app entirely or even uninstall it, the company warns.
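
A minimal first step toward that compatibility work is simply building and testing against the new SDK. The sketch below is a hypothetical app-level Gradle Kotlin DSL file that bumps compileSdk and targetSdk to API level 31 (Android 12); the package name and version numbers are placeholders, not anything Google prescribes.

```kotlin
// app/build.gradle.kts (illustrative module-level build file; names are placeholders)
plugins {
    id("com.android.application")
    kotlin("android")
}

android {
    // Compile against the Android 12 SDK.
    compileSdk = 31

    defaultConfig {
        applicationId = "com.example.myapp"  // hypothetical package name
        minSdk = 23
        // Targeting 31 opts the app into Android 12 behavior changes,
        // which is what beta testing is meant to flush out.
        targetSdk = 31
        versionCode = 1
        versionName = "1.0"
    }
}
```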

Among the flagship consumer-facing features in Android 12 is the new, more adaptive design system called “Material You,” which lets users apply themes that span the OS to personalize their Android experience. It also brings new privacy tools, like microphone and camera indicators that show when an app is using those sensors, as well as a clipboard read notification, similar to the one in iOS, which alerts users when an app reads their clipboard. In addition, Android 12 lets users play games as soon as they download them, through a Google Play Instant feature. Other key Android features and tools, like Quick Settings, Google Pay, Home Controls, and Android widgets, among others, have been improved, too.

Google rolled out smaller consumer-facing updates across the previous Android 12 betas, but beta 4 is focused on developers getting their apps ready for the public release of Android 12, which is expected in the fall.

Image Credits: Google

The company suggested developers look out for changes that include the new Privacy Dashboard in Settings, which lets users see which apps are accessing which types of data and when, as well as other privacy features like the indicator lights for the mic and camera, the clipboard read notification, and new toggles that let users turn off mic and camera access across all apps.
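
For developers who want to account for the new hardware toggles at runtime, here is a minimal sketch, assuming API level 31 and the public SensorPrivacyManager API; it only checks whether the device exposes the toggles, which can help an app explain blank camera frames or silent audio to the user.

```kotlin
import android.content.Context
import android.hardware.SensorPrivacyManager
import android.os.Build

// Minimal sketch: ask whether the device exposes the Android 12 global
// microphone/camera toggles. When a toggle is off, capture APIs return
// blank frames or silence, so apps may want to surface an explanation.
fun describeSensorToggles(context: Context): String {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.S) {
        return "Global mic/camera toggles are not available before Android 12."
    }
    val spm = context.getSystemService(SensorPrivacyManager::class.java)
        ?: return "SensorPrivacyManager is not available on this device."
    val micToggle = spm.supportsSensorToggle(SensorPrivacyManager.Sensors.MICROPHONE)
    val camToggle = spm.supportsSensorToggle(SensorPrivacyManager.Sensors.CAMERA)
    return "Mic toggle supported: $micToggle, camera toggle supported: $camToggle"
}
```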

There’s also a new “stretch” overscroll effect that replaces the older “glow” overscroll effect systemwide, plus new splash screen animations for apps and keygen changes to be aware of. And a number of SDKs and libraries that developers rely on will need to be tested for compatibility, including those from Google and third parties.
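
For the splash screen change specifically, here is a minimal sketch of adopting the AndroidX compat library (androidx.core:core-splashscreen), which backports the Android 12 splash behavior to older releases; the activity, layout and readiness flag are placeholders.

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.splashscreen.SplashScreen.Companion.installSplashScreen

// Sketch only: assumes the app theme extends Theme.SplashScreen and that the
// androidx.core:core-splashscreen dependency is declared.
class MainActivity : AppCompatActivity() {

    @Volatile
    private var isReady = false  // flip to true once startup work finishes

    override fun onCreate(savedInstanceState: Bundle?) {
        // Must be called before super.onCreate()/setContentView().
        val splashScreen = installSplashScreen()
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // Keep the system splash screen visible until initial data is loaded.
        splashScreen.setKeepOnScreenCondition { !isReady }
    }
}
```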

The new Android 12 beta 4 release is available on supported Pixel devices, and on devices from select partners including ASUS, OnePlus, Oppo, Realme, Sharp, and ZTE. Android TV developers can access beta 4 as well, via the ADT-3 developer kit.



from Android – TechCrunch https://ift.tt/3CBtI5Y
via IFTTT

WhatsApp gains the ability to transfer chat history between mobile operating systems

WhatsApp users will finally be able to move their entire chat history between mobile operating systems — something that’s been one of users’ biggest requests to date. The company today introduced a feature that will soon become available to users of both iOS and Android devices, allowing them to move their WhatsApp voice notes, photos, and conversations securely between devices when they switch between mobile operating systems.

The company had been rumored to be working on such functionality for some time, but the details of which devices would be initially supported or when it would be released weren’t yet known.

In product leaks, WhatsApp had appeared to be working on an integration into Android’s built-in transfer app, the Google Data Transfer Tool, which lets users move their files from one Android device to another, or switch from iOS to Android.

The feature WhatsApp introduced today, however, works with Samsung devices and Samsung’s own transfer tool, known as Smart Switch. Today, Smart Switch helps users transfer contacts, photos, music, messages, notes, calendars, and more to Samsung Galaxy devices. Now, it will transfer WhatsApp chat history, too.

WhatsApp showed off the new tool at Samsung’s Galaxy Unpacked event, and announced Samsung’s newest Galaxy foldable devices would get the feature first in the weeks to come. The feature will later roll out to Android more broadly. WhatsApp didn’t say when iOS users would gain access, but a spokesperson for WhatsApp told TechCrunch the team is working to bring the feature to users worldwide.

To use the feature, WhatsApp users will connect their old and new devices with a USB-C to Lightning cable and launch Smart Switch. The new phone will then prompt them to scan a QR code using the old phone and export their WhatsApp history. To complete the transfer, they’ll sign into WhatsApp on the new device and import the messages.

Building such a feature was non-trivial, the company also explained, as messages across its service are end-to-end encrypted by default and stored on users’ devices. That meant the creation of a tool to move chat history between operating systems required additional work from both WhatsApp as well as operating system and device manufacturers in order to build it in a secure way, the company said.
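
WhatsApp has not published the transfer protocol, so the sketch below is purely illustrative of why that extra work matters: an ephemeral key shared out of band (for example, via the QR code the user scans) lets an encrypted archive move from device to device without ever being readable in transit or on a server. Every function and parameter here is an assumption made for the example, not a description of WhatsApp's actual design.

```kotlin
import java.security.SecureRandom
import java.util.Base64
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec
import javax.crypto.spec.SecretKeySpec

// Toy illustration only (not WhatsApp's protocol): the old device generates an
// ephemeral key, shares it out of band (e.g. as a QR payload), and ships an
// AES-GCM-encrypted archive over the cable; the new device decrypts locally.

fun generateEphemeralKey(): SecretKey =
    KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

// Encode the key so it could be rendered as a QR code on one device
// and scanned by the other.
fun keyToQrPayload(key: SecretKey): String =
    Base64.getEncoder().encodeToString(key.encoded)

// Old device: encrypt the exported chat archive with the ephemeral key.
fun encryptArchive(archive: ByteArray, key: SecretKey): Pair<ByteArray, ByteArray> {
    val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
    return iv to cipher.doFinal(archive)
}

// New device: rebuild the key from the scanned payload and decrypt the archive.
fun decryptArchive(iv: ByteArray, ciphertext: ByteArray, qrPayload: String): ByteArray {
    val key = SecretKeySpec(Base64.getDecoder().decode(qrPayload), "AES")
    val cipher = Cipher.getInstance("AES/GCM/NoPadding")
    cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    return cipher.doFinal(ciphertext)
}
```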

“Your WhatsApp messages belong to you. That’s why they are stored on your phone by default, and not accessible in the cloud like many other messaging services,” noted Sandeep Paruchuri, product manager at WhatsApp, in a statement about the launch. “We’re excited for the first time to make it easy for people to securely transfer their WhatsApp history from one operating system to another. This has been one of our most requested features from users for years and we worked together with operating systems and device manufacturers to solve it,” he added.

 



from Android – TechCrunch https://ift.tt/2Xiw6yq
via IFTTT

Apple drops its lawsuit against maker of iPhone emulation software

Apple has settled its 2019 lawsuit with Corellium, a company that builds virtual iOS devices used by security researchers to find bugs in iPhones and other iOS devices, the Washington Post has reported. The terms of the settlement weren’t disclosed, but the agreement comes after Apple suffered a major court loss in the dispute in late 2020.

Corellium’s software allows users to run virtual iPhones on a computer browser, giving them deep access to iOS without the need for a physical device. In addition to accusing Corellium of infringing on its copyright, Apple said the company was selling its product indiscriminately, thereby compromising the platform’s security.

Specifically, Apple accused the company of selling its products to governments that could have probed its products for flaws. When he was employed by another company, Corellium co-founder David Wang helped the FBI unlock an iPhone used by a terrorist responsible for the San Bernardino attacks. 

However, a judge dismissed the copyright claims, calling them “puzzling, if not disingenuous.” He wrote in his ruling that “the Court finds that Corellium has met its burden of establishing fair use,” adding that its use of iOS in that context was permissible.

Corellium started offering its platform to individual subscribers earlier this year, after previously only making it available to enterprise users. Each request for access is vetted individually so that it won’t fall into the wrong hands for malicious purposes, according to the company.

Editor’s note: This post originally appeared on Engadget.



from Apple – TechCrunch https://ift.tt/2U7SQQm

Tuesday, 10 August 2021

With liberty and privacy for some: Widening inequality on the digital frontier

Privacy is emotional — we often value privacy most when we feel vulnerable or powerless, such as when confronted with creepy data practices. But in the eyes of the court, emotions don’t always constitute harm or a reason for structural change in how privacy is legally codified.

It might take a material perspective on widening privacy disparities — and their implication in broader social inequality — to catalyze the privacy improvements the U.S. desperately needs.

Apple’s leaders announced their plans for the App Tracking Transparency (ATT) update in 2020. In short, iOS users can refuse an app’s ability to track their activity on other apps and websites. The ATT update has led to a sweeping three-quarters of iOS users opting out of cross-app tracking.

Whenever one user base gears up with privacy protections, companies simply redirect their data practices along the path of least resistance.

With less data available to advertisers looking to develop individual profiles for targeted advertising, targeted ads for iOS users look less effective and appealing to ad agencies. As a result, new findings show that advertisers are spending roughly one-third less on advertising on iOS devices.

They are redirecting that capital into advertising on Android systems, which account for 42.06% of mobile OS market share, compared to iOS at 57.62%.

Beyond a vague sense of creepiness, privacy disparities increasingly pose risks of material harm: emotional, reputational, economic and otherwise. If privacy belongs to all of us, as many tech companies say, then why does it cost so much? Whenever one user base gears up with privacy protections, companies simply redirect their data practices along the path of least resistance, toward the populations with fewer resources, legal or technical, to control their data.

More than just ads

As more money goes into Android ads, we could expect advertising techniques to become more sophisticated, or at least more aggressive. It is not illegal for companies to engage in targeted advertising, so long as it is done in compliance with users’ legal rights to opt out under relevant laws like CCPA in California.

This raises two immediate issues. First, residents of every state except California currently lack such opt-out rights. Second, granting some users the right to opt out of targeted advertising strongly implies that there are harms, or at least risks, to targeted advertising. And indeed, there can be.

Targeted advertising involves third parties building and maintaining behind-the-scenes profiles of users based on their behavior. Gathering data on app activity, such as fitness habits or shopping patterns, could lead to further inferences about sensitive aspects of a user’s life.
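
As a deliberately simplified, hypothetical illustration of that inference step, the sketch below maps invented app-activity categories to inferred audience segments; real profiling systems are far more elaborate, but the shape is similar.

```kotlin
// Hypothetical example: the categories, thresholds and segment names are
// invented here to show how mundane behavioral signals can become sensitive
// inferences the user never explicitly shared.
data class AppEvent(val app: String, val category: String)

fun inferSegments(events: List<AppEvent>): Set<String> {
    val counts = events.groupingBy { it.category }.eachCount()
    val segments = mutableSetOf<String>()

    if ((counts["pregnancy_tracker"] ?: 0) > 3) segments += "expecting_parent"
    if ((counts["payday_loan"] ?: 0) > 1) segments += "financially_stressed"
    if ((counts["fitness"] ?: 0) > 10) segments += "health_conscious"
    return segments
}
```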

At this point, a representation of a user exists in an under-regulated data system containing — whether correctly or incorrectly inferred — data that the user did not consent to sharing. (Unless the user lives in California, but let’s suppose they live anywhere else in the U.S.)

Further, research finds that targeted advertising, in building detailed profiles of users, can enact discrimination in housing and employment opportunities, sometimes in violation of federal law. And targeted advertising can impede individuals’ autonomy, preemptively narrowing their window of purchasing options whether they want it narrowed or not. On the other hand, targeted advertising can support niche or grassroots organizations by connecting them directly with interested audiences. Regardless of one’s stance on targeted advertising, the underlying problem is that users have no say in whether they are subject to it.

Targeted advertising is a massive and booming practice, but it is only one practice within a broader web of business activities that do not prioritize respect for users’ data. And these practices are not illegal in much of the U.S. Instead of the law, your pocketbook can keep you clear of data disrespect.

Privacy as a luxury

Prominent tech companies, particularly Apple, declare privacy a human right, which makes complete sense from a business standpoint. In the absence of the U.S. federal government codifying privacy rights for all consumers, a bold privacy commitment from a private company sounds pretty appealing.

If the government isn’t going to set a privacy standard, at least my phone manufacturer will. Even though only 6% of Americans claim to understand how companies use their data, it is companies that are making the broad privacy moves.

But if those declaring privacy as a human right only make products affordable to some, what does that say about our human rights? Apple products skew toward wealthier, more educated consumers compared to competitors’ products. This projects a troubling future of increasingly exacerbated privacy disparities between the haves and the have-nots, where a feedback loop is established: Those with fewer resources to acquire privacy protections may have fewer resources to navigate the technical and legal challenges that come with a practice as convoluted as targeted advertising.

Don’t take this as me siding with Facebook in its feud with Apple about privacy versus affordability (see: systemic access control issues recently coming to light). In my view, neither side of that battle is winning.

We deserve meaningful privacy protections that everyone can afford. In fact, to turn the phrase on its head, we deserve meaningful privacy protections that no company can afford to omit from their products. We deserve a both/and approach: privacy that is both meaningful and widely available.

Our next steps forward

Looking ahead, there are two key areas for privacy progress: privacy legislation and privacy tooling for developers. I again invoke the both/and approach. We need lawmakers, rather than tech companies, setting reliable privacy standards for consumers. And we need widely available developer tools that give developers no reason — financial, logistical or otherwise — not to implement privacy at the product level.

On privacy legislation, I believe that policy professionals are already raising some excellent points, so I’ll direct you to some of my favorite recent writing from them.

Stacey Gray and her team at the Future of Privacy Forum have begun an excellent blog series on how a federal privacy law could interact with the emerging patchwork of state laws.

Joe Jerome published an outstanding recap of the 2021 state-level privacy landscape and the routes toward widespread privacy protections for all Americans. A key takeaway: The effectiveness of privacy regulation hinges on how well it harmonizes among individuals and businesses. That’s not to say that regulation should be business-friendly, but rather that businesses should be able to reference clear privacy standards so they can confidently and respectfully handle everyday folks’ data.

On privacy tooling, if we make privacy tools readily accessible and affordable for all developers, we really leave tech with zero excuses to meet privacy standards. Take the issue of access control, for instance. Engineers attempt to build manual controls over which personnel and end users can access various data in a complex data ecosystem already populated with sensitive personal information.
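
As a rough sketch of the kind of control engineers end up hand-rolling, the example below encodes a minimal role-to-data-category policy that is checked before any read; the roles, categories and function names are invented for illustration and are not any particular product's API.

```kotlin
// Minimal, illustrative role-based access check for personal data.
enum class Role { SUPPORT, ANALYST, ADMIN }
enum class DataCategory { CONTACT_INFO, PAYMENT_DATA, HEALTH_DATA }

// Which data categories each role may read; analysts get no raw payment
// or health data in this toy policy.
private val policy: Map<Role, Set<DataCategory>> = mapOf(
    Role.SUPPORT to setOf(DataCategory.CONTACT_INFO),
    Role.ANALYST to setOf(DataCategory.CONTACT_INFO),
    Role.ADMIN to DataCategory.values().toSet(),
)

fun canAccess(role: Role, category: DataCategory): Boolean =
    policy[role]?.contains(category) == true

// Gate every read through the policy instead of sprinkling ad hoc checks.
fun readRecord(role: Role, category: DataCategory, fetch: () -> String): String {
    require(canAccess(role, category)) { "$role may not read $category" }
    return fetch()
}
```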

The challenge is twofold. First, the horse has already bolted. Technical debt accumulates rapidly, and privacy has largely remained outside the software development process. Engineers need tools that enable them to build privacy features like nuanced access control before code reaches production.

This leads into the second aspect of the challenge: Even if the engineers overcame all of the technical debt and could make structural privacy improvements at the code level, what standards and widely available tools are available to use?

As a June 2021 report from the Future of Privacy Forum makes clear, privacy technology is in dire need of consistent definitions, which are required for widespread adoption of trustworthy privacy tools. With more consistent definitions and widely available developer tools for privacy, these technical transformations translate into material improvements in how tech at large — not just tech of Brand XYZ — gives users control over their data.

We need privacy rules set by an institution that is not itself playing the game. Regulation alone cannot save us from modern privacy perils, but it is a vital ingredient in any viable solution.

Alongside regulation, every software engineering team should have privacy tools immediately available. When civil engineers are building a bridge, they cannot make it safe for a subset of the population; it must work for all who cross it. The same must hold for our data infrastructure, lest we exacerbate disparities within and beyond the digital realm.



from Android – TechCrunch https://ift.tt/3s9Y2zJ
via IFTTT

Interview: Apple’s Head of Privacy details child abuse detection and Messages safety features

Last week, Apple announced a series of new features targeted at child safety on its devices. Though not live yet, the features will arrive later this year for users. Though the goals of these features are universally accepted to be good ones — the protection of minors and limiting the spread of Child Sexual Abuse Material (CSAM) — there have been some questions about the methods Apple is using.

I spoke to Erik Neuenschwander, Head of Privacy at Apple, about the new features launching for its devices. He shared detailed answers to many of the concerns that people have about the features and talked at length about some of the tactical and strategic issues that could come up once this system rolls out.

I also asked about the rollout of the features, which come closely intertwined but are really completely separate systems that have similar goals. To be specific, Apple is announcing three different things here, some of which are being confused with one another in coverage and in the minds of the public. 

CSAM detection in iCloud Photos – A detection system called NeuralHash creates identifiers it can compare with IDs from the National Center for Missing and Exploited Children and other entities to detect known CSAM content in iCloud Photo libraries. Most cloud providers already scan user libraries for this information — Apple’s system is different in that it does the matching on device rather than in the cloud.

Communication Safety in Messages – A feature that a parent opts to turn on for a minor on their iCloud Family account. It will alert children when an image they are going to view has been detected to be explicit and it tells them that it will also alert the parent.

Interventions in Siri and search – A feature that will intervene when a user tries to search for CSAM-related terms through Siri and search and will inform the user of the intervention and offer resources.

For more on all of these features you can read our articles linked above or Apple’s new FAQ that it posted this weekend.

From personal experience, I know that there are people who don’t understand the difference between those first two systems, or assume that there will be some possibility that they may come under scrutiny for innocent pictures of their own children that may trigger some filter. It’s led to confusion in what is already a complex rollout of announcements. These two systems are completely separate, of course, with CSAM detection looking for precise matches with content that is already known to organizations to be abuse imagery. Communication Safety in Messages takes place entirely on the device and reports nothing externally — it’s just there to flag to a child that they are or could be about to be viewing explicit images. This feature is opt-in by the parent and transparent to both parent and child that it is enabled.

Apple’s Communication Safety in Messages feature. Image Credits: Apple

There have also been questions about the on-device hashing of photos to create identifiers that can be compared with the database. Though NeuralHash is a technology that can be used for other kinds of features like faster search in photos, it’s not currently used for anything else on iPhone aside from CSAM detection. When iCloud Photos is disabled, the feature stops working completely. This offers an opt-out for people but at an admittedly steep cost given the convenience and integration of iCloud Photos with Apple’s operating systems.

Though this interview won’t answer every possible question related to these new features, it is the most extensive on-the-record discussion by Apple’s senior privacy executive so far. It seems clear from Apple’s willingness to provide access and its ongoing FAQs and press briefings (there have been at least three so far and likely many more to come) that it feels it has a good solution here.

Despite the concerns and resistance, it seems as if it is willing to take as much time as is necessary to convince everyone of that. 

This interview has been lightly edited for clarity.

TC: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state-of-the-art techniques, which mostly involve scanning through the entire contents of users’ libraries on cloud services, which — as you point out — isn’t something that we’ve ever done; we have never looked through users’ iCloud Photos. This system doesn’t change that either: it neither looks through data on the device, nor does it look through all photos in iCloud Photos. Instead, what it does is give us a new ability to identify accounts which are starting collections of known CSAM.

So the development of this new CSAM detection technology is the watershed that makes now the time to launch this. And Apple feels that it can do it in a way that it feels comfortable with and that is ‘good’ for your users?

That’s exactly right. We have two co-equal goals here. One is to improve child safety on the platform, and the second is to preserve user privacy. And what we’ve been able to do across all three of the features is bring together technologies that let us deliver on both of those goals.

Announcing the Communication Safety in Messages feature and the CSAM detection in iCloud Photos system at the same time seems to have created confusion about their capabilities and goals. Was it a good idea to announce them concurrently? And why were they announced concurrently, if they are separate systems?

Well, while they are [two] systems, they are also of a piece along with our increased interventions that will be coming in Siri and search. As important as it is to identify collections of known CSAM where they are stored in Apple’s iCloud Photos service, it’s also important to try to get upstream of that already horrible situation. So CSAM detection means that there’s already known CSAM that has been through the reporting process, and is being shared widely, re-victimizing children on top of the abuse that had to happen to create that material in the first place. And so to do that, I think, is an important step, but it is also important to do things to intervene earlier on, when people are beginning to enter into this problematic and harmful area, or if there are already abusers trying to groom or to bring children into situations where abuse can take place. Communication Safety in Messages and our interventions in Siri and search actually strike at those parts of the process. So we’re really trying to disrupt the cycles that lead to CSAM that then ultimately might get detected by our system.

The process of Apple’s CSAM detection in iCloud Photos system. Image Credits: Apple

Governments and agencies worldwide are constantly pressuring all large organizations that have any sort of end-to-end or even partial encryption enabled for their users. They often lean on CSAM and possible terrorism activities as rationale to argue for backdoors or encryption defeat measures. Is launching the feature and this capability with on-device hash matching an effort to stave off those requests and say, look, we can provide you with the information that you require to track down and prevent CSAM activity — but without compromising a user’s privacy?

So, first, you talked about the device matching so I just want to underscore that the system as designed doesn’t reveal — in the way that people might traditionally think of a match — the result of the match to the device or, even if you consider the vouchers that the device creates, to Apple. Apple is unable to process individual vouchers; instead, all the properties of our system mean that it’s only once an account has accumulated a collection of vouchers associated with illegal, known CSAM images that we are able to learn anything about the user’s account. 

Now, why to do it is because, as you said, this is something that will provide that detection capability while preserving user privacy. We’re motivated by the need to do more for child safety across the digital ecosystem, and all three of our features, I think, take very positive steps in that direction. At the same time we’re going to leave privacy undisturbed for everyone not engaged in the illegal activity.

Does this, creating a framework to allow scanning and matching of on-device content, create a framework for outside law enforcement to counter with, ‘we can give you a list, we don’t want to look at all of the user’s data but we can give you a list of content that we’d like you to match’. And if you can match it with this content you can match it with other content we want to search for. How does it not undermine Apple’s current position of ‘hey, we can’t decrypt the user’s device, it’s encrypted, we don’t hold the key?’

It doesn’t change that one iota. The device is still encrypted, we still don’t hold the key, and the system is designed to function on on-device data. What we’ve designed has a device side component — and it has the device side component by the way, for privacy improvements. The alternative of just processing by going through and trying to evaluate users data on a server is actually more amenable to changes [without user knowledge], and less protective of user privacy.

Our system involves both an on-device component where the voucher is created, but nothing is learned, and a server-side component, which is where that voucher is sent along with data coming to Apple service and processed across the account to learn if there are collections of illegal CSAM. That means that it is a service feature. I understand that it’s a complex attribute that a feature of the service has a portion where the voucher is generated on the device, but again, nothing’s learned about the content on the device. The voucher generation is actually exactly what enables us not to have to begin processing all users’ content on our servers which we’ve never done for iCloud Photos. It’s those sorts of systems that I think are more troubling when it comes to the privacy properties — or how they could be changed without any user insight or knowledge to do things other than what they were designed to do.

One of the bigger queries about this system is that Apple has said that it will simply refuse if it is asked by a government or other agency to compromise the system by adding things that are not CSAM to the database to check for them on-device. There are some examples where Apple has had to comply with local law at the highest levels if it wants to operate there, China being an example. So how do we trust that Apple is going to hew to this rejection of interference if pressured or asked by a government to compromise the system?

Well, first, that is launching only for US iCloud accounts, and so the hypotheticals seem to bring up generic countries or other countries that aren’t the US when they speak in that way, and therefore it seems to be the case that people agree US law doesn’t offer these kinds of capabilities to our government.

But even in the case where we’re talking about some attempt to change the system, it has a number of protections built in that make it not very useful for trying to identify individuals holding specifically objectionable images. The hash list is built into the operating system, we have one global operating system and don’t have the ability to target updates to individual users and so hash lists will be shared by all users when the system is enabled. And secondly, the system requires the threshold of images to be exceeded so trying to seek out even a single image from a person’s device or set of people’s devices won’t work because the system simply does not provide any knowledge to Apple for single photos stored in our service. And then, thirdly, the system has built into it a stage of manual review where, if an account is flagged with a collection of illegal CSAM material, an Apple team will review that to make sure that it is a correct match of illegal CSAM material prior to making any referral to any external entity. And so the hypothetical requires jumping over a lot of hoops, including having Apple change its internal process to refer material that is not illegal, like known CSAM and that we don’t believe that there’s a basis on which people will be able to make that request in the US. And the last point that I would just add is that it does still preserve user choice, if a user does not like this kind of functionality, they can choose not to use iCloud Photos and if iCloud Photos is not enabled no part of the system is functional.

So if iCloud Photos is disabled, the system does not work, which is the public language in the FAQ. I just wanted to ask specifically, when you disable iCloud Photos, does this system continue to create hashes of your photos on device, or is it completely inactive at that point?

If users are not using iCloud Photos, NeuralHash will not run and will not generate any vouchers. CSAM detection is a neural hash being compared against a database of the known CSAM hashes that are part of the operating system image. None of that piece, nor any of the additional parts including the creation of the safety vouchers or the uploading of vouchers to iCloud Photos is functioning if you’re not using iCloud Photos. 

In recent years, Apple has often leaned into the fact that on-device processing preserves user privacy. And in nearly every previous case I can think of, that’s true. Scanning photos to identify their content and allow me to search them, for instance. I’d rather that be done locally and never sent to a server. However, in this case, it seems like there may actually be a sort of anti-effect, in that you’re scanning locally, but for external use cases, rather than scanning for personal use — creating a ‘less trust’ scenario in the minds of some users. Add to this that every other cloud provider scans it on their servers, and the question becomes: why should this implementation, being different from most others, engender more trust in the user rather than less?

I think we’re raising the bar, compared to the industry-standard way to do this. Any sort of server-side algorithm that’s processing all users’ photos is putting that data at more risk of disclosure and is, by definition, less transparent in terms of what it’s doing on top of the user’s library. So, by building this into our operating system, we gain the same properties that the integrity of the operating system already provides across so many other features: the one global operating system that’s the same for all users who download it and install it. And so, in that one property, it is much more challenging to target it to an individual user. On the server side that’s actually quite easy — trivial. Being able to have some of those properties by building it into the device, and ensuring it’s the same for all users with the feature enabled, gives a strong privacy property.

Secondly, you point out how use of on device technology is privacy preserving, and in this case, that’s a representation that I would make to you, again. That it’s really the alternative to where users’ libraries have to be processed on a server that is less private.

What we can say with this system is that it leaves privacy completely undisturbed for every other user who’s not into this illegal behavior; Apple gains no additional knowledge about any user’s cloud library. No user’s iCloud Library has to be processed as a result of this feature. Instead, what we’re able to do is create these cryptographic safety vouchers. They have mathematical properties that say Apple will only be able to decrypt the contents or learn anything about the images and users specifically for those that collect photos that match illegal, known CSAM hashes. And that’s just not something anyone can say about a cloud-processing scanning service, where every single image has to be processed in a clear, decrypted form and run by a routine to determine who knows what. At that point it’s very easy to determine anything you want [about a user’s images], versus our system, which determines only those images that match a set of known CSAM hashes that came directly from NCMEC and other child safety organizations.

Can this CSAM detection feature stay holistic when the device is physically compromised? Sometimes cryptography gets bypassed locally, somebody has the device in hand — are there any additional layers there?

I think it’s important to underscore how very challenging and expensive and rare this is. It’s not a practical concern for most users though it’s one we take very seriously, because the protection of data on the device is paramount for us. And so if we engage in the hypothetical where we say that there has been an attack on someone’s device: that is such a powerful attack that there are many things that that attacker could attempt to do to that user. There’s a lot of a user’s data that they could potentially get access to. And the idea that the most valuable thing that an attacker — who’s undergone such an extremely difficult action as breaching someone’s device — was that they would want to trigger a manual review of an account doesn’t make much sense. 

Because, let’s remember, even if the threshold is met and we have some vouchers that are decrypted by Apple, the next stage is a manual review to determine if that account should be referred to NCMEC or not, and that is something that we want to occur only in cases where it’s a legitimate, high-value report. We’ve designed the system in that way, but if we consider the attack scenario you brought up, I think that’s not a very compelling outcome for an attacker.

Why is there a threshold of images for reporting, isn’t one piece of CSAM content too many?

We want to ensure that the reports that we make to NCMEC are high value and actionable, and one of the notions of all systems is that there’s some uncertainty built in to whether or not that image matched. And so the threshold allows us to reach the point where we expect a false reporting rate for review of one in one trillion accounts per year. So, working against the idea that we do not have any interest in looking through users’ photo libraries outside those that are holding collections of known CSAM, the threshold allows us to have high confidence that the accounts we review are ones that, when we refer them to NCMEC, law enforcement will be able to take up and effectively investigate, prosecute and convict.
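
To make the threshold idea concrete, here is a toy sketch of threshold-gated matching against a fixed local set of known hashes. It is my illustration only, with invented names and numbers; Apple's actual system uses perceptual hashing, cryptographic safety vouchers and server-side processing rather than anything this simple.

```kotlin
// Toy sketch (not Apple's implementation): matching happens only against a
// fixed, locally stored set of known hashes, and nothing becomes reportable
// until a per-account threshold of matches is exceeded.
class ThresholdMatcher(
    private val knownHashes: Set<Long>,  // stand-in for the on-device hash database
    private val threshold: Int,          // minimum number of matches before review
) {
    private var matchCount = 0

    // Called per image, and only when the (hypothetical) sync feature is enabled.
    fun onImageHashed(imageHash: Long) {
        if (imageHash in knownHashes) matchCount++
    }

    // Below the threshold, individual matches reveal nothing about the account.
    fun accountEligibleForReview(): Boolean = matchCount > threshold
}
```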



from iPhone – TechCrunch https://ift.tt/2Vz8Coh
