Tuesday, 26 May 2020

AI can battle coronavirus, but privacy shouldn’t be a casualty

South Korea has successfully slowed the spread of coronavirus. Alongside widespread quarantine measures and testing, the country’s innovative use of technology is credited as a critical factor in combating the spread of the disease. As Europe and the United States struggle to cope, many governments are turning to AI tools both to advance medical research and to manage public health, now and in the long term: technical solutions for contact tracing, symptom tracking, immunity certificates and other applications are under development. These technologies are certainly promising, but they must be implemented in ways that do not undermine human rights.

Seoul has collected its citizens’ personal data extensively and intrusively, analyzing millions of data points from credit card transactions, CCTV footage and cellphone geolocation data. South Korea’s Ministry of the Interior and Safety even developed a smartphone app that shares the GPS data of self-quarantined individuals with officials. If those in quarantine cross the “electronic fence” of their assigned area, the app alerts officials. The privacy and security implications of such widespread surveillance are deeply concerning.

South Korea is not alone in leveraging personal data in containment efforts. China, Iran, Israel, Italy, Poland, Singapore, Taiwan and others have used location data from cellphones for various applications tasked with combating coronavirus. Supercharged with artificial intelligence and machine learning, this data can be used not only for social control and monitoring, but also to predict travel patterns, pinpoint future outbreak hot spots, model chains of infection and project immunity.

Implications for human rights and data privacy reach far beyond the containment of COVID-19. Introduced as short-term fixes to the immediate threat of coronavirus, widespread data-sharing, monitoring and surveillance could become fixtures of modern public life. Under the guise of shielding citizens from future public health emergencies, temporary applications may become normalized. At the very least, government decisions to hastily introduce immature technologies — and in some cases to oblige citizens by law to use them — set a dangerous precedent.

Nevertheless, such data- and AI-driven applications could be useful advances in the fight against coronavirus, and personal data — anonymized and unidentifiable — offers valuable insights for governments navigating this unprecedented public health emergency. The White House is reportedly in active talks with a wide array of tech companies about how they can use anonymized, aggregate-level location data from cellphones. The U.K. government is in discussion with cellphone operators about using location and usage data. And even Germany, which usually champions data rights, introduced a controversial app that uses data donations from fitness trackers and smartwatches to determine the geographical spread of the virus.
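Aggregate-level, anonymized location data of the kind described above typically relies on counting rather than tracking: only region-level tallies above a minimum group size are released, so no individual can be singled out. Here is a minimal Python sketch of that idea, using entirely made-up data and a simple k-anonymity-style suppression threshold (the function and parameter names are hypothetical, not from any real system):

```python
from collections import Counter

def aggregate_mobility(pings, k=10):
    """Count pings per region, then suppress any region seen fewer
    than k times so individuals cannot be singled out (a simple
    k-anonymity-style threshold)."""
    counts = Counter(region for _device, region in pings)
    return {region: n for region, n in counts.items() if n >= k}

# Hypothetical (device_id, region) pings:
pings = [(i, "downtown") for i in range(25)] + [(99, "suburb")]
print(aggregate_mobility(pings, k=10))  # {'downtown': 25}; 'suburb' is suppressed
```

Production systems reportedly go much further — Google’s mobility reports, for instance, are said to add differential-privacy noise — but the suppression threshold above captures the basic intuition behind releasing aggregates instead of raw traces.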

Big tech, too, is rushing to the rescue. Google has made “Community Mobility Reports” available for more than 140 countries, offering insights into mobility trends in places such as retail and recreation sites, workplaces and residential areas. Apple and Google are collaborating on contact tracing and have just launched a developer toolkit, including an API. Facebook is rolling out a “local alerts” feature that allows municipal governments, emergency response organizations and law enforcement agencies to communicate with citizens based on their location.

It is evident that data revealing citizens’ health and geolocation is as personal as it gets. The potential benefits weigh heavily, but so do concerns about the abuse and misuse of these applications. There are safeguards for data protection — perhaps the most advanced being the European GDPR — but during times of national emergency, governments retain the right to grant exceptions. And frameworks for the lawful and ethical use of AI in a democracy are much less developed, if they exist at all.

There are many applications that could help governments enforce social controls, predict outbreaks and trace infections — some of them more promising than others. Contact-tracing apps are at the center of government interest in Europe and the U.S. at the moment. Decentralized Privacy-Preserving Proximity Tracing, or “DP3T,” approaches that use Bluetooth may offer a secure, decentralized protocol for consenting users to share data with public health authorities. The European Commission has already released guidance for contact-tracing applications that favors such decentralized approaches. Centralized or not, EU member states will evidently need to comply with the GDPR when implementing such tools.
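The core of the decentralized approach is that phones broadcast short-lived ephemeral identifiers derived from a secret daily key, and matching happens on the device rather than on a server. Below is a deliberately simplified Python sketch of that idea — the real DP3T specification derives identifiers with AES in counter mode and adds timing and replay protections, and all names here are illustrative:

```python
import hashlib
import hmac

def next_day_key(sk: bytes) -> bytes:
    # DP3T ratchets the secret day key forward with a hash, so past
    # keys cannot be recovered from a key published later.
    return hashlib.sha256(sk).digest()

def ephemeral_ids(sk: bytes, n: int = 96) -> list:
    # Derive n short-lived broadcast identifiers from the day key.
    return [hmac.new(sk, i.to_bytes(4, "big"), hashlib.sha256).digest()[:16]
            for i in range(n)]

# Phone B records a few identifiers it heard over Bluetooth from phone A.
sk_a = hashlib.sha256(b"device-A day key").digest()
heard_by_b = set(ephemeral_ids(sk_a)[:5])

# If A later tests positive, A publishes sk_a; B recomputes A's IDs
# locally and checks for overlap -- no central server learns contacts.
matches = heard_by_b & set(ephemeral_ids(sk_a))
print(len(matches))  # 5
```

The privacy property follows from the direction of the data flow: only the keys of confirmed-positive users are ever published, and the set intersection that reveals an exposure is computed on each user’s own phone.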

Austria, Italy and Switzerland have announced they plan to use the decentralized frameworks developed by Apple and Google. Germany, after ongoing public debate and stern warnings from privacy experts, recently ditched plans for a centralized app, opting for a decentralized solution instead. But France and Norway are using centralized systems, where sensitive personal data is stored on a central server.

The U.K. government, too, has been experimenting with an app that uses a centralized approach and is currently being tested on the Isle of Wight: the app, developed by NHSX, the digital arm of the National Health Service, will allow health officials to reach out directly and personally to potentially infected people. At this point, it remains unclear how the collected data will be used and whether it will be combined with other data sources. Under current provisions, the U.K. is still bound to comply with the GDPR until the end of the Brexit transition period in December 2020.

Aside from government-led efforts, worryingly, a plethora of apps and websites for contact tracing and other forms of outbreak control are mushrooming, asking citizens to volunteer their personal data while offering little — if any — privacy and security, let alone functionality. Though certainly well-intentioned, these tools often come from hobby developers and amateur hackathons.

Sorting the wheat from the chaff is not an easy task, and our governments are most likely not equipped to accomplish it. At this point, artificial intelligence, and especially its use in governance, is still new to public agencies. Put on the spot, regulators struggle to evaluate the legitimacy and wider-reaching implications of different AI systems for democratic values. In the absence of sufficient procurement guidelines and legal frameworks, governments are ill-prepared to make these decisions now, when they are most needed.

And worse yet, once AI-driven applications are let out of the box, it will be difficult to roll them back, not unlike increased safety measures at airports after 9/11. Governments may argue that they require data access to avoid a second wave of coronavirus or another looming pandemic.

Regulators are unlikely to generate special new terms for AI during the coronavirus crisis, so at the very least we need to proceed with a pact: all AI applications developed to tackle the public health crisis must end up as public applications, with the data, algorithms, inputs and outputs held for the public good by public health researchers and public science agencies. Invoking the coronavirus pandemic as a pretext for breaking privacy norms and fleecing the public of valuable data cannot be allowed.

We all want sophisticated AI to assist in delivering a medical cure and managing the public health emergency. Arguably, the short-term risks to personal privacy and human rights of AI wane in light of the loss of human lives. But when coronavirus is under control, we’ll want our personal privacy back and our rights reinstated. If governments and firms in democracies are going to tackle this problem and keep institutions strong, we all need to see how the apps work, the public health data needs to end up with medical researchers and we must be able to audit and disable tracking systems. AI must, over the long term, support good governance.

The coronavirus pandemic is a public health emergency of the most pressing concern, one that will deeply impact governance for decades to come. It also shines a powerful spotlight on gaping shortcomings in our current systems. AI is arriving now with some powerful applications in store, but our governments are ill-prepared to ensure its democratic use. Faced with the exceptional impact of a global pandemic, quick-and-dirty policymaking is insufficient to ensure good governance, but it may be the best solution we have.



from Apple – TechCrunch https://ift.tt/2M0GmT1

Apple fixes bug that stopped iOS apps from opening

Apple has now resolved the bug that was plaguing iPhone and iPad apps over the weekend, causing some apps not to launch at all. The issue appears to have been related to Apple’s Family Sharing system, as users reported error messages that said “This app is no longer shared with you” and directed them to buy the app from the App Store in order to keep using it.

Following this issue, users on Sunday said they were seeing dozens of pending app updates for their iOS devices, some dating back to an app’s last update from well over a week ago. Users in forums reported seeing as many as 10, 20, 50 or even 100-plus new updates to install. This indicated a fix was in the works, as these were not brand-new updates — the apps were already up to date. Instead, the reissued updates seem to have been part of the fix for the Family Sharing problem, as the bug was resolved afterward.

Apple confirmed the issue has now been resolved for all affected customers.

Apple-focused news sites including MacRumors, 9to5Mac, AppleInsider and others previously reported on the bug and the ensuing deluge of app updates. 9to5Mac also offered a plausible explanation for what happened, saying it was likely due to a signing issue of some kind: apps were essentially behaving as if they were paid downloads and the right to use them had been removed from the iCloud family circle, the site explained.

Some users discovered they could delete a troubled app and then re-download it to resolve the problem. That’s effectively what the forced app updates did, too — they overwrote the parts of the apps causing the issue. Had Apple not reissued the updates, many iOS users would likely have assumed it was the app developer’s fault, and they may have left unfair complaints and 1-star reviews on the app’s App Store page as a result.

Apple has not shared any additional details about why the problem occurred in the first place, but if you happened to notice a significant increase in app updates on Sunday, that’s why.

 



from Apple – TechCrunch https://ift.tt/2zohK45

India’s contact tracing app is going open source

India said it will publicly release the source code of its contact tracing app, Aarogya Setu, to the relief of privacy and security experts who have been advocating for this ever since the app launched in early April.

Ajay Prakash Sawhney, secretary in the Ministry of Electronics and Information Technology, made the announcement on Tuesday, describing it as “opening the heart” of Aarogya Setu’s Android app, which has amassed over 115 million users in fewer than 60 days, to allow engineers to inspect and tinker with the code. The source code will be published on GitHub at midnight Tuesday (local time).

Sawhney said the government will also offer a cash prize of up to $1,325 for identifying and reporting bugs and vulnerabilities in Aarogya Setu’s code.

More to follow…

 



from Android – TechCrunch https://ift.tt/2XyIKpd
via IFTTT

A new Android bug, Strandhogg 2.0, lets malware pose as real apps and steal user data

Security researchers have found a major vulnerability in almost every version of Android, which lets malware imitate legitimate apps to steal app passwords and other sensitive data.

The vulnerability, dubbed Strandhogg 2.0 (named after the Norse term for a hostile takeover), affects all devices running Android 9.0 and earlier. It’s the “evil twin” of an earlier bug of the same name, according to Norwegian security firm Promon, which discovered both vulnerabilities six months apart. Strandhogg 2.0 works by tricking a victim into thinking they’re entering their passwords on a legitimate app when they are in fact interacting with a malicious overlay. Strandhogg 2.0 can also hijack other apps’ permissions to siphon off sensitive user data, such as contacts and photos, and track a victim’s real-time location.

The bug is said to be more dangerous than its predecessor because it’s “nearly undetectable,” Tom Lysemose Hansen, founder and chief technology officer at Promon, told TechCrunch.

The good news is that Promon said it has no evidence that hackers have used the bug in active hacking campaigns. The caveat is that there are “no good ways” to detect an attack. Fearing the bug could still be abused by hackers, Promon delayed releasing details of the bug until Google could fix the “critical”-rated vulnerability.

A spokesperson for Google told TechCrunch that the company also saw no evidence of active exploitation. “We appreciate the work of the researchers, and have released a fix for the issue they identified.” The spokesperson said Google Play Protect, an app-screening service built into Android devices, blocks apps that exploit the Strandhogg 2.0 vulnerability.

Strandhogg 2.0 works by abusing Android’s multitasking system, which keeps tabs on every recently opened app so that the user can quickly switch back and forth. A victim would have to download a malicious app — disguised as a normal app — that can exploit the vulnerability. Once it’s installed, when the victim opens a legitimate app, the malicious app quickly hijacks it and injects malicious content in its place, such as a fake login window.

When a victim enters their password on the fake overlay, their passwords are siphoned off to the hacker’s servers. The real app then appears as though the login was real.
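The hijack-overlay-hand-off sequence described above can be modeled in a few lines. This is a conceptual Python simulation of the attack flow, not Android code; every class and method name here is hypothetical, chosen only to illustrate why the victim sees nothing wrong:

```python
# Toy model: the malicious "twin" sits at the top of the task stack,
# so opening the real app brings up a fake login overlay instead.

class RealApp:
    def home_screen(self):
        return "logged in"

class FakeOverlay:
    captured = []  # credentials exfiltrated to the attacker

    def login(self, user, password):
        FakeOverlay.captured.append((user, password))
        return RealApp()  # hand off to the real app so nothing looks wrong

task_stack = [FakeOverlay()]          # hijacked position of the real app
app = task_stack[-1].login("alice", "hunter2")

print(app.home_screen())              # "logged in" -- the victim notices nothing
print(FakeOverlay.captured)           # [('alice', 'hunter2')]
```

The design flaw being abused is precisely the seamless hand-off: because the victim lands in a genuine logged-in session afterward, there is no visible failure to arouse suspicion.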

Strandhogg 2.0 doesn’t need any Android permissions to run, but it can also hijack the permissions of other apps that have access to a victim’s contacts, photos, and messages by triggering a permissions request.

“If the permission is granted, then the malware now has this dangerous permission,” said Hansen.

Once that permission is granted, the malicious app can upload data from a user’s phone. The malware can upload entire text message conversations, said Hansen, allowing the hackers to defeat two-factor authentication protections.

The risk to users is likely low, but not zero. Promon said updating Android devices with the latest security updates — out now — will fix the vulnerability. Users are advised to update their Android devices as soon as possible.



from Android – TechCrunch https://ift.tt/2X3jWa8
via IFTTT

Saturday, 23 May 2020

Hackers release a new jailbreak that unlocks every iPhone

A renowned iPhone hacking team has released a new “jailbreak” tool that unlocks every iPhone, even the most recent models running the latest iOS 13.5.

For as long as Apple has kept up its “walled garden” approach to iPhones by only allowing apps and customizations that it approves, hackers have tried to break free from what they call the “jail” — hence the name “jailbreak.” Hackers do this by finding a previously undisclosed vulnerability in iOS that breaks through some of the many restrictions that Apple puts in place to prevent access to the underlying software. Apple says it does this for security. But jailbreakers say breaking through those restrictions allows them to customize their iPhones more than they otherwise could, in a way that most Android users are already accustomed to.

The jailbreak, released by the unc0ver team, supports all iPhones that run iOS 11 and above, including up to iOS 13.5, which Apple released this week.

Details of the vulnerability that the hackers used to build the jailbreak aren’t known, but it’s not expected to last forever. Just as jailbreakers work to find a way in, Apple works fast to patch the flaws and close the jailbreak.

Security experts typically advise iPhone users against jailbreaking, because breaking out of the “walled garden” vastly increases the surface area for new vulnerabilities to exist and to be found.

The jailbreak comes at a time when the shine is wearing off Apple’s typically strong security image. Last week, Zerodium, a broker for exploits, said it would no longer buy certain iPhone vulnerabilities because there were too many of them. Motherboard reported this week that hackers got their hands on a pre-release version of the upcoming iOS 14 several months ago.



from iPhone – TechCrunch https://ift.tt/3bVIBkT


This Week in Apps: Facebook takes on Shopify, Tinder considers its future, contact-tracing tech goes live

Welcome back to This Week in Apps, the Extra Crunch series that recaps the latest OS news, the applications they support and the money that flows through it all.

The app industry is as hot as ever, with a record 204 billion downloads and $120 billion in consumer spending in 2019. People are now spending three hours and 40 minutes per day using apps, rivaling TV. Apps aren’t just a way to pass idle hours — they’re a big business. In 2019, mobile-first companies had a combined $544 billion valuation, 6.5x higher than those without a mobile focus.

In this Extra Crunch series, we help you keep up with the latest news from the world of apps, delivered on a weekly basis.

This week we’re continuing to look at how the coronavirus outbreak is impacting the world of mobile applications. Notably, we saw the launch of the Apple/Google exposure-notification API with the latest version of iOS, out this week. The pandemic is also inspiring other new apps and features, including upcoming additions to Apple’s Schoolwork that focus on distance learning, as well as Facebook’s new Shops feature, designed to help small businesses shift their operations online in the wake of physical retail closures.

Tinder, meanwhile, seems to be toying with the idea of pivoting to a global friend finder and online hangout in the wake of social distancing, with its test of a feature that allows users to match with others worldwide — meaning, with no intention of in-person dating.

Headlines

COVID-19 apps in the news

  • Fitbit app: The fitness tracker app launched a COVID-19 early detection study aimed at determining whether wearables can help detect COVID-19 or the flu. The study will ask volunteers questions about their health, including whether they had COVID-19, then pair that with activity data to see if there are any clues that could be used to build an early warning algorithm of sorts.
  • U.K. contact-tracing app: The app won’t be ready in mid-May as promised, as the government mulls the use of the Apple/Google API. In testing, the existing app drains the phone battery too quickly. In addition, researchers have recently identified seven security flaws in the app, which is currently being trialed on the Isle of Wight.

Apple launches iOS/iPadOS 13.5 with Face ID tweak and contact-tracing API

Apple this week released the latest version of iOS/iPadOS with two new features related to the pandemic. The first is an update to Face ID, which can now tell when the user is wearing a mask. In those cases, Face ID will instead switch to the passcode field so you can type in your code to unlock your phone or authenticate with apps like the App Store, Apple Books, Apple Pay, iTunes and others.

The other new feature is the launch of the exposure-notification API jointly developed by Apple and Google. The API allows public health organizations and governments to develop apps that can help determine whether someone has been exposed to COVID-19. Apps that support the API have yet to launch, but some 22 countries have requested access.



from Android – TechCrunch https://ift.tt/2ZvkdUU
via IFTTT