Friday, 22 May 2020

The FBI is mad because it keeps getting into locked iPhones without Apple’s help

The debate over encryption continues to drag on without end.

In recent months, the discourse has largely swung away from encrypted smartphones to focus instead on end-to-end encrypted messaging. But a recent press conference by the heads of the Department of Justice (DOJ) and the Federal Bureau of Investigation (FBI) showed that the debate over device encryption isn’t dead; it was merely resting. And it just won’t go away.

At the presser, Attorney General William Barr and FBI Director Chris Wray announced that after months of work, FBI technicians had succeeded in unlocking the two iPhones used by the Saudi military officer who carried out a terrorist shooting at the Pensacola Naval Air Station in Florida in December 2019. The shooter died in the attack, which was quickly claimed by Al Qaeda in the Arabian Peninsula.

Early this year — a solid month after the shooting — Barr had asked Apple to help unlock the phones (one of which was damaged by a bullet), which were older iPhone 5 and 7 models. Apple provided “gigabytes of information” to investigators, including “iCloud backups, account information and transactional data for multiple accounts,” but drew the line at assisting with the devices. The situation threatened to revive the 2016 “Apple versus FBI” showdown over another locked iPhone following the San Bernardino terror attack.

After the government went to federal court to try to dragoon Apple into doing investigators’ job for them, the dispute ended anticlimactically when the government got into the phone itself after purchasing an exploit from an outside vendor the government refused to identify. The Pensacola case culminated much the same way, except that the FBI apparently used an in-house solution instead of a third party’s exploit.

You’d think the FBI’s success at a tricky task (remember, one of the phones had been shot) would be good news for the Bureau. Yet an unmistakable note of bitterness tinged the laudatory remarks at the press conference for the technicians who made it happen. Despite the Bureau’s impressive achievement, and despite the gobs of data Apple had provided, Barr and Wray devoted much of their remarks to maligning Apple, with Wray going so far as to say the government “received effectively no help” from the company.

This diversion tactic worked: in news stories covering the press conference, headline after headline after headline highlighted the FBI’s slam against Apple instead of focusing on what the press conference was nominally about: the fact that federal law enforcement agencies can get into locked iPhones without Apple’s assistance.

That should be the headline news, because it’s important. That inconvenient truth undercuts the agencies’ longstanding claim that they’re helpless in the face of Apple’s encryption and thus the company should be legally forced to weaken its device encryption for law enforcement access. No wonder Wray and Barr are so mad that their employees keep being good at their jobs.

By reviving the old blame-Apple routine, the two officials managed to evade a number of questions that their press conference left unanswered. What exactly are the FBI’s capabilities when it comes to accessing locked, encrypted smartphones? Wray claimed the technique developed by FBI technicians is “of pretty limited application” beyond the Pensacola iPhones. How limited? What other phone-cracking techniques does the FBI have, and which handset models and which mobile OS versions do those techniques reliably work on? In what kinds of cases, for what kinds of crimes, are these tools being used?

We also don’t know what’s changed internally at the Bureau since that damning 2018 Inspector General postmortem on the San Bernardino affair. Whatever happened with the FBI’s plans, announced in the IG report, to lower the barrier within the agency to using national security tools and techniques in criminal cases? Did that change come to pass, and did it play a role in the Pensacola success? Is the FBI cracking into criminal suspects’ phones using classified techniques from the national security context that might not pass muster in a court proceeding (were their use to be acknowledged at all)?

Further, how do the FBI’s in-house capabilities complement the larger ecosystem of tools and techniques for law enforcement to access locked phones? Those include third-party vendors GrayShift and Cellebrite’s devices, which, in addition to the FBI, count numerous U.S. state and local police departments and federal immigration authorities among their clients. When plugged into a locked phone, these devices can bypass the phone’s encryption to yield up its contents, and (in the case of GrayShift) can plant spyware on an iPhone to log its passcode when police trick a phone’s owner into entering it. These devices work on very recent iPhone models: Cellebrite claims it can unlock any iPhone for law enforcement, and the FBI has unlocked an iPhone 11 Pro Max using GrayShift’s GrayKey device.
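Neither vendor documents how its tools work, but public reporting has described brute-forcing the passcode while sidestepping iOS’s retry limits as one approach. A back-of-the-envelope sketch shows why numeric passcodes fall quickly even at heavily throttled guess rates. The guess rate below is purely an illustrative assumption, not a vendor figure:

```python
# Rough worst-case time to exhaust a numeric passcode space at a given
# guess rate. The rate is an illustrative assumption, not a published
# figure for GrayKey or Cellebrite hardware.
def worst_case_hours(digits: int, guesses_per_second: float) -> float:
    keyspace = 10 ** digits          # all possible numeric passcodes
    return keyspace / guesses_per_second / 3600

for digits in (4, 6, 8):
    # Assume hardware key derivation throttles guessing to ~12.5/sec,
    # chosen here only to make the arithmetic concrete.
    print(digits, round(worst_case_hours(digits, 12.5), 1))
# → 4 digits ≈ 0.2 hours, 6 digits ≈ 22.2 hours, 8 digits ≈ 2222.2 hours
```

Even under this assumed throttle, a four-digit passcode falls in minutes and a six-digit one within a day, which is consistent with these devices being practical against recent iPhones regardless of OS version.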

In addition to Cellebrite and GrayShift, which have a well-established U.S. customer base, the ecosystem of third-party phone-hacking companies includes entities that market remote-access phone-hacking software to governments around the world. Perhaps the most notorious example is the Israel-based NSO Group, whose Pegasus software has been used by foreign governments against dissidents, journalists, lawyers and human rights activists. The company’s U.S. arm has attempted to market Pegasus domestically to American police departments under another name. Which third-party vendors are supplying phone-hacking solutions to the FBI, and at what price?

Finally, who else besides the FBI will be the beneficiary of the technique that worked on the Pensacola phones? Does the FBI share the vendor tools it purchases, or its own home-rolled ones, with other agencies (federal, state, tribal or local)? Which tools, which agencies and for what kinds of cases? Even if it doesn’t share the techniques directly, will it use them to unlock phones for other agencies, as it did for a state prosecutor soon after purchasing the exploit for the San Bernardino iPhone?

We have little idea of the answers to any of these questions, because the FBI’s capabilities are a closely held secret. What advances and breakthroughs it has achieved, and which vendors it has paid, we (who provide the taxpayer dollars to fund this work) aren’t allowed to know. And the agency refuses to answer questions about encryption’s impact on its investigations even from members of Congress, who can be privy to confidential information denied to the general public.

The only public information coming out of the FBI’s phone-hacking black box is nothingburgers like the recent press conference. At an event all about the FBI’s phone-hacking capabilities, Director Wray and AG Barr cunningly managed to deflect the press’s attention onto Apple, dodging any difficult questions, such as what the FBI’s abilities mean for Americans’ privacy, civil liberties and data security, or even basic questions like how much the Pensacola phone-cracking operation cost.

As the recent PR spectacle demonstrated, a press conference isn’t oversight. And instead of exerting its oversight power, mandating more transparency, or requiring an accounting and cost/benefit analysis of the FBI’s phone-hacking expenditures — instead of demanding a straight and conclusive answer to the eternal question of whether, in light of the agency’s continually evolving capabilities, there’s really any need to force smartphone makers to weaken their device encryption — Congress is instead coming up with dangerous legislation such as the EARN IT Act, which risks undermining encryption right when a population forced by COVID-19 to do everything online from home can least afford it.

The best-case scenario now is that the federal agency that proved its untrustworthiness by lying to the Foreign Intelligence Surveillance Court can crack into our smartphones, but maybe not all of them; that maybe it isn’t sharing its toys with state and local police departments (which are rife with domestic abusers who’d love to get access to their victims’ phones); that unlike third-party vendor devices, maybe the FBI’s tools won’t end up on eBay where criminals can buy them; and that hopefully it hasn’t paid taxpayer money to the spyware company whose best-known government customer murdered and dismembered a journalist.

The worst-case scenario would be that, between in-house and third-party tools, pretty much any law enforcement agency can now reliably crack into everybody’s phones, and yet nevertheless this turns out to be the year they finally get their legislative victory over encryption anyway. I can’t wait to see what else 2020 has in store.



from iPhone – TechCrunch https://ift.tt/2ZstkFH


First major GDPR decisions looming on Twitter and Facebook

The lead data regulator for much of big tech in Europe is moving inexorably towards issuing its first major cross-border GDPR decision — saying today it’s submitted a draft decision related to Twitter’s business to its fellow EU watchdogs for review.

“The draft decision focusses on whether Twitter International Company has complied with Articles 33(1) and 33(5) of the GDPR,” said the Irish Data Protection Commission (DPC) in a statement.

Europe’s General Data Protection Regulation came into application two years ago as an update to the European Union’s long-standing data protection framework, one that bakes in supersized fines for compliance violations. More interestingly, regulators have the power to order that violating data processing cease, and in many EU countries third parties such as consumer rights groups can file complaints on behalf of individuals.

Since the GDPR began being applied, there have been thousands of complaints filed across the bloc, targeting companies large and small — alongside a rising clamour around a lack of enforcement in major cross-border cases pertaining to big tech.

So the timing of the DPC’s announcement on reaching a draft decision in its Twitter probe is likely no accident. (GDPR’s actual anniversary of application is May 25.)

The draft decision relates to an inquiry the regulator instigated itself, in November 2018, after the social network had reported a data breach — as data controllers are required to do promptly under GDPR, risking penalties should they fail to do so.

Other interested EU watchdogs (all of them in this case) will now have one month to consider the decision — and lodge “reasoned and relevant objections” should they disagree with the DPC’s reasoning, per the GDPR’s one-stop-shop mechanism which enables EU regulators to liaise on cross-border inquiries.

In instances where there is disagreement between DPAs on a decision the regulation contains a dispute resolution mechanism (Article 65) — which loops in the European Data Protection Board (EDPB) to make a final decision on a majority basis.

On the Twitter decision, the DPC told us it’s hopeful this can be finalized in July.

Commissioner Helen Dixon has previously said the first cross-border decisions would be coming “early” in 2020. However, the complexity of working through new processes — such as the one-stop-shop — appears to have taken EU regulators longer than hoped.

The DPC is also dealing with a massive case load at this point, with more than 20 cross border investigations related to complaints and/or inquiries still pending decisions — with active probes into the data processing habits of a large number of tech giants; including Apple, Facebook, Google, Instagram, LinkedIn, Tinder, Verizon (TechCrunch’s parent company) and WhatsApp — in addition to its domestic caseload (operating with a budget that’s considerably less than it requested from the Irish government).

The scope of some of these major cross-border inquiries may also have bogged Ireland’s regulator down.

But — two years in — there are signs of momentum picking up, with the DPC’s deputy commissioner, Graham Doyle, pointing today to developments on four additional investigations from the cross-border pile — all of which concern Facebook owned platforms.

The furthest along of these is a probe into the level of transparency the tech giant provides about how user data is shared between its WhatsApp and Facebook services.

“We have this week sent a preliminary draft decision to WhatsApp Ireland Limited for their submissions which will be taken in to account by the DPC before preparing a draft decision in that matter also for Article 60 purposes,” said Doyle in a statement on that. “The inquiry into WhatsApp Ireland examines its compliance with Articles 12 to 14 of the GDPR in terms of transparency including in relation to transparency around what information is shared with Facebook.”

The other three cases the DPC said it’s making progress on relate to GDPR consent complaints filed back in May 2018 by the EU privacy rights not-for-profit, noyb.

noyb argues that Facebook uses a strategy of “forced consent” to continue processing individuals’ personal data — when the standard required by EU law is for users to be given a free choice unless consent is strictly necessary for provision of the service. (And noyb argues that microtargeted ads are not core to the provision of a social networking service; contextual ads could instead be served, for example.)

Back in January 2019, Google was fined $57M by France’s data watchdog, CNIL, over a similar complaint.

Per its statement today, the DPC said it has now completed the investigation phase of this complaint-based inquiry which it said is focused on “Facebook Ireland’s obligations to establish a lawful basis for personal data processing”.

“This inquiry is now in the decision-making phase at the DPC,” it added.

In further related developments it said it’s sent draft inquiry reports to the complainants and companies concerned for the same set of complaints for (Facebook owned) Instagram and WhatsApp. 

Doyle declined to give any firm timeline for when any of these additional inquiries might yield final decisions. But a summer date would, presumably, be the very earliest timeframe possible.

The regulator’s hope looks to be that once the first cross-border decision has made it through the GDPR’s one-stop-shop mechanism — and yielded something all DPAs can sign up to — it will grease the tracks for the next tranche of decisions.

That said, not all inquiries and decisions are equal, clearly. And what exactly the DPC decides in such high-profile probes will be key to whether or not there’s disagreement from other data protection agencies. Different EU DPAs can take a harder or softer line on applying the bloc’s rules, with some considerably more ‘business friendly’ than others — albeit the GDPR was intended to try to shrink such differences of application.

If there is disagreement among regulators on major cross-border cases, such as the Facebook ones, the GDPR’s one-stop-shop mechanism will require more time to work through to find consensus. So critics of the regulation are likely to still have plenty to attack.

Some of the inquiries the DPC is leading are also likely to set standards which could have major implications for many platforms and digital businesses so there will be vested interests seeking to influence outcomes on all sides. But with GDPR hitting its second birthday — and still hardly any decision-shaped lumps taken out of big tech — the regional pressure for enforcements to get flowing is massive.

Given the blistering pace of tech developments — and the market muscle of big tech being applied to steamroller individual rights — EU regulators have to be able to close the gap between investigation and enforcement or watch their flagship framework derided as a paper tiger…

Schrems II

Summer is also shaping up to be an interesting time for privacy watchers for another reason, with a landmark decision due from Europe’s top court on July 16 on the so called ‘Schrems II’ case (named for the Austrian lawyer, privacy rights campaigner and noyb founder, Max Schrems, who lodged the original complaint) — which relates to the legality of Standard Contractual Clauses (SCC) as a mechanism for personal data transfers out of the EU.

The DPC’s statement today makes a point of flagging this looming decision, with the regulator writing: “The case concerns proceedings initiated and pursued in the Irish High Court by the DPC which raised a number of significant questions about the regulation of international data transfers under EU data protection law. The judgement from the CJEU on foot of the reference made arising from these proceedings is anticipated to bring much needed clarity to aspects of the law and to represent a milestone in the law on international transfers.”

A legal opinion issued at the end of last year by an influential advisor to the court emphasized that EU data protection authorities have an obligation to step in and suspend data transfers by SCC if they are being used to send citizens’ data to a place where their information cannot be adequately protected.

Should the court hold to that view, all EU DPAs will have an obligation to consider the legality of SCC transfers to the US “on a case-by-case basis”, per Doyle.

“It will be in every single case you’d have to go and look at the set of circumstances in every single case to make a judgement whether to instruct them to cease doing it. There won’t be just a one size fits all,” he told TechCrunch. “It’s an extremely significant ruling.”

(If you’re curious about ‘Schrems I’, read this from 2015.)



from Apple – TechCrunch https://ift.tt/3cU5dne

Apple’s handling of Siri snippets back in the frame after letter of complaint to EU privacy regulators

Apple is facing fresh questions from its lead data protection regulator in Europe following a public complaint by a former contractor who revealed last year that workers doing quality grading for Siri were routinely overhearing sensitive user data.

Earlier this week the former Apple contractor, Thomas le Bonniec, sent a letter to European regulators laying out his concern at the lack of enforcement on the issue — in which he wrote: “I am extremely concerned that big tech companies are basically wiretapping entire populations despite European citizens being told the EU has one of the strongest data protection laws in the world. Passing a law is not good enough: it needs to be enforced upon privacy offenders.”

The timing of the letter comes as Europe’s updated data protection framework, the GDPR, reaches its two-year anniversary — facing ongoing questions around the lack of enforcement related to a string of cross-border complaints.

Ireland’s Data Protection Commission (DPC) has been taking the brunt of criticism over whether the General Data Protection Regulation is functioning as intended — as a result of how many tech giants locate their regional headquarters on its soil (Apple included).

Responding to the latest Apple complaint from le Bonniec, the DPC’s deputy commissioner, Graham Doyle, told TechCrunch: “The DPC engaged with Apple on this issue when it first arose last summer and Apple has since made some changes. However, we have followed up again with Apple following the release of this public statement and await responses.”

At the time of writing Apple had not responded to a request for comment.

The Irish DPC is currently handling more than 20 major cross-border cases as lead data protection agency — probing the data processing activities of companies including Apple, Facebook, Google and Twitter. So le Bonniec’s letter adds to the pile of pressure on commissioner Helen Dixon to begin issuing decisions vis-a-vis cross-border GDPR complaints. (Some of which are now a full two years old.)

Last year Dixon said the first decisions for these cross-border cases would be coming “early” in 2020.

At issue is that if Europe’s recently updated flagship data protection regime isn’t seen to be functioning well two years in — and is still saddled with a bottleneck of high profile cases, rather than having a string of major decisions to its name — it will be increasingly difficult for the region’s lawmakers to sell it as a success.

At the same time the existence of a pan-EU data protection regime — and the attention paid to contravention, by both media and regulators — has had a tangible impact on certain practices.

Apple suspended human review of Siri snippets globally last August, after The Guardian had reported that contractors it employed to review audio recordings of users of its voice assistant tech — for quality grading purposes — regularly listened in to sensitive content such as medical information and even recordings of couples having sex.

Later the same month it made changes to the grading program, switching audio review to an explicitly opt-in process. It also brought the work in house — meaning only Apple employees have since been reviewing Siri users’ opt-in audio.

The tech giant also apologized. But did not appear to face any specific regulatory sanction for practices that do look to have been incompatible with Europe’s laws — owing to the lack of transparency and explicit consent around the human review program. Hence le Bonniec’s letter of complaint now.

A number of other tech giants also made changes to their own human grading programs around the same time.

Doyle also pointed out that guidance for EU regulators on voice AI tech is in the works, saying: “It should be noted that the European Data Protection Board is working on the production of guidance in the area of voice assistant technologies.”

We’ve reached out to the European Data Protection Board for comment.



from Apple – TechCrunch https://ift.tt/2WRtchj

New non-profit from Google Maps co-creator offers temporary ‘safe’ passes to aid COVID-19 reopening effort

There are a number of different technologies both proposed and in development to help smooth the reopening of parts of the economy even as the threat of the global COVID-19 pandemic continues. One such tech solution launching today comes from Brian McClendon, co-founder of Keyhole, the company that Google purchased in 2004 that would form the basis of Google Earth and Google Maps. McClendon’s new CVKey Project is a registered non-profit that is launching with an app for symptom self-assessment that generates a temporary QR code which will work with participating community facilities as a kind of health ‘pass’ on an opt-in basis.

Ultimately, CVKey Project hopes to launch an entire suite of apps dedicated to making it easier to reopen public spaces safely, including apps for things like exposure notification, for which Apple and Google have partnered to deliver a framework that works across both of their mobile operating systems. CVKey is also going to provide information about what types of facilities are open under current government guidelines, as well as what those places are doing in terms of their own policies to prevent the spread of COVID-19 as much as possible.

The core element of CVKey Project’s approach, however, is a QR code generated by its app that essentially acts as verification that you’re ‘safe’ to enter one of these shared spaces. The system is designed with user privacy in mind, according to McClendon: any identity or health data exists only on a user’s individual device, and it is never uploaded to a cloud server or shared without the user’s consent and information about what that sharing entails. Users volunteer only their own health info, the app never asks for location information, and most of what it does can in fact be done without an internet connection at all, McClendon explains.

When you generate a QR code for use at a place that has opted in to the system, staff scan it and receive a simple binary indicator of whether or not you’re cleared to pass, based on the policies they’ve set. They don’t see any specifics about your health information: the code encodes the particulars (whether you have shown symptoms, which ones and how recently, for instance), that data is matched against the policy set for that particular public space, and the scanner gets only a go/no-go response.
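The check described above reduces to matching a private, on-device payload against a venue policy and surfacing only a boolean. Here is a minimal sketch of that idea; the payload fields and policy structure are illustrative assumptions, not CVKey’s actual implementation.

```python
import json

def build_payload(symptoms, days_since_last_symptom):
    """Health particulars encoded into the QR code; stays on-device.
    Field names here are hypothetical."""
    return json.dumps({
        "symptoms": symptoms,                      # e.g. ["cough", "fever"]
        "days_since_last_symptom": days_since_last_symptom,
    })

def check_entry(qr_payload, venue_policy):
    """The scanner decodes the payload, applies the venue's policy, and
    surfaces only a binary go/no-go -- never the health particulars."""
    data = json.loads(qr_payload)
    flagged = any(s in venue_policy["blocked_symptoms"]
                  for s in data["symptoms"])
    if flagged and data["days_since_last_symptom"] < venue_policy["symptom_free_days"]:
        return False  # no-go
    return True       # go

# A venue's policy: block recent fever or cough within a 14-day window.
policy = {"blocked_symptoms": {"fever", "cough"}, "symptom_free_days": 14}
print(check_entry(build_payload([], 999), policy))       # prints True
print(check_entry(build_payload(["fever"], 3), policy))  # prints False
```

The key privacy property is that `check_entry` runs on the scanning side with nothing retained: the venue only ever observes the returned boolean.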

McClendon created CVKey Project together with Manik Gupta and Waleed Kadous, with whom he worked previously at Google Earth, Google Maps and Uber, as well as Dr. Marci Nielsen, a public health specialist with a long history of leadership at both public and private institutions.

The apps created by CVKey Project will be available soon, and the non-profit is looking for potential partners to participate in its program. Like just about everything else designed to address the COVID-19 crisis, it’s not a simple fix, but it could form part of a larger strategy that provides a path forward for dealing with the pandemic.



from Apple – TechCrunch https://ift.tt/3eaXCkm

Thursday, 21 May 2020

Personal finance tracker Copilot adds support for Apple Card spreadsheet imports

When Apple added the ability to export transactions via spreadsheet to its credit card, Matthew hit up the folks at Copilot, asking whether they planned to support the feature. The answer was essentially “not yet, but soon.” This week, however, it’s finally official.

The makers of the personal finance tracking app announced that users can now import the Apple Card’s CSV spreadsheet into Copilot. The app will then go to work categorizing the transactions into topics, like transportation, subscription services, shops and restaurants.

Those who manually manage their expenses can consolidate the information into a single place, while the app removes any duplicates from the list. From there, it will create a historical balance and utilization rate for the Apple Card.
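The import described above amounts to parsing the exported CSV, skipping rows already seen, and tagging each new transaction with a spending category. A hedged sketch of that kind of pass follows; the column names and keyword rules are illustrative assumptions, not Copilot’s actual code.

```python
import csv
import io

# Toy keyword -> category map standing in for Copilot's richer
# algorithmic categorization.
CATEGORY_RULES = {
    "uber": "transportation",
    "netflix": "subscription services",
    "whole foods": "shops",
    "chipotle": "restaurants",
}

def categorize(merchant):
    m = merchant.lower()
    for keyword, category in CATEGORY_RULES.items():
        if keyword in m:
            return category
    return "other"

def import_statement(csv_text, seen_keys):
    """Parse an Apple Card-style CSV export, drop duplicates of
    already-imported rows, and tag each new transaction."""
    imported = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["Transaction Date"], row["Merchant"], row["Amount (USD)"])
        if key in seen_keys:
            continue  # duplicate from a prior import
        seen_keys.add(key)
        imported.append({**row, "category": categorize(row["Merchant"])})
    return imported
```

Keeping the dedupe key as (date, merchant, amount) means re-importing an overlapping statement is harmless, which is the friction-removal the article describes.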

Removing as much friction as possible from a daunting subject like expenses is the bread and butter of apps like Copilot, and the Apple integration looks to be a stupidly easy way to keep charges organized in one convenient spot. Copilot’s chief competitor Mint already accepts spreadsheet imports, as do other apps, including Clarity Money, YNAB and Lunch Money.

Unfortunately, there’s no automated way to import the sheets at the moment, meaning you’ll have to do it manually for each statement. Copilot founder Andres Ugarte says the company is working on a fully automated process. Per Ugarte, “Apple Card support has been a top request from our users since we launched. This integration required extensive backend development to ensure that upon import, Copilot could seamlessly integrate Apple Card data with the rest of a user’s financial life. We wanted to ensure we weren’t cutting any corners, and that Apple Card transactions could take advantage of the same algorithmic categorization and analysis that Copilot uses for other financial institutions.”



from Apple – TechCrunch https://ift.tt/2TuK7nY

Wednesday, 20 May 2020

Identity management startup Truework raises $30M to help you verify your work history

As organizations look for safe and efficient ways of running their services in the new global paradigm of increased social distancing, a startup that has built a platform to help people verify their work details in a secure way is announcing a round of growth funding.

Truework, which provides a way for banks, apartment-rental agencies and others to check the employment details of an applicant quickly and securely online, has raised $30 million. CEO and co-founder Ryan Sandler said in an interview that the money will be used both to grow its existing business and to explore adding more details, via its own service and via third-party partnerships, to the identity information that it shares.

The Series B is being led by Activant Capital, a VC that focuses on B2B2C startups, with participation also from Sequoia Capital and Khosla Ventures, as well as a number of high-profile execs and entrepreneurs, among them Jeff Weiner (LinkedIn), Tom Gonser (DocuSign), William Hockey (Plaid) and Daniel Yanisse (Checkr).

The LinkedIn connection is an interesting one. Both Sandler and co-founder Victor Kabdebon were engineers at LinkedIn working on profile and improving the kind of data that LinkedIn sources on its users (the third co-founder, Ethan Winchell, previously worked elsewhere), and while Sandler tells me that the idea for Truework came to them after both left the company, he sees LinkedIn “as a potential partner here,” so watch this space.

The problem that Truework is aiming to solve is the very clunky, and often insecure, nature of how organizations typically verify an individual’s employment information. Details about salary and where you work, and the job you do, are typically essential for larger financial transactions, whether it’s securing a mortgage or another financing loan, or renting an apartment, or for others who might need to verify that information for other purposes, such as staffing agencies.

Typically that kind of information is time-consuming both to request and to confirm (Sandler cites statistics saying the average HR person spends over 1,000 hours annually answering such questions). And some of the systems that have been put in place to do that work, specifically consumer reporting agencies, have proven not to be as watertight in their security as you would hope.

“Your data is flowing around lots of third party platforms,” Sandler said. “You’re releasing a lot of information about yourself and you don’t know where the data is going and if it’s even accurate.”

Truework’s solution is based around a platform, and now an API, that a company buys into. In turn, it gives its employees the ability to consent to using it. If the employee agrees, Truework sources a worker’s place of employment and salary details. Then when a third party wants to verify that information for the person in question, it uses Truework to do so, rather than contacting the company directly.

Then, when those queries come in, Truework contacts the individual with an email or text about the inquiry, so that he/she can okay (or reject) the request. Truework’s Sandler said that it maintains ISO 27001 and SOC 2 Type 1 and 2 protections, but he also confirmed that it does store your data.
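The flow in the last two paragraphs is essentially a consent gate: a verifier opens a request, the employee is notified, and employment details are released only after explicit approval. A minimal model of that state machine, with class and field names that are assumptions for the sketch rather than Truework’s real API:

```python
from dataclasses import dataclass

@dataclass
class VerificationRequest:
    verifier: str            # e.g. a bank or rental agency
    employee_email: str
    status: str = "pending"  # pending -> approved / rejected

class VerificationService:
    def __init__(self, records):
        self.records = records   # employee email -> employment details
        self.requests = []

    def open_request(self, verifier, employee_email):
        req = VerificationRequest(verifier, employee_email)
        self.requests.append(req)
        # In practice, the employee is notified by email or text here.
        return req

    def respond(self, req, approve):
        """Recorded when the employee okays or rejects the inquiry."""
        req.status = "approved" if approve else "rejected"

    def fetch(self, req):
        """Details are released only after explicit employee consent."""
        if req.status != "approved":
            return None
        return self.records.get(req.employee_email)
```

Usage: a pending or rejected request yields nothing, so the verifier never touches employment data the employee hasn’t signed off on.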

Currently the idea is that if you leave your job, your next employer would also need to be a Truework customer in order to update the information it has on you. The startup makes money by charging both larger enterprises that make the platform accessible to employees and the organizations querying for the information/verifications (small-business employers can use the platform for free).

Over time, the plan will be to configure a way to update your profiles regardless of where you work.

So far, the concept has seen a lot of traction: there are 20,000 small businesses using the platform, as well as 100 enterprises, with the number of verifiers (its term for those requesting information) now at 40,000. Customers include The College Board, The RealReal, Oscar Health, The Motley Fool, and Tuft & Needle.

While all of this was built at a time before COVID-19, the global health pandemic has highlighted the importance of having more efficient and secure systems for doing work, especially at a time when many people are not in the office.

“Our biggest competitor is the fax machine and the phone call,” Sandler said, “but as companies move to more remote working, no one is manning the phones or fax machines. But these operations still need to happen.” Indeed, he points out that Truework had 25,000 verifiers at the end of 2019; growing that figure to 40,000 in roughly five months speaks to the huge boost in business it has seen.

That is part of the reason the company has attracted the investment it has.

“Truework’s platform sits at the center of consumers’ most important transactions and life events – from purchasing a home, to securing a new job,” said Steve Sarracino, founder and partner at Activant Capital, in a statement. “Up until now, the identity verification process has been painful, expensive, and opaque for all parties involved, something we’ve seen first-hand in the mortgage space. Starting with income and employment, Truework is setting the standard for consent-based verifications and unlocking the next wave of the digital economy. We’re thrilled to be partnering with this exceptional team as they continue to scale the platform.” Sarracino is joining the board with this round.

While a big focus in the world of tech right now may be on building more and better ways of connecting goods and services to people in as contact-free a way as possible, the bigger play around identity management has been around for years, and will continue to be a huge part of how the internet develops in the future.

The fax and phone may be the primary tools these days for verifying employment information, but on a more general level, companies like Facebook, Google and Apple already play a big role in how we “log in” to and use all kinds of services online. They, along with others focused squarely on the identity and verification space (Truework works with some of them), use a myriad of approaches, including biometrics, ‘wallet’-style passports that link to information elsewhere, and more. All will continue to make the case for why they might be the most trusted provider of that layer of information, at a time when we may want to share less, and especially to share less with multiple parties.

That is the bigger opportunity that investors are betting on here.

“The increasing momentum Truework has seen since its founding in 2017 demonstrates the critical need for transformation in this space,” said Alfred Lin, partner at Sequoia, in a statement. “Privacy, especially around identity data, is becoming increasingly top of mind for consumers and how they make transactions online.”

Truework has now raised close to $45 million, and it’s not disclosing its valuation.



from Apple – TechCrunch https://ift.tt/2z7jVsC