Friday, 29 June 2018

Questions about Apple’s new Maps, answered

Earlier today we revealed that Apple was re-building maps from the ground up. These are some questions from readers that came up when we went live. You can ask more questions here and I’ll try to add them.

What part of Maps will be new?

The actual map. Apple is building it from scratch, with its own data rather than relying on external partners.

What does that mean in terms of what I’ll see?

New foliage markers that show more accurately where ground cover like grass and trees exists. Pools, parking lots, exact building shapes, sports areas like baseball diamonds, tennis and basketball courts, and pedestrian pathways that are commonly walked but were previously unmapped. There are also new features like the ability to determine where building entrances are, based on map data.

Will it look visually different?

Only with regard to additional detail. Maps is not getting a visual “overhaul” yet (it was implied that it will eventually), but you’ll notice differences immediately.

Does it use information from iPhones?

Yes. It uses anonymized segments of the trips you take, called probe data, to determine things like “is this a valid route?” or to glean traffic congestion information.

Can I be identified by this data — does Apple know it’s me making the trips?

No. The only device that knows about your entire trip is your personal device. When information and/or requests are sent to Apple, a rotating random identifier is assigned to chunks of data, which are segmented for additional safety before transmission. Basically, all Apple will ever see is a random slice of any person’s trip without beginning or end connected directly, which it uses to update its maps and traffic info. Not only can it not tell who it came from, Apple says it cannot even reconstruct a trip based on this data — no matter who asks for it.

Can I opt out?

Yes. It will not happen if you do not turn on location services, and it can be toggled off in the Privacy settings for Maps. It’s not a new setting; it’s just the existing Maps location setting.

Will it use more data or battery?

Apple says no. It says the amount of both resources used is so negligible as to be swallowed up in normal efficiency gains.

When is it coming to the rest of the world?

Bay Area in beta next week and Northern California this fall were as much as I got; however, Apple SVP Eddy Cue did say that Apple’s overall maps team was global.

We’ve got a dedicated team — we started this four years ago — across a variety of fields from ML, to map design, to you name it. There’s thousands of people working on this all around the globe from here in the Bay Area, to Seattle, Austin, New York. We have people in other countries, in cities like Berlin, Paris, Singapore, Beijing, Malmö, Hyderabad.

This team’s dispersed around the globe. It’s important to have that when you’re trying to create and do this. We’re trying to look at how people use our devices all around the world. Our focus is to build these maps for people on the go.

Does this mean street view mode is coming?

Maybe; Apple did not announce anything related to a street-level view. With the data that it is gathering from the cars, it could absolutely accomplish this, but no news yet.

What about businesses?

The computer vision system Apple is using can absolutely recognize storefronts and business names, so I’d expect that to improve.

Will building shapes improve in 3D?

Yes. Apple has tools specifically to allow its maps editors to measure building heights in the 3D views and to tweak the shapes of the buildings to make them as accurate as possible. The measuring tools also serve to nail down how many floors a building might have for internal navigation.



from Apple – TechCrunch https://ift.tt/2tEUQ22


Apple is rebuilding Maps from the ground up

I’m not sure if you’re aware, but the launch of Apple Maps went poorly. After a rough first impression, an apology from the CEO, several years of patching holes with data partnerships and some glimmers of light with long-awaited transit directions and improvements in business, parking and place data, Apple Maps is still not where it needs to be to be considered a world class service.

Maps needs fixing.

Apple, it turns out, is aware of this, so it’s rebuilding the maps part of Maps.

It’s doing this by using first-party data gathered by iPhones with a privacy-first methodology and its own fleet of cars packed with sensors and cameras. The new product will launch in San Francisco and the Bay Area with the next iOS 12 Beta and will cover Northern California by fall.

Every version of iOS will get the updated maps eventually and they will be more responsive to changes in roadways and construction, more visually rich depending on the specific context they’re viewed in and feature more detailed ground cover, foliage, pools, pedestrian pathways and more.

This is nothing less than a full reset of Maps, and it’s been four years in the making, which is when Apple began to develop its new data-gathering systems. Eventually, Apple will no longer rely on third-party data to provide the basis for its maps, which has been one of its major pitfalls from the beginning.

“Since we introduced this six years ago — we won’t rehash all the issues we’ve had when we introduced it — we’ve done a huge investment in getting the map up to par,” says Apple SVP Eddy Cue, who now owns Maps, in an interview last week. “When we launched, a lot of it was all about directions and getting to a certain place. Finding the place and getting directions to that place. We’ve done a huge investment of making millions of changes, adding millions of locations, updating the map and changing the map more frequently. All of those things over the past six years.”

But, Cue says, Apple has room to improve on the quality of Maps, something that most users would agree on, even with recent advancements.

“We wanted to take this to the next level,” says Cue. “We have been working on trying to create what we hope is going to be the best map app in the world, taking it to the next step. That is building all of our own map data from the ground up.”

In addition to Cue, I spoke to Apple VP Patrice Gautier and over a dozen Apple Maps team members at its mapping headquarters in California this week about its efforts to re-build Maps, and to do it in a way that aligned with Apple’s very public stance on user privacy.

If, like me, you’re wondering whether Apple thought of building its own maps from scratch before it launched Maps, the answer is yes. At the time, there was a choice to be made about whether or not it wanted to be in the business of Maps at all. Given that the future of mobile devices was becoming very clear, it knew that mapping would be at the core of nearly every aspect of its devices from photos to directions to location services provided to apps. Decision made, Apple plowed ahead, building a product that relied on a patchwork of data from partners like TomTom, OpenStreetMap and other geo data brokers. The result was underwhelming.

Almost immediately after Apple launched Maps, it realized that it was going to need help and it signed on a bunch of additional data providers to fill the gaps in location, base map, point-of-interest and business data.

It wasn’t enough.

“We decided to do this just over four years ago. We said, ‘Where do we want to take Maps? What are the things that we want to do in Maps?’ We realized that, given what we wanted to do and where we wanted to take it, we needed to do this ourselves,” says Cue.

Because Maps is so core to so many functions, success wasn’t tied to just one of them. Maps needed to be great at transit, driving and walking — but also as a utility used by apps for location services and other functions.

Cue says that Apple needed to own all of the data that goes into making a map, and to control it from a quality as well as a privacy perspective.

There’s also the matter of corrections, updates and changes entering a long loop of submission to validation to update when you’re dealing with external partners. The Maps team would have to be able to correct roads, pathways and other updating features in days or less, not months. Not to mention the potential competitive advantages it could gain from building and updating traffic data from hundreds of millions of iPhones, rather than relying on partner data.

Cue points to the proliferation of devices running iOS, now numbering in the millions, as a deciding factor to shift its process.

“We felt like because the shift to devices had happened — building a map today in the way that we were traditionally doing it, the way that it was being done — we could improve things significantly, and improve them in different ways,” he says. “One is more accuracy. Two is being able to update the map faster based on the data and the things that we’re seeing, as opposed to driving again or getting the information where the customer’s proactively telling us. What if we could actually see it before all of those things?”

I query him on the rapidity of Maps updates, and whether this new map philosophy means faster changes for users.

“The truth is that Maps needs to be [updated more], and even are today,” says Cue. “We’ll be doing this even more with our new maps, [with] the ability to change the map real-time and often. We do that every day today. This is expanding us to allow us to do it across everything in the map. Today, there’s certain things that take longer to change.

“For example, a road network is something that takes a much longer time to change currently. In the new map infrastructure, we can change that relatively quickly. If a new road opens up, immediately we can see that and make that change very, very quickly around it. It’s much, much more rapid to do changes in the new map environment.”

So a new effort was created to begin generating its own base maps, the very lowest building block of any really good mapping system. After that, Apple would begin layering on living location data, high resolution satellite imagery and brand new intensely high resolution image data gathered from its ground cars until it had what it felt was a ‘best in class’ mapping product.

There is really only one big company on earth that owns an entire map stack from the ground up: Google.

Apple knew it needed to be the other one. Enter the vans.

Apple vans spotted

Though the overall project started earlier, the first glimpse most folks had of Apple’s renewed efforts to build the best Maps product was the vans that started appearing on the roads in 2015 with ‘Apple Maps’ signs on the side. Capped with sensors and cameras, these vans popped up in various cities and sparked rampant discussion and speculation.

The new Apple Maps will be the first time the data collected by these vans is actually used to construct and inform its maps. This is their coming out party.

Some people have commented that Apple’s rigs look more robust than the simple GPS + Camera arrangements on other mapping vehicles — going so far as to say they look more along the lines of something that could be used in autonomous vehicle training.

Apple isn’t commenting on autonomous vehicles, but there’s a reason the arrays look more advanced: they are.

Earlier this week I took a ride in one of the vans as it ran a sample route to gather the kind of data that would go into building the new maps. Here’s what’s inside.

In addition to a beefed-up GPS rig on the roof, four LiDAR arrays mounted at the corners and eight cameras shooting overlapping high-resolution images, there’s also the standard physical measuring tool attached to a rear wheel that allows for precise tracking of distance and image capture. In the rear there is a surprising lack of bulky equipment. Instead, it’s a straightforward Mac Pro bolted to the floor, attached to an array of solid-state drives for storage. A single USB cable routes up to the dashboard, where the actual mapping capture software runs on an iPad.

While mapping, a driver…drives, while an operator takes care of the route, ensuring that a coverage area that has been assigned is fully driven and monitoring image capture. Each drive captures thousands of images as well as a full point cloud (a 3D map of space defined by dots that represent surfaces) and GPS data. I later got to view the raw data presented in 3D and it absolutely looks like the quality of data you would need to begin training autonomous vehicles.
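
For readers who think in code, here’s a rough sketch of what a single time-synced sample from a rig like this might look like as a data structure. Apple hasn’t shared its capture format, so the field names and shapes below are guesses based on the hardware described above.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CaptureFrame:
    """One time-synced sample from a mapping drive. Field names and shapes are
    illustrative guesses based on the hardware described, not Apple's format."""
    timestamp: float                     # GPS-disciplined capture time, in seconds
    gps_fix: Tuple[float, float, float]  # latitude, longitude, altitude
    wheel_odometry_m: float              # cumulative distance from the rear-wheel encoder
    images: List[bytes] = field(default_factory=list)  # frames from the 8 cameras
    lidar_points: List[Tuple[float, float, float, float]] = field(default_factory=list)
    # each LiDAR return as (x, y, z, reflectance), pooled from the four corner arrays
```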

More on why Apple needs this level of data detail later.

When the images and data are captured, they are encrypted on the fly and recorded onto the SSDs. Once full, the SSDs are pulled out, replaced and packed into a case, which is delivered to Apple’s data center, where a suite of software eliminates private information like faces, license plates and other identifying details from the images. From the moment of capture to the moment they’re sanitized, the images are encrypted, with one key held in the van and the other in the data center. Technicians and software further down the mapping pipeline never see unsanitized data.
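
The “one key in the van, one key in the data center” arrangement reads like textbook public-key encryption: the van can only write, and only the data center can read. Here’s a minimal sketch of that pattern using the PyNaCl library’s sealed boxes; it’s a generic illustration of the approach under that assumption, not Apple’s actual pipeline.

```python
# pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Done once: the data center keeps the private key; vans carry only the public key.
datacenter_private = PrivateKey.generate()
van_public = datacenter_private.public_key

def encrypt_in_van(raw_capture: bytes) -> bytes:
    """Encrypt captured imagery on the fly; the van can write but never read back."""
    return SealedBox(van_public).encrypt(raw_capture)

def decrypt_in_datacenter(ciphertext: bytes) -> bytes:
    """Only the data center can recover the images, just before faces and
    license plates get scrubbed out."""
    return SealedBox(datacenter_private).decrypt(ciphertext)

ciphertext = encrypt_in_van(b"raw camera frame")
assert decrypt_in_datacenter(ciphertext) == b"raw camera frame"
```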

This is just one element of Apple’s focus on the privacy of the data it is utilizing in New Maps.

Probe data and Privacy

In every conversation I have with any member of the team throughout the day, privacy is brought up and emphasized. This is obviously by design, as Apple wants to impress upon me as a journalist that it’s taking this very seriously indeed, but it doesn’t change the fact that privacy is evidently built in from the ground up, and I could not find a false note in any of the technical claims or the conversations I had.

Indeed, from the data security folks to the people whose job it is to actually make the maps work well, the constant refrain is that Apple does not feel that it is being held back in any way by not hoovering every piece of customer-rich data it can, storing and parsing it.

The consistent message is that the team feels it can deliver a high quality navigation, location and mapping product without the directly personal data used by other platforms.

“We specifically don’t collect data, even from point A to point B,” notes Cue. “We collect data — when we do it — in an anonymous fashion, in subsections of the whole, so we couldn’t even say that there is a person that went from point A to point B. We’re collecting the segments of it. As you can imagine, that’s always been a key part of doing this. Honestly, we don’t think it buys us anything [to collect more]. We’re not losing any features or capabilities by doing this.”

The segments that he is referring to are sliced out of any given person’s navigation session. Neither the beginning nor the end of any trip is ever transmitted to Apple. Rotating identifiers, not personal information, are assigned to any data or requests sent to Apple, and it augments the ‘ground truth’ data provided by its own mapping vehicles with this ‘probe data’ sent back from iPhones.

Because only random segments of any person’s drive are ever sent, and that data is completely anonymized, there is never a way to tie any trip back to a single individual. The local system signs the IDs, and only it knows who each ID refers to. Apple is working very hard here to not know anything about its users. This kind of privacy can’t be added on at the end; it has to be woven in at the ground level.
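
To make that concrete, here’s a toy version of the kind of on-device slicing being described: trim off the ends of the trip, break the middle into chunks, and give every chunk its own throwaway identifier so the pieces can’t be re-joined. Apple hasn’t published its implementation, so the chunk size, trim amount and ID scheme below are invented for illustration.

```python
import secrets
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ProbeSegment:
    """A short, anonymized slice of a trip: no user ID, no trip ID."""
    segment_id: str                           # throwaway random identifier
    points: List[Tuple[float, float, float]]  # (lat, lon, timestamp) samples

def slice_trip(trip_points, chunk_size=20, trim=15):
    """Drop the start and end of a trip, then split the middle into chunks
    that each get a fresh random ID so they can't be re-joined later.
    chunk_size and trim are invented values for illustration."""
    middle = trip_points[trim:-trim]          # origin and destination never leave
    segments = []
    for i in range(0, len(middle) - chunk_size + 1, chunk_size):
        segments.append(ProbeSegment(
            segment_id=secrets.token_hex(16),
            points=middle[i:i + chunk_size]))
    return segments
```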

Because Apple’s business model does not rely on it serving, say, an ad for a Chevron on your route to you, it doesn’t need to even tie advertising identifiers to users.

Any personalization or Siri requests are all handled on-board by the iOS device’s processor. So if you get a drive notification that tells you it’s time to leave for your commute, that’s learned, remembered and delivered locally, not from Apple’s servers.

That’s not new, but it’s important to note given the new thing to take away here: Apple is flipping on the power of having millions of iPhones passively and actively improving their mapping data in real time.

In short: traffic, real-time road conditions, road systems, new construction and changes in pedestrian walkways are about to get a lot better in Apple Maps.

The secret sauce here is what Apple calls probe data: essentially little slices of vector data that represent direction and speed, transmitted back to Apple completely anonymized, with no way to tie them to a specific user or even to any given trip. Apple is reaching in and sipping a tiny amount of data from millions of users instead, giving it a holistic, real-time picture without compromising user privacy.

If you’re driving, walking or cycling, your iPhone can already tell. Now, if it knows you’re driving, it can also send relevant traffic and routing data in these anonymous slivers to improve the entire service. This only happens if your Maps app has been active, say when you check the map or look for directions. If you’re actively using your GPS for walking or driving, then the updates are more precise and can help with walking improvements like charting new pedestrian paths through parks, building out the map’s overall quality.
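
On the receiving end, those anonymous slivers only need to carry something like a road segment and a speed to be useful for traffic. Here’s a minimal sketch of how such samples might be rolled up into a congestion estimate; the input shape and thresholds are assumptions, not Apple’s format.

```python
from collections import defaultdict
from statistics import median

def estimate_congestion(probe_slices, free_flow_kmh, min_samples=5):
    """probe_slices: (road_segment_id, speed_kmh) pairs arriving with no user
    or trip identifiers attached. free_flow_kmh: expected uncongested speed
    per road segment. Shapes and thresholds are hypothetical."""
    speeds = defaultdict(list)
    for segment_id, speed in probe_slices:
        speeds[segment_id].append(speed)

    congestion = {}
    for segment_id, samples in speeds.items():
        if len(samples) < min_samples:        # too few anonymous samples to trust
            continue
        ratio = median(samples) / free_flow_kmh[segment_id]
        congestion[segment_id] = max(0.0, 1.0 - ratio)  # 0 = free flow, 1 = stopped
    return congestion
```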

All of this, of course, is governed by whether you opted into location services and can be toggled off using the maps location toggle in the Privacy section of settings.

Apple says that this will have a near-zero effect on battery life or data usage, because you’re already using the ‘maps’ features when any probe data is shared, and it’s a fraction of the power being drawn by those activities.

From the point cloud on up

But maps cannot live on ground truth and mobile data alone. Apple is also gathering new high-resolution satellite data to combine with its ground truth data for a solid base map. It’s then layering satellite imagery on top of that to better determine foliage, pathways, sports facilities and building shapes.

After the downstream data has been cleaned of license plates and faces, it gets run through a bunch of computer vision programming to pull out addresses, street signs and other points of interest. These are cross-referenced with publicly available data like addresses held by the city and new construction of neighborhoods or roadways that comes from city planning departments.

But one of the special-sauce bits that Apple is adding to the mix of mapping tools is a full-on point cloud that maps the world around the mapping van in 3D. This gives it all kinds of opportunities to better understand which items are street signs (a retro-reflective rectangular object about 15 feet off the ground? Probably a street sign), stop signs or speed limit signs.
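
That “retro-reflective rectangle about 15 feet up” rule of thumb translates naturally into a first-pass filter over LiDAR returns. Here’s a rough sketch of the idea; the thresholds are my own guesses, and a real pipeline would do far more than this before calling anything a sign.

```python
import numpy as np

def candidate_sign_points(points, min_height_m=3.5, max_height_m=5.5,
                          min_reflectance=0.8):
    """points: an N x 4 array of LiDAR returns (x, y, z, reflectance), with z as
    height above the road surface. The thresholds roughly encode the
    'retro-reflective object about 15 feet up' heuristic and are illustrative."""
    z, reflectance = points[:, 2], points[:, 3]
    keep = (z >= min_height_m) & (z <= max_height_m) & (reflectance >= min_reflectance)
    return points[keep]

# Clusters of the surviving points would then be checked for a flat, roughly
# rectangular footprint before being passed along to the image classifier.
```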

It seems like it could also enable positioning of navigation arrows in 3D space for AR navigation, but Apple declined to comment on ‘any future plans’ for such things.

Apple also uses semantic segmentation and Deep Lambertian Networks to analyze the point cloud coupled with the image data captured by the car and from high-resolution satellites in sync. This allows 3D identification of objects, signs, lanes of traffic and buildings and separation into categories that can be highlighted for easy discovery.

The coupling of high resolution image data from car and satellite, plus a 3D point cloud results in Apple now being able to produce full orthogonal reconstructions of city streets with textures in place. This is massively higher resolution and easier to see, visually. And it’s synchronized with the ‘panoramic’ images from the car, the satellite view and the raw data. These techniques are used in self driving applications because they provide a really holistic view of what’s going on around the car. But the ortho view can do even more for human viewers of the data by allowing them to ‘see’ through brush or tree cover that would normally obscure roads, buildings and addresses.
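
A toy version of that ortho view helps explain the ‘seeing through tree cover’ trick: project the point cloud straight down into a grid and keep the ground-level return in each cell, so canopy points that would block a normal top-down photo simply drop out. This is a simplified sketch under that assumption, not Apple’s renderer, and the grid resolution is arbitrary.

```python
import numpy as np

def ortho_height_map(points, cell_size_m=0.25):
    """Project a point cloud (N x 3 array of x, y, z) straight down into a grid,
    keeping the lowest return in each cell. Preferring ground-level returns over
    canopy is one simple way a top-down ortho view can 'see past' tree cover.
    The resolution is an arbitrary choice for the sketch."""
    cells = np.floor(points[:, :2] / cell_size_m).astype(int)
    cells -= cells.min(axis=0)                  # shift indices so they start at 0
    grid = np.full((cells[:, 0].max() + 1, cells[:, 1].max() + 1), np.nan)
    for (cx, cy), z in zip(cells, points[:, 2]):
        if np.isnan(grid[cx, cy]) or z < grid[cx, cy]:
            grid[cx, cy] = z                    # keep the ground return, drop canopy
    return grid
```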

This is hugely important when it comes to the next step in Apple’s battle for supremely accurate and useful Maps: human editors.

Apple has had a team of tool builders working specifically on a toolkit that human editors can use to vet and parse data, street by street. The editor suite includes tools that let editors assign specific geometries to Flyover buildings (think Salesforce Tower’s unique ridged dome) so that they are instantly recognizable. It lets editors look at real images of street signs shot by the car right next to 3D reconstructions of the scene and the computer vision detection of the same signs, instantly recognizing them as accurate or not.

Another tool corrects addresses, letting an editor quickly move an address to the center of a building, determine whether they’re misplaced and shift them around. It also allows for access points to be set, making Apple Maps smarter about the ‘last 50 feet’ of your journey. You’ve made it to the building, but what street is the entrance actually on? And how do you get into the driveway? With a couple of clicks, an editor can make that permanently visible.

“When we take you to a business and that business exists, we think the precision of where we’re taking you to, from being in the right building,” says Cue. “When you look at places like San Francisco or big cities from that standpoint, you have addresses where the address name is a certain street, but really, the entrance in the building is on another street. They’ve done that because they want the better street name. Those are the kinds of things that our new Maps really is going to shine on. We’re going to make sure that we’re taking you to exactly the right place, not a place that might be really close by.”

Water, swimming pools (new to Maps entirely), sporting areas and vegetation are now more prominent and fleshed out thanks to new computer vision and satellite imagery applications. So Apple had to build editing tools for those as well.

Many hundreds of editors will be using these tools, in addition to the thousands of employees Apple already has working on maps, but the tools had to be built first, now that Apple is no longer relying on third parties to vet and correct issues.

And the team also had to build computer vision and machine learning tools that allow it to determine whether there are issues to be found at all.

Anonymous probe data from iPhones, visualized, looks like thousands of dots, ebbing and flowing across a web of streets and walkways, like a luminescent web of color. At first, chaos. Then, patterns emerge. A street opens for business, and nearby vessels pump orange blood into the new artery. A flag is triggered and an editor looks to see if a new road needs a name assigned.

A new intersection is added to the web and an editor is flagged to make sure that the left turn lanes connect correctly across the overlapping layers of directional traffic. This has the added benefit of massively improved lane guidance in the new Apple Maps.
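
A crude version of that flagging logic is easy to picture: bucket incoming probe points into grid cells and raise a flag wherever lots of traffic shows up with no mapped road underneath. The sketch below uses invented cell sizes and thresholds and stands in for whatever Apple’s actual detectors do.

```python
from collections import Counter

def flag_possible_new_roads(probe_points, known_road_cells,
                            cell_size_deg=0.0005, threshold=50):
    """probe_points: anonymous (lat, lon) samples. known_road_cells: set of grid
    cells already covered by the mapped road network. Cells that see heavy
    traffic with no road underneath get queued for a human editor.
    Grid size and threshold are made-up numbers for the sketch."""
    counts = Counter(
        (round(lat / cell_size_deg), round(lon / cell_size_deg))
        for lat, lon in probe_points)
    return [cell for cell, hits in counts.items()
            if hits >= threshold and cell not in known_road_cells]
```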

Apple is counting on this combination of human and AI flagging to allow editors to first craft base maps and then also maintain them as the ever changing biomass wreaks havoc on roadways, addresses and the occasional park.

Here there be Helvetica

Apple’s new Maps, like many other digital maps, display vastly differently depending on scale. If you’re zoomed out, you get less detail. If you zoom in, you get more. But Apple has a team of cartographers on staff that work on more cultural, regional and artistic levels to ensure that its Maps are readable, recognizable and useful.

These teams have goals that are at once concrete and a bit out there — in the best traditions of Apple pursuits that intersect the technical with the artistic.

The maps need to be usable, but they also need to fulfill cognitive goals on cultural levels that go beyond what any given user might know they need. For instance, in the US, it is very common to have maps that have a relatively low level of detail even at a medium zoom. In Japan, however, the maps are absolutely packed with details at the same zoom, because that increased information density is what is expected by users.

This is the department of details. They’ve reconstructed replicas of hundreds of actual road signs to make sure that the shield on your navigation screen matches the one you’re seeing on the highway road sign. When it comes to public transport, Apple licensed all of the typefaces that you see on your favorite subway systems, like Helvetica for NYC. And the line numbers are in the exact same order that you’re going to see them on the platform signs.

It’s all about reducing the cognitive load that it takes to translate the physical world you have to navigate through into the digital world represented by Maps.

Bottom line

The new version of Apple Maps will be in preview next week with just the Bay Area of California going live. It will be stitched seamlessly into the ‘current’ version of Maps, but the difference in quality level should be immediately visible based on what I’ve seen so far.

Better road networks, more pedestrian information, sports areas like baseball diamonds and basketball courts, more land cover (including grass and trees) represented on the map, and building shapes and sizes that are more accurate. A map that feels more like the real world you’re actually traveling through.

Search is also being revamped to make sure that you get more relevant results (on the correct continents) than ever before. Navigation, especially pedestrian guidance, also gets a big boost. Parking areas and building details to get you the last few feet to your destination are included as well.

What you won’t see, for now, is a full visual redesign.

“You’re not going to see huge design changes on the maps,” says Cue. “We don’t want to combine those two things at the same time because it would cause a lot of confusion.”

Apple Maps is getting the long awaited attention it really deserves. By taking ownership of the project fully, Apple is committing itself to actually creating the map that users expected of it from the beginning. It’s been a lingering shadow on iPhones, especially, where alternatives like Google Maps have offered more robust feature sets that are so easy to compare against the native app but impossible to access at the deep system level.

The argument has been made ad nauseam, but it’s worth saying again that if Apple thinks that mapping is important enough to own, it should own it. And that’s what it’s trying to do now.

“We don’t think there’s anybody doing this level of work that we’re doing,” adds Cue. “We haven’t announced this. We haven’t told anybody about this. It’s one of those things that we’ve been able to keep pretty much a secret. Nobody really knows about it. We’re excited to get it out there. Over the next year, we’ll be rolling it out, section by section in the US.”



from iPhone – TechCrunch https://ift.tt/2KyERsT


Thursday, 28 June 2018

Disney Imagineering has created autonomous robot stunt doubles

For over 50 years, Disneyland and its sister parks have been a showcase for increasingly technically proficient versions of its “animatronic” characters. First pneumatic and hydraulic and more recently fully electronic — these figures create a feeling of life and emotion inside rides and attractions, in shows and, increasingly, in interactive ways throughout the parks.

The machines they’re creating are becoming more active and mobile in order to better represent the wildly physical nature of the characters they portray within the expanding Disney universe. And a recent addition to the pantheon could change the way that characters move throughout the parks and influence how we think about mobile robots at large.

I wrote recently about the new tack Disney was taking with self-contained characters that felt more flexible, interactive and, well, alive than ‘static’, pre-programmed animatronics. That has done a lot to add to the convincing nature of what is essentially a very limited robot.

Traditionally, most animatronic figures cannot move from where they sit or stand, and are pre-built to exacting show specifications. The design and programming phases of the show are closely related, so that the hero characters are efficient and durable enough to run hundreds of times a day, every day, for years.

The Na’avi Shaman from Pandora: The World of Avatar, at Walt Disney World, represents the state of the art of this kind of figure.

However, with the expanded universe of Disney properties including more and more dynamic and heroic figures by the year, it makes sense that they’d want to explore ways of making the robots that represent those properties in the parks more believable and active.

That’s where the Stuntronics project comes in. Built out of a research experiment called Stickman, which we covered a few months ago, Stuntronics are autonomous, self-correcting aerial performers that make on-the-go corrections to nail high-flying stunts every time. Basically robotic stuntpeople, hence the name.

I spoke to Tony Dohi, Principal R&D Imagineer, and Morgan Pope, Associate Research Scientist at Disney, about the project.

“So what this is about is the realization we came to after seeing where our characters are going on screen,” says Dohi, “whether they be Star Wars characters, or Pixar characters, or Marvel characters or our own animation characters, is that they’re doing all these things that are really, really active. And so that becomes the expectation our park guests have that our characters are doing all these things on screen — but when it comes to our attractions, what are our animatronic figures doing? We realized we have kind of a disconnect here.”

So they came up with the concept of a stunt double for the ‘hero’ animatronic figures that could take their place within a show or scene to perform more aggressive maneuvering, much in the same way a double replaces a valuable and delicate actor in a dangerous scene.

The Stuntronics robot features on-board accelerometer and gyroscope arrays supported by laser range finding. In its current form, it’s humanoid, taking on the size and shape of a performer that could easily be imagined clothed in the costume of, say, one of The Incredibles, or someone on the Marvel roster. The bot is able to be slung from the end of a wire to fly through the air, controlling its pose, rotation and center of mass to not only land aerial tricks correctly but to do them on target while holding heroic poses in midair.
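To make the mechanics a little more concrete, here is a minimal, purely illustrative sketch of the kind of mid-air correction that description implies: once the robot leaves the wire its angular momentum is fixed, so by tucking or extending (changing its moment of inertia) it can speed up or slow down its spin to finish a flip right at the landing target. The type names, numbers and limits below are hypothetical; this is not Disney’s code.

```swift
import Foundation

// Purely illustrative: not Disney's Stuntronics code. Angular momentum is
// fixed once the robot leaves the wire, so spin rate is set by how much
// the articulated body tucks (i.e. its moment of inertia).
struct FlightState {
    var rotationSoFar: Double     // radians rotated since release
    var angularMomentum: Double   // kg·m²/s, conserved in flight
    var timeToLanding: Double     // seconds, e.g. from laser rangefinding
}

// Hypothetical body limits: fully tucked vs. fully extended.
let tuckedInertia = 0.8     // kg·m²
let extendedInertia = 2.0   // kg·m²

// Choose a moment of inertia so the remaining rotation finishes exactly
// at touchdown: I = L / ω, where ω is the spin rate still needed.
func momentOfInertia(for state: FlightState, targetRotation: Double) -> Double {
    let remaining = targetRotation - state.rotationSoFar
    guard state.timeToLanding > 0, remaining > 0 else { return extendedInertia }
    let neededRate = remaining / state.timeToLanding        // rad/s
    let ideal = state.angularMomentum / neededRate
    return min(max(ideal, tuckedInertia), extendedInertia)  // clamp to what the body can do
}

// Half a second from landing, one full flip still to go:
let state = FlightState(rotationSoFar: 2.0 * Double.pi,
                        angularMomentum: 6.0,
                        timeToLanding: 0.5)
print(momentOfInertia(for: state, targetRotation: 4.0 * Double.pi))
```

Run continuously against the gyroscope and rangefinder readings, a loop like this is what lets the figure hit its pose on target every time.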

One use of this could be mid-show in an attraction. For relatively static shots, hero animatronics like the Shaman or new figures Imagineering is constantly working on could provide nuanced performances of face and figure. Then, a transition to a scene that requires dramatic, unfettered action and, boom, a Stuntronics double could fly across the space on its own, calculating trajectories and striking poses with its on-board hardware, hitting a target dead on every time. Cue the reset for the next audience.

This focus on creating scenarios where animatronics feel more ‘real’ and dynamic is at work in other areas of Imagineering as well, with autonomous rolling robots and — some day — the holy grail of bipedal walking robots. But Stuntronics fills one specific gap in the repertoire of a standard Animatronic figure — the ability to convince you it can be a being of action and dynamism.

“So often our robots are in the uncanny valley where you got a lot of function, but it still doesn’t look quite right. And I think here the opposite is true,” says Pope. “When you’re flying through the air, you can have a little bit of function and you can produce a lot of stuff that looks pretty good, because of this really neat physics opportunity — you’ve got these beautiful kinds of parabolas and sine waves that just kind of fall out of rotating and spinning through the air in ways that are hard for people to predict, but that look fantastic.”

The original BRICK

Like many of the solutions Imagineering comes up with for its problems, Stuntronics started out as a research project without a real purpose. In this case, it was called BRICK (Binary Robotic Inertially Controlled bricK). Basically, a metal brick with sensors and the ability to change its center of mass to control its spin to hit a precise orientation at a precise height – to ‘stick the landing’ every time.

From the initial BRICK, Disney moved on to Stickman, an articulated version of the device that could now more aggressively control the rotation and orientation of the device. Combined with some laser rangefinders you had the bones of something that, if you squint, could emulate a ‘human’ acrobat.

“Morgan and I got together and said, maybe there’s something here, we’re not really sure. But let’s poke at it in a bunch of different directions and see what comes out of it,” says Dohi.

But the Stickman didn’t stick for long.

“When we did the BRICK, I thought that was pretty cool,” says Pope. “And then by the time I was presenting the BRICK at a conference, Tony [Dohi] had helped us make Stickman. And I was like, well, this isn’t cool anymore. The Stickman is what’s really cool. And then I was down in Australia presenting Stickman and I knew we were doing the full Stuntronic back at R&D. And I was like, well, this isn’t cool anymore,” he jokes.

“But it has been so much fun. Every step of the way I think, oh, this is blowing my mind. But they just keep pushing… so it’s nice to have that challenge.”

This process has always been one of the fascinating things to me about the way Imagineering works as a whole. You have people who are enabled by management and internal structure to spool out the threads of a problem, even though you’re not really sure what’s going to come out of it. The biggest companies on the planet have similar R&D departments in place — though the ones that make a habit of disconnecting them from the balance sheet, like Apple, are few and far between, in my experience. Typically, R&D is tied so tightly to a profit/loss spreadsheet that it’s really, really difficult to let something incubate long enough to see what comes of it.

What makes it work is the ability to bring vastly different specialties like math, physics, art and design to the table, to sift through ideas and say, hey, we have this storytelling problem on one hand and this research project on the other; if we drill down on this a bit more, would it serve the purpose? As long as storytelling remains the North Star, you have a guiding light to drag you through the pile, and you come out the other end holding a couple of things that can be coupled to solve a problem.

“We’re set up to do the really high risk stuff that you don’t know is going to be successful or not, because you don’t know if there’s going to be a direct application of what you’re doing,” says Dohi. “But you just have a hunch that there might be something there, and they give us a long leash, and they let us explore the possibilities and the space around just an idea, which is really quite a privilege. It’s one of the reasons why I love this place.”

This process of play and iteration and pursuit of a goal of storytelling pops up again and again with Imagineering. It’s really a cluster of very smart people across a broad spectrum of disciplines that are governed by a central nervous system of leaders like Jon Snoddy, the head of R&D at the studios, who help to connect the dots between the research side and the other areas of Imagineering that deal with the Parks or interactive projects or the digital division.

There’s an economy and lack of ego to the organization that enables exploration without wastefulness and organically curtails the pursuit of things not in service to the story. In my time exploring the workings of Imagineering I’ve often found that there is a significant disconnect between how fascinating the process is and how well the organization communicates the cleverness of its solutions.

The Disney Research white papers are certainly infinitely fascinating to people interested in emerging tech, but the points of integration between the research and the practical applications in the parks often remain unexplored. Still, they’re getting better at understanding when they’ve really got something they feel is killer and thinking about better ways to communicate that to the world.

Indeed, near the end of our conversation, Dohi says he’s come up with a solid sound bite and I have him give me his best pitch.

“One of our goals of Stuntronics is to see if we can leap across the uncanny valley.”

Not bad.



from Apple – TechCrunch https://ift.tt/2Kf37nH

Apple could bundle TV, music and news in a single subscription

According to a report from The Information, Apple could choose to bundle all its media offerings into a single subscription. While Apple’s main media subscription product is currently Apple Music, it’s no secret that the company is investing in other areas.

In particular, Apple has bought the distribution rights of many TV shows. But nobody knows how Apple plans to sell those TV shows. For instance, you could imagine paying a monthly fee to access Apple’s content in the TV app on your iPhone, iPad and Apple TV.

In addition to that, Apple acquired Texture back in March. Texture lets you download and read dozens of magazines with a single subscription. The company has partnered with Condé Nast, Hearst, Meredith, News Corp., Rogers Communications, and Time Inc. to access their catalogs of magazines.

Texture is still available, but it’s clear that Apple has bigger plans. In addition to reformatting and redistributing web content in the Apple News app, the company could add paid content from magazines.

Instead of creating three different subscriptions (with potential discounts if you subscribe to multiple services), The Information believes that Apple is going to create a unified subscription. It’s going to work a bit like Amazon Prime, but without the package deliveries.

For a single monthly or annual fee, you’ll be able to access Apple Music, Apple TV’s premium content and Apple News’ premium content.

Even if users don’t consume everything in the subscription, they could see it as good value, which could reduce attrition.

With good retention rates and such a wide appeal, it could help Apple’s bottom line now that iPhone unit sales are only growing by 0.5 percent year over year. It’s still unclear when Apple plans to launch its TV and news offerings.



from Apple – TechCrunch https://ift.tt/2yR85Sj

Apple buries the hatchet with Samsung but could tap LG displays

After years of legal procedures, Apple and Samsung have reached an agreement in the infamous patent case. Terms of the settlement were undisclosed. So is everything clear between Samsung and Apple? Not so fast, as Bloomberg reports that Apple wants to use OLED displays from LG to reduce its dependence on Samsung.

You might remember that Apple first sued Samsung for copying the design of the iPhone with early Samsung Galaxy phones. The first trial led to an Apple victory. Samsung had to pay $1 billion.

But the U.S. Patent and Trademark Office later invalidated one of Apple’s patents. It led to multiple retrials and appeals, and the Supreme Court even had to rule at some point.

After many years, Samsung ended up owing $539 million to Apple. According to Reuters, Samsung has already paid $399 million.

If you look closely at the original case, it feels like it happened many decades ago. At some point, the Samsung Galaxy S 4G, the Nexus S and a few other devices looked a lot like the iPhone 3G.

But now, it’s hard to say that Samsung is copying Apple. For instance, Samsung is one of the only phone manufacturers that hasn’t switched to a notch design. The Samsung Galaxy S9 and the rest of the product lineup still feature a rectangular display. Huawei, LG, OnePlus, Asus and countless others sell devices with a notch.

That could be why it seems odd to have spent all this money on legal fees over design similarities that no longer hold true.

And yet, the irony is that Apple and Samsung are the perfect example of asymmetric competition. They both sell smartphones, laptops and other electronic devices. But they also work together on various projects.

In particular, the iPhone X is the first iPhone with an OLED display. It’s a better display technology compared to traditional LCD displays. It’s also one of the most expensive components of the iPhone X.

According to Bloomberg, Apple wants to find a second supplier to drive component prices down. And that second supplier is LG.

LG already manufactures OLED displays. But it’s difficult to meet Apple’s demands when it comes to the iPhone. Apple sells more than 200 million iPhones a year, so you need a great supply chain to become an Apple supplier. LG could be ramping up its production capacity for future iPhone models.

According to multiple rumors, Apple plans to ship an updated iPhone X with an OLED display as well as a bigger iPhone. The company could also introduce another phone with an edge-to-edge LCD display with a notch and a cheaper price.

One thing’s for sure: it’ll take time to switch the entire iPhone lineup to OLED displays.



from Apple – TechCrunch https://ift.tt/2KvZ1nc

LiDAR startup Luminar hires former Fitbit and Apple execs

LiDAR company Luminar and its whiz founder Austin Russell burst onto the autonomous vehicle startup scene last April after operating for years in secrecy. Now, Luminar has nabbed two high-profile hires that signal its grander ambitions in the race to develop and deploy autonomous vehicles.

Luminar announced Thursday it has hired Fitbit executive Bill Zerella as its chief financial officer and Tami Rosen as chief people officer. Both have years of experience in their respective arenas. Zerella was CFO of Fitbit for four years. He has held the CFO position at various other companies, including wireless communications company Vocera, Force10 Networks and telecom equipment firm Infinera.

His specialty is helping burgeoning startups scale up in revenue, as well as operationally, to ship high-volume hardware and software products. He has also helped companies navigate the path to an IPO. During his stint at Fitbit, Zerella led the largest consumer electronics IPO in history.

Rosen also has had a long and fruitful HR career, including 16 years at Goldman Sachs and a role as senior director of human resources at Apple. She was most recently vice president of people at Quora.

Rosen will need to rely on her deep experience. The explosion of companies working on autonomous vehicle technology has firms competing for a limited pool of talent. HR will be a keystone to Luminar’s plans to scale, and to the broader transformation of the future of transportation, Rosen told TechCrunch.

“It really takes looking at how you build a strong culture, one that’s inclusive and motivates the workforce and that can be key for us to hit these ambitious goals,” Rosen said.

LiDAR, or light detection and ranging, measures distance using laser light to generate a highly accurate 3D map of the world around the car. LiDAR is considered by many automakers and tech companies an essential piece of technology for safely rolling out self-driving cars.
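For a sense of the math underneath, here is a back-of-the-envelope sketch of the time-of-flight principle lidar relies on: fire a laser pulse, time the reflection, and halve the round trip to get the range. The function name and sample timing are illustrative, not anything from Luminar.

```swift
import Foundation

// Time-of-flight range calculation: the pulse travels out and back,
// so the one-way distance is half the round-trip distance.
let speedOfLight = 299_792_458.0   // metres per second

func range(roundTripSeconds t: Double) -> Double {
    return speedOfLight * t / 2.0
}

// A reflection arriving about 1.33 microseconds after the pulse left
// corresponds to a point roughly 200 metres away.
print(range(roundTripSeconds: 1.33e-6))   // ≈ 199 m
```

Repeating that measurement millions of times per second across a scanned field of view is what builds up the 3D point cloud the article describes.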

Russell has argued that companies have been wrongly focused on the price and should instead work on LiDAR’s performance. That’s where Luminar started.

The company built its LiDAR from scratch, a lengthy process that resulted in a simpler design and better performance. Now the company is working on reducing the cost through its own smart engineering and good old-fashioned economies of scale.

That’s where Zerella and Rosen come in. Russell has built out the tech, grown the company to about 400 employees across three locations, made a strategic acquisition of Black Forest Engineering, and landed partnerships with Toyota Research Institute and, most recently, Volvo. Luminar also has a 136,000-square-foot manufacturing center in Orlando, Florida.

Zerella and Rosen aim to take Luminar further.

“Last year it was all about demonstrating how the technology was coming together, adopting some of these initial commercial partners, building out the production facility,” Russell told TechCrunch in an interview ahead of the announcement. “Now it all comes down to execution and scale.”



from Apple – TechCrunch https://ift.tt/2tMkBwD

India’s Times Internet buys popular video app MX Player for $140M to get into streaming

Times Internet, the digital arm of Indian media firm Times Group, is getting into the digital content space, but not in the way you might think.

The company’s previous venture — an over-the-top video service called BoxTV.com — shut down in 2016 after an underwhelming four-year run. Now it is taking a radically different approach by buying video playback app MX Player for Rs 1,000 crore, or around $140 million. The company didn’t disclose its stake but said it is a majority holding.

The service originates from Korea, but it has become hugely popular in India as a way to play media files, for example from an SD card, on a mobile device. It is a huge hit in India, where the app claims 175 million monthly users, and the country accounts for 350 million of its 500 million downloads.

From here, Times Internet plans to introduce a streaming content service to MX Player users, which Karan Bedi, MX Player CEO, expects to go live before August. The plan is to introduce at least 20 original shows and more than 50,000 pieces of content across multiple local languages in India during the first year. The duo said the lion’s share of the investment would go into developing content.

Bedi, a long-time media executive who took the job at MX Player eight months ago, said the service will be freemium and very much targeted at the idea of providing an alternative to television in India. He added that the deal had been in negotiation for the past year, which validates a January report from The Ken that first broke news of the acquisition.

There are plenty of video streaming services in India. Beyond Netflix and Amazon Prime, Hotstar (from Rupert Murdoch-owned Star India) is making waves alongside Jio TV from Reliance Jio, but data from App Annie suggests MX Player is way out ahead. The analytics firm pegs MX Player at nearly 50 million daily users, as of June, well ahead of Hotstar (14.1 million), JioTV (7.4 million) and others.

Both Bedi and Times Internet MD Satyan Gajwani explained to TechCrunch in an interview that a big focus is differentiation and building a digital channel for India’s young, since the average viewer demographics for MX Player are hugely different to Indian TV audiences. Some 80 percent of the app’s users are aged under 35 (70 percent are aged under 25), while the gender balance is skewed more towards men.

“A lot of people aren’t happy with Indian TV,” Bedi said. “There are a lot of soaps and it is not focused on young people. [The MX Player audience] is exactly the opposite of the Indian TV demographic.”

That not only plays into growing a place for ‘millennial’ content, but it also means the streaming service may find success with advertisers if it can offer a gateway to young Indians. Beyond audience, there’s also flexibility. Gajwani explained further that unlike traditional TV and even YouTube, the Times Internet-MX Player service will offer different options for advertisers who “work with content creators to create stuff, sponsor a show, or find various different ways to reach scale.”

“India has a $6 billion TV ad market and we think this could unlock some of the money going to TV,” he said.

Times Internet MD Satyan Gajwani

“This audience on here is genuinely different, [rather than cord-cutters] they’re almost cord-nevers,” Bedi added. “This is a big new audience that’s never been tapped by broadcasters.”

The idea is to gently introduce programming that is accessible to a large audience in India, who might not be open to paying, and then test other revenue models later.

“Further down the line, we might include subscriptions to scale,” Gajwani added. “Subscription is growing but it’s much, much smaller today. What excites us is the idea we’ll have 100 million people streaming a show.”

MX Player might not be well known, but scale is one thing it certainly has in spades. The company just crossed 500 million downloads on Android, but Bedi pointed out that many are not counted because they are side-loaded, which doesn’t register with the Google Play Store.

All told, the app picks up 1.2 million downloads per day, with around 350,000 coming from the official Android app store, he said. Bedi said that, among other things, the app is typically distributed by smartphone vendors in tier-two and tier-three Indian cities to help phone buyers get the essential apps for their device right away.

The question now is whether Times Internet can leverage that organic growth to build another business on top of the basic demand for video playback. This is certainly a unique approach.



from Android – TechCrunch https://ift.tt/2yRtYkA
via IFTTT

Wednesday, 27 June 2018

The Sonos Beam is the soundbar evolved

Sonos has always gone its own way. The speaker manufacturer dedicated itself to network-connected speakers before home networks were commonplace, and it sold a tablet-like remote control before tablets were mainstream. Its surround sound systems install quickly and run seamlessly. You can buy a few speakers, tap a few buttons, and have 5.1 sound in less time than it takes to pull a traditional home audio system out of its shipping box.

This latest model is an addition to the Sonos line and is sold alongside the Playbase – a lumpen soundbar designed to sit directly underneath TVs not attached to the wall – and the Playbar, a traditionally-styled soundbar that preceded the Beam. Both products had all of the Sonos highlights – great sound, amazing interfaces, and easy setup – but the Base had too much surface area for more elegant installations and the Bar was too long while still sporting an aesthetic that harkened back to 2008 Crutchfield catalogs.

The $399 Beam is Sonos’ answer to that and it is more than just a pretty box. The speaker includes Alexa – and promised Google Assistant support – and it improves your TV sound immensely. Designed as an add-on to your current TV, it can stand alone or connect with the Sonos subwoofer and a few satellite surround speakers for a true surround sound experience. It truly shines alone, however, thanks to its small size and more than acceptable audio range.

To use the Beam, you bring up an iOS or Android app to display your Spotify, Apple Music, Amazon and Pandora accounts (a small sampling; Sonos supports many more). You select a song or playlist and start listening. Then, when you want to watch TV, the speaker automatically flips to TV mode – including speech enhancement features that actually work – when the TV is turned on. An included tuning system turns your phone into a scanner that improves the room audio automatically.

The range is limited by the Beam’s size and shape, and there is very little natural bass coming out of this thing. For its size, though, the Beam sounds just fine. It can play an action movie with a bit of thump and then go on to play some light jazz or pop. I’ve had some surprisingly revelatory sessions with the Beam when listening to classic rock and more modern fare, and it’s very usable as a home audio center.

The Beam is two feet long and 3 inches tall. It comes in black or white and is very unobtrusive in any home theatre setup. Interestingly, the product supports HDMI-ARC, aka HDMI Audio Return Channel. This standard, found in most TVs made in recent years, allows the TV to automatically output audio and manage volume controls via a single HDMI cable. What this means, however, is that you’re going to have a bad time if you don’t have HDMI-ARC.

Sonos includes an adapter that can also accept optical audio output but setup requires you to turn off your TV speakers and route all the sound to the optical out. This is a bit of a mess and if you don’t have either of those outputs – HDMI-ARC or optical – then you’re probably in need of a new TV. That said, HDMI-ARC is a bit jarring for first timers but Sonos is sure that enough TVs support it that they can use it instead of optical-only.

The Beam doesn’t compete directly with other “smart” speakers like the HomePod. It is very specifically a consumer electronics device, even though it supports AirPlay 2 and Alexa. Sonos makes speakers, and good ones at that, and that goal has always been front and center. While other speakers may offer a more fully featured sound in a much smaller package, the Beam offers both great TV audio and great music playback for less than any other higher-end soundbar. Whole-room audio does get expensive – about $1,200 for a Sub and two satellites – but you can simply add on pieces as you go. One thing, however, is clear: Sonos has always made the best wireless speakers for the money, and the Beam is another win for the scrappy and innovative speaker company.


from Android – TechCrunch https://ift.tt/2IutbWk
via IFTTT

Tuesday, 26 June 2018

macOS Mojave 10.14 first look

Seems like iOS gets all the love these days. And it’s easy enough to see why. The smartphone has long been the dominant device in many users’ lives, while the desktop/laptop category has been on the decline. But macOS still has some life left in it yet.

A year after introducing the more incremental High Sierra (it’s right there in the name), Apple has returned with a macOS update that’s jam-packed with new features. Unlike other recent updates, a number of the big additions here are targeted at creative professionals, as Apple refocuses its efforts on the user base that has long been a core part of its target market. In the case of features like Dark Mode and Gallery View, there’s a lot to like on that front, as well.

For the first time iOS apps have been directly ported to macOS in an effort to kickstart cross-platform development, while Stacks should go a ways toward helping users stay a bit more organized — and sane. Now that the operating system is in public beta, here’s a rundown of the biggest and best new features Mojave has to offer.

Dark mode

The biggest addition to Mojave is also one of the more interesting from a populist standpoint. Apple made it clear during its WWDC presentation earlier this month that Dark Mode is a hat tip to creative professions. It’s a category the company once owned outright, but one Microsoft has been aggressively gunning for in recent years with its Surface line.

Apple’s been knocked for a handful of decisions viewed as taking its eye off the ball for the small but loyal contingent that has formed its core user base. The company’s been making amends for this over the past year and change, with the addition of the iMac Pro and the promised return of the Mac Pro. Dark Mode is clearly a nod toward those who spend long stretches staring at bright screens in dark rooms.

Of course, it’s not just for creative pros. Dark Mode is a potential boon to all of us desk jockeys looking for some respite from eye strain. It’s also just aesthetically pleasing, and a nice visual break from a Mac desktop design that really hasn’t changed much in the past several generations.

Apple’s done a good job here maintaining consistency across its own apps. Along with darkened menus and frames, Mail, Contacts and Calendar invert to white text on a dark background. The default Mojave desktop image of a winding sand dune has also been transformed accordingly.

Better still, there’s a dynamic version of the wallpaper that will darken, based on the time of day, as the sun sets and stars come out in the desert sky. A nice touch. However, only the default wallpaper is capable of doing that at the moment. If you want the effect, I hope you don’t mind staring at sand.

The biggest issue with Dark Mode (in this admittedly still early public beta stage) is compatibility. Apple says that the mode is designed for easy adoption by third-party developers, assuming their apps are built for the macOS Mojave SDK, but there’s no guarantee the apps you use regularly will have that compatibility at launch. That means there’s a decent chance your dark desktop scheme will be regularly interrupted by a blast of white light.
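As a rough sketch of what that adoption looks like in practice, an app built against the Mojave SDK can ask AppKit which appearance a view has actually resolved to and draw accordingly. The view and its colors below are hypothetical; only the appearance APIs (effectiveAppearance, bestMatch(from:), darkAqua) come from AppKit.

```swift
import AppKit

// A sketch of Dark Mode adoption in a hypothetical third-party view.
final class ChartView: NSView {
    override func draw(_ dirtyRect: NSRect) {
        // Ask AppKit which appearance this view actually resolved to.
        let isDark = effectiveAppearance.bestMatch(from: [.darkAqua, .aqua]) == .darkAqua
        let background: NSColor = isDark ? .black : .white
        background.setFill()
        NSBezierPath(rect: dirtyRect).fill()
    }

    // Called when the user flips the system appearance, so the view can redraw.
    override func viewDidChangeEffectiveAppearance() {
        super.viewDidChangeEffectiveAppearance()
        needsDisplay = true
    }
}
```

Apps that skip this kind of work are exactly the ones that will keep blasting white light into an otherwise dark desktop.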

This is also the case for Apple’s own apps like Safari (though iWork and other non-preinstalled Apple apps don’t yet have the functionality), which have implemented aspects of Dark Mode, but in which you’re going to be spending a lot of time looking at bright pages regardless. For most of us who spend time in and out of various apps, Dark Mode’s actual functionality is pretty limited, but you’ll no doubt be compelled to give it a go anyway. At the very least, it’s a nice departure from the default macOS color scheme you’ve been ensconced in for so long.

Stacks

Dark Mode may be the feature that got the biggest crowd reaction at WWDC, but Stacks is the best. No question here, really, and this is coming from someone who’s gotten fairly consistent about tidying up his desktop. This thing is worthy of a clickbait-style “One Weird Trick to Organize Your Life” headline.

This is a surprisingly cathartic act. Hover over the wallpaper of your out-of-control desktop, two-finger tap the touchpad and select “Use Stacks” from the drop-down. Poof, they all shoot into their pre-ordained piles on the right. The default mode categorizes files by kind, which is probably the most straightforward method of the bunch (you also can switch to date or tag). If a file is the only one of its kind on the desktop, it will keep its name below the thumbnail; otherwise, the file kind will show up below. Unclassified files will land in a less helpful “other” Stack.

When new files are added to the desktop, they automatically appear in their associated pile, so long as you stay in Stacks mode. When the mode is enabled, files are essentially stuck to these spots like a grid. You can drag and drop them into apps, but can’t move them around the desktop.

Once everything is sorted, clicking on the top of the stack will spread it out so you can once again view everything all at once. Click the top of the pile again, and poof, everything goes back into the pile. You also can hover over the top with the cursor and swipe the trackpad left or right with two fingers to scrub through the list. I find that method a bit less useful, but some will no doubt prefer it.

If you decide the whole cleanliness thing isn’t for you, two-finger tap the wallpaper again. Click “Use Stacks” and poof, everything gets sent back to its original entropic position on the desktop. Good on Apple for letting users revert back to the madness.

Apple’s added a LOT of different features — from Launchpad to Tags — designed to help users get better organized. For my part, I’ve largely tried and failed to incorporate them into my daily usage. Stacks, on the other hand, is a genuinely useful addition and a strong contender for the most useful feature Apple has brought to macOS in recent memory.

Desktop

Gallery View is an interesting addition for similar reasons as Dark Mode. The feature is a spiritual successor to the familiar Cover Flow. It’s less dynamic, relying on a bottom scroll bar of thumbnails rather than large images up top. It puts metadata front and center much more than before. This is especially apparent when dealing with images, giving you an almost Lightroom level of detail on photos.

The information includes, but is not limited to: dimensions, resolution, color space, color profile, device make, device model, aperture value, exposure time, focal length, ISO speed, flash, F number, metering mode and white balance. It’s a lot for most users. In fact, it’s probably overkill for a majority of us, but it’s clearly another indication that Apple’s working to maintain its hold on the creative professional category by building that intense level of detail directly into the Finder.

Tucked down in the bottom-right corner of Finder windows are Quick Actions. There are a handful of handy features for editing images and PDF docs, including Rotate Left (as found in the iOS Photos app), markup (as found in Adobe Acrobat), Add Password and Create PDF, which turns files into PDFs, as advertised.

It’s an interesting system-level embrace of Adobe’s file format, and also makes the need for Preview somewhat redundant, as it’s baked directly into Finder. The options are dependent on file type — so, if you have, say, an audio or video file, you can trim it directly in the Finder window. For most tasks, you’ll probably want to open an editing app, but I would love to see more personalized actions down here. For my own needs, something like file cropping and resizing would be great to have built directly into the Finder window, saving me a trip to Photoshop or some online editing tool. I realize my needs aren’t the same as everyone’s — but all the more reason to offer some manner of customization down there, akin to what Apple offers with the MacBook Touch Bar.

Screenshots

File previews are getting a lot of love here, throughout. I’m not sure how often normal people use screenshots, but I take them all the damn time, so any addition here is welcome. Beyond general usefulness, I suspect a lot of people simply don’t take screenshots because the key command is fairly convoluted. Shift-Command-5 isn’t exactly easier to remember than other, similar combinations, but it does bring up a handy control window overlay.

From there, you can choose to capture a full screen, a window, a selection you outline yourself, record a video of the entire screen (which I used for the above Stacks GIF) or record a video of a selection. It certainly saves from having to memorize all of the different commands. The new screenshots also make it possible to set a timer of five or 10 seconds before snapping a photo.

Apple’s taking a play from iOS, offering up a small window in the right-hand corner of the screen once the screenshot has been snapped. You can click directly into that, or just wait for it to disappear. From there, you can mark up the file, drag and drop it into a document or have it automatically sent to the desktop, documents, Mail, Messages, Preview or the clipboard, so they don’t all wind up in the same spot.

Continuity Camera

Not sure how often this feature will actually prove handy for most users, but it’s a cool feature, nonetheless. Continuity Camera essentially uses an iPhone as a surrogate camera for the desktop. It’s a clever bit of cross device synergy.

Say you’re in Pages. Go to Edit > Insert from Your iPhone and choose Photo. Take a shot, approve it on the device, and it will automatically insert itself in the doc. It works like a charm. The scan feature also works surprisingly well here. I took a shot of a crumpled receipt and it looked pretty pristine regardless. As someone who recently went through a lengthy visa process, I wish I’d had access to this thing a few weeks back.

FaceTime

This one definitely wowed the crowd. FaceTime being macOS/iOS-only is the main thing that’s hampered my own use of the service, but there are some really nice additions here that are making me rethink that decision. The ability to add up to 32 users is far and away the most fascinating, and Apple’s done a good job managing that kind of unruly number.

Similar to services like Google Meet, the system automatically detects who’s speaking and places them front and center in the app. Also like Meet, you can manually prioritize the users on whom you’d like to focus.
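Apple hasn’t said how Group FaceTime ranks speakers, but conceptually it’s the same idea most group-calling apps use: keep a smoothed audio level per participant and promote whoever is loudest. A toy sketch, with entirely hypothetical types and numbers, might look like this.

```swift
import Foundation

// Hypothetical participant model for illustrating active-speaker selection.
struct Participant {
    let name: String
    var smoothedLevel: Double = 0   // running estimate of speech energy
}

// Blend each new audio-level sample into a running average so a brief
// cough doesn't steal the spotlight from whoever is actually talking.
func update(_ p: inout Participant, newLevel: Double, smoothing: Double = 0.8) {
    p.smoothedLevel = smoothing * p.smoothedLevel + (1 - smoothing) * newLevel
}

// The tile shown front and center is simply the loudest smoothed speaker.
func activeSpeaker(of participants: [Participant]) -> Participant? {
    return participants.max(by: { $0.smoothedLevel < $1.smoothedLevel })
}

var call = [Participant(name: "Ana"), Participant(name: "Ben"), Participant(name: "Caro")]
update(&call[1], newLevel: 0.9)   // Ben speaks
update(&call[2], newLevel: 0.2)   // Caro coughs
print(activeSpeaker(of: call)?.name ?? "nobody")   // "Ben"
```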

Other users will shrink down and eventually populate the carousel at the bottom. You can get the list of participants by clicking the Info button. And invitations for more users can be extended while the chat is in progress.

iOS apps

Apple made a point of addressing longtime rumors of a convergence between the company’s desktop and mobile operating systems, flashing a giant “No” onstage. That said, the two OSes are getting even more shared DNA. The biggest news on this front is the porting of three iOS apps to the Mac. This is clearly the first step toward a larger convergence of some kind, but more to the point, it’s a way to start getting app developers to port their iOS apps to the desktop.

Sure, macOS had a huge head start, but iOS has been getting all of the developer love in recent years. Making it easier to create apps cross-system means devs don’t have to decide. It also means that the apps that come to macOS through this method will be more likely to do so through the Mac App Store — a distribution method Apple clearly prefers over more traditional downloads, for myriad reasons.

To start, Apple has brought over News, Stocks, Voice Memos and Home. In my time with Mojave thus far, News is the one I now use pretty regularly. I was a bit hesitant to move to a more walled approach to news delivery, but I do appreciate having a centralized hub of the trusted news sources I visit regularly, coupled with alerts that populate the Notification Center at right.

It’s probably not going to replace my use of TweetDeck for work-related news, as, among other things, it just seems to update more slowly. But it’s a nice tool to have churning in the background, along with a check-in once or twice a day, to make sure I haven’t missed a moment of the horror show that is news in 2018. Fun!

Voice Memos probably has the most limited scope of the bunch. I’ve switched over from various third-party tools I use to record meetings from time to time, and it’s nice having that sharing across devices. Students will likely find it handy for lectures as well, but beyond that, it’s probably not going to get a ton of play for most users.

Home is the most interesting addition of the bunch. Certainly it makes sense, as Apple makes a bigger push to remain competitive against the likes of Amazon and Google in the smart home. The Mac isn’t designed to be a hub in this case — that’s still the job of Apple TV and HomePod, so far as the company is concerned. But the desktop OS does make for a nice control panel, and it’s handy to be able to check in on your place remotely from the comfort of your MacBook.

Given that they are, in fact, ports, not much has changed from a design standpoint. That means it’s essentially the same layout as the one you’ll get on your iPad, with a grid of tidy little boxes representing your various connected home devices. It’s pretty hard to shake the compulsion to reach out and touch the things. Apple, of course, has taken a hard line against incorporating touch into its laptops and desktops, so reaching out won’t get you very far in this particular case.

Odds and ends

  • The Mac App Store gets an overhaul here, including search filtering and new content categories. Apple’s also added the kind of editorial curation it’s had on iOS and other apps.
  • More privacy permissions are always a good thing. In addition to the standard access to Contacts, Calendars, Photos and Reminders, Apple’s added notifications for apps accessing the camera, mic and sensitive data. That means more pop-ups to click through but, more importantly, some extra peace of mind.
  • The system now does “password auditing,” to make sure you don’t reuse the same passwords over and over again.
  • Siri gets a couple of additions on the desktop here, including the ability to add passwords with voice.

 



from Apple – TechCrunch https://ift.tt/2MrDT1Y