When Pokemon Go launched in 2016, it was downloaded more than 500 million times in the first 60 days. Players walked through parks, scanned buildings, and pointed their phones at landmarks to catch virtual creatures layered over the real world. It was a gaming craze. But it was also, as we now know, one of the largest crowdsourced data collection efforts in history.
Niantic, the company behind Pokemon Go, has revealed that photos and AR scans collected through the game produced a dataset of over 30 billion real-world images. That data is now being used to train AI navigation systems for delivery robots.
How did 30 billion images get collected?
Pokemon Go required players to physically visit specific locations and interact with their surroundings through their phone cameras. Every time someone visited a Pokestop, battled at a gym, or completed a task, the game recorded visual data from the phone's camera.
The collection effort got more direct in 2020 when Niantic added "Field Research" tasks. These prompted players to scan real-world statues, landmarks, and other public locations with their cameras in exchange for in-game rewards. A good chunk of the data also came from gyms, the game's battle arenas, which were tied to physical locations that players visited repeatedly.
Because millions of people scanned the same spots at different times of day, in different weather, from different angles and heights, the dataset ended up being far more diverse than anything a fleet of mapping cars could produce. Niantic didn’t deploy a single camera vehicle. It gave people a game and let them do the work.
What is Niantic Spatial?
In May 2025, Niantic sold Pokemon Go to Scopely, a gaming company owned by Saudi Arabia’s Savvy Games Group. At the same time, Niantic spun off a separate AI company called Niantic Spatial, which retained the mapping data and technology.
Brian McClendon, CTO of Niantic Spatial, told MIT Technology Review that the company has built a Visual Positioning System (VPS) trained on those 30 billion images. The system works differently from GPS. Instead of relying on satellite signals, VPS figures out where a device is by analyzing what its camera sees and matching it against Niantic’s database of real-world images.
McClendon said the system covers more than one million locations worldwide and can pinpoint a device’s position within a few centimeters. Each image in the dataset comes with detailed metadata, including the phone’s exact position, orientation, direction, speed, and whether it was moving when the photo was captured.
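The kind of record McClendon describes can be sketched as a simple data structure. This is purely illustrative: the field names and values below are invented for the example, and Niantic's actual schema is not public.

```python
from dataclasses import dataclass

# Hypothetical sketch of the per-image metadata the article describes.
# Field names and values are illustrative, not Niantic's actual schema.
@dataclass(frozen=True)
class ScanMetadata:
    latitude: float     # phone position when the photo was captured
    longitude: float
    altitude_m: float   # meters above sea level
    heading_deg: float  # compass direction the camera faced
    pitch_deg: float    # camera tilt (orientation)
    speed_mps: float    # how fast the phone was traveling
    moving: bool        # whether the player was in motion

# One made-up record for a scan of a landmark in downtown Los Angeles.
sample = ScanMetadata(34.0522, -118.2437, 71.0, 180.0, -5.0, 1.2, True)
print(sample.moving)  # → True
```

It's this extra context, not just the pixels, that makes each image useful for positioning: the system knows where the camera was and which way it was pointing when the photo was taken.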
The Coco Robotics partnership
On March 10, 2026, Niantic Spatial announced a partnership with Coco Robotics, a startup that operates small sidewalk delivery robots for food and groceries. Coco currently has about 1,000 suitcase-sized robots deployed in Los Angeles, Chicago, Jersey City, Miami, and Helsinki. The company says its robots have completed over 500,000 deliveries so far.
These robots travel at roughly 5 miles per hour on sidewalks. To compete with human delivery riders, they need to know exactly where they are at all times, and GPS alone isn't reliable enough for that, especially in dense cities, where tall buildings block and reflect satellite signals.
That's where Niantic Spatial's VPS comes in. The idea is simple: the robot's onboard cameras capture what's around it, and the VPS matches those images against its database to determine the robot's exact location. John Hanke, CEO of Niantic Spatial, put it this way: getting a virtual Pikachu to run around realistically and getting a delivery robot to navigate safely through a city turn out to be the same problem.
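The matching step can be illustrated with a toy nearest-neighbor lookup. This is only a conceptual sketch, not Niantic's implementation: real visual positioning pipelines use learned image descriptors and geometric verification, whereas here each image is reduced to a made-up feature vector and compared by cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def localize(query, reference_db):
    """Return the (lat, lon) of the reference image most similar to
    the query frame. reference_db is a list of (feature_vector,
    (lat, lon)) pairs whose capture positions are already known."""
    _, best_position = max(
        reference_db, key=lambda ref: cosine_similarity(query, ref[0])
    )
    return best_position

# Toy database: two landmarks, each represented by one invented vector.
db = [
    ([0.9, 0.1, 0.0], (34.0522, -118.2437)),  # a statue in Los Angeles
    ([0.1, 0.8, 0.3], (41.8781, -87.6298)),   # a storefront in Chicago
]

# A camera frame that resembles the statue localizes to the statue.
print(localize([0.85, 0.15, 0.05], db))  # → (34.0522, -118.2437)
```

The real system answers the same question at far greater scale and precision, but the shape of the problem is the same: the query image's location is inferred from reference images whose locations are known.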
Why Pokemon Go data is valuable for robotics
A delivery robot has to deal with rain, nighttime, construction, parked cars, and constantly changing street conditions. Staged photography and controlled camera setups don’t capture that kind of variety. Pokemon Go players did, millions of times, across thousands of cities.
For each of the million-plus locations in the dataset, Niantic Spatial has thousands of images taken from slightly different positions, at different hours, under different weather conditions. That density of coverage is what allows the VPS to work even when a location looks different from one day to the next.
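The value of that density can be sketched the same way: if each location keeps reference captures from many conditions, a query frame only has to resemble one of them. Again, this is a toy illustration with invented feature vectors, not the real system.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Each location stores several reference descriptors, one per capture
# condition. Vectors and names are invented for illustration.
locations = {
    "statue": {
        "day":   [0.9, 0.1, 0.0],
        "night": [0.2, 0.1, 0.9],   # looks very different after dark
    },
    "storefront": {
        "day":   [0.1, 0.8, 0.3],
    },
}

def localize(query):
    """A query only needs to resemble ONE of a location's reference
    captures, so denser coverage means more conditions are handled."""
    return max(
        locations,
        key=lambda name: max(
            cosine_similarity(query, desc)
            for desc in locations[name].values()
        ),
    )

# A nighttime frame of the statue still matches, because a nighttime
# reference capture exists in the database.
print(localize([0.25, 0.05, 0.85]))  # → statue
```

With only daytime captures, the nighttime query would match nothing well; the millions of repeat visits are what filled in those gaps.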
Niantic’s longer-term goal goes beyond delivery robots. McClendon has said the company wants to build a “living map” of the real world, one that updates as new data comes in. The plan is to keep collecting spatial data from robots and other devices that use the system, creating a feedback loop where the map gets more accurate over time.
This isn’t the first time user data was quietly repurposed
The Pokemon Go situation has drawn comparisons to Google’s CAPTCHA system. For years, users clicked on images of traffic lights, crosswalks, and bicycles to prove they were human. Computer scientists have long suspected that this data was also used to train AI vision models, particularly for self-driving car projects.
The difference with Pokemon Go is the scale. Thirty billion images with precise spatial metadata, collected over eight years from hundreds of millions of players, is a dataset that no traditional mapping company could have assembled on the same timeline or budget.
At its peak in 2016, Pokemon Go had around 230 million monthly active players. Scopely reported the game still had over 100 million players in 2024. Some estimates put the current active player count at around 50 million, with roughly 5.4 to 5.7 million playing daily as of March 2026.
What this means for players
Niantic Spatial has not announced any plans to share VPS data with law enforcement or other third parties beyond commercial partners like Coco. But the technology itself, a system that can identify exactly where a photo was taken by analyzing the buildings and landmarks in it, raises obvious questions about how this data could be used down the road.
The Scopely acquisition is part of a broader pattern of Saudi investment in the gaming industry, with Savvy Games Group aggressively expanding its portfolio across mobile and PC gaming. Where this data ends up long-term is worth watching.
For now, the immediate takeaway is straightforward. All those hours spent catching Pokemon, scanning landmarks, and walking to gyms weren’t just gameplay. They were building one of the largest real-world visual datasets ever created, and that dataset is now helping robots deliver pizza.

