Python Script to Map Cell Tower Locations from an Android Device Report in Cellebrite

Recently Ed Michael showed me that Cellebrite now parses cell tower locations from several models of Android phones. He said that this information has been useful a few times, but manually finding and mapping the cell tower locations has been a pain in the butt. I figured that it should be easy enough to automate, and Anaximander was born.

Anaximander consists of two Python 2.7 scripts. The first only needs to be run once, to dump the cell tower location information into a SQLite database; the second is run each time you want to generate a Google Earth KML file with all of the cell tower locations on it. As an added bonus, the KML file also respects the timestamps in the report, so modern versions of Google Earth will show a time slider bar across the top that lets you create animated movies or view only the results between a specific start and end time.

Step one is to acquire the cell tower location data. For this we go to and sign up for a free API key. Once we get the API key (instantly) we can download the latest repository of cell phone towers.


Currently the tower data is around 2.2 GB, contained in a CSV file. Once that file downloads you can unzip it to a directory and run the script from Anaximander. The short and simple script creates a SQLite database named “cellTowers.sqlite” and inserts all of the records into that database. The process should take 3-4 minutes and the resulting database will be around 2.6 GB.
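The one-time import step can be sketched roughly like this (Python 3 shown for readability, though the original scripts are Python 2.7; the column names mcc, mnc, lac, cellid, lon, and lat are assumptions about the CSV layout, not taken from the actual download):

```python
import csv
import sqlite3

def load_towers(csv_path, db_path="cellTowers.sqlite"):
    """One-time import of a cell tower CSV dump into SQLite.

    The column names (mcc, mnc, lac, cellid, lon, lat) are assumptions;
    adjust them to match the header row of the CSV you actually download.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS towers (
                        mcc INTEGER, mnc INTEGER, lac INTEGER,
                        cellid INTEGER, lon REAL, lat REAL)""")
    with open(csv_path, newline="") as f:
        rows = ((r["mcc"], r["mnc"], r["lac"], r["cellid"], r["lon"], r["lat"])
                for r in csv.DictReader(f))
        conn.executemany("INSERT INTO towers VALUES (?, ?, ?, ?, ?, ?)", rows)
    # Index the lookup key so later per-tower queries stay fast
    # despite the multi-gigabyte table.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_cell "
                 "ON towers (mcc, mnc, lac, cellid)")
    conn.commit()
    conn.close()
```

An index on the (mcc, mnc, lac, cellid) lookup key is the kind of detail that matters at this scale; without one, every tower lookup is a full scan of a 2.6 GB table.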

Once the database is populated, the next time you dump an Android device with Cellebrite and it extracts the cell towers from the phone, you’ll be ready to generate a map.

From the “Cell Towers” section of your Cellebrite results, export the results as XML. Place that XML file and the file in the same directory as your cellTowers.sqlite database and then run -t <YourCellebriteExport.xml>. The script will parse through the XML file to extract the cell towers and query the SQLite database for the location of each tower. Due to the size of the database the queries can take a second or two each, so the script can take a while to run if the report contains a large number of towers.
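A rough sketch of the lookup-and-map step follows. The CellTower tag and the MCC/MNC/LAC/CID/TimeStamp attribute names are placeholders, not Cellebrite's real export schema, so the parsing calls would need to be adapted to the XML you actually have:

```python
import sqlite3
import xml.etree.ElementTree as ET

def towers_to_kml(xml_path, db_path="cellTowers.sqlite", out_path="towers.kml"):
    """Look up each tower from the report and write a time-stamped KML file.

    The CellTower tag and its MCC/MNC/LAC/CID/TimeStamp attributes are
    placeholder names; adapt them to the real Cellebrite export schema.
    """
    conn = sqlite3.connect(db_path)
    placemarks = []
    for tower in ET.parse(xml_path).getroot().iter("CellTower"):
        mcc, mnc = tower.get("MCC"), tower.get("MNC")
        lac, cid = tower.get("LAC"), tower.get("CID")
        when = tower.get("TimeStamp")  # assumed ISO 8601 timestamp
        row = conn.execute(
            "SELECT lon, lat FROM towers "
            "WHERE mcc=? AND mnc=? AND lac=? AND cellid=?",
            (int(mcc), int(mnc), int(lac), int(cid))).fetchone()
        if row is None:
            continue  # tower absent from the downloaded data set
        placemarks.append(
            "<Placemark><name>%s-%s-%s-%s</name>"
            "<TimeStamp><when>%s</when></TimeStamp>"
            "<Point><coordinates>%s,%s</coordinates></Point></Placemark>"
            % (mcc, mnc, lac, cid, when, row[0], row[1]))
    conn.close()
    with open(out_path, "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>'
                '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
                + "".join(placemarks) + "</Document></kml>")
    return len(placemarks)
```

The per-placemark `<TimeStamp><when>` element is what drives the time slider in modern versions of Google Earth.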


Ed was kind enough to provide two reports from different Android devices and both parsed with no issues. Once the script is finished it will let you know how many records it parsed and that it generated a KML file.


This is what the end results look like.


The script can be downloaded from:

This is the first version and there are several improvements to make but I wanted to get a working script out to the community to alleviate the need for examiners to map the towers one at a time. Special thanks again to Ed Michael for the idea for this (and one other) script as well as for providing test data to validate the script.

Follow my blog for up to date digital forensics news and tips:

About Matt:

Matt performs technical duties for the U.S. government and is a Principal at Argelius Labs, where he performs security assessments and consulting work. Matt’s extensive experience with digital forensics includes conducting numerous examinations and testifying as an expert witness on multiple occasions.

A recognized expert in his field with a knack for communicating complicated technical issues to non-technical personnel, Matt routinely provides cyber security instruction to individuals from the Department of Defense, Department of Justice, Department of Homeland Security, Department of Interior, as well as other agencies, and has spoken frequently at information security conferences and meetings. Matt is a member of the SANS Advisory Board and holds 11 GIAC certifications. Among them: GREM, GCFA, GPEN, GCIH, GWAPT, GMOB and GCIA.



DIY app forensics: What does it take?

Digital evidence from the millions of apps currently available in the Google Play Store is frequently material to criminal and civil cases and investigations. Yet app evidence is time consuming and costly to decode, analyze, and produce while facing deadlines and a backlog of cases.

What’s in app support? At Mobile Forensics World this year, you have a chance to find out. On Tuesday, June 3, John Carney and Don Huettl, of Minneapolis (Minnesota, US)-based Carney Forensics, are presenting a two-part lecture and live demo on what it took for them to develop plugin support for the Burner Android app. We took the time to sit down with John and get the story behind the lectures.

Cellebrite: What first drove you to start developing plug-ins to support third party apps?

John Carney: We’ve seen a dramatic change in mobile phone architecture in recent years as smart phone and tablet makers rely on apps as basic building blocks.

This makes for an industry challenge faced by tools vendors and examiners alike.  Over one million iOS apps and one million Android apps are available today through app stores, but automated forensic analysis is supported for only a few hundred.

And, even though scripting capabilities exist for examiners to develop their own forensic app support, very few are decoding apps and writing the scripts and plug-ins to probe their device evidence.  We wanted to show examiners a path forward and how to get involved.

CB: How did you come to choose this particular app?

JC: Mobile messaging apps are an extremely interesting family of mobile apps that phone users are shifting to in great numbers all over the world as they abandon traditional text messaging offered through the service providers.

We noticed examples of these apps that support message deletion and user-specified retention periods after which they are deleted.  Snapchat is perhaps the best example.  TigerText is another.  We chose to support Burner.

We wanted to see if we could find message evidence after the message was deleted or “burned”, and to support a new app that the tools vendors did not support.  Cellebrite now supports Burner on iOS, but ours is the only Burner plug-in or script available for Android.

CB: What challenges did you face at the outset?

JC: We had to choose a reasonably interesting app that was supportable and an app platform that made sense for us. We made our determination using three criteria:

  1. We wanted to add something of value to existing app support. For example, because GoSMSPro uses the same core data structures that UFED already decodes for other SMS apps, we found there was really no work to be done.
  2. The app data couldn’t be too difficult to acquire. It would be fruitless to try to support an app whose data is encrypted.
  3. Along similar lines, we wanted to support an app that would give us plenty of artifacts to uncover. Some app developers, who are experienced with writing secure apps, do a lot of garbage collection and data wiping along the way. They don’t leave much behind as a result.

Burner, as it turned out, gave us an almost “Sherlock Holmesian” opportunity—after the phone number is burned, we found we had a shot at finding artifacts left behind, and we did!

Then, we had to construct a development environment that gave us about half a dozen features that would make our research, development and testing flow more easily. Basically, we built a “nest” for doing productive work: in the short term, nimble, fast, cost effective results, and for the long term, investment in future development.

For example, virtual phone support—Android emulators—allowed for experimentation across makes and models without a significant cost outlay. We could then create two virtual phones and have them call and text each other from a single platform.

For another example, platform virtualization allows us to take advantage of various computing architectures. Developers can use Mac, Windows or Linux platforms for full flexibility in the development environment.

Another challenge, and one of our most critical, was learning how to decode mobile app evidence. We also had to learn how Cellebrite encodes phone evidence for reporting our results, and how to use advanced analytic options like timelines, maps, and activity analytics.

On the other hand, having looked at other plug-in writing environments, we can say that UFED Physical Analyzer offers the best support for developers. It is equipped with advanced SQLite and plist decoding, highly modular decoding chains, and it provides an excellent debugger. We don’t have to worry about flash translation layers, reconstructing file systems, or parsing common phone data structures.

We wanted to be 80% done with plug-in development from the moment we started, and UFED gave us that level of advanced and broad-based support in a way that many other tools do not.

CB: What did you find you needed in terms of resources (time, team members, etc.)?

JC: We needed a skilled software engineer with digital forensics training who understood object-oriented development and who could quickly learn Python.  Don Huettl had those skills and was also a clever designer who constructed a highly innovative development environment. Don came to us as part of an internship with a degree program from a nearby academic institution, where I serve on the advisory board. In addition to the right people, we needed time to decode our app, and write and test our Python code.  We also had to learn how to present our project so that examiners could understand and appreciate what we had done.

This took several iterations of slide decks, including a comprehensive live demo of our development environment. Don shows how we decode the app, take the script and turn it into a plug-in, put it on a decoding chain, perform the examination, and then create a report—all in a way that anyone could understand, even if they don’t have a background in scripting.

Documentation is key to this process. It’s good scientific practice anyway, but in this case, it provides the framework for learning how to do this. Besides documentation of our own methods, we found that the IronPython libraries and .NET libraries were critical to our success, and important for sharing with the community. Finally, we found that we needed more than one UFED Physical Analyzer license to support the decoding, development, and testing of our plug-in.

CB: What skills did you and your team members already have, and what skills needed to be developed or sourced?

JC: We had software architecture, design, and engineering skills.  I was a software engineer and architect in a former life and an experienced mobile device forensics examiner for the past five years.

Don was an experienced software engineer who learned computer and mobile forensics and got certified during his degree program.  He was looking for a challenging internship.  We didn’t need any more skills than that.

CB: What technical challenges did you face at various stages in the project?

JC: We had to learn how to decode mobile apps, including SQLite app databases, and how to expose the other artifacts and files belonging to our target app.
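As a generic illustration of that first decoding step, a triage pass over an unknown app database might look like this (plain sqlite3 usage, not the plug-in code John and Don wrote; every app's schema differs):

```python
import sqlite3

def survey_app_db(db_path):
    """First-pass triage of an unknown app database: map every table to its
    column names and one sample row, to sketch the app's data model before
    writing any plug-in code. Generic sqlite3 usage; schemas vary per app."""
    conn = sqlite3.connect(db_path)
    summary = {}
    for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"):
        # PRAGMA table_info rows are (cid, name, type, notnull, default, pk);
        # index 1 is the column name.
        cols = [c[1] for c in conn.execute('PRAGMA table_info("%s")' % name)]
        sample = conn.execute('SELECT * FROM "%s" LIMIT 1' % name).fetchone()
        summary[name] = (cols, sample)
    conn.close()
    return summary
```

A survey like this is how you start connecting the app's user model to its data model before committing to a decoding chain.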

We had to find phone emulators for Android phone models and learn how they worked and what didn’t work. The quality of the emulators and how many features they support or don’t support figured into this research.

For example, creating two different virtual devices—different makes and models—with a full range of functionality might mean that certain VoIP apps, or forwarding (rather than simply sending and receiving) text messages, crash the emulator. We had to figure out how to work around the bugs.

We also had to learn how UFED Physical Analyzer organizes and structures phone data for presentation to examiners. In other words, we had to figure out how to plug the examination results back into UFED PA so that reporting and analytics would work on the back end.

We had to learn and develop debugging techniques for perfecting our Python script and plug-in. Even for a software engineer with plenty of experience, the debugger, which provides an atomic level look at code execution and data, is important to figure out why something isn’t working.

Fortunately, the UFED’s support for the debugging environment in Python shell made this trial and error process much easier.

CB: What have you learned thus far about the plug-in development process?

JC: We’ve learned that the process is very dependent on the specific mobile app that we have targeted to support.  We have to become experts on our app. This involves understanding the app’s user model, what the app’s purpose is, what it does and doesn’t do, and so forth.

Decoding the app, in turn, requires understanding the connection between the user model and the data model. You can’t have just a passing knowledge of the app and expect to be able to write a plug-in; you need to understand the app at the same level as its own developer.

We’ve learned that encryption and cleansed data are not our friends as we attempt to acquire and report phone evidence.

We’ve learned that leveraging UFED in our work is like standing on the shoulders of a giant.  Physical Analyzer helps us with decoding, reporting, and debugging.  And all of the various pre-existing UFED plug-ins acquire, translate, reconstruct, and prepare mobile app data for us so that we can do our best work.

We’ve learned that we have to document our process and our code so that we can remain nimble, grow our team, and develop quality plug-ins.

CB: What will you be exploring in future research and development?

JC: Many app families are interesting to us including personal navigation, spyware and malware, and also payment. We want to explore additional mobile apps that have not been decoded and automated by any of the tools vendors yet, but that are desperately needed by examiners.

Because we’ve only developed one plug-in, we don’t yet have a quantitative idea what kind of time commitment is required for different kinds of apps.

However, understanding that mobile examiners are busy people, it may become possible and necessary for people to plug in to the process at different points and share their skills and aptitudes. Rather than developing “cradle to grave” plug-ins, in other words, one person might focus on decoding, another on script testing, etc.

We also want to construct a development environment for iOS including iDevice emulators so that we can develop multi-platform app plug-ins.

Join John and Don for their two-part presentation in Oleander A on Tuesday, June 3. From 11:00 – 11:50 a.m., John will present “A Case Study in Mobile App Forensics Plug-in Development – Examiners/Developers to the Rescue” (Part 1). From 4:30 – 5:20 p.m., Don will present “A Case Study in Mobile App Forensics Plug-in Development – Build Your Own Plug-ins” (Part 2). We hope to see you there!

Partnership with the CCL Group brings new Android password carver to UFED Physical Analyzer

As useful as our Android pattern/PIN/password lock bypass is to so many of our customers, at times, the password itself is needed. Perhaps a forensics examiner wants to validate extraction results manually, or believes the same password protects a different device.

Still, not all physical extractions are automatically decoded. Without the file system reconstruction that decoding provides, examiners must manually carve the password from wherever it is stored within the device’s operating system. This can add time to the forensic process, especially if the examiner must refer the device to a specialist. It might even be impossible if the examiner lacks carving skills, or the access to an expert who has them.

With our soon-to-be-released UFED Physical Analyzer 3.7, we’re pleased to introduce a new Android password carver—thanks to the efforts of the CCL Group, the United Kingdom’s largest private digital forensics company. Having produced 300 scripts as part of its digital forensics research and development efforts, last year CCL Group’s lab developed a Python script that could carve a numeric password from an Android physical extraction or from third-party image files.

The premise, as they explained in their blog:

As with the pattern lock the code is sensibly not stored in the plain, instead being hashed before it is stored. The hashed data (both SHA-1 and MD5 hash this time) are stored as an ASCII string in a file named passcode.key which can be found in the same location on the file system as our old friend gesture.key, in the /data/system folder.

However, unlike the pattern lock, the data is salted before being stored. This makes a dictionary attack unfeasible – but if we can reliably recover the salt it would still be possible to attempt a brute force attack.
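The brute-force idea in the quoted post can be sketched as follows. The exact layout assumed here (uppercase hex of SHA-1 followed by MD5 of the PIN concatenated with the salt rendered as hex) is our reading of Android's lock-screen scheme, not code taken from CCL, so verify it against a known device before relying on it:

```python
import hashlib
from itertools import product

def crack_numeric_passcode(target_hex, salt, max_len=6):
    """Brute-force a numeric passcode given the recovered salt.

    Assumed passcode.key layout: uppercase hex of SHA1(pin + salt_hex)
    followed by MD5(pin + salt_hex), where salt_hex is the 64-bit salt
    rendered as lowercase hex. This layout is an assumption; confirm it
    against a device with a known passcode first.
    """
    salt_hex = format(salt & 0xFFFFFFFFFFFFFFFF, "x")
    for length in range(1, max_len + 1):
        for digits in product("0123456789", repeat=length):
            pin = "".join(digits)
            salted = (pin + salt_hex).encode()
            candidate = (hashlib.sha1(salted).hexdigest() +
                         hashlib.md5(salted).hexdigest()).upper()
            if candidate == target_hex.upper():
                return pin
    return None  # not found within max_len digits
```

The salt itself still has to be recovered separately from the extraction (for example, from the device's settings database), which is exactly the carving problem the CCL research addresses.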

The CCL developers made their code openly available for other researchers to dig into. Cellebrite’s co-CEO and Chief Technology Officer, Ron Serber, believed that the code was a natural fit within the UFED Physical Analyzer platform.

However, the code was written independently of our infrastructure. With CCL’s permission and partnership, we rewrote the Python code so that it could be used within our platform. On its own or as part of a plugin chain, the carver enables recovery of numeric passwords from physical image files extracted by UFED, JTAG, chip-off or other tools.

We’re introducing the carver together with UFED Physical Analyzer 3.7 in just a few days. Current license holders will receive an email with download links; if you’re not a current customer, please download our free UFED Physical Analyzer 30-day demo.