Industry Opinion


The journey from trains to self-driving cars

Sravan Puttagunta, co-founder and CEO of Civil Maps, shares the company’s unique approach to solving many of the toughest mapping and localization challenges facing the autonomous driving industry today. 

In 2013, we secured our first contracts with railroad companies that were struggling with serious data management issues while trying to create 3D maps for their train control algorithms. Our heavy-industry customers with small-scale deployments were generating a few terabytes of data each day. Their internal R&D and IT teams were not prepared to handle this volume, and they were falling behind their processing schedules.

When working with us, they would send our team the collected point cloud data on hard drives, or they would upload it to our cloud infrastructure. While our platform was faster than manual processing, we soon discovered that even cloud-based infrastructure was overwhelmed by the large amounts of inbound 3D point cloud data. Throughout the railroad industry, as well as other heavy industries, this kind of raw geospatial data often sat in warehouses for months, and the delays affected safety: the data could not be processed fast enough to produce accurate, up-to-date maps, so the environment would change between the time the data was collected and the time the maps were actually published.

From cloud-based, asynchronous processing towards the edge
In response, we made a pivotal decision to shift our product roadmap from a cloud-based mapping solution to edge-based map creation, processing the map data in-vehicle, near the actual source.

With autonomous vehicle research programs ramping up in the USA and Asia, we soon found a new type of client beyond railroad operators – auto makers and mobility companies. Seeing the greater challenge of millions of miles of public roads and millions of cars, versus thousands of miles of train tracks and thousands of locomotives, we decided to focus our technology on enabling self-driving cars. Specifically, we set out to develop software that could provide autonomous vehicles with cognition, similar to the mental routines of biological systems.

At that time, these new clients were dependent on very data-heavy base maps. Many companies use base maps to help their cars localize and navigate. They are known throughout the industry as the ‘building blocks’ upon which data layers, such as map updates, can be added.

A ‘heavy’ map of a city can easily consist of several terabytes of data. For example, a map of San Francisco can take up to 4 terabytes. While it is certainly possible to process large amounts of map data in the cloud or at the edge, it requires a substantial, ongoing investment in infrastructure, such as server farms and/or expensive compute capabilities. Based on our experience, we didn’t consider that to be a practical solution.

We knew that edge processing with a leaner base map would be a better methodology for the long term. For short-term needs in small, drivable research areas, spending money on extensive infrastructure overheads can make sense. However, once budget pressure sets in and the need to refresh the map increases with scale, the cost quickly becomes difficult to justify.

Consider conventional versus edge mapping: conventional base map creation using lidar is an expensive and slow process, taking weeks and sometimes months to build an acceptable base map. A trained data-collection driver must go out on a survey trip in an expensive mapping car equipped with in-vehicle storage arrays to harvest the data. Upon return, the storage system must be physically removed and shipped to a data center for processing.

We consider this method to be ‘data-rich, but operationally poor’. Though one can collect a wealth of data from many powerful sensors to enable mapping and localization, the data itself makes the undertaking operationally poor; it is difficult and inefficient to collect and manage. It necessitates heavy, in-car compute resources as well as a sizable investment in storage and bandwidth, consequently leading to high energy requirements to power the system. We do not expect these methods to scale.

Map or go home
Moreover, the impracticality of conventional systems will actually hinder widespread adoption and full autonomy – the car being able to drive with no human intervention wherever it needs to go. This is because conventional base maps actually restrict the scale of autonomous vehicle operations by limiting where the car can drive itself.

In addition, a passenger’s travel options are tied to how much base map data can be physically stored in the vehicle. Let’s look at a scenario involving an autonomous ridesharing business under these conditions. Passengers in San Francisco who want to be driven 50 miles south to San Jose will not be able to do so if the robo-taxi does not have a map of San Jose already stored in the car.

Driving around with a map of San Francisco and all the cities needed to travel through in order to get to San Jose would mean the vehicle could need approximately 10-20 terabytes of map data to transport the passenger to their destination and any impromptu stops along the way. All that data must be updated at least daily, if not more often. Road infrastructure and traffic rules change day-to-day (sometimes hourly), and a map for an autonomous vehicle must reflect that, otherwise it becomes a significant safety issue, as it did for our early railroad clients.
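As a back-of-envelope check of the figures above: the 4-terabyte-per-city size is the article's own estimate for San Francisco, while the number of map regions needed along the San Francisco to San Jose corridor and the cellular link speed are assumptions for illustration.

```python
TB = 1024**4  # bytes in a terabyte (binary)

regions_on_route = 4          # assumed: SF, two Peninsula regions, San Jose
bytes_per_region = 4 * TB     # the article's figure for a San Francisco base map

onboard = regions_on_route * bytes_per_region
print(onboard / TB)           # 16.0 -- inside the 10-20 terabyte range above

# Refreshing that map set daily over an assumed 50 Mbit/s cellular link:
refresh_hours = onboard * 8 / 50e6 / 3600
print(round(refresh_hours))   # ~782 hours -- far longer than one day
```

Even with generous assumptions, a daily refresh of a multi-terabyte onboard map cannot fit through a cellular link, which is why physically shipping storage arrays remains the conventional workflow.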

Lightweight, fingerprint base map
This is one of the problems that Civil Maps is addressing, and it is where edge processing really shines. Instead of producing a multi-terabyte base map of San Francisco that has to be processed at a data center, we can create a Fingerprint Base Map of only approximately 400 megabytes for the same region. Using depth data gathered from a vehicle, we extract the critical map data, process it in-vehicle, and transform it into lightweight, environmental ‘fingerprints’.

Because we are taking gigabytes of map data and reducing them to kilobytes, we can send our fingerprint data to the cloud over existing cellular networks (3G and 4G) for map aggregation and for sharing with other vehicles in our network. Changes detected at the edge can also be sent over the air. The vehicle uses our fingerprints for localization, which we are able to achieve in six degrees of freedom (6DOF).
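The core idea of reducing a dense point cloud to a compact, transmittable signature can be sketched in a few lines. This is an illustrative toy, not Civil Maps' actual algorithm: the voxel size, the per-tile scope, and the use of a hash digest are all assumptions, and a real fingerprint must preserve enough structure for 6DOF matching rather than just change detection.

```python
import hashlib
import struct

def fingerprint_tile(points, voxel_size=0.5):
    """Quantize raw 3D points into occupied voxels, then hash the sorted
    voxel set into a fixed-size digest (hypothetical scheme, for
    illustration only)."""
    voxels = sorted({(int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)) for (x, y, z) in points})
    h = hashlib.sha256()
    for v in voxels:
        h.update(struct.pack("<3i", *v))
    return h.digest()

# Millions of raw lidar returns per tile collapse to a 32-byte digest
# (plus whatever sparse voxel data a matcher would keep) -- small enough
# to upload over a 3G/4G link and compare against a fleet's shared map.
cloud = [(1.20, 0.40, 0.10), (1.31, 0.45, 0.12), (10.0, 5.0, 0.20)]
print(len(fingerprint_tile(cloud)))  # 32
```

The point of the sketch is the asymmetry it demonstrates: the vehicle does the heavy geometric work locally, and only a tiny, canonical summary ever crosses the network.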

Civil Maps’ advantage is that our localization software can use a much lighter base map to accomplish this task. Whenever a different or new base map is needed, the car can download it while driving and use it immediately. For example, if the vehicle is in Tahoe City, California, and it needs a map of Nevada to cross the state line and get to Incline Village, our system enables the car to download the area map of Nevada while on the go.

This is the future we are working toward. A few years ago, our early railroad customers saw their predicament and began moving to edge-based, scalable map creation. We see the same thing happening to the autonomous vehicle industry, and Civil Maps is pre-emptively preparing for that transition.

In our industry today, what we have is exponential growth in depth data generation, aggravated by the fact that our ability to process this data is only growing linearly. It does not have to be this way, and we are confident that our solutions will enable truly scalable autonomous driving.

November 29, 2017


