
State of the art hardware

Packed EyeVi mobile mapping system hardware
Quick setup  

Get it up and mapping within an hour. The equipment arrives in two padded, secure boxes. It’s light and easy to assemble – watch our assembly video and see for yourself.

EyeVi mobile mapping system hardware mounted on a car
Easy to use

All pieces of the equipment fit together easily and securely – no specialist is needed to assemble or use it. Be it a Mustang, a Beetle, or anything in between, you can mount it on any type of car!

EyeVi mobile mapping system payload
Market-leading sensors

We build our hardware from market-leading sensors so that it meets the most demanding requirements. It consists of a panoramic camera, a LiDAR scanner, and a GNSS/INS unit.

EyeVi’s software consists of four components

Efficient fieldwork operations

EyeVi DataCapture

Specialized fieldwork software for rapid, efficient data capture. It automatically handles the collection of GNSS/INS data, LiDAR data, and raw panoramic camera imagery.
Automated data processing pipeline

EyeVi DataFlow

An automated data processing pipeline that turns the raw sensor input into three ready-to-use datasets: 360° panoramic images, orthophotos, and a full-coverage point cloud. Together with position and orientation metadata, these datasets serve as inputs for the Feature Factory (AI module).
Automated feature extraction from multiple visual inputs

Feature Factory

Depending on the AI task, the Feature Factory (AI module) uses one input dataset or a combination of them. The output of each task is a georeferenced data layer (in .csv or .shp format) that can be used together with the imagery. Based on the input data applied, the AI process falls into three categories: orthophoto-based, panoramic-imagery-based, and 3D-point-cloud-based feature extraction. The AI algorithm-based automated feature detection runs as a cloud-based process.
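As a rough illustration of how such a georeferenced .csv layer might be consumed downstream, here is a minimal sketch. The column names and sample rows are our own hypothetical schema, not EyeVi’s actual export format:

```python
import csv
import io

# Hypothetical sample of a georeferenced feature layer: one detected
# object per row with WGS84 coordinates and a detection confidence.
# The schema is illustrative only, not EyeVi's actual export format.
sample = """feature_class,lat,lon,confidence
traffic_sign,59.437000,24.753600,0.97
road_marking,59.437120,24.753810,0.88
traffic_sign,59.437310,24.754050,0.91
"""

def load_layer(text, min_confidence=0.9):
    """Parse the CSV layer, keeping detections above a confidence cutoff."""
    rows = csv.DictReader(io.StringIO(text))
    return [
        (r["feature_class"], float(r["lat"]), float(r["lon"]))
        for r in rows
        if float(r["confidence"]) >= min_confidence
    ]

features = load_layer(sample)
print(features)  # keeps the detections at confidence 0.97 and 0.91
```

A real layer would typically be read from disk (or, for .shp, with a GIS library) and overlaid on the imagery using the accompanying position and orientation metadata.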
Web application for verification and correction

Web Application

The detected layer is loaded into the visual interface of the EyeVi Web Application, where verification, correction, and quality checks of the data are carried out.

Hardware specifications

360° panoramic camera
  • Resolution 30 MP, with six high-sensitivity 5 MP Sony CCD global-shutter sensors

  • Pixel size at 10 m from the camera: 0.78 cm

  • Images captured every 3 m (customizable)

  • Image data output 12 or 16 bits
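To put the pixel-size figure in perspective: since the camera’s angular resolution is fixed, the ground footprint of a pixel grows linearly with distance. This small sketch extrapolates from the quoted 0.78 cm at 10 m (the linear-scaling assumption is ours, not a published EyeVi formula):

```python
# Quoted spec: one pixel covers 0.78 cm at 10 m from the camera.
# Assuming constant angular resolution, footprint scales linearly
# with distance (our assumption for this sketch).
PIXEL_CM_AT_10M = 0.78

def pixel_footprint_cm(distance_m):
    """Approximate ground size of one pixel at a given distance."""
    return PIXEL_CM_AT_10M * (distance_m / 10.0)

print(pixel_footprint_cm(10))            # 0.78
print(round(pixel_footprint_cm(25), 2))  # 1.95
```

So an object 25 m away is imaged at roughly 2 cm per pixel – useful for judging which roadside assets remain readable in the imagery.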

LiDAR scanner
  • Measurement range up to 100 m

  • Accuracy: ±3 cm

  • Field of view (vertical): 30° (+15° to -15°)

  • 360° coverage

  • Class 1 – eye safe

GNSS/INS system
  • Dual GNSS antennas

  • Positioning accuracy 0.8–1.5 cm (post-processed trajectory, no GNSS outage)

  • Supported navigation systems: GPS L1, L2, L5; GLONASS L1, L2; GALILEO E1, E5; BeiDou B1, B2


Join our journey for mapping the future

Want to optimize your costs and think we could be the perfect solution for that? Contact us by email or schedule a call with us on our contact page.

Contact us

Gaspar Anton

Founder and CEO

We are more than happy to help you!