The computer that powers AImotive’s driverless car.

The AImotive office is in a small converted house at the end of a quiet residential street in sunny Mountain View, spitting distance from Google’s headquarters. Outside is a branded Toyota Prius covered in cameras, one of three autonomous cars the Hungarian company is testing in the sleepy neighbourhood. It’s a popular testing ground: one of Google’s driverless cars, now operating under spin-out company Waymo, zips past the office each lunchtime.


While other autonomous car projects, including those from Waymo and Uber, rely on an expensive but very useful radar-like sensing system called Lidar for depth perception and obstacle detection, plus cameras for reading the colour of traffic lights and signs, AImotive is trying to do the same using regular cameras combined with artificial intelligence. This means the company can convert a regular car into a driverless one for a fraction of the price: around $6,000, as opposed to $70,000-$100,000.
“The whole traffic system is based on the visual system,” explained founder and CEO Laszlo Kishonti. “Drivers don’t have bat ears and sonars, you just look around and drive.” AImotive, founded in 2015, wants to replicate that human ability. “The only way to do this is with AI,” Kishonti said.

The car features four fish-eye cameras, covering each side of the vehicle, as well as dual stereo cameras facing forward and backward. The trunk of the car is filled with a chunky, power-intensive PC that stitches feeds from the cameras – the “eyes” of the car – together in real time to create a three-dimensional model of the environment.
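AImotive has not published how its stitching pipeline works, but the core trick of a camera-only system – recovering depth from two slightly offset views – can be sketched with off-the-shelf tools. Below is a minimal illustration using OpenCV’s stereo block matcher; the file names and calibration values are hypothetical stand-ins, not AImotive’s.

```python
# A minimal stereo-depth sketch: NOT AImotive's code, just an
# illustration of how a camera pair can yield distance estimates.
import cv2
import numpy as np

# Rectified left/right frames from a forward-facing stereo rig
# (file names are hypothetical).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds how far each patch shifts between the two views
# (the disparity); nearby objects shift more than distant ones.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Depth falls out of similar triangles: depth = focal * baseline / disparity.
# The calibration numbers below are invented for the example.
focal_px, baseline_m = 700.0, 0.30
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
```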
Artificial intelligence is applied to the video feeds to make sense of the surrounding environment, so the system understands what’s around the car and at what distance. Is the object a human, a car, a cat, a road or a sidewalk? If it’s a human or animal, where are they going? Are they about to stop? Or run across the road? These are relatively simple tasks for humans, but challenging for computers faced with frames of pixels.
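AImotive’s networks are proprietary, but the per-frame “what is that object?” step can be approximated with a publicly available detector. The sketch below runs a COCO-pretrained model over a single frame; treat the model choice, file name and score threshold as assumptions made purely for illustration.

```python
# Per-frame object detection with an off-the-shelf model, standing in
# for AImotive's proprietary perception networks.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("frame.jpg")  # hypothetical camera frame
with torch.no_grad():
    detections = model([to_tensor(frame)])[0]

# Each detection is a class (person, car, cat, ...), a bounding box and a
# confidence score; a driving stack would pass these on to tracking and
# motion prediction to answer "where is it going?".
for box, label, score in zip(detections["boxes"], detections["labels"],
                             detections["scores"]):
    if score > 0.5:
        print(int(label), [round(float(v), 1) for v in box], float(score))
```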
In a test drive around Mountain View, the Guardian got to see through the many different eyes of the machine, watching it map distances and objects in real time. The car’s live view is combined with a GPS-type “location engine” that places the vehicle on a map. AI also powers the “motion engine” which plots the vehicle’s driving path on the road, and this is fed into a “control engine” which informs the steering, braking and acceleration of the car.
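None of these interfaces are public, but the division of labour the company describes – localise, plan, actuate – maps onto a simple control loop. The sketch below invents minimal types and placeholder logic purely to show the dataflow; none of the names are AImotive’s.

```python
# Illustrative dataflow for the location -> motion -> control chain.
# All types and logic here are invented placeholders.
from dataclasses import dataclass

@dataclass
class Pose:  # what a "location engine" might output
    x: float
    y: float
    heading: float

@dataclass
class Waypoint:  # one point on the planned driving path
    x: float
    y: float
    target_speed: float

def motion_engine(pose: Pose, obstacles: list) -> list[Waypoint]:
    """Plot a driving path from the current pose (placeholder logic)."""
    return [Waypoint(pose.x + 1.0, pose.y, target_speed=10.0)]

def control_engine(path: list[Waypoint]) -> dict:
    """Turn the next waypoint into steering/brake/throttle commands."""
    nxt = path[0]
    return {"steer": 0.0,
            "throttle": 0.2 if nxt.target_speed > 0 else 0.0,
            "brake": 0.0}

# One tick of the loop: localise, plan, actuate.
commands = control_engine(motion_engine(Pose(0.0, 0.0, 0.0), obstacles=[]))
print(commands)
```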

Unfortunately, AImotive has yet to get its California driverless car licence, so the Prius was controlled by a human for our 10-minute ride. However, the fully autonomous mode has been tested on roads and in underground car parks in Budapest, where AImotive is headquartered.
Lidar became the standard sensing technology for autonomous vehicles after cars fitted with it performed well in DARPA’s driverless car competition a decade ago, Kishonti said.
By launching in 2015, AImotive hit a sweet spot in artificial intelligence and computer vision. “The speed of research in the field has been remarkable,” said COO Niko Eiden.

The drawback of a vision-based system rather than a Lidar-based one is that it is limited by what it can see – just like humans. This means performance is poorer in fog or snow. On the flip side, AImotive’s approach is in theory more flexible: the car could be dropped into any city and know what to do, without first needing a detailed 3D map of the roads as a reference point.
This should mean that AImotive’s cars can handle unpredictable changes to the route, for example when roadworks redirect traffic onto the “wrong” side of the road.
“Every day there’s a temporary traffic sign somewhere. What does the first car that sees it do?” asked Kishonti.

The AImotive team uses a more extreme scenario as a thought experiment: “What if an elephant escapes from the circus?” No matter how many times a car drives round Mountain View, it is not going to be able to account for these kinds of “black swan” events.
“We would recognise it as a big mass of body, but probably would not be able to forecast what this animal will do,” admitted Kishonti.
To teach the system how to handle extreme conditions, AImotive uses a simulator to drive millions of virtual miles, testing how its car reacts across a wide range of driving conditions and interactions with other cars, people and animals.
“You can go to Scandinavia and see snow, but there’s no New York in Scandinavia. How do you test all types of traffic with snow?” asked Kishonti. With simulations, that’s how.
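The shape of that testing is easy to imagine even without access to AImotive’s simulator: enumerate combinations of weather, traffic and rare events, and score a virtual drive through each. The sketch below is a toy version of that sweep; the scenario names and the pass/fail stub are invented for illustration.

```python
# Toy scenario sweep: combine weather, traffic and rare "black swan"
# events and run each combination through a (stubbed) simulator.
import itertools
from typing import Optional

weather = ["clear", "rain", "fog", "snow"]
traffic = ["empty_highway", "suburban", "dense_city"]
surprises = [None, "jaywalker", "escaped_elephant"]

def run_scenario(w: str, t: str, s: Optional[str]) -> bool:
    """Stand-in for one simulated drive; a real run would score the
    virtual car's behaviour rather than always passing."""
    print(f"simulating: weather={w}, traffic={t}, surprise={s}")
    return True

results = {combo: run_scenario(*combo)
           for combo in itertools.product(weather, traffic, surprises)}
print(f"{sum(results.values())}/{len(results)} scenarios passed")
```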
AImotive doesn’t have any plans to build its own cars from scratch, but is working with companies like Volvo to provide the self-driving “brains” of cars and trucks.
Like Otto, the truck automation company that Uber recently acquired (and is currently the target of a major lawsuit over allegedly stealing trade secrets from Waymo), AImotive believes that the trucking industry is ripe for automation.

“Trucking companies are competing for 1-2% price difference, but 60% of the cost is the driver,” said Kishonti. However, he recognises that there are political hurdles over and above the technical ones.
“Trucking companies would be happy if the drivers could be eliminated but the trade unions will have different ideas,” he said.
In the meantime, AImotive can provide truck companies with safety features, such as detecting cyclists in drivers’ blind spots.
For now, the company is focused on getting the licence to drive its cars legally in California and then Nevada. Once the paperwork’s in place, it will launch a highway pilot “in a few months” before experimenting in urban settings in early 2018.