Our drone carrying Anywave, taking off from a potato field.

Drone Startup — What we were doing

Vladimir Akopyan
Quickbird
11 min read · Sep 27, 2016


About the time I was finishing university, I got absolutely obsessed with drones. For some reason the idea of flying robots took root in my mind, and over the next few months I was pestering my friend with all kinds of impractical uses for them. It was mostly banter until I stumbled into agriculture.

I learned that the crops we grow and eat are constantly being attacked by pests and diseases. It turns out that just one disease, potato blight, does a whopping £55 million worth of damage in the UK alone, and it manages that while farmers spend a similar amount of money spraying fungicide to protect the plants.

The blight is caused by Phytophthora infestans, a fungus-like organism that attacks potatoes and tomatoes. Image from Wikipedia.

Because the fields are so huge, you can't regularly inspect all of them on foot. If the farmer had accurate information as soon as a disease appears, he could save on chemicals and save the crops. It was evident that we could acquire that accurate information using a drone, an infrared camera and some clever software. And this is basically the idea on which I founded Quickbird with my co-founder Animesh.

The article is split into two parts. Part 1 tells our story; Part 2 is a lot more technical and talks about the design decisions and struggles we faced.

As always, you can find a link to a git repository at the end of the article with all the code and CAD models. All images and materials are mine or my teammates’.

How does that work and what’s Anywave?

Infrared

The image below illustrates the wavelength of light vs. the colour we see. It turns out that when it comes to monitoring plants, the most interesting things happen in the near infrared. Written as NIR, this region starts at 700 nm and continues from there.

Wavelength in nanometers [nm] vs visible colour. Infrared is > 700 nm. Image from Wikipedia

That’s because plants ‘consume’ most of the visible light for photosynthesis, so you can’t see a whole lot. The graph below shows how a potato plant changes as it gets sicker. All the interesting stuff happens in the >700 nm region, just beyond what the human eye can see.

How ‘colour’ of potato plant changes as it gets infected. Image is from a research paper, colour coding is arbitrary.

In the image above, the bump between 500 nm and 600 nm is what makes plants look green. That bump reflects back only 15% of the light at its peak.
If we could see the infrared light that’s bouncing off the plants, they would be incredibly bright, like snow on a sunny day.

The Market failure

We wanted a camera that could capture this information, but the market for infrared cameras is an absolute failure. The ones that exist cost a fortune, and they aren’t even good: most use old 1–2 megapixel sensors, meaning you would have to fly the drone really low and spend a lot of time just to cover one field.

There is absolutely no reason for it to be this way. The normal camera sensors that we produce by the billion and stick in every mobile phone can detect near infrared, along with red, green, blue and UV, all at the same time. The only reason cameras produce colour pictures instead of a strange mess is because we put filters in front of these sensors.

  • One, called a Hot Mirror, is a piece of glass placed between the sensor and the lens to block infrared light.
  • Another, called a Bayer filter, separates the remaining light into three main colours. It is applied directly to the sensor and cannot be removed.

On the graph, the grey line shows what kind of light an image sensor detects ‘naturally’ and illustrates what the filters do.

The Bayer filter forms a pattern that allows ‘normal’ cameras to produce colour images. Each square is a pixel. Illustration from Wikipedia.
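For illustration (this is my own sketch, not code from our repository), here is roughly what ‘undoing’ the Bayer filter in software looks like: the sensor gives you one brightness value per pixel, laid out in the mosaic pattern above, and a demosaicing step interpolates the missing colours. The file names and the RGGB layout are assumptions.

import cv2

# The raw frame: one brightness value per pixel, arranged in the Bayer mosaic
raw = cv2.imread("raw_mosaic.png", cv2.IMREAD_GRAYSCALE)

# Demosaic: interpolate the two missing colours at every pixel (assuming an RGGB layout)
bgr = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)
cv2.imwrite("colour.png", bgr)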

All you have to do to image infrared is to change filters. Because no major company is doing it, a cottage industry has sprung up: they take apart semi-professional photo cameras, remove the Hot Mirror and use them on drones. You can’t remove the Bayer filter and it still gets in the way, but the quality and resolution of image sensors in ‘prosumer’ cameras has gotten so good that the results aren’t bad at all. I’ve written about this previously.

The Idea behind Anywave

Sony and other sensor manufacturers actually sell monochrome versions of their sensors, that is, ones without the Bayer filter. You could buy such a sensor and use it to build a proper infrared camera.

One of the smallest machine vision cameras

If you had a few hundred thousand pounds and hard-core hardware developers, you would develop custom electronics that control the image sensor, read out the image, save it to an SD card, and so on. That’s how all the ‘normal’ cameras are made, and of course we could not afford that.

Fortunately, there is also a market for something called machine vision cameras. You can think of them as webcams on steroids: companies like Point Grey, IDS Imaging or Ximea buy sensors, add some relatively simple control electronics and an interface to plug them into a computer.

They also provide an SDK that lets you control all the possible settings. These cameras are usually used for factory automation or robots.

Previously these cameras required a special Camera Link interface, but recently they adopted USB 3, thanks to its great performance, and now you can connect a machine vision camera to any computer without any specialised adaptors. At the same time, in the wake of the Raspberry Pi there was an explosion of miniature computers. We realised that for the first time you could build a viable imaging system just by writing software, making it cheaper and simpler than ever before. Because software would be doing all the work, you could change the camera model, resolution or number of cameras, the data format, etc. at any time. We went for it, and called the project Anywave.
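To give a flavour of what ‘just writing software’ means here, below is a minimal sketch of grabbing a single frame through a machine vision camera’s SDK, using Ximea’s Python binding (xiAPI) as an example. It is not our production code, and the exposure value is just a placeholder.

from ximea import xiapi

cam = xiapi.Camera()
cam.open_device()                 # connect to the first camera on the USB 3 bus
cam.set_exposure(10000)           # exposure time in microseconds (placeholder value)

img = xiapi.Image()
cam.start_acquisition()
cam.get_image(img)                  # block until a frame arrives
frame = img.get_image_data_numpy()  # raw pixels as a NumPy array, ready to save or process

cam.stop_acquisition()
cam.close_device()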

Proving the concept

Unlike Sony, we can’t place a different filter onto each pixel to get colours (more appropriately called channels). Instead, we have to get several cameras and place a different filter onto each camera. We got our cameras from Ximea (they seem to have the smallest ones) and our filters from Vision Light Tech, as they were kind enough to make us a custom filter of any shape we asked for.

We started with two channels, infrared and red, because they are commonly used to estimate leaf cover. Cameras and lenses arrived in the post, and I got to work.

Even though I had little experience, the coding went easily and mostly worked. It was also important to align the two cameras accurately: you couldn’t just tape them to a block of wood. I tried, and the images were badly out of alignment.

Initially I taped them to this block of wood. It didn’t work.

I took to 3D printing. Autodesk is kind enough to provide students with their entire CAD suite for free, and I was fortunate to have registered back when I was still at university. After spending considerable time figuring out what the hell was what in their catalogue, I tried Inventor and got the hang of it.

I struggled with the mechanical side: I got lens threads and sizing wrong, and things that I thought would fit together didn’t. Much money and duct tape was wasted.

There are plenty of 3D printing companies out there, and innovative Birmingham was home to zero of them at the time. That added delay to my process of education by trial and error. Nevertheless, a few weeks later and a few hundred quid poorer, we had what we needed.

After a brief foray into 3D printing we had this masterpiece!

The two cameras were a couple of dozen pixels out of alignment, and it turns out that’s close enough to be corrected digitally. Below is a false-colour image. Infrared light is saved into what’s normally the red channel, so you see it as red. Red light is saved into the green channel. The blue channel is all zeroes.

One of the first images we produced

The separation of channels on the window frame is caused by parallax; it becomes negligibly small on distant objects. So now we had a camera that worked in principle, and it was time to take it to the ‘real world’.
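For the curious, composing such a false-colour image is only a few lines of array code. This is an illustrative sketch rather than our actual pipeline; the fixed pixel offsets that correct the residual misalignment are placeholders.

import numpy as np

def false_colour(nir, red, dx=0, dy=0):
    # Shift the red camera's image by a fixed pixel offset (dx, dy) to
    # compensate for the small mechanical misalignment between the cameras.
    red_aligned = np.roll(red, shift=(dy, dx), axis=(0, 1))

    out = np.zeros((*nir.shape, 3), dtype=nir.dtype)
    out[..., 0] = nir          # infrared shown as red
    out[..., 1] = red_aligned  # red light shown as green
    # blue channel stays all zeroes
    return out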

Proving it in flight

We wanted to get infrared images of an actual farm. The trouble was, we had never seen a drone, knew no farmers, and still needed to put the cameras and a computer into a package or casing that could be attached to a drone.

How maps are made

A flying camera takes a series of pictures

You can use a “flying camera” to take a series of images and produce a 3D map. The technique is known as Structure From Motion and is widely used to produce 3D models for Google Maps using aerial photography.

Software packages such as Agisoft PhotoScan process the photographs and produce a 3D map and a 2D map. These can then be exported to ordinary mapping software like QGIS.

To produce a good 3D map you need a lot of images taken regularly. Yellow lines show the flight path.
An example of a 3D map. You can find many more impressive ones online

Design of the camera system

I came up with the following system that you could put on a drone:

  • A few 18650 batteries and a voltage regulator would provide power to a tiny single-board computer
  • I wrote an imaging program that would run as soon as the embedded computer starts. It could control the cameras, take images and save them.
  • The same program would broadcast a Wi-Fi network
  • A laptop would connect to the network
  • I wrote a second program that acts as a ‘Control Panel’. You could use it to issue commands to the embedded computer, start imaging, etc.

I made a point of finding a computer whose flash drive was protected against data corruption due to power loss. That was annoyingly hard, because few manufacturers specify this.
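As an illustration of how the laptop and the embedded computer could talk to each other over the Wi-Fi link, here is a minimal sketch of a command channel. It is not the actual Anywave control protocol; the port number, command names and the start_imaging/stop_imaging calls are hypothetical.

import socket

def run_command_server(imager, host="0.0.0.0", port=5000):
    # Listens on the Wi-Fi network broadcast by the embedded computer and
    # executes simple text commands sent by the 'Control Panel' on the laptop.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        with conn:
            cmd = conn.recv(64).decode().strip().upper()
            if cmd == "START":
                imager.start_imaging()   # hypothetical call into the imaging program
                conn.sendall(b"OK\n")
            elif cmd == "STOP":
                imager.stop_imaging()
                conn.sendall(b"OK\n")
            else:
                conn.sendall(b"UNKNOWN\n")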

Getting the right people

The first order of business was to get an aircraft and a pilot. We would not get by with just some kind of DJI Phantom; we needed a big industrial drone to lift all the gear. If we bought one, that would be all the money we had, and most likely we would get it stuck in the nearest tree.

Before graduating we took part in a ‘business idea’ competition amongst students. One of the judges told us about a developer hangout in Birmingham. There, in an incredible stroke of luck, we met a guy who knew a guy who knew Manuel. At the time Manuel was burning through his savings creating a drone startup. He had piloting skills, high-end gear and experience. Previously he had worked with a charity in Brazil, using drones to map the Atlantic Forest to detect illegal logging. We hit it off right away.

Manuel doing what he does best
His drone, in flight, mapping the area.

The second order of business was to find somewhere we could fly about and test, preferably a farm. We hoped to convert our early testers into our first customers. Animesh was hard at work.

Animesh doing his thing

First Results

While Animesh was looking for farmers, I worked with Manuel to get the system flight-ready. We strapped all the electronics to his drone, did tests, made changes, and did it again. Rinse and repeat until we got our first results:

Cable ties kept all the electronics from falling off the drone. We also got smaller lenses and put the filters inside the cameras. The circuit board with an orange heat-sink is the computer. The white wires are the USB 3 cables with the plastic insulation stripped away.
Trees and bushes in Sutton park. Exciting!

We also discovered that this setup had a nasty habit of killing the drone’s GPS reception. That was a deal-breaker, because the drone needs to follow a set of GPS way-points to produce a proper map.

After speaking to a friend of mine who’s more competent in electronics, we tracked the issue down to a 5 V voltage regulator from the hobbyist manufacturer Pololu. The GPS issue disappeared as soon as we replaced it.

Mapping a farm

By this time Animesh had established a relationship with a potato farmer, and we had the green light to fly over his fields and wreak havoc.

We picked a small 9 hectare field for our first test and set the drone to fly at a modest height of 70 metres. The flights went smoothly, and we got a good three hundred images. Once we returned, we chucked all the images into Agisoft and twiddled our thumbs for a few hours. To our great surprise, we got a proper map on the first try!

Our first field map. This is about 10 hectares of land. Note the shadows caused by clouds.

Clouds have really messed with the image; however, we aren’t looking for a pretty picture, we want a measure of how well the plants are doing. We used the image above to produce a Normalized Difference Vegetation Index, or NDVI. It’s one of the simplest indices (there are dozens more) and it’s generally used to distinguish plants from the soil and measure their leaf area. It’s calculated with the following equation for every pixel:

NDVI = (NIR - Red) / (NIR + Red)
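In code, the index is a one-liner per pixel. The sketch below is my own illustration rather than our processing pipeline; it uses NumPy and a small epsilon to avoid dividing by zero on dark pixels.

import numpy as np

def ndvi(nir, red):
    # Per-pixel NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1.
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + 1e-6)  # epsilon avoids division by zero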

The index measures the relative amount of light in each channel, so shaded regions still provide useful data; it’s just of lower quality.

The image shows how the density of the crop’s leaf cover varies throughout the field. You can clearly see the marks left by the tractors, a farmhouse in the lower left and a small pond next to it.

Continue Reading

Sources
