Presenting our work at the Royal Agricultural University. Interesting atmosphere.

Drone Startup, the tech we built

Vladimir Akopyan
Quickbird
13 min read · Sep 27, 2016


The article is split into two parts. Part 1 tells our story; Part 2 is a lot more technical and talks about the design decisions and struggles we faced.

As always, you can find a link to a git repository at the end of the article with all the code and CAD models. All images and materials are mine or my teammates’.

Refining the Software Design

At this point we had proven the system in principle, flown over a farm, and produced a decent map. However, using the system was still a pain in the rear, and it needed a lot of work.

I made the UI black-and-white for maximum contrast, with huge buttons to make it as easy as possible to read in the sun.

Software — Exposure

Cameras form images during exposure, which basically means collecting light for a certain period of time. We don't need to deal with mechanical shutters or anything like that: we just issue a command to expose the sensor, and everything happens electronically. The trick is to expose the sensor for the right amount of time to get a useful image.

We used to enter exposure times manually and struggled to get them right, wasting a lot of flights on overblown or dark images. To deal with the problem we added light sensors: the program measured ambient light and calculated the optimal exposure from it. We also considered automatically adjusting exposure to the light level in flight, to deal with clouds, but never tested the feature, as it could potentially affect consistency.

The light sensors came from Yoctopuce, an awesome Swiss company that makes USB-connected sensors. Unlike much of their competition, they don't resort to hacks like emulating a serial port: they have a proper USB HID interface and supply libraries for every programming language imaginable, with documentation so detailed that in printed form it could stop a bullet.
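
The exposure logic is simple enough to sketch. Here is a minimal Python version built on Yoctopuce's published Python library; the calibration constant and the clamping limits are made-up placeholders, not our real values, and would have to be measured for your own sensor, filter, and lens combination:

```python
from yoctopuce.yocto_api import YAPI, YRefParam
from yoctopuce.yocto_lightsensor import YLightSensor

# Calibration constant mapping ambient light to exposure time. This value
# is an assumption for illustration; it must be calibrated per camera.
K_LUX_US = 2_000_000.0           # lux * microseconds for a mid-grey image
MIN_US, MAX_US = 50, 20_000      # clamp to what the camera accepts

def optimal_exposure_us():
    errmsg = YRefParam()
    if YAPI.RegisterHub("usb", errmsg) != YAPI.SUCCESS:
        raise RuntimeError("Yoctopuce init failed: " + errmsg.value)
    sensor = YLightSensor.FirstLightSensor()
    if sensor is None:
        raise RuntimeError("no light sensor connected")
    lux = sensor.get_currentValue()
    # Exposure is inversely proportional to ambient light.
    exposure = K_LUX_US / max(lux, 1.0)
    return int(min(max(exposure, MIN_US), MAX_US))
```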

The new version of the software also showed a 'live view' of what the camera could see. This was incredibly useful for checking alignment and exposure.

Software — Registering, a.k.a. Alignment

Multi-camera arrays like Anywave produce an image by combining information from several cameras, one per channel. These channels need to be digitally aligned, a process known as registering. The need comes from tiny tolerances in the manufacturing process, which make each camera point in a slightly different direction. For example, if the CCD sensor is held by two supports X and Y, and the difference in their heights is just 0.1 mm, the result is a 0.2° misalignment. That may not sound like much, but with a 60° lens and a 1920-pixel-wide image, it puts the image out of alignment by 0.2°/60° × 1920 ≈ 6 pixels! That has to be corrected digitally, or the images will look like anaglyph 3D.
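
A quick sanity check of that arithmetic; the 28.6 mm support spacing is an assumed value, chosen only to reproduce the 0.2° figure:

```python
import math

dh = 0.1                     # height mismatch between the supports, mm
spacing = 28.6               # distance between the supports, mm (assumed)
tilt = math.degrees(math.atan(dh / spacing))   # ~0.2 degrees of tilt

fov = 60.0                   # horizontal field of view, degrees
width = 1920                 # image width, pixels
shift = tilt / fov * width   # ~6.4 px misalignment in the image
print(f"{tilt:.2f} deg tilt -> {shift:.1f} px shift")
```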

On-the-fly image alignment. Each image can be shifted by several pixels until they align perfectly.

Initially we corrected the images in Photoshop after each flight, and with several hundred photos it was incredibly tedious. I improved the software to do the correction on the fly.
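
The correction itself amounts to shifting each channel by a stored per-camera offset before the channels are combined. A minimal sketch in Python with numpy; the offset values and channel names below are invented for illustration:

```python
import numpy as np

# Per-camera pixel offsets, found by nudging each channel in the UI until
# the preview lines up. These (dy, dx) values are made up for illustration.
OFFSETS = {"red": (0, 0), "nir": (3, -2)}

def apply_offset(img, dy, dx):
    """Shift a single-channel image by whole pixels, zero-filling the
    edge that gets exposed, so misaligned borders stay black."""
    out = np.zeros_like(img)
    h, w = img.shape
    src_y, dst_y = (slice(0, h - dy), slice(dy, h)) if dy >= 0 else \
                   (slice(-dy, h), slice(0, h + dy))
    src_x, dst_x = (slice(0, w - dx), slice(dx, w)) if dx >= 0 else \
                   (slice(-dx, w), slice(0, w + dx))
    out[dst_y, dst_x] = img[src_y, src_x]
    return out

# nir_aligned = apply_offset(nir_frame, *OFFSETS["nir"])
```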

Software — Exposure control

I already mentioned that plants are several times brighter in the infrared than they are in the visible part of the spectrum, and that presents a problem. If we set the camera's exposure to match the brightness of the infrared channel, the red one will be dark and produce noisy data. Do it the other way round, and the IR channel will be overblown and totally useless.

The infrared region starts at 700 nanometers. Plants are far brighter in this region than in visible light.

To get optimal images, we found that exposing the red camera 4 times longer than the IR camera gave good image quality in both. However, the red channel is then 4 times brighter than it should be, and will produce incorrect values for NDVI and other calculations.

We can deal with this problem by taking advantage of the way the data is stored. Ordinary photographs use an 8-bit number per channel, which is just 256 levels of brightness.
Like many professional cameras, we save our data in 16-bit, which gives us 65,536 levels of brightness. Meanwhile, the cameras we use produce 1024 levels of brightness (some can do more, but not by much). That means the other 64 thousand levels are available for us to play with.

In the illustration below, X marks valid data and the padding is the area for which we have nothing to write, so we always put 0. With 10-bit data in a 16-bit word, the layout is 000000XXXXXXXXXX.

A 16-bit unsigned integer represents data for a single channel, for a single pixel.

In an underexposed image, the most significant bits of the data will be zeroes, losing some accuracy. In an overexposed image, they will all be 1s, making the data invalid.

We then divide all values in the red channel by 4. That has the same effect as shifting the useful data down by two bit positions and padding the top with zeroes. Now we can save the data with the correct brightness and optimal image quality.
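
In code, the whole correction is a couple of lines. A sketch with numpy, assuming the raw 10-bit samples arrive in uint16 arrays; the NDVI helper is included just to show that the corrected channels become directly comparable:

```python
import numpy as np

# The red camera was exposed 4x longer than the IR one, so its values are
# 4x too bright. Integer division by 4 (a right shift by 2 bits) puts both
# channels back on the same radiometric scale.
EXPOSURE_RATIO = 4

def compensate_red(red_raw: np.ndarray) -> np.ndarray:
    return (red_raw // EXPOSURE_RATIO).astype(np.uint16)  # == red_raw >> 2

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    # Standard NDVI = (NIR - Red) / (NIR + Red), computed in float
    # to avoid integer rounding and division by zero.
    nir_f, red_f = nir.astype(np.float32), red.astype(np.float32)
    return (nir_f - red_f) / np.maximum(nir_f + red_f, 1.0)
```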

The UI for these settings is actually simple

Software — Image Registrator

I also built a small program to align images in bulk in case we screwed up the on-the-fly settings. We never really used it.

Refining the Hardware Design

The system consisted of multiple parts, each held to the body of the drone with a cable tie and exposed to the elements. That had to change, so that we could attach and detach it from the drone with ease.

We also wanted to increase the number of channels to 4. With just 2 channels, we had to fly over the fields twice — once with a normal camera and once with Anywave. With 4 we could produce colour and IR images in one flight.

Hardware — Mounting sensors and lenses

I started by designing an enclosure that would house the image sensors and their spectral filters. Eventually I arrived at a single-panel design, where all circuit boards were mounted to a single piece of plastic. Having no knowledge of mechanics whatsoever, I reasoned that taking the sensors out of their metal enclosures and attaching them all to the same part should minimise any misalignment between them.

On the left is the first prototype, which we discarded. On the right is the single-panel design that we stuck with.

The black rings in the image above are filter holders; they snapped in tightly on top of the glass filters and held them in place. Initially I expected we would have to secure them with glue or tape, but to my surprise this, of all things, worked on the first try. Besides those rings, I screwed up every second part by getting tolerances or measurements wrong.

This is how the panel looks with the sensors mounted on it.

Hardware — Power

Up to this point the camera had a separate battery, adding to the already considerable weight of the system. We also had no practical way of checking its charge in the field, so we wanted to run everything off the drone's batteries.

After some searching I discovered that the Intel NUC has an on-board power regulator that can handle the wide range of voltages a 6S drone battery produces as it goes from fully charged to discharged. Being able to power the computer without external regulators, extra weight, or EMI noise was great news.

Hardware — Enclosure

The last step was to build a box that could house the computer and attach to the drone. As I was designing it, I found that I needed a large box that was mostly empty space, because of the awkward placement, rigidity, and excessive length of the USB 3 cables connecting the computer to the cameras.

That presented a problem, because most 3D printers can't make huge pieces. Even when they can, the plastic tends to warp and distort under its own weight while it's hot. I tried to work around this by designing an enclosure made of 10 separate parts, each of which could be printed separately and then put together. When I estimated its weight, it clocked in at an astounding 0.9 kg, far too much dead weight for a drone.

‘Lego’ enclosure without side panels.
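
Estimating the mass before committing to a print is trivial once the CAD package reports the part's volume. The numbers below are placeholders rather than our actual figures, but they show how quickly solid plastic adds up:

```python
# Rough mass estimate for a 3D-printed part. Volume and infill here are
# assumed values for illustration, not the real figures from our CAD model.
volume_cm3 = 700.0        # total part volume reported by the CAD tool
infill = 1.0              # 1.0 = solid; sparse infill is typically 0.2-0.4
density_g_cm3 = 1.25      # PLA; ABS is about 1.04
mass_kg = volume_cm3 * infill * density_g_cm3 / 1000.0
print(f"estimated weight: {mass_kg:.2f} kg")   # ~0.9 kg for these inputs
```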

After some head-scratching and experimentation I came up with a new design that I made as light as possible, and it ended up weighing about a third of that.

Weight-optimised enclosure. The hex pattern that makes it look like a grandma's basket was my attempt at reducing weight.

On the lower left you can see the mounting for the image sensors and their lenses, which I wrote about previously. A black cover blocks any stray light. It has openings for the USB 3 connectors, and threaded holes that match the fixation screws on the USB cables, to minimise stress on the connectors.

After all the parts on the left are put together, they are attached to the bottom of the black box with screws. The box also houses the computer, which is attached to the white 'shelf' near the top. The circular mounts at the top allow the whole thing to be attached to a drone.

It is a big piece, and I had to speak to a few 3D-printing companies to find one that would take on the job without bankrupting us. It would take 38 hours to print and cost us £500. I parted with the cash and prayed I wouldn't receive some malformed piece of plastic through the post. After three days I got the parts, and they came out surprisingly well, without any plastic going sideways. They were designed almost correctly: after just 30 minutes with a drill and a file, we had it mounted on our drone and couldn't wait to test it.

Our creation takes flight

We got out to another potato field, attached our brand-new Imager and started a flight. All seemed to be going well. However, when we got back, we discovered that some images were missing. It appeared that Anywave had stopped taking images at some point during the flight, which is why the field looks cropped off.

Otherwise the images were good. At the time I ascribed it to a programming error.

The Cloud system

Now that our system mostly worked, it was time to find out if anyone wanted it. The first order of business was to figure out how we were going to show farmers the results of our work. Sending them the source files would not be a good idea: they are typically large images, sometimes measured in gigabytes, and we didn't want anyone trying to open them in Windows Photo Viewer. It's more appropriate to display the images on a map.

We started looking for a way to display our maps online. It turned out there was an almost out-of-the-box solution for us: ArcGIS Server, which is meant to help mapping professionals work together, a bit like git for software developers. The great thing about it was that we were already using ArcMap, and ArcGIS Server integrates with it. Once Agisoft had produced a map, we would import it into ArcMap for geo-referencing and upload it straight to ArcGIS Server to share it publicly.

The mini-cloud we used for demo purposes

Although the system is overpowered for the task and uses a lot of proprietary software, it saved us the time we would otherwise have spent developing our own database, an endpoint for parsing geographic data and loading it into that database, and an API server. At the time we were already stretched and couldn't take on another big chunk of development. We deployed GeoServer and SQL Server in a dedicated VM on Azure.

Next we needed a web app to present this data to the end user. Esri, the company behind ArcGIS Server and other mapping tools, provides building blocks and APIs, and we were able to put together a demo app in short order.

Our demo app, version 2

It displayed map data and included basic tools such as area measurement. We were finally in a position to properly showcase the results of our work.

Ghost in the shell

While we were trying to find farmer customers, I attempted to resolve the issue where Anywave would suddenly stop taking images mid-flight. At first I assumed the problem was in my code. I pored over the camera APIs, the sizes of circular buffers, and so on, but could not find anything that would cause imaging to seize up. Despite many attempts, I could not reproduce the problem. I made changes to the code that I hoped would make it more reliable, fastened the USB cables better, and did another flight. To no avail. The problem occurred randomly, and only during flight, and I could not find any pattern to how or when it happened.

Every time I found something I thought could be the cause, I made a change, and then we had to test it in flight to see if the problem was fixed. That made the process slow, painful, and expensive. I added code to log the camera's activity, hoping to determine where in the imaging process the 'freeze' happened, but the results weren't conclusive.
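
The logging itself was nothing clever; the idea is simply to write a line before and after every blocking call, so the last line in the log points at the step that stalled. A sketch of the approach, where the `capture` callable is hypothetical and stands in for whatever camera API call triggers one exposure:

```python
import logging
import time

logging.basicConfig(filename="anywave.log", level=logging.DEBUG,
                    format="%(asctime)s %(message)s")

def capture_with_log(camera_id, capture):
    """Wrap a single capture call with timestamped log lines. If the
    pipeline freezes, the log ends at the camera and step that hung."""
    logging.debug("cam %s: trigger sent", camera_id)
    t0 = time.monotonic()
    frame = capture()
    logging.debug("cam %s: frame received after %.1f ms",
                  camera_id, (time.monotonic() - t0) * 1000)
    return frame
```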

Then Anywave produced an image that looked like this:

Errors that we could not reproduce

This told me two things:

  1. This was a hardware problem, without a doubt
  2. It affected one camera at a time

I took apart the camera assembly and tested each camera one by one. I found that shaking them vigorously would cause a similar failure. Perhaps vibrations of the drone were triggering the problem?

I put the cameras back in their original metal enclosures and shake-tested all of them. Two cameras had problems every time, one was completely reliable, and another was sort of OK.

I gathered everything I had found out and contacted the camera manufacturer, Ximea, to talk about the problem. They were surprised to learn about my enclosure design and immediately pointed out that it did not follow any of their guidelines for heat dissipation or for supporting the fragile connectors. Most likely my foolish testing had damaged the cameras, which is why they no longer operated reliably.

As I found out, if I wanted to design a custom enclosure, I should have asked them for a special document for OEMs that details their requirements. The requirements turned out to be pretty strict: they call for a metal radiator and a very specific shape for the slots that let the USB connectors out and protect them from damage. Ignoring them is most likely what caused all of my problems.

The situation was such that, to repair Anywave, I would have to buy new cameras and redesign the enclosure properly. At the same time, we were struggling to get farmers to pay for the service: there was a lot of interest, but as soon as we asked for hard cash, it would vanish. One of the responses was literally:

BIG no

It appeared that for most farms, the amount of money they would save could not justify regular monitoring.

At the same time, I was expecting some major manufacturer to come out with a proper multispectral imaging system integrated at the hardware level. Such a product would be cheaper and more compact than anything we could build. Under those circumstances, we decided not to pursue further development.

Looking back

To this day I have not seen a great multispectral camera on the market; their resolution is still terrible, and people are still hacking DSLRs, and they are actually getting pretty good at it.

We could make a much better system now. Back when we started, most manufacturers were just transitioning their products to USB 3. Since then, Sony has rolled out a new generation of industrial CMOS sensors that are dramatically cheaper than their predecessors. Both factors mean that you can now pick up a miniature machine-vision camera that is cheaper, with a resolution 2–3 times higher than we could get before.
If you pursue a similar project, don't make the mistake of getting rolling-shutter cameras: they cause distortions that are very hard to deal with.

Lastly, if one were to build such a system from scratch, writing the control software for a phone or tablet instead of a laptop would make it much more convenient to use. Likewise, replacing the hotspot with Wi-Fi would make the connection more reliable, and fast enough to view an uncompressed video feed in real time.

The mechanical design deserved a professional approach; don't repeat our mistakes.

And the most important part: reach out to people who are professionals in the field and ask them for help. Don't be embarrassed or discouraged if you fail to communicate correctly the first time. Don't be afraid to look like a fool; you probably are one, and it's better to find out sooner rather than later.

Read Part 1

Sources
