
OpenLander landing obstacle detection – sUAS News – The Business of Drones
On GitHub, Stephan Sturges has released the latest version of a free-to-use ground-level obstacle-detection segmentation AI for UAVs, which you can deploy today using low-cost off-the-shelf sensors from Luxonis. He writes:-

The default neural network now features a 3-class output with detection of humans on a separate output layer! This is to allow finer-grained obstacle avoidance: if you have to fall out of the sky, you can now decide whether it's best to drop your drone on top of a building or on someone's head 😉
You will need any Luxonis device with an RGB camera and the correct version of the depthai-python library installed for your platform and device combination. For real-world use I would recommend getting a device with a global-shutter RGB camera that has high light sensitivity and relatively low optical distortion.
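For readers who have not used DepthAI before, the host-side setup this implies looks roughly like the sketch below, written against the depthai 2.x Python API. The blob path, input resolution and stream name are placeholders for illustration, not the actual OpenLander files.

```python
# Minimal sketch (depthai 2.x API): RGB preview frames fed to a segmentation
# network running on the OAK device. Blob path, input size and layer access
# are placeholders, not the actual OpenLander repository code.
import depthai as dai

pipeline = dai.Pipeline()

# RGB camera node; the preview size must match the network's input resolution
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(320, 320)          # placeholder input resolution
cam.setInterleaved(False)
cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)

# Neural network node running the segmentation blob on-device
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("openlander_segmentation.blob")  # placeholder path
cam.preview.link(nn.input)

# Stream the raw network output back to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("seg")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("seg", maxSize=4, blocking=False)
    while True:
        msg = q.get()                  # NNData message from the device
        raw = msg.getFirstLayerFp16()  # flat list of FP16 activations
        # ... reshape and post-process on the host (see the sketch further down)
```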
If you don’t yet own an OAK-series camera from Luxonis and want one to use with this repository, your best bet is to get an OAK-1 device modified with an OV9782 sensor with the “standard FOV”. Here is how to do it:
- Go to the OAK-1 in the Luxonis store and add it to your cart: https://shop.luxonis.com/collections/usb/products/oak-1
- Go to the “customization coupon” in the Luxonis store and add one of those: https://shop.luxonis.com/collections/early-access/products/modification-cupon
- In your shopping cart, add “please replace RGB sensor with standard FOV OV9782” in the “instructions to seller” field
… and then wait a week or so for your global-shutter, fixed-focus, high-sensitivity sensor to arrive 🙂
In the hobbyist and professional UAV space there is a need for simple and low-cost tools that can be used to determine safe emergency landing spots, avoiding crashes and potential harm to people.
The neural network performs pixelwise segmentation and is trained on my own pipeline of synthetic data. This public version is trained on about 500 GB of data. There is a new version trained on 4 TB of data that I will publish soon; if you want to test it, just contact me via email.
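The exact output layout of the public model is not spelled out here, so the following is only a sketch of how a 3-class pixelwise output plus a separate human-detection layer could be turned into a “safe to land” mask on the host. The shapes, class indices and threshold are assumptions to be adjusted against the real model.

```python
import numpy as np

# Hypothetical layout: a 3-class segmentation head plus a separate
# single-channel human-detection layer, both at the network's input
# resolution. Adjust shapes and class indices to the real model.
H, W = 320, 320

def postprocess(seg_fp16, human_fp16, human_thresh=0.5):
    """Turn flat FP16 activations into a per-pixel class map and a landing mask."""
    seg = np.asarray(seg_fp16, dtype=np.float32).reshape(3, H, W)
    human = np.asarray(human_fp16, dtype=np.float32).reshape(H, W)

    class_map = seg.argmax(axis=0)      # 0..2 per pixel
    safe = class_map == 1               # assumption: class 1 means "landing safe"
    safe &= human < human_thresh        # never mark a pixel safe where a human is detected
    return class_map, safe
```

Keeping humans on their own layer means the host logic can treat them far more conservatively than static obstacles, which is the finer-grained behaviour described above.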
Some examples of training images:




Real world pics!
These are unfortunately all made with an old version of the neural network, but I don’t have my own drone to make more :-p The current-gen network performs at least 5x better on a mixed dataset and is a huge step up in real-world use.
(masked area is “landing safe”)






Full-fat version
FYI there is a more advanced version of OpenLander that I am developing as a commercial product, which includes depth sensing, an IMU, more advanced neural networks, custom-developed sensors and a whole lot more. If you’re interested in that, feel free to contact me via email (my name @ gmail).
Here’s a quick screengrab of deconflicting landing spots with depth sensing (this runs in parallel to the DNN system): depth_video.mov
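The depth pipeline in the commercial version is not public, but as an illustration of the general idea, a depth stream can veto landing spots proposed by the segmentation mask by checking that the candidate patch is roughly flat. The function below is a hypothetical host-side check on a millimetre depth frame; the window size and thresholds are made up for illustration.

```python
import numpy as np

def depth_window_is_flat(depth_mm, cx, cy, half=40,
                         max_spread_mm=150, min_valid_frac=0.8):
    """Rough flatness check around (cx, cy) on a uint16 depth frame in millimetres.

    Accepts the window only if most pixels carry a valid measurement and the
    5th-95th percentile spread is small, i.e. the patch looks roughly planar.
    All thresholds are illustrative, not values from OpenLander.
    """
    h, w = depth_mm.shape
    y0, y1 = max(cy - half, 0), min(cy + half, h)
    x0, x1 = max(cx - half, 0), min(cx + half, w)
    win = depth_mm[y0:y1, x0:x1].astype(np.float32)

    valid = win[win > 0]          # 0 means "no depth measurement"
    if win.size == 0 or valid.size < min_valid_frac * win.size:
        return False
    lo, hi = np.percentile(valid, [5, 95])
    return (hi - lo) <= max_spread_mm
```

A landing spot suggested by the segmentation mask would then only be accepted if the depth check agrees, which is one way the two systems can run in parallel and deconflict each other.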
There will be updates in the future, but I am also developing custom versions of the neural network for specific commercial use cases and I won’t be adding everything to OpenLander. OpenLander will remain free to use and is dedicated to improving the safety of UAVs for all who enjoy using them!
Some code taken from the excellent https://github.com/luxonis/depthai-experiments from Luxonis.