
Luxonis - DepthAI OAK-D (LUX-D) with Onboard Cameras and Enclosure (USB3C)


Availability: In stock

Exc. VAT: £146.33 Inc. VAT: £175.60

SKU: LX-OAKDINTL

Quick overview

DepthAI is a platform built to allow the power of Spatial AI to be embedded into products.



The DepthAI hardware, firmware, and software suite combines depth perception, object detection (neural inference), and object tracking, and gives you this power through a simple, easy-to-use Python API.
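For example, here is a minimal sketch of what using the Python API looks like, assuming the v2-style API installed via pip install depthai; node names and defaults can vary between releases:

```python
# A minimal sketch, assuming the depthai v2 Python API (pip install depthai).
# It streams a small RGB preview from the camera to the host for display.
import cv2                 # pip install opencv-python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera node producing a small preview stream.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# XLinkOut ships frames from the device to the host over USB.
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="preview", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()   # BGR numpy array
        cv2.imshow("OAK-D preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```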


This DepthAI variant includes three onboard cameras and a BNO085 IMU, and interfaces to the host over USB3C, allowing use with your (embedded) host platform of choice, including the Raspberry Pi and other popular embedded hosts.


Embed Human-Like Perception in Your Devices.

Embedded, performant spatial AI and CV opens up a world of new applications, allowing for human-like perception anywhere, regardless of internet connectivity. We're making this technology productizable and easier to use than ever before, so you can leverage this power in your industry.




EMBEDDED MACHINE LEARNING FOR ALL

megaAI and DepthAI do for embedded artificial intelligence and spatial AI/CV what the original Raspberry Pi did for software programming education: they allow an engineer to get a proof of concept up and running quickly, and then take that same concept straight to production with low technical risk.


DepthAI is a platform - a complete ecosystem of custom hardware, firmware, software, and AI training - which combines neural inference, depth vision, and feature tracking into an easy-to-use, works-in-30-seconds solution. It is an all-in-one solution for anyone who needs the power of AI, depth, and tracking in a single device. It is also open source - including hardware, software, and AI training - which allows easy and fast productization: all of our DepthAI designs serve as reference designs for integrating the power of DepthAI into your own products.

We have launched a couple of CrowdSupply campaigns, including for DepthAI and megaAI, to bring the power of the Myriad X to your design. Since then, we have released the designs (hardware and software) for all DepthAI boards, along with host software, AI training, and a slew of use-case examples. See our list of GitHub repositories in our documentation here. This is all to enable engineers and makers to build their own solutions, leveraging our existing products for prototyping and our design files and source code to build their own products.


Details

When you use the DepthAI System on Module in your design, you inherit the power of DepthAI firmware, software, and training suite, all for free. These features include:


  • Neural Inference (e.g. object detection, image classification, etc., including two-stage)
  • Stereo Depth (including median filtering)
  • 3D Object Localization (augmenting 2D object detectors with 3D position in meters)
  • Object Tracking (including in 3D space)
  • H.264 and H.265 Encoding (HEVC, 1080p & 4K Video)
  • JPEG Encoding
  • MJPEG Encoding
  • Warp/Dewarp
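As an illustration of the neural inference, stereo depth, and 3D localization features above, here is a hedged sketch of a spatial detection pipeline using the v2-style Python API. The model blob path is a placeholder; a pre-compiled detector (e.g. mobilenet-ssd from the DepthAI model zoo) is assumed:

```python
# A hedged sketch of a spatial detection pipeline (depthai v2-style API).
# "mobilenet-ssd.blob" is a placeholder: a pre-compiled model from the
# DepthAI model zoo is assumed to be on disk.
import depthai as dai

pipeline = dai.Pipeline()

# RGB preview sized to match the network input.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Stereo pair produces the depth map used for 3D localization.
left = pipeline.create(dai.node.MonoCamera)
right = pipeline.create(dai.node.MonoCamera)
left.setBoardSocket(dai.CameraBoardSocket.LEFT)
right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.RGB)  # align depth to the RGB view
left.out.link(stereo.left)
right.out.link(stereo.right)

# 2D detector + depth -> 3D position per detection, all computed on-device.
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")   # placeholder path (assumption)
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            # The API reports spatial coordinates in millimeters.
            print(det.label, det.spatialCoordinates.x,
                  det.spatialCoordinates.y, det.spatialCoordinates.z)
```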

DepthAI is engineered to allow the image sensors to connect directly to the Myriad X over MIPI, offloading the host (which enables using microcontroller hosts).

To make this power easy to integrate into actual products, and to make hardware integration much easier, Luxonis made a System on Module (SoM) to hold the Myriad X.




Use cases

DepthAI provides real-time, human-level perception in narrow, well-defined domains: trained for a specific menial task that a human might otherwise do (but would dislike doing), it can approach human-level performance, like detecting which fruit is bad, and where, in real time, or detecting anomalies on a site. Below are some examples of where DepthAI is currently used.

Health and Safety

The real-time and completely-on-device nature of DepthAI is what makes it suitable for new use-cases in health and safety applications.

Did you know that the number one cause of injury and death in oil and gas is trucks and other vehicles striking people? The size, weight, power, and real-time nature of DepthAI enable use-cases never before possible.

Imagine a smart helmet for factory workers that warns the wearer when a forklift is about to run him or her over. Or even a smart forklift that can tell what objects are, where they are, and prevents the operator from running over a person - or hitting critical equipment - all while allowing the human operator to go about business as usual. The combination of depth and AI allows such a system to build real-time 'virtual walls' around people, equipment, etc.

Another sort of application is a smart tail-light for bikes (or motorcycles) that keeps people who ride bikes safe from distracted drivers.

Food processing

DepthAI is hugely useful in food processing. To determine whether a food product is safe, many factors need to be taken into account, including size (volume), weight, and appearance. DepthAI allows some very interesting use-cases here. First, since it has real-time depth mapping (at up to 120 FPS), multiple DepthAI units can be used to very accurately get the volume and weight of produce without costly, error-prone mechanical weight sensors. This matters because mechanical weight sensors suffer from vibration error and the like, which limits how fast the food product can move down the line.

Using DepthAI for optical weighing and volume measurement, the speed of the line can be increased significantly while achieving a more accurate weight - with full volumetric data as a by-product - so you can sort with extreme granularity.
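To make the optical-weighing idea concrete, here is a hedged sketch (our illustration, not Luxonis code) of estimating volume from a single top-down depth map, assuming a pinhole camera with known focal lengths and a flat belt at a known depth:

```python
# A hedged sketch of optical volume estimation from a top-down depth map.
# Assumes a pinhole camera with focal lengths fx, fy (in pixels) and a flat
# conveyor belt at a known depth below the camera.
import numpy as np

def estimate_volume_m3(depth_m: np.ndarray, belt_depth_m: float,
                       fx: float, fy: float) -> float:
    """Approximate volume (m^3) of objects between the camera and the belt."""
    # Height of each pixel's surface above the belt (clamp sensor noise at 0).
    height = np.clip(belt_depth_m - depth_m, 0.0, None)
    # Physical footprint of one pixel at depth z is (z / fx) * (z / fy).
    pixel_area = (depth_m / fx) * (depth_m / fy)
    # Volume = sum over pixels of footprint * height.
    return float(np.sum(pixel_area * height))

# Example: a synthetic raised object 50 cm under the camera, belt at 60 cm.
depth = np.full((480, 640), 0.60)        # belt everywhere
depth[200:280, 280:360] = 0.50           # an object 10 cm tall
print(estimate_volume_m3(depth, belt_depth_m=0.60, fx=450.0, fy=450.0))
```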

In addition, one of the most painful parts of inspecting food items with computer vision is that for many foods there is a huge variation in color, appearance, etc. that is all 'good' - so traditional algorithmic solutions fall apart (often resulting in 30% false-disposal rates when enabled, so they are disabled and teams of people do the inspection and removal by hand instead). But humans looking at these food products can easily tell good from bad, and AI has been proven to be able to do the same.

So DepthAI can weigh the food, get its real-time size and shape, and run a neural model in real time to produce good/bad criteria (and other grading) - which can be mechanically actuated to sort the food product in real time.

And most importantly, this is all quantified. So not only can it achieve functionality equivalent to a team of people, it can also deliver data on size, shape, 'goodness', weight, etc. for every product that goes through the line.

That means you have a record and can quantify, in real time and over time, all the types of defects, diseases seen, packaging errors, etc., so you can optimize all of the processes involved in the short term, the long term, and across seasonal variations.

Manufacturing

Similar to food processing, there are many places where DepthAI solves difficult problems that previously were not solvable with technology (i.e., they required in-line human inspection and/or intervention), or where traditional computer vision systems do function but are brittle, expensive, and require top experts in the field to develop and maintain the algorithms as products evolve and new products are added to the manufacturing line.

DepthAI allows neural models to perform the same functions, while also measuring dimensions, size, shape, and mass in real time - removing the need for personnel to do mind-numbing and error-prone inspection while simultaneously providing real-time, quantified business insights - and without the huge NRE required to pay for algorithmic solutions.

Mining

This one is very interesting, as working in mines is very hazardous, but you often want or need human perception in the mine to know what to do next. DepthAI allows that sort of insight without putting a human at risk. So the state of the mine and the mining equipment can be monitored in real time and quantified - giving alerts when things are going wrong (or right). This amplifies personnel's capability to keep people and equipment safe while increasing visibility into overall mining performance and efficiency.

Autonomy (Including Autonomous Agriculture)

When programming an autonomous platform to move about the world, the two key pieces of information needed are (1) "what are the things around me" and (2) "what is their location relative to me." DepthAI provides this data in a simple API, which allows straightforward business logic for driving the platform.
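As a hedged sketch of what that business logic can look like, the snippet below assumes detection objects like those produced by the spatial detection pipeline shown earlier (coordinates in millimeters) and picks a motion command from them; the thresholds and command names are illustrative only:

```python
# A hedged sketch of platform business logic, not production code.
# `detections` is assumed to be the spatial-detection output shown earlier,
# with coordinates in millimeters relative to the camera.
import math

SAFETY_RADIUS_MM = 1500  # illustrative: stop if anything is within 1.5 m

def motion_command(detections) -> str:
    for det in detections:
        c = det.spatialCoordinates
        dist_mm = math.sqrt(c.x**2 + c.y**2 + c.z**2)
        if dist_mm < SAFETY_RADIUS_MM:
            return "STOP"          # obstacle too close
        if c.z < 2 * SAFETY_RADIUS_MM:
            # Something ahead but not yet critical: steer away from its side.
            return "STEER_RIGHT" if c.x < 0 else "STEER_LEFT"
    return "FORWARD"
```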

In the aerial use case, this includes drone sense-and-avoid, emergency recovery (where to land or crash without harming people or property if a prop fails and the UAV has only seconds to respond), and self-navigation in GPS-denied environments.

For ground platforms, this allows unstructured navigation: understanding what is around, and where, without a priori knowledge, and responding accordingly.

A good out-of-the-box example of this is a visually-impaired assistance device (see here), which aims to aid the autonomy of sight-impaired people. With DepthAI, such a system no longer has to offer simple 'avoid the nebulous blob over there' guidance but rather, 'there's a park bench 2.5 meters to your left and all five seats are open'.

Another, more straightforward example is autonomous lawn mowing while safely avoiding unexpected obstacles, which has similar functional requirements to the visual-assistance device above and could even use the same or similar hardware and software.

And another good example is autonomous strawberry picking, where DepthAI can tell a robotic picker where strawberries are, along with their approximate ripeness, so the robot can pick only ones above a certain ripeness, avoid bad ones, and sort by ripeness during picking and packing - all autonomously. An example of DepthAI being used to do this is here.

And many more! In fact, every week we learn about a new use-case we hadn't thought of.

Why it's Awesome

In our bicycle-crash-prevention prototype above - which proves that such a solution is now possible - the CPU was at 100% and the framerate was ~2-3 FPS. That's because the host had to help with the depth calculation, and it had to shuffle the data around between the depth camera, the CPU, and multiple USB devices to reach the neural inference device (the NCS). So it had zero CPU left for... anything else, and ran really slowly. For comparison, running only AI on a Raspberry Pi with an NCS2 maxes out at 8 FPS (with the host CPU maxed out), whereas DepthAI can achieve over 25 FPS doing AI, depth, and encoding (with no host CPU use at all).

With DepthAI, though, that's not a problem. The DepthAI firmware does everything: the depth, the object detection, the re-projection to the x, y, z position of the object in meters, and a slew of other things like H.264/H.265 encoding, feature tracking, etc. All of this runs on DepthAI, directly from the image sensors, with no load on the host. So you can get a JSON stream of objects and their XYZ positions, plus encoded 1080p or 4K video, while using 0% of the host CPU - leaving you 100% to run the business logic for your application.
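As a hedged sketch of the encoding side (v2-style Python API), the snippet below records 4K H.265 straight from the device, with the Myriad X doing the compression and the host just writing bytes to disk:

```python
# A hedged sketch of on-device H.265 recording (depthai v2-style API).
# The device encodes; the host only appends the bitstream to a file.
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_4_K)

# Hardware encoder on the device: no host CPU spent on compression.
enc = pipeline.create(dai.node.VideoEncoder)
enc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.H265_MAIN)
cam.video.link(enc.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("h265")
enc.bitstream.link(xout.input)

with dai.Device(pipeline) as device, open("video.h265", "wb") as f:
    q = device.getOutputQueue("h265", maxSize=30, blocking=True)
    try:
        while True:
            q.get().getData().tofile(f)   # raw H.265 bitstream chunks
    except KeyboardInterrupt:
        pass
```

The resulting raw bitstream can then be containerized on the host (e.g. with ffmpeg) without re-encoding.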

We architected DepthAI to be compatible with everything. We've had customers get it working with operating systems we had never even heard of. This is afforded by the open-source nature of the DepthAI ecosystem: customers can simply compile the DepthAI API for their host. So any host that runs OpenCV works with DepthAI (see here), including Linux (Ubuntu, Raspbian, etc.), Mac, and Windows. We also have an SPI variant coming out soon (see here), so any microcontroller with SPI will be able to work with DepthAI.

We even have a new DepthAI variant with an onboard ESP32 under development (see here), and a Power-over-Ethernet variant (see here), both of which will be open source, of course.



Documentation and Resources:


  • Product Brief: here
  • DepthAI Documentation: here
  • DepthAI Discussion Forum: here
  • Discord Community: here
  • Python Github: here
  • C++ API Github: here
  • Hardware Github: here
  • DepthAI Models Overview: here

  • Part Number: LUX-D-INTL / OAK-D-INTL


Note: This product was previously referred to as the 'A00110-AB' and 'A00110-INTL'; it is now the 'LUX-D-AB' and 'LUX-D-INTL'.



EAN / UPC: 850034031002
Manufacturer part No: LUX-D-INTL / OAK-D-INTL