congatec and the AI experts at Japanese company Hacarus have unveiled what is claimed to be the world’s first embedded computing kit for Artificial Intelligence (AI) to use Sparse Modeling technology. Sparse Modeling needs very little training data to make highly accurate predictions. This is an advantage for vision-based inspection systems in particular, because high manufacturing quality naturally means a low reject rate – and therefore only a few defect images are available to train on.
With Sparse Modeling it is possible to create a new inspection model starting with 50 or even fewer images – significantly less than the 1,000 or more images required for traditional AI. The Sparse Modeling Kit, available from Hacarus, can be used stand-alone or as an add-on to existing inspection systems. Primary customers are vision system providers and system integrators. Another target group is machine and system builders who want to use vision-based AI in their devices but have so far been reluctant to do so: the wide variety of individual customer installations requires the algorithms to be adapted, which was previously too costly.
Christian Eder, Director of Marketing at congatec explains: “With Sparse Modeling, developers are able to build next-generation inspection systems that can be trained for individual requirements and can therefore function anywhere. There is no longer a need for optimal conditions, such as constant lighting. OEMs also gain greater flexibility to adapt to changing production processes, which is essential for the move to industrial IoT/Industry 4.0 controlled batch size production.”
Essentially, Sparse Modeling is a data modelling approach that focuses on identifying distinctive characteristics. Simply put, Sparse Modeling interprets data much like the human brain does: it recognises a person by a few key features instead of analysing every single hair and every millimetre.
Summarising the benefits of Sparse Modeling, Hacarus CTO Takashi Someda says: “Humans can recognise friends and family on the basis of key characteristics – such as eyes or ears. Sparse Modeling integrates a comparable logic into intelligent image processing systems. It is therefore not necessary to process the entire volume of big data – as is the case with conventional AI – but only a few select data. Algorithms based on Sparse Modeling reduce the data to these unique characteristics.” This also makes for a much smaller AI footprint, which is well suited for fanless low-power systems that are in continuous 24/7 use and have only a limited power consumption margin to integrate AI.
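The core idea of reducing data to a few decisive characteristics can be illustrated with L1-regularised regression (the lasso), one of the classic formulations behind sparse modeling. The sketch below is purely illustrative and not Hacarus code: it uses a toy dataset in which only 2 of 5 features actually matter, and a plain-Python iterative soft-thresholding (ISTA) solver drives the irrelevant weights to zero – the "few select data" Someda describes.

```python
import random

def mat_vec(X, w):
    """Multiply matrix X (list of rows) by weight vector w."""
    return [sum(x * wi for x, wi in zip(row, w)) for row in X]

def soft_threshold(v, t):
    """Shrink v towards zero; values with |v| <= t become exactly 0.
    This thresholding step is what produces sparsity."""
    return 0.0 if abs(v) <= t else (v - t if v > 0 else v + t)

def ista_lasso(X, y, lam=0.1, step=0.01, iters=5000):
    """Minimise 0.5*||Xw - y||^2 + lam*||w||_1 by iterative soft-thresholding."""
    n_features = len(X[0])
    w = [0.0] * n_features
    for _ in range(iters):
        residual = [p - t for p, t in zip(mat_vec(X, w), y)]
        # Gradient of the squared-error term: X^T * residual
        grad = [sum(X[i][j] * residual[i] for i in range(len(X)))
                for j in range(n_features)]
        # Gradient step, then soft-threshold to keep the model sparse
        w = [soft_threshold(w[j] - step * grad[j], step * lam)
             for j in range(n_features)]
    return w

# Toy data: 30 samples, 5 features, but only features 0 and 3 carry signal
random.seed(0)
w_true = [1.5, 0.0, 0.0, -2.0, 0.0]
X = [[random.gauss(0, 1) for _ in range(5)] for _ in range(30)]
y = mat_vec(X, w_true)

w = ista_lasso(X, y)
print([round(v, 2) for v in w])  # weights for irrelevant features shrink to ~0
```

The recovered weight vector keeps only the informative features, which is why sparse models can be trained from far fewer examples than a dense model that must estimate every parameter.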
Starter kit with scalable hardware platform
The new starter kit based on congatec hardware and Hacarus software can instantly be deployed and tested in any GigE and USB 3.x environment. Designed on the basis of palm-sized Computer-on-Modules, the system measures only 173 × 88 × 21.7 mm (6.81 × 3.46 × 0.85 in). It is not only slim but also offers extraordinary performance thanks to the latest Intel Atom and Celeron processors (codename Apollo Lake), all of which are available for series production today. Despite its small size, the system has a rich set of I/Os, enabling many different end-user setups. Standard interfaces are 2 × GbE ready for GigE Vision, 1 × USB 3.0/2.0, 4 × USB 2.0 and 1 × UART (RS-232). Extensions are possible via 2 × Mini-PCIe with USIM socket, 1 × mSATA socket and 16-bit programmable GPIO. The wide-range DC input accepts 9 V to 32 V.
Learn more about the embedded computing kit for AI at www.congatec.com.